Archive for the ‘Science/Research’ Category

In the mobile fashion-oriented gacha game Love Nikki, there’s a regularly-held contest in which you submit an outfit based on a prescribed theme. Entries are presented pairwise to other players, who must select one of the two entries to vote for, with an algorithm that standardizes the number of times an entry appears for voting. While the appearance of any given entry for judging is random, the paired entry will often share certain characteristics to make direct comparison more refined. Votes are tallied and the entry is assigned a percentile ranking, with a lower percentile being better (if in the top 1%, approximately the top 1000, then an absolute numeric ranking is provided instead). At the end of a contest cycle, in-game currency is awarded based on the percentile, heavily weighted towards the highest echelons (i.e. top 20%). The currency is then spent on new outfits that may be used for future contests.

This mechanic is actually very similar to applying for grants and fellowships from the NIH, where a researcher submits a research proposal in hopes of obtaining funding. These applications are reviewed by an independent committee (formally a “study section”), in which similarly themed applications are rated and then assigned a percentile ranking, again with a lower percentile being better. Funding is then awarded to applications ranked below the payline percentile (typically the top 10-20% depending on specific institute, grant mechanism, and other factors). The funding is then used to conduct research that produces results that then serve as the preliminary data for the next grant application.
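
As a minimal sketch of the shared mechanic – how pairwise win counts might translate into percentile ranks – consider the following (the game’s actual matchmaking and vote-standardizing algorithm isn’t public, so the entry names and vote counts here are hypothetical):

    # Hypothetical entries mapped to (votes won, times shown); assumes the
    # algorithm standardizes how often each entry appears, as described above.
    entries = {"entry_a": (412, 500), "entry_b": (275, 500), "entry_c": (98, 500)}

    # Score each entry by its pairwise win rate.
    scores = {name: wins / shown for name, (wins, shown) in entries.items()}

    # Sort descending by score; percentile = position / total entries.
    ranked = sorted(scores, key=scores.get, reverse=True)
    for position, name in enumerate(ranked, start=1):
        percentile = 100 * position / len(ranked)  # lower is better
        print(f"{name}: top {percentile:.0f}%")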

1. The grant that’s guaranteed to never be funded is the one that’s never submitted at all.
It’s good to be detail-oriented, but delaying an application out of fear that the idea isn’t perfect is likely only to result in a missed opportunity in the end. This low-effort entry (essentially a pre-curated outfit) adequately addressed the theme, and further refinement would definitely not have improved upon its ranking of <1%.

2. Sometimes, reviewers just won’t get what you were going for, but it doesn’t mean it was a bad idea. Though maybe this should be that clever pet project that’s happening in the background because it’s just too “out there” to ever bring in the $$$. (This entry really took a lot of effort to exploit the loopholes to create the illusion of 8 limbs)

3. And sometimes, you submit something you just know is the real deal, and it’s just what the reviewers were looking for. What a rare but satisfying event!

A few more just for fun (most screenshotted before voting occurred).

Two months ago, I was reading some quartets with a fellow violinist for fun. At some point, I brought up my concept of “subway karma” – basically, that there was some quantity of good fortune I could build up (by waiting long times), and that it would dissipate at some point in the form of the train arriving immediately when I got to the station, ideally – and usually – when I really needed to get somewhere quickly. I also threw out the concept that when in a rush, the occurrence of either just missing or just catching a train was much higher than usual, and that this was somehow tied to some subway spirit’s judgment of me.

He of course told me this was ridiculous and that if I were really a scientist, I should prove it with data.  So I’ve done just that – I’ve recorded the majority of subway rides on the Red Line that I’ve taken in the past two months, notating the amount of time I waited (note that 0 = the train is right there as I enter the station and I’m able to board it; -1 = the train pulls out just as I’m entering the station – I did record the actual wait times for these, but those data are not presented here).

Here are the wait times for all the rides (n = 78), in chronological order.  I did not write down exactly what time I took each ride.

While it’s impossible to prove whether there’s a subway spirit or not, this looks pretty convincingly ... random. No pattern of build-up and dissipation or anything like that. But what about being in a rush? Being in a rush was defined for these purposes as having to get someplace at a set time without leaving more than 5 or 10 minutes of extra wiggle-room in the estimated trip duration. n = 28 for being in a rush, n = 50 for not being in a rush.
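
The tabulation behind this comparison is trivial; here is a sketch, with placeholder values standing in for the real recordings:

    from collections import Counter

    # Placeholder wait times in minutes -- NOT the real data. -1 = just missed
    # the train; 0 = the train was there and I boarded immediately.
    rush = [-1, 0, 0, 1, 2, 4, 0, 3]
    not_rush = [1, 2, 5, 3, 0, 7, 2, 4]

    for label, waits in [("in a rush", rush), ("not in a rush", not_rush)]:
        counts = Counter(waits)
        print(f"{label}: n = {len(waits)}, "
              f"just missed: {counts[-1]}, caught on arrival: {counts[0]}")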

What’s interesting here is how much more likely it is that I will catch a train just as it’s arriving at the station when I’m in a rush.  Most likely it’s something like the increased frequency – and increased stopping times – of trains during times when I’m likely to be in a rush (near or just after rush hour).  And it’s confounded by the fact that I can see people walking out of the station, or even hear the train pulling into the station, before I even enter the station, and make my own judgment about whether or not to book it.  But it’s interesting that my perception reflected a dimension of reality in this case, even if the magical interpretation is nonsensical.

Other notes: I never actually realized before how frequent the Red Line trains are, especially in comparison to the very long times I was accustomed to waiting on the Green Line. My average wait time over this period was 2.6 minutes (here I’ve used the real wait times in place of the “-1” notation for missing a train), which makes sense given that I travel about half during rush hour (every 9 minutes per line, but I don’t care whether it’s Ashmont or Braintree, so it’s every 4.5 minutes) and half during other times (every ~12 minutes per line, so 6 minutes per train) – the expected wait would be half the average headway of 5.25 minutes, or ~2.6 minutes. Weekend frequencies are lower, but I did not notate many of the weekend trips, because they were rarer and the “wait time” tickers are often shut off, making it harder to keep track of how long I wait. Overall, 62% of the time, I waited 2 minutes or less for the train. Not too bad!
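
For the skeptical, the back-of-envelope check, spelled out (assuming I arrive at the platform at a uniformly random time, so the expected wait is half the effective headway):

    # Effective headways when Ashmont and Braintree trains are interchangeable.
    rush_headway = 9 / 2      # 9 min per line at rush hour -> 4.5 min per train
    offpeak_headway = 12 / 2  # ~12 min per line off-peak   -> 6 min per train

    # Uniform arrival => expected wait is half the headway; trips split 50/50.
    expected_wait = 0.5 * (rush_headway / 2) + 0.5 * (offpeak_headway / 2)
    print(f"{expected_wait} minutes")  # 2.625, close to the observed 2.6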

 

My biggest enemy is my inability to finish projects that I start.  My professor just pointed this out to me recently, but I’ve known this my entire life.

Why does this happen?

The common theme is that I lose interest, but there are so many underlying reasons for this loss of attention, including:

1. The impetus for beginning the project is no longer relevant. For instance, the emotions behind the main theme of a song or a story no longer match how I am feeling.

2. My skills improve over the course of working on the project, and the initial work is no longer satisfying and would need to be redone to a higher standard.  This especially applies to visual arts, because my skill continues to change significantly day to day in contrast to my compositional style or writing style.

3. The project was too ambitious, and it turns out to be beyond my skill level to complete the project as originally envisioned.  Sometimes, time can correct this issue.

4. Something new comes along that is more interesting and distracts my attention.

5. The project involves a lot of repetitive actions which get boring.  This is particularly true of art projects, where tasks like drawing forests or large bodies of water can simply become tedious.

6. Completion anxiety.  I don’t know how to really describe this, but this is a fear of finishing a chapter of my life, of closing off a story thread and letting it slowly drift into the past.

Why is it so hard to go back and finish things?

The task of resuming an abandoned / on-hiatus project becomes more and more difficult with time.  In general, it’s easiest to resume a writing-only project, provided that a few pages have been written: first, the software doesn’t change rapidly, and text / document files are generally openable in a number of redundant programs; second, while writing style varies from story to story, the language and vocabulary are essentially the same; and third, I tend to leave enough character sketches and descriptions, and plot pointers, to suggest what needs to be done.

Resuming a musical project is much more difficult. In this case, there are frequently software compatibility issues. I have used Rhapsody, Finale 2003, Finale 2008, and Finale 2012 to compose, and in addition to limited backward compatibility, forward compatibility is also limited. Rhapsody files can be read in 2k3 and 2k8, but with incomplete information – tempo changes, dynamic changes, playback channels, and some transpositions (for transposing instruments) are lost – and they arrive in a Finale format with missing expression palettes. Combined with the need to reassign all channels and convert to VST playback, this can be almost as much work as starting input from scratch. Rhapsody files can’t be read into Finale 2012 at all. There are also stylistic issues: my musical preferences change rapidly, and I do not necessarily retain the skills for writing in a particular style after moving on. This makes for a high chance of a disjointed transition from old to new material, for instance in how the development of a theme unfolds.

Resuming an artistic project suffers primarily from rapid growth in skill.  Use of the wrong brush settings in an old artwork can be very frustrating to overcome, because touching up line-art is as difficult if not more difficult than creating new line-art.  Old poses and expressions can also be off on second glance, and rotating a face by 5 degrees in three dimensions is nearly impossible without complete re-painting.  While overall skill level improves, there may be missing information about past settings (e.g. the method used to create meshes in 3D, or a particular custom brush used to create an artistic effect or pattern) which prevents continuation of an incomplete portion of a larger region in the work.

Finally, resuming a scientific project is plagued with incomplete information – missing documentation of previous experiments with important data; experiments performed by people no longer in the lab; lack of reagents from the same batch, or even complete unavailability of the reagent altogether. It is often difficult to restart pipelines to plug holes that become apparent in hindsight. Use of proprietary software formats can make data impossible to retrieve and re-analyze if things were done improperly the first time around, and programming code can be hard to decipher if poorly commented or written by someone else.

What I want to do about it

Leaving masses of unfinished work takes a psychological toll – each “open file” requires a working memory of present status, and an internal to-do list provides automatic reminders.  This becomes unbearable when the magnitude gets too large and a significant portion of ‘brain cycles’ are consumed for reminders and maintaining a fresh memory of works in progress.  Flitting from one idea to the next means that nothing will ever get done, as attention and time become divided into unsustainably small quanta that aren’t large enough to make any progress.

The solution is to release such unfinished tasks, either by definitively abandoning them or completing them.  My priorities for finishing are the following:

Writing:

I have countless unfinished stories, but ones with only a page or two are generally unsalvageable – there’s not enough plot and characterization to go on. The two most important stories that I’d like to finish are, first, the episodic “La Petite Princesse,” a series of tales in the same style as “The Little Prince” but exploring modern allegories; and second, “Andromeda,” a story near and dear to my heart about a boy who wants to change, as told from the point of view of his twin sister.

Art:

Nisuna and Faxuda portrait in front of a tree (very nearly completed)

Andromeda and Irene’s portrait (3/4 completed)

Angel’s portrait (draw it again – 2/3 completed)

Music:

Trio for Flute, Violin, and Piano (3/4 movements completed)

Violin Concerto No. 60, 2nd movement (tutti and main solo theme written)

Games:

Tales of Graces f (now completed)

Final Fantasy XIII (final chapter)

Disgaea 4 (final stage, I believe)

Atelier Totori (final arc – building the ship to find Totori’s mother)

Scientific Projects:

Communicating nanoparticle project

Lung tissue engineering project

Anticoagulant nanoparticle project

You know the saying – “He’s worth his weight in salt.”  But salt’s not all that valuable these days, except to pizzerias and Chinese take-outs.  So what would you like to be worth your weight in?  Personally, I’d like to be worth my weight in chicken-derived monoclonal antibodies.

There’s a reason why we have different units of measure around, but I thought it’d be fun to even out the playing field, so to speak. So, how much does stuff cost per pound, anyway? And don’t complain about lbs. vs. kg – the orders of magnitude here make the 2.2 conversion factor a piddling quantity.

Item – unit price per pound
Water, Tap $0.00018
Acetaminophen, Tylenol Extra Strength, from Costco $0.04
Rice, commodity $0.12
Salt, Morton table salt, 25 lb. bag $0.31
Gasoline, average New England (@ 6.2 lbs./gal) $0.44
Juice, Orange (@ 8.3 lbs./gal) $0.95
Water, Aquafina 20 oz., vending machine $1.20
Silicon, metallurgical grade $1.45
Rice, Kokuho Rose 5 lb. bag $2.00
Copper $3.68
Ford F-150 $5.50
Book, American Heritage Dictionary $6.05
Jeans, Levi’s 501 $19.44
Maple Syrup, Butternut Mountain Farm $26.76
Playstation 3 $27.18
Kitten, adoption $50.00
Cigarettes, Marlboro, MA minimum price $143.77
Fetal Bovine Serum, Hyclone, 500 mL bottle $191.78
PVC figurine, Senjougahara Hitagi 1/8 scale $378.87
Printer ink, Canon CLI-8 black $488.57
iPhone 4, 32 GB $990.85
Violin, decent quality $10,000.00
Gold $19,697.42
Immunoglobulin, from human plasma $27,240.00
Cocaine, street retail $29,510.00
Cocaine, research grade free base $230,178.00
Diamond, natural, 1 carat, decent quality $22,700,000.00
Human Umbilical Vein Endothelial Cells, 500k ampoule $43,991,408.11
Lucentis (ranibizumab, anti-VEGF), wholesale $590,200,000.00
Anti-VEGF antibody from chicken, research grade $2,151,960,000.00
Antimatter $28,375,000,000,000.00
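
For items not sold by the pound, the normalization is simple unit algebra; a sketch of two of the conversions (the gasoline price is the ~$2.73/gal implied by the table’s $0.44/lb figure, and the diamond price of $10,000/carat is the rough assumption behind the table’s figure):

    # Convert a unit price to $/lb given the weight of one unit in pounds.
    def per_pound(price_per_unit, pounds_per_unit):
        return price_per_unit / pounds_per_unit

    # Gasoline at 6.2 lb/gal.
    print(per_pound(2.73, 6.2))            # ~$0.44 per lb

    # Diamond: 1 carat = 0.2 g; 453.6 g per lb.
    print(per_pound(10_000, 0.2 / 453.6))  # ~$22.7 million per lb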

From studying anatomy today … yeah, top part has a weird spot because the head was too small, so I just enlarged it.  Haha.  Pen smudges EVERYWHERE.  Why?  First Aid has glossy paper, that’s why.  Don’t blame me …

Whatever it takes to make studying "interesting" ....


A significant portion of my studying (which of course isn’t saying very much, since I can’t focus on studying very well with all these problems) is devoted to figuring out what words mean. Once you get a word, it often saves you the trouble of having to memorize a separate definition.

Today’s focus: “chole,” courtesy of respective Wikipedia articles and Dictionary.com etymologies.

“Chole” is the root meaning “bile.” Therefore, cholic acid is a component of bile. Cholesterol (literally the solid alcohol of bile) is so named because it can precipitate to form a particular type of gallstone (“cholelith”), found amidst the bile stored within the gallbladder.

The gallbladder is a bile-containing pouch and therefore takes the prefix “cholecyst-” (where a cyst is an enclosed sac: “cyst-” for the bladder, “cholecyst-” for the gallbladder). A cholecystectomy (cutting out of the bile-containing sac) is the surgical removal of the gallbladder. CCK, short for cholecystokinin (movement of the bile-containing sac), is a hormone produced by duodenal and jejunal I cells that is involved in bile let-down from the gallbladder as well as pancreatic enzyme secretion.

However, “cholic” (pertaining to bile) has nothing to do with “colic” (pertaining to the colon). When a person has “colicky” pain, it relates to pain of the colon, not to bile acid.

As an aside, the punctuation mark called the colon (:) is derived from Greek kōlon (with an omega), while the organ called the colon is derived from Greek kolon (with an omicron). So, they do not have a common etymological root.

The most familiar solution to the oxygen-transport problem in larger organisms is hemoglobin, a four-subunit protein (usually two pairs of polypeptides) that utilizes porphyrin heme rings, each containing a single iron atom. However, the use of hemoglobin carried within cell carriers (erythrocytes) is limited to vertebrates. Invertebrates have independently developed different oxygen-transporting proteins.

Hemocyanin

Mollusks (e.g. octopus) and some arthropods (e.g. horseshoe crabs) use copper rather than iron to carry oxygen, incorporated into a protein of the hemocyanin family. Hemocyanin, which uses two copper atoms to bind each O2 molecule, is dissolved in the hemolymph (blood analogue). Hemocyanin is translucent gray when deoxygenated and sky blue when oxygenated. Although hemocyanin is generally less efficient than hemoglobin, it is advantageous in certain environments unique to the underwater milieu. Hemocyanin, unlike hemoglobin, is designed to aggregate. When hemoglobin subunits aggregate, for instance in the thalassemias, it is disastrous because the erythrocytes containing the aggregates are destroyed. However, hemocyanin (MW ~400 kDa) can form aggregates in the millions of daltons (see the Keyhole Limpet Hemocyanin (KLH) article on Wikipedia).

Erythrocruorin

Annelids such as earthworms present another solution: erythrocruorin, which, like hemoglobin, carries oxygen using heme with a single iron atom per subunit. Though erythrocruorin is likewise not enclosed in cells, it is notable for forming exquisite protein structures composed of 180 subunits, with a macrostructure consisting of two stacked hexagonal rings (see http://www.uta.edu/biology/arnott/classnotes/5365/Erythrocruorin%20micrograph.jpg). The total molecular weight of each complete complex is 3.5 megadaltons, which includes 144 “hemoglobin” subunits and 36 linker subunits. The PNAS paper publishing the structure posits that the giant size is a way of maintaining high oxygen tension and harnessing cooperativity while incurring minimal osmotic cost. What is amazing in this case is the fidelity of protein assembly on such a grand scale.
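
The osmotic argument can be made concrete: by van ’t Hoff, osmotic pressure scales with the number of dissolved particles rather than their size, so packaging 144 oxygen-binding subunits into one complex buys the same carrying capacity at a fraction of the particle count. A rough sketch:

    # Osmotic pressure (van 't Hoff) ~ number of particles in solution, so
    # compare particle counts needed for the same number of O2-binding sites.
    binding_sites = 144   # "hemoglobin" subunits per assembled complex

    particles_if_free = binding_sites  # each subunit dissolved separately
    particles_if_assembled = 1         # one 3.5 MDa complex

    ratio = particles_if_free / particles_if_assembled
    print(f"osmotic cost ratio: {ratio:.0f}x")  # 144x fewer particles per site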

Hemerythrin

Hemerythrin uses iron to carry oxygen, like hemoglobin, but uses two iron atoms per O2 and does not contain a heme ring. Used by various marine invertebrates, hemerythrin is colorless when deoxygenated and violet-pink when oxygenated. Each hemerythrin subunit is composed of four alpha helices, and the subunits typically assemble into octamers (smaller oligomers occur in some species). While hemerythrin does not exhibit cooperativity, it does exhibit a greater affinity for oxygen than for carbon monoxide.

Beyond nature

A Nature article from earlier this year detailed a UPenn group’s bioengineering feat of devising a new oxygen-carrying protein based on design principles. Apart from the heme groups, the rest of the protein was invented through rational design. The resulting molecule, which has the advantage of delivering oxygen faithfully in the presence of CO (unlike human hemoglobin), shows that the “hemoglobin fold” is far from mandatory.

This paragraph just flung itself out of my mind, and although it sits oddly in the paper, I think I like it.

Metastasis is a daunting subject, probably because of its immense scope, which requires the expertise of so many different branches of biology – and because, despite valiant efforts, it is hard to conceive of a reasonable method of treating metastatic disease. As horrific as classical warfare was, it was at least “winnable”; the latter half of the last century saw the rise of guerrilla warfare and terrorism, blurring the lines between civilian and military and making millennia of military tactics and technological extravagance decidedly irrelevant. The parallel between this new battlefield, which knows neither boundaries nor conventional rules, and the campaigns to halt and cure metastatic cancer is considerable. In both cases, our capacity to destroy far outstrips our capacity to renew; given that curing metastasis through the death of the patient is highly unacceptable, it is clear that any solution must diplomatically engage both destruction and renewal, thus requiring the full knowledge of life as we do and do not understand it, from embryogenesis to apoptosis and necrosis.

The entire body of human knowledge has long since surpassed the point where any one person could ever hope to understand any significant proportion of it. This is apparent from the greater and greater levels of specialization in people’s everyday lives, despite surface appearances that the general populace is becoming more well-rounded in its knowledge. The truth is, individuals do not know much more (in quantity) than people ever have; the knowledge gained of new technologies and the like replaces knowledge that is perhaps less useful. What passes as broad knowledge is actually access to that knowledge, not possession of it: humans are learning better methods of archiving and subsequently finding information, rather than better methods of retaining it within the mind.

The externalization of knowledge is hardly a new innovation. As I have often mentioned in conversations, I believe that evolution works in self-similar stages – think of the zooming-out sequence in Men in Black or, if you’ve seen it, the narrated preview for the upcoming game “Spore.” Basically, single cells, each of which used to know everything about day-to-day cell life, came together in a cooperative society, and the repository of knowledge turned into the ganglion or brain of the larger animal. The ganglion cells themselves – the brain cells – have no intrinsic knowledge of the knowledge that they store; the other cells access and use this information through signals, but they do not hold the knowledge within themselves.

Likewise, multi-organismal society is now at a point where it can no longer be like a brainless jellyfish. Large repositories of knowledge are arising naturally – libraries, succeeded now by the larger internet. Knowledge is not only a sitting body but a dynamic conversation, one that exists on a time-scale and size-scale so large that people may view it as a fixed entity.

The current methods of information exchange are very nice, but there is one fatal flaw: no meta-analysis happens outside of our own selves, the fundamental “cells” of the organism of humankind. The genius of human existence is the ability to take the “wikipedia” of inputs from all five senses, compounded over many years, and distill out higher-level conclusions and theories.

Current knowledge databases such as, say, PubMed or ISI, which compile more research than any person could ever hope to even click on, not to mention read or understand, are rapidly becoming unwieldy. In the rush to create knowledge, there is not enough sustained effort to remold it. I am confident that many secrets and patterns of humanity and human disease already have enough research behind them, if only that research were combined effectively and the correct connections made. And if the published literature is insufficient, then it is the combined knowledge and observations of the researchers themselves that would hold the answers. The “scientific hero” model dating from just a century ago, and epitomized by the Nobel Prize, is completely out of date, and the gradual lifting of the proprietary attitude towards science – through the greater availability of full-text publications, wikis, copyleft / free software, and wide-ranging collaboration – confirms that the new era of knowledge will be built not by forefathers upon marble pedestals but by the average Joe.

Who, then, will be charged with the requisite meta-analysis that I alluded to earlier? In my belief, the entity to serve such a function is none other than a computer. Robotics has its triumphs in automated arms and belts that power current manufacturing by rapidly and accurately processing raw materials towards the production of just about every product – this is the only way production has been able to keep up with design and demand. The corresponding state-of-the-art for data which exists virtually rather than physically (virtual information being both encoded in computers and in human minds) is basically only indexing and searching. Wikipedia does not, to my knowledge, try to sift through its cross-references to discover the meaning of human existence or better ways for physics to inform biology or evaluate the best system of government. But the data is already there! For a human being, it takes so many years to write a single dissertation which looks upon a sliver of the pie of knowledge, and in turn is read by only a sliver of the people who ought to be reading it.

What is needed for this project is not artificial intelligence that mimics human thought, per se – the brain does not think in the manner of cells. What is needed is a new paradigm of thought, one that is simpler yet more powerful than human thought. Whereas cells are concerned with the minutiae of particles, neighboring cells, and fluid flows, the brain ignores most of that and considers the hunger level of the entire body, the status of sheep in the meadow, which none of the cells knows about, and the relative attractiveness of members of the opposite gender, which cells certainly would not understand at all. Not just the scale but the very nature of these thoughts transcends the capabilities of any individual contributor of the knowledge.

The idea of a “brain” for an entire species has been entertained many times before, but usually in the context of some dominating hive-queen. Nothing in the description above suggests that such a central unit would ever have to be dominant in function, nor that it even has to be “alive” in the way we understand it. It is just that, now with humanity trying to deal with problems spanning the entire body of its billions of people and the Earth as a whole, there must be a better way of thinking globally than using our feeble minds, which have yielded brazenly useless solutions such as the recent agreement to cut emissions by 50% by 2050 (what human can conceive of 42 years of future events, and why is only one small part of the problem being addressed?). Without a guiding mind that can at least put together the crises in fish, bees, the atmosphere, forests, rivers, trash, toxic chemicals, radiation, soil quality, and so forth in a meaningful way, how will the environmental issue ever be tackled effectively? All of these issues are interconnected, but people only become interested in them one at a time, or in all of them with no particular plan or comprehensive understanding. Synthesizing the next layer of knowledge is probably the only way the human race can make peace with the world and with itself.

Yes, the original creator of some work or technology should be protected from other people trying to profit from non-collaborative, unapproved use of that creator’s work. But that being said, I think that the extremes of so-called “intellectual property” in the modern Western society are bizarre and inappropriate.

Unlike my frying pan, which I rightfully own through the trade of money and which represents a tool and not some link to society or the greater human consciousness, my ideas – my songs, drawings, whatever – are not simply purchased through my earnings in work or service. I would not be much of a composer had I not studied Bach and Brahms and Schoenberg. I would not be much of a cartoonist without having been exposed to the work of Kiyohiko Azuma or Rumiko Takahashi.

It is arrogant to think that ideas, which stem from the amalgamation and then fusion and recombination of thought and inspiration from past and present, are so easily demarcated and tangible that they may be analogous to a frying pan. This is as absurd as the idea that a human can “own” a cat. Ideas and living things – and they are largely the same thing – owe their existence to a phenomenal amount of sources. In an ideal situation, they belong to everybody.

It is because there are thieves in this world who seek to counterfeit – people who did not contribute to the idea at all – that we have to have laws about copyright and patents. That is all well and good; I certainly would not want someone selling my music under his or her name for his or her profit. But nothing should be so absolute as to hamper the progression of knowledge and its applications to human benefit, especially with regard to technologies that were not all that innovative in the first place.

As an example of this grotesque culture of greed, in which people who cannot conceive of any metaphysical significance to objects attempt to claim ownership over what was never theirs in the first place, consider the Bristol-Myers Squibb taxol fiasco. BMS strongly opposed the entrance of generics into the taxol market. Broadly speaking, taxol was discovered by the US government and belongs to the US people. BMS attempted to patent taxol as delivered with castor oil as an injection. Then, it sued a Canadian generic company (after the 5-year exclusivity period) with a patent it never had and never will have.

All of this for a compound that was never created from human intellect. It is absurd to think of paying the yew trees for it, but if that’s so, then it’s even more absurd to pay humans for the idea (paying for the object is of course reasonable, as it costs money to extract and/or to produce). If anyone owns a patent on taxol, it’s God or Gaia or Mother Earth.

Claiming that taxol is your “intellectual property” is tantamount to the Europeans landing in America and claiming that all the land belonged to them.

Art is the same way, and I intended a long time ago to write about derivative works (fanart, doujinshi, etc.), but I never got around to finishing that entry. An art is, generally speaking, passed down, directly or indirectly, from some master to some apprentice. There is no artist I know of who can create works without (a) having things around to look at, (b) training through classes or self-teaching, or usually both. Both of these are acts of absorbing outside influence.

Human creation comes from two stages: first, the acquisition of source data, then second, inspiration and transformation of those sources into a final product. No creation and no genius exists without precedent. Here are two cases to underline my point:

1. Takashi Murakami of “superflat” fame created the petal-y design for Louis Vuitton (which I happen to like a lot), but he is hardly the first to use four-petal motifs in art design. Louis Vuitton is foolish to sue the artist Nadia Plesner over her not-for-profit fundraising t-shirt logo, which neither promotes counterfeiting (no more so than Evangelion’s famous “Eila” in place of “Fila”) nor resembles the LV design any more than Takashi Murakami’s own art draws on Doraemon and Ghibli.

2. Bach was astounding, but he had Palestrina before him, and polyphony dates back centuries farther, to the days when all harmony was chant in parallel fifths. That Mendelssohn could freely quote Handel, and that “a theme by Paganini” became a virtuoso piece for piano, of all ironies – that is a testament to the importance of what amounts to “fanfiction” in the musical world. These days, it is the absurd vogue that you can quote dead people’s music but not living people’s. Quoting anyone straight up is just unintelligent, no matter how you look at it. But the worst is when people claim sole copyright on works that include quotes. The notes belong to no one: a computer given sufficient time could permute notes to produce the melodies of all songs ever written with the twelve Western tones, and it certainly should not thereby own exclusive rights to their use. Heck, I could do that, too, and technically copyright all not-previously-used melodies. And lawsuits based on copying chord progressions – well, I’m not even going to go there.
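
The permutation claim is literal: ignoring rhythm and octave, there are only 12^n melodies of length n over the twelve pitch classes, and a loop can enumerate every one of them. A toy sketch:

    from itertools import product

    PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                     "F#", "G", "G#", "A", "A#", "B"]

    def all_melodies(length):
        """Yield every melody of the given length over the 12 Western tones
        (pitch classes only; rhythm and octave ignored)."""
        return product(PITCH_CLASSES, repeat=length)

    print(sum(1 for _ in all_melodies(3)))  # 12**3 = 1728 three-note motifs
    # At length 8 that's 12**8, about 430 million -- every famous motif included.
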
All in all, I think that the whole framework of IP and copyright as it stands is outdated and needs to be re-examined. It belongs to an era of obsessive possession that runs contrary to the modern themes of inclusivity, cultural awareness, and the trading of ideas. Sites such as Wikipedia have taken an important step by championing the person who contributes work without needing to bask in the glory of one-man/one-woman heroism.

The 21st century is about global civilization and the power of multidisciplinary collaboration. It is about time to change our conception of the very nature of ideas to catch up to the post-Imperialist, post-world dominance, post-megacorporation society.