A pop-sci article recently mentioned that only a handful of genes encode the entire brain, which develops as a more or less random mass of roughly identical neurons.

Suppose that a particular clump of neurons represents a particular concept (just as a model; it probably isn’t literally true). As an isolated idea, it is completely useless. This is analogous to my dictionaries, which hold a lot of facts. But as a great blogger pointed out recently, a user manual is useless if it only lists functions: one must know why one would want to use those functions. In the same way, the facts need a context, a purpose for being accessed.

So, probably the key to linking up the ideas properly is to “wire” them. The problem with computer-style hierarchical data structures (think folders and subfolders) is that they establish a tiered order that is an artificial construct. Real data is hard to store in such a fashion.

Each concept may have multiple connections and also may be part of a circular loop or whatever. The “neuron net” is clearly the solution to this problem: it allows for logical connections without the restrictions of hierarchy. No one node in the web is “more important” in the sense that it is at the “top” of the logical tree. Instead, nodes may be important due to having many connections, but all nodes are fundamentally equal.

I have not yet totally decided on how to store this “map” as data, but what I will probably end up storing is not the nodes themselves, but the lines that connect them. That is, the “somas” of the neurons are not what matter; the axons and dendrites are. I need four pieces of data for each connection:

1. The start node.

2. The end node.

3. The distance between the nodes.

4. The nature of the connection.

In real life, the “nature” may be as simple as excitatory vs. inhibitory.

In my proposed net, connections come in three basic types: positive (excitatory), negative (inhibitory), or neutral. Positive and negative connections carry one of three grades of strength: high (greater than), medium (equal to), and low (less than). Complex relationships are created by wiring the destination node both to the start node and to a verb-hub.
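As a minimal sketch of what one such connection record might look like, assuming the four fields above plus the type/strength grades (the names here, like Connection and NATURES, are my own placeholders, not settled design):

```python
from dataclasses import dataclass

# Hypothetical encoding of the connection types from the text:
# positive (excitatory), negative (inhibitory), neutral.
NATURES = {"positive": +1, "negative": -1, "neutral": 0}
STRENGTHS = ("high", "medium", "low")  # only meaningful for +/- connections

@dataclass
class Connection:
    start: str                 # start node (a concept)
    end: str                   # end node (a concept)
    distance: float            # how "far apart" the concepts are
    nature: str                # "positive", "negative", or "neutral"
    strength: str = "medium"   # grade for positive/negative connections

# One "axon" in the web:
c = Connection("ice", "cold", distance=1.0, nature="positive", strength="high")
```

Storing only these records, rather than the nodes, matches the idea that the axons and dendrites, not the somas, carry the structure.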

I’m not totally set on how to implement the “complex” wiring yet. In any case, the mass of connections would be stored in “ganglia,” which are self-organizing subdivisions of the larger factual database. The idea behind the ganglia is to segregate connections that are “close” to each other into small communities of ideas that are likely to be accessed together (e.g., keyboard, typing, keys, space bar; or green, blue, yellow, red, orange).
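One crude first approximation of self-organizing ganglia, assuming nothing beyond the connection list itself, is to group connections whose nodes touch into connected components (real self-organization would presumably also use distance; build_ganglia is a name I made up):

```python
from collections import defaultdict

def build_ganglia(connections):
    """Group nodes into 'ganglia': connected components of the web.
    connections: iterable of (start, end) pairs -> list of node sets."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in connections:
        union(a, b)

    groups = defaultdict(set)
    for node in parent:
        groups[find(node)].add(node)
    return list(groups.values())

webs = build_ganglia([("keyboard", "typing"), ("typing", "keys"),
                      ("green", "blue"), ("blue", "red")])
# two ganglia: the keyboard community and the color community
```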

The goal of this section of Caroline is to replace the vaguely defined previous scheme of looking up every word in an input sentence and then amassing a list of related topics from the words’ definitions themselves. Instead, Caroline would use this web of facts as a faster and more lifelike alternative. The intersections of the probes will provide the idea of a “context.”

For example, suppose a sentence contains the words “ice” and “melt.” Probing, say, three connections deep (that is, collecting connections from the web that are at most three nodes away, a modest list), certain “hits” would be more numerous than others. I would imagine that “water” and the negatively related “hot” and “cold” would show up in each of the probes. This would be the easiest way to establish that the context of “ice” relates to its property of being water, and to something about the ambient temperature.
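The probing above can be sketched as a breadth-first search from each input word, intersecting the results (the toy web and the names probe/context are my own illustration, with the web simplified to an undirected adjacency dict):

```python
from collections import deque

def probe(web, start, depth=3):
    """Collect every node at most `depth` hops from `start`."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue
        for nxt in web.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return seen - {start}

def context(web, words, depth=3):
    """Nodes hit by every probe: the shared 'context'."""
    probes = [probe(web, w, depth) for w in words]
    return set.intersection(*probes) if probes else set()

web = {
    "ice":   ["water", "cold"],
    "melt":  ["water", "hot"],
    "water": ["ice", "melt"],
    "cold":  ["ice", "hot"],
    "hot":   ["melt", "cold"],
}
print(context(web, ["ice", "melt"], depth=2))
# the shared context: water, cold, hot
```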

The “basic brain” starts out clear, that is, it has no connections. It is wired based on experience. One way to “train” Caroline may be simply to say a lot of sentences to her. Words that keep showing up together will automatically be added as connections. If this method is pursued, then “distance” will start at a default high value and decrease as Caroline grows confident that there really is a connection between the words. Obviously, prepositions and other simple words must be treated differently. I would probably begin with just nouns, adjectives, and verbs.
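This co-occurrence training could be sketched like so, assuming sentences are already filtered down to content words; the start distance, floor, and decay rate are all made-up parameters:

```python
from itertools import combinations

START_DISTANCE = 10.0  # default high distance for a brand-new connection
DECAY = 0.8            # each co-occurrence shrinks the distance by 20%
MIN_DISTANCE = 1.0     # confidence floor

def train(distances, content_words):
    """Strengthen (shorten) the connection between every pair of words
    that co-occur in one sentence. distances: {frozenset({a,b}): float}."""
    for a, b in combinations(sorted(set(content_words)), 2):
        key = frozenset((a, b))
        d = distances.get(key, START_DISTANCE)
        distances[key] = max(MIN_DISTANCE, d * DECAY)
    return distances

d = {}
for sentence in (["ice", "melt", "water"], ["ice", "water"], ["ice", "water"]):
    train(d, sentence)
# "ice"/"water" co-occurred three times, so that distance is now shortest
```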

One beautiful thing about the web system is that it might one day replace my dictionaries altogether. That is, the fact that a tree is green does not have to be defined in a structured dictionary with {'tree': {'attributes': {'color': 'green'}}} but rather just as ('tree', 'green', 1) and ('green', 'color', 1).
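A tiny sketch of how those bare triples could answer the same questions as the nested dictionary (the 3-tuple shape comes from the post; the related helper is my own invention):

```python
# Facts as flat (start, end, distance) triples instead of nested dicts.
facts = [("tree", "green", 1), ("green", "color", 1)]

def related(facts, node):
    """Everything directly connected to `node`, in either direction."""
    return [b for a, b, _ in facts if a == node] + \
           [a for a, b, _ in facts if b == node]

print(related(facts, "green"))  # -> ['color', 'tree']
```

Following the chain tree -> green -> color recovers the old structured fact ("the color of a tree is green") without any fixed schema.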

The human brain is amazing because it seems to rely only on billions of cells of almost identical build, with exactly the same DNA, and without any external organizer. That is, the brain is the “boss,” but the brain itself has no boss – each neuron does its own thing, making its own connections, dying if it is not important, and somehow, that autonomous action on the large scale produces a highly structured data environment that comprises “intelligence.” The “miracle” I always talk about is still a mystery to me …

I understand now, after writing this, why connections establish knowledge and comprehension. However, what I still don’t understand is how the data has any meaning at all. The data in a computer is of course as simple as the brain’s: just 0s and 1s. But the computer has particular rules for interpreting those binary digits: ASCII, for one, and the low-level functions, etc. But what are the rules for interpreting the brain’s data, which, for one thing, is a web, not a structured set of files? The fact that it is a web means that it has no beginning and no end; there are certainly areas for speech or for music or whatever, but where is the actual *data*? And what about connections that I suddenly make, that I didn’t have before?

I’ll try to implement a basic version of this web tomorrow evening, but time is really scarce. I need to practice a lot of violin these days to catch up on a LONG time without practicing.

I wonder … are the things I’m talking about in these notes as interesting as I think they are … or am I just stating common knowledge?
