You should take the following with a grain of salt, because it is not (yet) supported by experimental evidence, but I state it, because it is the only possibility I can see given the theoretical evidence.
What is likely needed is the complete set of RNA sequences in all the cells of the brain, most importantly the synaptic RNA, because this is where all the actual computation is happening. Not the RNA for making proteins, not the RNA for regulating them, but RNA that is used purely for thinking.
This RNA must be linked up to the electrochemical network to produce different pulses for different sequences, and it must read out the electrochemical pulses and convert them to sequence. The result is that each cell is an RNA computer with gigabytes of RAM, linked by a 3000 baud modem to a few thousand other cells, which read out the sequence of inputs and link the computations together.
This model is new, it is an original idea, you won't read about it anywhere else.
In order to make this work, you need a protein-RNA complex which is sensitive to action potentials and transcribes neuron signals directly into sequence. It also requires an intracellular RNA-RNA computation, but this is required for other reasons.
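To see how little is needed on the information side, here is a toy sketch (the packing scheme is made up for illustration; the biochemical coupling mechanism is exactly the undiscovered part) of converting a pulse train into a nucleotide sequence and back:

```python
BASES = "ACGU"                      # 2 bits per base, a made-up packing

def pulses_to_sequence(pulses):
    pairs = zip(pulses[0::2], pulses[1::2])            # pack pairs of pulses into one base each
    return "".join(BASES[2 * a + b] for a, b in pairs)

def sequence_to_pulses(seq):
    out = []
    for base in seq:
        i = BASES.index(base)
        out += [i >> 1, i & 1]
    return out

spikes = [1, 0, 0, 1, 1, 1, 0, 0]
rna = pulses_to_sequence(spikes)
print(rna, sequence_to_pulses(rna) == spikes)          # GCUA True
```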
The result of this mess is that the connections are only sufficient for simulating the communication overhead of the computation; they are missing the bulk of it. The bulk is on the order of 1 gigabyte per cell, and there are 300 billion cells, so the total is staggering, far, far beyond any current machine.
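As rough arithmetic on the figures above (both numbers are the estimates of this argument, not measured values, and I treat 3000 baud as roughly 3000 bits per second):

```python
cells = 300e9                  # the cell count quoted above
per_cell_bytes = 1e9           # ~1 gigabyte of RNA memory per cell, as estimated above
total_bytes = cells * per_cell_bytes
print(f"total: {total_bytes:.0e} bytes, about {total_bytes / 1e18:.0f} exabytes")

# a ~3000 baud link moves roughly 375 bytes per second, so shipping one cell's
# gigabyte over a single connection would take about a month:
seconds = per_cell_bytes / (3000 / 8)
print(f"{seconds / 86400:.0f} days to transmit one cell's memory over one link")
```

This is the sense in which the connections can carry the communication overhead but not the computation itself.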
The reason to predict this (it is a prediction, this hypothesis is not supported by direct evidence) is that brains are able to store memories, initiate coordinated action potential sequences, and their processing speeds are not consistent with the overhead processing speed of the communication between cells. The cellular-level models limit the brain's active memory to a number of bits equal to the number of cells, giving C. elegans a memory capacity of 300 bits, which is ludicrous. For Drosophila, it's 100,000 bits, still ludicrous. There is no way to explain why it is ludicrous without getting an intuition for what a 300-bit computer can do, so I can't go further, except to urge the reader to experiment with a 300-bit machine until the full range of its behavior is understood (it doesn't take long), and see that it doesn't do anything; it's less than one cell nucleus.
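For a feel of the number, a toy comparison (using the figure quoted above, and the well-known fact that an ordinary mRNA is on the order of a thousand nucleotides long):

```python
# the total state of the "cellular level" model of C. elegans: one bit per neuron
N_CELLS = 300
print(f"{N_CELLS} bits = {N_CELLS // 8} bytes of total machine state")

# a single ordinary mRNA of ~1,000 nucleotides (2 bits per base) already carries
# more sequence information than this entire model of the worm's brain
one_mrna_bits = 1000 * 2
print(f"one ~1,000-nt RNA: {one_mrna_bits} bits  vs  whole-worm model: {N_CELLS} bits")
```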
"Synaptic RNA"? the "RNA that is used purely for thinking"?
Yes, synaptic RNA is known to exist; the brain is full of RNA, that's the dominant biochemical component, and you can read a review by Mattick from 2010 for more detail. In your field, you misinterpret it in a silly way, thinking its only job is making protein. As the British say, "bollocks".
I am saying that this RNA is directly coupled to the neural signalling; this RNA is the main information carrier in the brain, and this is what is used to store the thinking. This is a prediction, and a somewhat original one, so dude, you really should dig it (fat chance).
There were a few people who said similar things in the 1950s. I didn't know about them until very recently. I didn't even read Mattick's review of RNA in the brain until after formulating this (in conversation with my neuroscientist brother, who had the major inspiration to ask whether RNA is active in the brain).
Keep thinking this way and you will end up at panpsychism. Suppose you are correct? Now can't we go one step further and start modeling the individual molecules inside the RNA and their points of chemical interaction with other RNA molecules?
And if you are at the level of using individual RNA molecules as the base level of computation, wouldn't timing be feasible as an input as well? At this point we are so low level that quantum effects are relevant. And the brain itself is modifying the very system it is a part of, so now we have a chaotic system of differential equations taking inputs from the entire universe around it. That includes every quantum fluctuation, gravitational fields, and cosmic radiation from galaxies far away. Where exactly is the line between thinking matter and not?
RNA is a molecule, and it is the smallest level at which you can store information stably. Lower than this, the information is washed out by the noise. Chaotic systems are not able to store information, they are chaotic, they randomize it.
The exact line between thinking matter and not is where the bits describing the matter include some that don't randomize, that can store information stably and compute with it. The RNA level is all there is; you can't store it anywhere else, not even in proteins. It must be RNA, because this has the right information density to predict the effects, but it absolutely requires a new, undiscovered mechanism to link RNA directly to neural spikes, and this is not something to philosophise about, it's something to go into a lab and look for.
I should point out that this prediction is heckled within neuroscience; nevertheless, it is certainly correct, and such a mechanism will necessarily be found in the next decade or so.
You do realize the logical abstraction of a computer isn't actually real, right?
Say you handed an advanced alien civilization your computer, powered on, as it is right now. The system is booted and energized, but the CPU clock is frozen and not allowed to execute another cycle. How would they go about discovering what exactly is going on inside? (Assume they have no access whatsoever to any other information about our technology, but they do have access to any other theoretical means of analysis consistent with the laws of physics.)
If computers are "real", that means the aliens should be able to predict exactly what will happen next when the clock is resumed. I don't think this is actually possible.
I don't understand what you are saying: the information in RNA is real in the sense that you can measure it easily--- it's the sequence in the RNA, and we have machines to read it off. Its transformations are real: they are cutting and splicing, or editing, or complementary binding and unbinding, in the presence of proteins.
The aliens can trace the circuitry easily and figure out what transformation goes from one clock to the next, this is the instruction set. They can then simulate this computer on another one. Yes, they can tell exactly what will happen next, it's simple if you know how the transistors are arranged on the microchip.
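In the spirit of that claim, a toy sketch (the circuit is made up; the point is only that the wiring alone fixes the clock-to-clock transformation):

```python
# a 2-bit synchronous counter described purely by its "wiring": which gates feed
# which flip-flops. Knowing the arrangement is enough to compute exactly what
# the next clock tick does.
def next_state(q1, q0):
    d0 = 1 - q0          # q0 toggles every cycle (a NOT gate)
    d1 = q1 ^ q0         # q1 toggles whenever q0 is 1 (an XOR gate)
    return d1, d0

state = (0, 0)
for tick in range(5):
    print(f"tick {tick}: q1 q0 = {state[0]} {state[1]}")
    state = next_state(*state)
```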
I can't understand your position at all.
Not just the transistors, I mean the running computer. Your currently running browser, everything. And there is no other computer for them to compare it to. What machine can scan the arrangement of molecules sitting on your desk and see a data structure in action or output the source code for Microsoft Office?
Point being the line between chaos and repeatable data is always dependent on what level of abstraction the observer is coming from.
You are seeing computation in RNA and I applaud this. But why do you now draw the line and say that it goes no further?
Any machine that can look at the transistors and see where the charge is. It's not rocket science; the computer itself notices where the charge is.
But I know what you are talking about now--- the data that is "computing data" vs. the data that is "structural data" or "irrelevant data" is defined in a teleological way that involves the future behavior at long times.
I have a precise definition for separating the two, in terms of the information that can be stably measured at long times by another system which interacts with the computer.
I am not "seeing" computation in RNA--- I am predicting computation in RNA! RNA can't compute without specific biochemical mechanisms that are required to be discovered in order to transform the information in the sequence reliably. The reason one does not go further is that you can't actually couple any of the other physical bits to a computation that can then communicate with the measurement at long times of the state of the system. There are bits that are important and bits you can throw away. In the case of a brain cell, you can throw everything away except the RNA and a miniscule (in comparison) set of other stuff that does the electrochemistry.
I need to add that the core of my argument that computers aren't real is what makes them "real enough" for us to communicate with them right now. And here your transistor reference is relevant.
If I give you millions and millions of transistors, you don't have a computer. You only have a computer if those transistors can be forced to pass some sort of unified state between themselves. They must march to the same drummer. This is the clock, which is a quartz oscillator. Because we can count on quartz to oscillate continuously at a fixed frequency, and can propagate this clock signal through circuits in a manner that makes things happen in order, we now have a way to reproduce the *logical* construct of a running computer.
In other words, there really is no digital data in the universe, unless at the lowest level it all is. Everything we call digital is actually the experience of chaotic analog systems, split by thresholds and yoked to a clock signal.
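A minimal sketch of that picture, with made-up voltage levels: a noisy analog signal, sampled only on clock edges and split by a threshold, yields a clean bit stream.

```python
import random

random.seed(0)
V_HIGH, V_LOW, THRESHOLD = 3.3, 0.0, 1.65       # illustrative logic levels

def analog(level):
    return level + random.gauss(0, 0.2)          # the physical signal is analog and noisy

intended = [1, 0, 1, 1, 0, 0, 1]
sampled_on_clock = [analog(V_HIGH if bit else V_LOW) for bit in intended]   # read only at clock edges
recovered = [1 if v > THRESHOLD else 0 for v in sampled_on_clock]           # split by the threshold
print(recovered == intended, recovered)
```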
This is just plain false--- RNA is digital data that operates by binding other RNA and transforming this data into some other stuff, for example the translation of protein. The data is purely digital: there is a sequence of nucleotides which determines the behavior.
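Complementary binding, for instance, really is an operation on the digital sequence; here is a toy check (the sequences are made up, and real binding tolerates mismatches and bulges):

```python
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}   # Watson-Crick pairing

def reverse_complement(seq):
    return "".join(PAIR[b] for b in reversed(seq))

def binds(a, b):
    return b == reverse_complement(a)             # perfect-duplex check only

probe = "GACUGGA"
print(binds(probe, "UCCAGUC"))   # True: exact complement
print(binds(probe, "UCCAGUU"))   # False: one mismatch
```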
This is a surprise, that digital data can self-organize into a coherent computation without artificial design. But it is true, because we see biological cells, and this is exactly what they do.
It is also theoretically understood, because computations can spontaneously organize in cellular automata, and will spontaneously organize from a discrete collection of bits in a continuous physical system with appropriate (but not super-specially chosen) interactions. The bits that are chaotically mixed don't matter, the bits that are permanently frozen or locked in cycles don't matter either. The only bits that matter are those that transform regularly and stably depending on the values of other bits, and there is a procedure for extracting this using the long-time limit and an external measuring agent.
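A minimal sketch of the cellular-automaton point: Rule 110 (known to support universal computation) run from a random row. Nothing here is designed; the gliders that carry information organize themselves out of the fixed local rule.

```python
import random

RULE, WIDTH, STEPS = 110, 64, 30
random.seed(42)
row = [random.randint(0, 1) for _ in range(WIDTH)]

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    # each cell's next value is read off the rule number, indexed by its 3-cell neighbourhood
    row = [(RULE >> ((row[(i - 1) % WIDTH] << 2) | (row[i] << 1) | row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]
```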
The stuff you are getting hung up on is philosophy. I don't care about philosophy. I am trying to explain how the brain works.
OK, but if RNA is just digital data that can spontaneously self-organize and compute, then so are the molecules that make it up. And the quantum particles that make those up.
Anyways, if this were somehow to get back to the OP's question I guess we would have to define how good a model of the brain we want.
This is just not true. Not everything is digital data that can compute, just RNA and, to a lesser degree, proteins (about 1/10 or 1/100 the information density, but they still compute). There are no "molecules that make it up"; RNA is a molecule! There's nothing smaller. The atoms are thermally mixed, they can't store data, and the subatomic particles are frozen in a ground state, so they can't store data either.
I believe the model I am giving is 100% complete: it includes every computationally relevant bit of memory and processing in a brain. It differs from the standard model by 10 orders of magnitude in complexity.
Sorry, what I meant by molecules was atoms in the RNA.
Sort of.
Except that RNA is so complex that if you want to model *exactly when* any given reaction will take place, you need to model how all those atoms are positioned. And time does matter. It is more information. Way more. Not to mention protein folding as spatial computation.
And looking back at the whole thread, the reason I got into all this philosophical stuff was actually your original answer to the question. The philosophical subtext is apparent.
You don't need any of these bits, they are computationally irrelevant. There is zero information in the atoms, it is washed out by noise. There is zero information in the core electrons, nuclei, or subatomic particles, these are frozen in their ground state.
I am not doing philosophy, I am trying to identify the computation in the brain. I am a positivist, so I just have to define how to extract the bits that are computing.
What "computationally irrelevant" means precisely is that the randomness is the system makes it that whatever you choose for the initial value of these bits, the probability distribution for the final bits after a certain amount of time is exactly the same.
The only bits which are not randomized away are the sequence bits, the complementary binding bits (describing the secondary structure, i.e. which RNA is bound to which other RNA), and a very small number of position bits which are usually unimportant.
The number of these bits is small, because the processes randomize. There is a precise definition for this:
Consider a set of bits with stochastic evolution. If you have two configurations, you can define the "overlap" of the two configurations using a coupling walk: let the two configurations of bits evolve in time, and if they coincidentally end up the same, then they are the same from that point onward, as if they are stuck together when they meet.
Then the number of relevant bits in a computation is (the log base 2 of) the number of configurations that do not get stuck together in the infinite-time limit.
These configurations are labelled entirely by the sequence of the RNA, the types of proteins, and their mutual binding of domains, and not at all by the folding (which is determined by the sequence, and is only relevant for determining the interactions) or by the exact position (which is randomized by diffusion and so gets stuck quickly; the rough position does matter, to the extent that there is an interaction before there is time to diffuse far enough).
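To make the coupling definition concrete, here is a small simulation under made-up dynamics: four "sequence" bits evolve reversibly (so distinct values never merge), while four "noise" bits are overwritten by shared randomness at every step (so all configurations agree on them). Counting the configurations that never stick together recovers exactly the four relevant bits.

```python
import itertools, math, random

N_SEQ, N_NOISE = 4, 4            # "sequence" bits persist; "noise" bits are thermally randomized

def step(state, shared_noise):
    seq = state[:N_SEQ]
    seq = (seq[-1],) + seq[:-1]  # a reversible update (cyclic shift): distinct values never merge
    return seq + shared_noise    # noise bits are overwritten by the shared randomness

random.seed(0)
configs = set(itertools.product((0, 1), repeat=N_SEQ + N_NOISE))    # all 256 starting configurations
for _ in range(50):
    shared_noise = tuple(random.randint(0, 1) for _ in range(N_NOISE))
    configs = {step(c, shared_noise) for c in configs}               # configurations that meet stay merged

print(len(configs), "surviving configurations =", math.log2(len(configs)), "relevant bits")
# prints: 16 surviving configurations = 4.0 relevant bits
```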