Specifically in the absence of an obvious stimulus. Is it noisy basal level brain activity that gets translated into thoughts once in a while or is it the final result of elaborate subconscious processing that we are not aware of?
While we do not know the answer for sure, and my answer does not appear anywhere in the literature and has zero experimental support, I would like to contradict Paul King's answer and give an alternative model that is less fanciful than the standard one.
The standard model is fanciful because it considers the brain's computation as distributed over 300 billion neurons, acting collectively, at about a 1000 Hz rate of processing. The problem with these models is that there is no way to get coherent computation out of such a slow clock cycle and such a limited amount of RAM, at least nothing more than a reflexive jerk of an arm, or a quick pattern identification that isn't stored or transformed in any complex way.
The issue can be made more stark as follows: when you see a photo of a bicycle, it takes you about 1/10 of a second to identify the bicycle. This means you have at most 100 action potential cycles available. So the identification of the bicycle supposedly happens through a magical process, where 300 billion bits, receiving input, organize themselves in 1/10 of a second, or 100 cycles, into a pattern that says "I saw a bicycle" and then somehow maintain this pattern dynamically for a while.
Further, this pattern can be more or less recalled by just saying "think of a bicycle", so somehow this dynamical system is able to magically identify what you are saying, pick out the proper pattern for a bicycle, and arrange itself accordingly, using 300 billion bits at (at most) 1000 cycles per second.
In the past, 300 billion might have been a big enough number to snow people, but no longer. 300 billion on/off bits are not sufficient, especially with this dinky clock cycle. The paradox is starker for fruit-flies or worms, where the number of brain cells is tiny. It is difficult to explain why it is a paradox if you do not have experience with computers of various sizes and powers.
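To make the paradox concrete, here is the back-of-envelope arithmetic as a short sketch; the one-bit-per-neuron and 1000 Hz figures are the deliberately crude assumptions of my argument above, not measured quantities.

```python
# Back-of-envelope version of the argument above, using deliberately crude
# assumptions: each neuron holds ~1 bit of working state, and the "clock"
# is at most ~1000 Hz. These are rhetorical simplifications, not data.

neurons = 300e9        # neurons treated as 1-bit elements
clock_hz = 1000        # assumed upper bound on processing "cycles" per second
recognition_s = 0.1    # time to recognize a bicycle in a photo

cycles_available = clock_hz * recognition_s   # ~100 clock ticks
working_memory_gb = neurons / 8 / 1e9         # ~37.5 GB at 1 bit per neuron

print(f"cycles available for recognition: {cycles_available:.0f}")
print(f"working memory at 1 bit/neuron:   {working_memory_gb:.1f} GB")
```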
So I call bullshit on the connectionist model. It is pitifully small, it is wrong. It also has no evidence to support it, beyond "got any better ideas?" I think I do have a much better idea, but it does not come with any evidence, it is a way to resolve the theoretical difficulties described above.
The way a thought arises in the brain is through molecular interactions within a single cell. The cells are coupled computations, with the communication being electrochemical impulses, but the substrate storing the data is RNA molecules. This RNA is active in synapses, in axonal stems, all over the neuron. The computation is by complementary binding, splicing, rejoining, and perhaps copying of RNA with an RNA-dependent RNA polymerase, though perhaps not this, because such a polymerase has not been identified in the human genome.
This RNA computation is required for other reasons, as a self-consistent system which can modify DNA to make sensible mutations. This stuff can control protein expression by sending out siRNAs, and it is what is transcribed from the non-coding genome. I do not wish to argue for a computing cell-brain made out of RNA, because I think it is too obvious to argue for anymore; it is almost accepted fact (perhaps not yet). The goal here is to argue that this RNA is also what is doing the thinking in the brain.
These computations are not independent, but they are linked through neuron activity into a network. The network transmits signals from neuron to neuron through spikes, or action potentials. Since none of this is observed, I need to make up the details, so as to have a model; the true details will probably be different. But I will say that these spikes are read out by RNA-modifying membrane proteins, and whenever a spike appears, the proteins write out a certain base on a strand of RNA, so that the RNA is a semi-permanent record of the spike train in time, stored like a ticker tape. Each a, u, c, g encodes the time between spikes, or it's aaauaaauauuaaaaauu with u's at every spike. The RNA then takes the ticker tape from the cell membrane, and does munch munch munch compute compute compute, and out comes new RNA that is attached as a ticker tape at the axon, and whrrr... out comes a new signal with a precise set of timings which produce other RNAs in cells down the line. The cells' ticker-tape RNAs are protected, catalogued and stored in cell bodies, and in glia, for later retrieval; the glial retrieval is slower. When you are thinking of that thing that is just on the tip of your tongue, your brain needs to search the library of ticker tapes to find the right ones.
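To be concrete about what I mean by a ticker tape, here is a toy sketch. The particular encoding (a 'u' for each 1 ms bin with a spike, an 'a' for a silent bin) is something I am inventing purely for illustration, since the real encoding, if it exists, has never been observed.

```python
# A toy illustration of the hypothesized ticker tape: encode a spike train
# into an RNA-like string, one base per 1 ms time bin, writing 'u' when a
# spike occurred in that bin and 'a' otherwise. The encoding is invented
# here purely for illustration; the real one (if any) is unknown.

def spikes_to_tape(spike_times_ms, duration_ms):
    """Record a spike train as an RNA-like ticker-tape string."""
    spikes = set(int(t) for t in spike_times_ms)
    return "".join("u" if t in spikes else "a" for t in range(duration_ms))

def tape_to_spikes(tape):
    """Recover the spike times (in ms) from a ticker-tape string."""
    return [t for t, base in enumerate(tape) if base == "u"]

tape = spikes_to_tape([3, 7, 9, 15], duration_ms=18)
print(tape)                  # aaauaaauauaaaaauaa
print(tape_to_spikes(tape))  # [3, 7, 9, 15]
```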
When you have a spontaneous memory, the RNAs in different cells are sending little spikelets to see if there is some other cell that can do something with the data. Most of the time, nothing happens. Every once in a while, a cell puts an RNA on the ticker-tape machine, sends out a signal, and gets a whopping amount of feedback, because this pattern matched the next cell. Then the cells start to communicate back and forth, exchanging more RNAs, the whole network is alerted, and you think "Hey, I got an idea!"
The mechanism has a memory capacity which is of order 300 billion gigabytes, or about 10^21 bits (or more: you can easily fit 10 gigabytes of RNA in a cell).
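Spelling that estimate out, with the roughly one gigabyte of RNA per neuron being my assumption:

```python
# The capacity estimate above, spelled out. The per-cell figure is an
# assumption (roughly a gigabyte of RNA per neuron, with room for more).

cells = 300e9                 # neuron count used above
rna_bytes_per_cell = 1e9      # ~1 GB of RNA per cell (assumed)

total_gb = cells * rna_bytes_per_cell / 1e9
total_bits = cells * rna_bytes_per_cell * 8
print(f"total capacity: {total_gb:.0e} GB, about {total_bits:.0e} bits")
```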
The reasons to believe this over the standard story:
It actually gives a non-magical explanation for the ability of the brain to compute so deeply. The RNA is 10 orders of magnitude bigger in memory and 8 orders of magnitude faster in processing than the neurons treated as whole-cell units.
It can be coupled to heredity simply, with no layers of translation: the DNA can store instincts in sequences that are transcribed in neurons and directly serve as ticker tapes for each other, thereby encoding a gigabyte of instinct directly. If you have to go through a layer where you translate to proteins, which then direct brain activity only indirectly, it is hard to see how you can encode complex instincts at all, since the amount of instinctive data that survives all those steps is close enough to zero to be indistinguishable from nothing.
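As a rough size check on that "gigabyte of instinct" figure, assuming instinct could be written directly into transcribable genomic sequence (my assumption, not an established fact):

```python
# Rough upper bound on heritable "instinct" written directly in sequence.
# Assumes instinct data could occupy transcribable genomic sequence directly.

genome_bases = 3.2e9      # approximate human genome length
bits_per_base = 2         # four letters -> 2 bits per base

instinct_gb = genome_bases * bits_per_base / 8 / 1e9
print(f"upper bound: {instinct_gb:.1f} GB")   # ~0.8 GB, order of a gigabyte
```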
This presents a method for evolving brains, as separate RNA computations in cells are linked up through neurotransmitters. It explains why neurotransmitters are present in simple multicellular sea organisms no bigger than 20 cells, similar to blastocysts. These organisms do not have a nervous system, but still make use of nervous-system precursors. There is no evolvability path to a brain in the current ideas: a brain is useless until the cellular components form a network. In this model, linking up RNA computations is useful even before there are any well-defined nervous-system components.
RNA is the major component of the brain, and non-coding RNAs are moved around with apparent purpose all over neurons. This was pointed out by John Mattick in 2010, although I don't know if he would endorse the hypothesis I am presenting here.
The reason to believe the standard story:
Neuroscientists all say so.
That's it. There is no evidence for the standard story, it's just what people were able to observe to date.
There is exactly zero experimental evidence for a ticker tape in every neuron, or a major RNA computer in every neuron, aside from the standard genetic one. This is why one should look. This model is a much better fit to the data even barring any direct evidence, and the predictions are so many that they are impossible to miss. With sequencing machines, one should be able to identify the RNA involved in the thinking, or rule it out, without any problem.
But you're not going to rule it out, it's correct.
I should add that I think you are right that there is a lot going on computationally inside the neuron, and that the spike is a "summarized" output signal. There has been a lot of work looking at intraneural computational mechanisms, including individual dendritic branches acting as mini-processors performing a statistical rollup at the branch level, and findings that spike timing and spike order have substantial computational effect.
While a central role for RNA seems very unlikely, never say never. There was a time when DNA was thought to be structural and not information-bearing. That said, one really needs to follow the evidence in science. There are many ways that cells "could" work and only one way that they actually work. Only the evidence can point the way, and right now there is a lot of evidence for electrochemical signal integration using adaptive receptors and ion channels.
Wow, sorry for being harsh in my comment. That's a very reasoned response. I agree, but honestly, read Mattick's paper before saying it is unlikely, it's from 2010, and it's titled something like "noncoding RNA in brain" or "noncoding RNA in nervous system".
Believing in something strongly is not enough to make it correct. :) But your idea is certainly very imaginative.
First off, the number of neurons in the brain is estimated to be closer to 100 billion (some say 80 billion), not 300 billion. The number of neurons in the cerebral cortex, where people imagine most of thought to be occurring, is 20 - 30 billion. Also, the average spiking rate of a neuron is closer to 10 Hz (estimates are in the range 8 - 12 Hz), but some neurons fire more frequently and others only rarely. You might argue that these lower numbers make your case even better.
It's not clear what "standard model" you're referring to; however, making analogies to digital computers is problematic because what is known about the brain is that it works very differently from a digital computer. Neurons, for example, do not correspond to "bits", and neither do spikes. Neurons are tiny statistical analysis machines with billions of moving parts. Also, brains don't have a "clock speed", or even a clock for that matter, although some believe that local and global oscillations at different frequency bands (theta, alpha, beta, gamma... 4 Hz - 80 Hz depending on band) may function as a distributed synchronization scheme analogous to the function of a clock in a computer.
It's a reasonable question: how could object recognition happen in 150-200 ms when neurons spike so infrequently (one could argue "slowly")? Part of the answer is immense parallelism and a very different approach to pattern recognition than is used in a computer. Over 1 billion "processors" (neurons) are being brought to bear in parallel on object recognition. The synapses of those neurons are each performing calculations similar to multiply-and-add. So 100 million spikes * 10,000 synapses = 1 trillion math operations in 100 ms.
What is also known about object recognition is that it seems to be primarily a feedforward process organized hierarchically. Each level of the tree performs a massive parallel calculation and forwards the results to the next level. There might be 10 or 20 total synaptic "handoffs" in that 150 ms, so if you want to use the clock-cycle analogy, there are 10-20 circuit steps to recognizing an object, but each step entails tens of billions of math calculations in parallel.
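To make that arithmetic explicit, here is a small sketch; all the quantities are the rough figures quoted above, not precise measurements.

```python
# The parallelism arithmetic from the two paragraphs above, made explicit.
# All quantities are the rough figures quoted in the text, not measurements.

active_neurons = 1e8        # ~100 million neurons spiking during recognition
synapses_per_neuron = 1e4   # ~10,000 synapses per neuron
window_ms = 100             # low end of the 150-200 ms recognition window
synaptic_stages = 15        # roughly 10-20 feedforward "handoffs"

total_ops = active_neurons * synapses_per_neuron    # ~1e12 multiply-adds
ops_per_stage = total_ops / synaptic_stages

print(f"~{total_ops:.0e} operations in ~{window_ms} ms")
print(f"~{ops_per_stage:.1e} operations per synaptic stage")
```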
Dismissing connectionist models as having "no evidence" is to wave away tens of thousands of neurobiological research experiments and reports on how neural circuits process information. It's too much to summarize here, but the functioning of neurons and circuits in networks, including signal integration, what causes spikes, and what their effect is, has been mapped out in extensive detail, down to the exact 3D shape of neuroreceptors, the ways they manipulate neuron voltage levels, and how that signal integrates to produce a spike at threshold. Learning and adaptation have also been extensively mapped out, including the chain of specific biochemical mechanisms that leads a neuron to change the multiplicative weight of its synapses via long-term potentiation (LTP). Neural pattern tuning has even been measured to change in the visual system of live animals in response to visual input.
Regarding the RNA hypothesis, it's creative for sure. In science, you don't get to trust an idea until there is at least some evidence for it. In this case, this version of how RNA might interact with spiking doesn't fit what is known about spiking and RNA. So while one could imagine that the brain *could* work that way (or in many other ways), there is the way that the brain has actually been observed to work, and that at least needs to be the basis of models until someone observes a new mechanism experimentally. Feel free to be the first to go looking for it! That's how science works: someone decides to take it on themselves to test a hypothesis.
Okey dokey. I know the literature in the field reasonably well, so you don't have to repeat it. I also think I know when I have sufficient theoretical evidence to trust a hypothesis based on theory, so you don't have to condescend authoritatively about whether an idea is justified. I am saying it is justified. I am something you hardly ever see in biology--- a theorist with balls. I am making a prediction, and I am confident with no definitive experimental data. It's what physicists do in their field, it's what you are supposed to do as a theorist, and I am importing it into biology.
To demonstrate that you know what you are talking about, you need to give your arguments, make your prediction, and declare it publicly, before it is observed, and stake your entire reputation on it, to show, after it is found, that you weren't making a stab in the dark. The goal is to make sure that biologists know that major predictive theory is possible in biology, even though it hasn't really been done much before (Mattick did some overlapping theoretical work, and a few small things were predicted by theorists without direct data before, but not much).
I am not dismissing the prior work. I know that there are electrochemical connections in the brain, and that they even slowly change strength over a time scale of many minutes or hours. But there is no evidence that this slow process is the main memory storage; this is just an unjustified hypothesis. It has no supporting evidence: nobody has ever associated a memory with a synaptic strength change. This dogma can actually be traced back to an earlier theoretical model, the Hebb model from 1949, which gave a rule for a neural network to learn. It wasn't a big trick; practically any nonlinear rule allows a neural network to learn.
The reason you can be confident that synaptic strength is not the major memory mechanism is simply that this process is too slow for short-term memory: it takes place over minutes, because it involves translation, and that takes a long time. Short-term memory operates on a scale of seconds; it cannot involve translation.
The claim that "there are many complications" in the brain, and that therefore the information estimates I gave are off, just isn't true. You can turn the brain model into a digital computer, and no complications can increase the working memory, other than a storage mechanism capable of robustly coupling to the electrochemical network. The working memory in these "thousands of working parts" in a neuron reduces to order 1 bit per cell, maybe 1, maybe 10, depending on the neurotransmitter packets that are sent over the synapse. The actual computation, using your numbers of 30 billion bits at 10 Hz, is terrible: it's 30 billion bits, about 4 gigabytes, operating in parallel at 10 Hz. That's nearly the worst laptop in the world.
Further, you need to remember that this memory is not stable memory; you need to constantly keep it going, because once the electrochemical signals pass, nothing is left of them in the standard story, except future potentiation changes which take minutes to appear (if they even happen all that often). The only way to preserve the information is to cycle the bits around in "resonant circuits", and this means you need at least 100 cells per bit (assuming the average spike comes at 10 Hz and the hop takes 1 ms, but the order of magnitude doesn't depend on the details). This means you are spending cellular levels of ATP per bit of information: 100 cells pumping ions like mad to store a single bit during the process of just keeping something you saw in your mind. This procedure reduces the active memory even further, by a factor of 10-100, since the cell firings are not independent along the cycle, so you get about 40 megabytes at 10 Hz. Now we're talking about an MS-DOS PC.
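Putting that counter-estimate in one place, with every figure being an assumption of my argument (one bit per cortical neuron, about 100 correlated cells to keep a single bit circulating):

```python
# The counter-estimate above in numbers. Every figure is an assumption of
# the argument: ~1 bit per cortical neuron, and ~100 correlated cells needed
# to keep a single bit circulating in a "resonant circuit".

cortical_bits = 30e9      # 30 billion cortical neurons as 1-bit elements
cells_per_bit = 100       # cells per circulating bit (firings correlated)

naive_gb = cortical_bits / 8 / 1e9                   # ~3.75 GB
usable_mb = cortical_bits / cells_per_bit / 8 / 1e6  # ~40 MB

print(f"naive working memory:      {naive_gb:.1f} GB")
print(f"resonant-circuit estimate: {usable_mb:.0f} MB")
```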
I am sorry, but there comes a point where you say "Enough. Nonsense." A coupling to another information carrier is required. It must be at the cellular level, it must bind to electrochemical proteins, and it must take only a few ATPs per bit. The only reasonable candidate is RNA. Proteins are not so reasonable, because you can't store them linearly and computation in them is slow. Further, they aren't the main information carrier in the cell; the RNA is, and you can see a clear evolvability path for networking separate RNA computers with chemical signals, which then gradually transform into electrochemical signals, and then into action potentials that travel along a myelinated fiber.
Mattick has compiled enough experimental anomalies regarding RNA in the brain that this gives some initial confirmation to the idea. After reading Mattick's evidence, one must be certain. It is enough to simply consider the sheer amount of RNA in the brain; there is no way to avoid the conclusion. It's more evidence than Einstein had for photons.
I would encourage you to think about it before rejecting it, because it really is true. But whether you reject it or not doesn't really matter (except to your future ego), because nature doesn't care about our opinions, and we can check the hypothesis easily today. I'll be involved in doing that soon.
Do you have a specific Mattick paper in mind?
I looked at a PNAS paper from Mattick, which says that the mouse brain has 894 non-coding RNA fragments expressed above "noise". They propose that this RNA is functional (a reasonable idea). However, given that a mouse brain has around 100 million neurons and possibly 1 trillion synapses, 894 RNAs is interesting from the perspective of cell function, but seems minimal from a "storage capacity" perspective. This Mattick paper does not propose memory encoding as the RNA function, but maybe another does?
If Mattick didn't propose it great! Then I am proposing it. But, I wasn't born yesterday. He already knows this for sure, but the censorious stupid referees in journals don't allow you to say what you know, they censor you when they feel you "don't have the evidence to support your statement", which basically is every time you make a nontrivial prediction (they are never any good at determining how much evidence is sufficient--- this is a judgement call which is always biased against your competition).
This basically means Mattick is gagged all the time while writing in a journal, and I have to extrapolate to what he knows from what he writes. But from reading his work, and knowing his biases from previous publications, and knowing what would lead one to the investigations he is doing, I am pretty sure that he knows the RNA is doing the thinking. If not, then I know, so it's my own prediction, but either way, it's true.
The paper I was referring to is a 2010 review: Qureshi and Mattick, "Long non-coding RNAs in nervous system function and disease" [Brain Res. 2010]. And really, it's the first thing that comes up on Google, so I don't know why you ask.
hmm... that's a cynical view of referees that misunderstands their role. Referees ensure that only conclusions supported by evidence get into journals. People are free to publish speculative ideas in tech reports or non-refereed websites. No one is preventing the idea from getting out. What they are preventing is the credibility of the journal from being used for proposals that have not yet earned that credibility.
The journals set a high bar to prevent flat-out wrong ideas from getting in, because once they do, they are in the scientific record for decades and it takes a while to get them displaced. Ideas that are now known to be wrong are still cited as facts because they slipped through the referee filter. Still, people can say whatever they want at a conference, in tech reports, on their website, etc.
Regarding the paper you cite, the authors do propose a specific role for non-coding RNA, and it is not information storage. They propose that it serves a function during brain development to differentiate neuronal cell types. Arguably the initial wiring of the brain encodes instincts, so you could make an argument that they are claiming a possible role for ncRNA in innate behavior representation (although that would be reading a lot into what they actually say).
You don't have to lecture me on what referees do, I have been through the process, the cynical view is correct. Einstein published without referees, and they would have rejected photons, and quantum mechanics, and all the other theoretical advances, frankly because they would be TOO STUPID to understand the arguments and why they are conclusive.
This happened in refereed physics journals regarding string theory, regarding quarks, regarding the Higgs mechanism (although that squeaked by review thankfully, after the rejecting editor had a change of heart), regarding quasicrystals, basically regarding everything new and interesting that was a theoretical advance.
Physicists had a trick around it, which was simply to fill up the paper with enough formalism that the referee was intimidated. This worked, but it made the papers have an obscurity singularity starting in 1957 when refereeing became strict and the literature expanded greatly.
The idea that there is some objective way to determine when the evidence supports a conclusion is laughable. It depends on your prior, and when your prior is conditioned by reading the literature and weighing pre-existing opinions, every new idea looks like it has a 0% chance of being correct, even when the evidence objectively gives it near certainty under reasonable priors.
Mattick is a developmental biology guy; he knows the RNA is important for the spatial organization of cells. He figured this out from embryogenesis, and that was his path to RNA in the brain. I came at it from a different perspective, entirely theoretical. But I can't imagine he doesn't know it's the main information carrier.
But if he doesn't, then I DO, then GREAT! I am taking credit.
I have no patience for the mental retardation of referees, they can go fuck themselves. We have an internet now, and they aren't going to shut me or any other theorist up anymore. I will say what is demanded by the theoretical evidence, regardless of what those morons want to let through the door in journals. This will allow the refereed journals to die, and good riddance. Refereeing has been nothing but a pain in the ass since it was introduced in the 50s.
Fascinating, thanks for this. So why don't you ask Mattick to review your thoughts?
Apparently not everyone agrees with Mattick (no surprise, that's what science is all about):
John Mattick on the Importance of Non-coding RNA
But I think Mattick makes some provocative points. One of the time-honoured methods of studying neurophysiology is to create a lesion in the system; I do this every day when I give a general anesthetic. Stuart Hameroff is the anesthesiologist at the University of Arizona who put Penrose onto the QM/microtubule thing, which doesn't appear very testable. However, examining what the situation is with ncRNAs and general anesthesia seems to me to be a relatively easy thing.
I think I can review myself. I always saw Mattick as a competitor! I am jealous of his priority in this, I also figured out the non-coding RNA in 2003, but that was about 7 years after he first proposed it informally, and 2 years after he published. It was one of the biggest let-downs of my life to read his 2001 paper. His 2010 paper on brain RNA was also a massive let-down. It is difficult to contact a person who keeps scooping you.
I think the only advantage of what I am saying over Mattick is that there are precise predictions regarding the patterns of complementary binding in the sequences, and regarding the mechanism of the function, so you can predict patterns of complementary ncRNA and kinds of RNA different from the genomic and ribosomal kinds.
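For instance, one simple form of the prediction is that sequenced brain ncRNAs should show an excess of long reverse-complement stretches between transcripts. Here is a minimal sketch of that kind of check; the sequences and the 12-base threshold are illustrative placeholders, not real data.

```python
# A toy version of the kind of test implied above: look for long stretches
# of one transcript whose reverse complement appears in another transcript.
# The sequences and the 12-base threshold are illustrative placeholders.

COMP = str.maketrans("AUGC", "UACG")

def reverse_complement(rna):
    return rna.translate(COMP)[::-1]

def longest_complementary_match(a, b, min_len=12):
    """Longest window of `a` whose reverse complement occurs in `b`."""
    best = ""
    for i in range(len(a)):
        for j in range(i + min_len, len(a) + 1):
            if reverse_complement(a[i:j]) in b:
                if j - i > len(best):
                    best = a[i:j]
            else:
                break  # a longer window starting at i cannot match either
    return best

# Hypothetical transcripts, constructed so a complementary stretch exists:
x = "GGAUCCGAUUACGGCUAAGGCAUCG"
y = "AAGGCCUUAGCCGUAAUCGGAAGG"
print(longest_complementary_match(x, y))
```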
Doesn't general anesthesia work by preventing electrochemical impulses? This would have very little effect on ncRNA. The RNA you can get at by sequencing, so this stuff doesn't need a specific test: you can't avoid seeing it today, the machines are around and people will find it.
About the ncRNA, Mattick is just plain right, and the theoretical evidence was already overwhelming in 2002, but biologists don't know how to evaluate theory without politics; they don't have the culture to do so. I don't read the opposition; they are not worth reading.
Local anesthesia works by inhibiting transmembrane sodium conductance and thereby membrane depolarization, but the mechanisms of general anesthesia are more mysterious. The theory currently in vogue invokes a GABAergic mechanism. However, Stuart Hameroff wrote a nice article in Anesthesiology back in 2006 on alternate hypotheses... of course, he focuses on his Orch OR theory, but it's worth a read nonetheless:
Hameroff 2006 Anesth and Consc.pdf
I can't find any research being done on anesthesia and ncRNA's. I will ask our research poobah back in Toronto to see what he says.
Most medical researchers lack the necessary knowledge of programming and math to approach your suggestion. It is definitely worth investigating!
The first experiment, I think, should be to fluorescently tag the RNA (see "Genetic encoding of fluorescent RNA ensure..." [Trends Biotechnol. 2012]) to see whether the RNA detaches from the membrane under the action of the drug. This could be pure ticker-tape inactivation.
Replace "Starlings" with "Neurons" and you will see a thought.
YouTube: Murmuration (Official Video) by Sophie Windsor Clive & Liberty Smith
I'm not just taking poetic license. It's the same thing. As Paul King's answer states, it's a "chaotic attractor of neural activity in the brain."
Chaotic attractors straddle past, future and present. You can't nail down the edges of where they exactly begin or end, and the larger phenomenon can't quite be reduced to the behavior of its components.
A "chaotic attractor of neural activity in the brain" would have only a tiny amount of memory capacity, and would require mountains of ATP per thought. It's the standard story, it's just totally wrong.
We do not know, yet.
Dude, I'm pretty sure I know.
By which definition of knowledge and certainty?
By definition of "I thought about it" and I said "oh, yeah", and then "how likely", and I was like "pretty close to 5 sigma". Ok. I know. Knowledge isn't social, it happens in the individual head. I just happened to be one of the first people to know it. That's kind of nifty.
You could be a computational neuroscientist! Congrats.
Oh no. I would be booted out. Believe me, I told neuroscientists how it works, and with one exception, none of them even considered it remotely plausible! That was funny, because I knew it was right. I like exposing frauds.
A new idea is always punished by a field. To be a computational neuroscientist, you need to listen to their stupid brain-damage. That's what the internet is for, to get rid of brain-damaged experts.
By the way, I described the idea in an answer to this question here. It's not like it's a secret anymore.
I saw. I wonder what Paul King thinks about this.
What difference could it possibly make? The only way to resolve these things is in a lab with a sequencing machine and a computer.