What sparks a thought? OR How does a thought spontaneously arise in the brain?

Specifically in the absence of an obvious stimulus. Is it noisy basal-level brain activity that gets translated into thoughts once in a while, or is it the final result of elaborate subconscious processing that we are not aware of?

We do not know the answer for sure, and my answer does not appear anywhere in the literature and has zero experimental support. Still, I would like to contradict Paul King's answer and give an alternate model, one that is less fanciful than the standard one.

The standard model is fanciful because it considers the brain's computation as distributed over 300 billion neurons, acting collectively, at a processing rate of about 1000 Hz. The problem with these models is that there is no way to get coherent computation out of such a slow clock cycle and such a limited amount of RAM, at least nothing more than a reflexive jerk of an arm, or a quick pattern identification that isn't stored or transformed in any complex way.

The issue can be made more stark as follows: when you see a photo of a bicycle, it takes you about 1/10 of a second to identify it, so you have at most 100 action-potential cycles available. The identification of the bicycle supposedly happens through a magical process in which 300 billion bits, receiving input, organize themselves in 1/10 of a second, or 100 cycles, into a pattern that says "I saw a bicycle", and then somehow maintain this pattern dynamically for a while.
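As a sanity check, here is the arithmetic behind the 100-cycle figure. Every number below is taken from the argument in the text, not from any measurement of mine:

```python
# Back-of-envelope check of the timing argument, using the text's
# own numbers: ~1000 Hz as the fastest firing rate, ~0.1 s to
# identify the bicycle, 300 billion neurons treated as on/off bits.

max_firing_rate_hz = 1000      # upper bound on spikes per second
recognition_time_s = 0.1       # time to identify the bicycle
neurons = 300e9                # on/off bits available per cycle

serial_cycles = max_firing_rate_hz * recognition_time_s
print(serial_cycles)           # 100.0 cycles of serial depth, as claimed
```

The point of the argument is that 100 serial steps is a very shallow computation, whatever the width.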

Further, this pattern can be more or less recalled just by saying "think of a bicycle", so somehow this dynamical system is able to magically identify what you are saying, pick out the proper pattern for a bicycle, and arrange itself into that pattern, using 300 billion bits at a clock rate of (at most) 1000 cycles per second.

In the past, 300 billion might have been a big enough number to snow people, but no longer. 300 billion on/off bits are not sufficient, especially with this dinky clock cycle. The paradox is starker for fruit-flies or worms, where the number of brain cells is tiny. It is difficult to explain why it is a paradox if you do not have experience with computers of various sizes and powers.

So I call bullshit on the connectionist model. It is pitifully small; it is wrong. It also has no evidence to support it beyond "got any better ideas?" I think I do have a much better idea, but it does not come with any evidence either; it is a way to resolve the theoretical difficulties described above.

The way a thought arises in the brain is through molecular interactions within a single cell. The cells are coupled computations, with the communication being electrochemical impulses, but the substrate storing the data is RNA molecules. This RNA is active in synapses, in axonic stems, all over the neuron. The computation is by complementary binding, splicing, rejoining, and perhaps copying of RNA with an RNA-dependent RNA polymerase, though perhaps not this, because such a polymerase has not been identified in the human genome.

This RNA computation is required for other reasons, as a self-consistent system which can modify DNA to make sensible mutations. This stuff can control protein expression by sending out siRNAs, and it is what is transcribed from the non-coding genome. I do not wish to argue for a computing cell-brain made out of RNA, because I think it is too obvious to argue for anymore; it is almost accepted fact (perhaps not yet). The goal here is to argue that this RNA is what is also doing the thinking in the brain.

These computations are not independent, but linked through neuron activity into a network. The network transmits signals from neuron to neuron through spikes, or action potentials. Since none of this is observed, I need to make up the details so as to have a model; the true details will probably be different. But I will say that these spikes are read out by RNA-modifying membrane proteins, and whenever a spike appears, the proteins write out a certain base on a strand of RNA, so that the RNA is a semi-permanent record of the spike-train in time, stored like a ticker tape. Each a, u, c, g encodes the time between spikes, or else it's aaauaaauauuaaaaauu, with u's at every spike.

The RNA then takes the ticker tape from the cell membrane, and does munch munch munch compute compute compute, and out comes new RNA that is attached as a ticker tape at the axon, and whrrr... out comes a new signal with a precise set of timings, which produces other RNAs in cells down the line. The cells' ticker-tape RNAs are protected, catalogued, and stored in cell bodies and in glia for later retrieval. The glia retrieval is slower: when you are thinking of that thing that is just on the tip of your tongue, your brain needs to search the library of ticker tapes to find the right ones.
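The second encoding mentioned above (one base per time-step, u at every spike) is simple enough to sketch. This is a toy illustration of the hypothesis, nothing more; no such mechanism has been observed, and the function names are mine:

```python
# Toy sketch of the hypothesized "ticker tape": a spike train is
# written out as an RNA strand with 'u' at every time-step that
# carries a spike and 'a' at every silent step. Purely illustrative;
# this is the author's speculative encoding, not an observed mechanism.

def spikes_to_tape(spike_steps, total_steps):
    """Encode spike times (integer time-steps) as an a/u strand."""
    spikes = set(spike_steps)
    return "".join("u" if t in spikes else "a" for t in range(total_steps))

def tape_to_spikes(tape):
    """Recover the spike time-steps from a strand."""
    return [t for t, base in enumerate(tape) if base == "u"]

tape = spikes_to_tape([3, 7, 9, 10, 16, 17], 18)
print(tape)                  # aaauaaauauuaaaaauu, the example in the text
print(tape_to_spikes(tape))  # [3, 7, 9, 10, 16, 17]
```

Note that the encoding is lossless: the full spike timing survives the round trip, which is what lets the tape serve as a permanent record rather than a summary.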

When you have a spontaneous memory, the RNAs in different cells are sending out little spikelets to see if there is some other cell that can do something with the data. Most of the time, nothing. Every once in a while, a cell puts an RNA on the ticker-tape machine, sends out a signal, and gets a whopping amount of feedback, because the pattern matched the next cell. Then the cells start to communicate back and forth, exchanging more RNAs, the whole network is alerted, and you think "Hey, I got an idea!"

The mechanism has a memory capacity which is of order 300 billion gigabytes, or about 10^21 bits (or more: you can easily fit 10 gigabytes of RNA in a cell).

The reason to believe this over the standard story:

  1. It actually gives a non-magical explanation for the ability of the brain to compute so deeply. The RNA is 10 orders of magnitude bigger in memory and 8 orders of magnitude faster in processing than the neurons considered as cells.

  2. It can be coupled to heredity simply, with no layers of translation: the DNA can store instincts in sequences that are transcribed in neurons and directly serve as ticker tapes to each other, thereby encoding a gigabyte of instinct directly. If you have to go through a layer where you translate to proteins, and then direct brain activity only indirectly, it is hard to see how you can encode complex instincts at all, since the amount of instinctive data left after all the steps is close enough to zero to be indistinguishable.

  3. This presents a method for evolving brains, as separate RNA computations in cells are linked up through neurotransmitters. It would explain why neurotransmitters are present in simple multicellular sea organisms no bigger than 20 cells, similar to blastocysts. These organisms do not have a nervous system, but still make use of nervous-system precursors. There is no evolvability path to a brain in the current ideas: a brain is useless until the cellular components network. In this model, linking up RNA computations is useful even before there are any well-defined nervous-system components.

  4. RNA is the major component of the brain, and non-coding RNAs are moved around with apparent purpose all over neurons. This was pointed out by John Mattick in 2010, although I don't know if he would endorse the hypothesis I am presenting here.

The reason to believe the standard story:

  1. Neuroscientists all say so.

That's it. There is no evidence for the standard story, it's just what people were able to observe to date.

There is exactly zero experimental evidence for a ticker tape in every neuron, or for a major RNA computer in every neuron aside from the standard genetic one. This is why one should look. This model is a much better fit to the data even barring any evidence, and the predictions are so many that they are impossible to miss. With sequencing machines, one should be able to identify the RNA involved in the thinking, or rule it out, without any problem.

But you're not going to rule it out, it's correct.

Replace "Starlings" with "Neurons" and you will see a thought.

YouTube: Murmuration (Official Video) by Sophie Windsor Clive & Liberty Smith

I'm not just taking poetic license. It's the same thing. As Paul King's answer states, it's a "chaotic attractor of neural activity in the brain."

Chaotic attractors straddle past, future and present. You can't nail down the edges of where they exactly begin or end, and the larger phenomenon can't quite be reduced to the behavior of its components.

We do not know, yet.