To what extent does our knowledge derive from the senses?

It's the same sort of question as asking whether your computer is really the software or the internet connection. Without the software it wouldn't go online, and without the internet it would be a very isolated computer that never gets new software. The question of which is more central is not a deep one. The question of what each one fundamentally is, however, is completely resolved by a computational understanding of mind, and has been resolved since the days of Turing.

This is not accepted by most philosophers, because they want to own the mind; they don't want the computational people to take over. But it's too late.


EDIT: Expanding

The question of whether our knowledge "derives" from our senses or from an internal thing, or from an external Platonic realm, is very difficult to state in a way that is positivistically satisfying. In order to state it properly, to know for sure that it makes sense, one should ask some more meaningful questions:

Consider a brain in a vat, minus the inputs, just doing its brain thing. Would such a brain be able to think? I will address this later. First, consider another positivistic question: is a security camera hooked up to a lock a knowledge machine?

This is not quite positivistic either, because "knowledge machine" is not defined. So if you define a knowledge machine as one "capable of responding to its input in a sensible way", then the question can be phrased as:

If you have a security camera with a control that locks a door every time a strange person passes by, does this security camera understand something?

This is a better question, but there is still the idea of "understanding" in there. I think the best way to give positive meaning to "understanding" is through queryable internal models.

A system S is said to "understand" a phenomenon P (to some extent) when it runs a computational model of the phenomenon as an internal subcomputation that can be externally accessed (by looking inside the computer, or the head, and measuring the memory). This model is said to be an "understanding" when one can identify a map from the world to the model such that the computations in the model reproduce, at least in a statistical sense, some aspects of the behavior of the phenomenon.

If we take this (loose) definition of understanding, then the camera, plus software, plus locking mechanism either does or does not have an understanding, depending on how sophisticated the software is. If the software is simple pattern recognition on a face parametrization, it isn't a very high level of understanding: the internal model isn't doing enough computation. If the software is a full self-evolving computation with continuous internal processing, able to access the face-recognition module and complex enough to encode, in internal terms, some prediction of what would happen if the door is not locked, then one gets more understanding, analogous to a cockroach or a fruit fly.
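To make the distinction concrete, here is a minimal toy sketch (the class names, the break-in statistics, and the whole setup are illustrative assumptions of mine, not a model of any real camera or brain). A bare pattern-matcher has nothing internal to query; the second system carries a small internal model of the phenomenon, and that model can be read out from the outside and checked against the world:

    # Toy illustration of the "queryable internal model" criterion.
    import random
    from collections import defaultdict

    class PatternMatcher:
        """Low end of the scale: reacts to the input, holds no model to query."""
        def act(self, person_is_strange):
            return "lock" if person_is_strange else "ignore"

    class ModelingController:
        """Higher end: keeps an internal statistical model of the phenomenon
        (what tends to follow a strange person at the door) and can be queried
        about counterfactuals, not just about the current frame."""
        def __init__(self):
            # internal model: estimated P(break-in | strange, locked), learned from experience
            self.counts = defaultdict(lambda: [1, 1])  # (strange, locked) -> [break-ins, safe]

        def observe(self, strange, locked, break_in):
            self.counts[(strange, locked)][0 if break_in else 1] += 1

        def predict_break_in(self, strange, locked):
            b, s = self.counts[(strange, locked)]
            return b / (b + s)

        def act(self, strange):
            # query the internal model for the counterfactual "what if I don't lock?"
            return "lock" if self.predict_break_in(strange, locked=False) > 0.5 else "ignore"

    # The "externally accessible" part of the definition: open the controller up,
    # read off its model, and check that it tracks the (toy) world.
    world_rate = lambda strange, locked: 0.8 if (strange and not locked) else 0.05

    controller = ModelingController()
    for _ in range(5000):
        strange = random.random() < 0.3
        locked = random.random() < 0.5
        controller.observe(strange, locked, random.random() < world_rate(strange, locked))

    print(controller.predict_break_in(strange=True, locked=False))  # roughly 0.8
    print(controller.predict_break_in(strange=True, locked=True))   # roughly 0.05
    print(controller.act(strange=True))                             # "lock"

The only point is that the second system has an internal subcomputation whose states map onto the phenomenon and reproduce its statistics; that is all the definition above asks for.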

If the software has 10 trillion gigabytes of working memory, each megabyte of which has its own independent processor running at somewhere between 10 megahertz and 1 gigahertz, all connected by 10 trillion buses each transmitting at a kilohertz rate, the result could be an AI that passes the Turing test. If so, then the understanding is human-level. This is more than 10 orders of magnitude more computation than in computers available today, but it should be achievable in the near future.
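Spelling out the arithmetic implied by those figures (a back-of-the-envelope sketch; the desktop comparison number is a rough assumption of mine, only there to fix the order of magnitude):

    # Back-of-the-envelope arithmetic for the figures quoted above.
    working_memory_bytes = 10e12 * 1e9        # 10 trillion gigabytes ~ 1e22 bytes
    processors = working_memory_bytes / 1e6   # one processor per megabyte ~ 1e16 processors
    ops_low, ops_high = 10e6, 1e9             # 10 MHz to 1 GHz per processor

    aggregate_low = processors * ops_low      # ~1e23 operations per second
    aggregate_high = processors * ops_high    # ~1e25 operations per second

    desktop_ops = 1e11                        # rough assumption for a present-day desktop
    print(f"aggregate: {aggregate_low:.0e} to {aggregate_high:.0e} ops/s")
    print(f"gap versus a desktop: about {aggregate_low / desktop_ops:.0e}x")  # ~1e12, i.e. 12+ orders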

The following theory is the only reasonable one (meaning: I do not admit other ideas are reasonable, and I never will). The internal computation, running on the internal gigabytes and at the internal clock-speeds, defines the understanding and the knowledge; it defines the soul, in a dualistic conception. I don't believe it is meaningful to ask whether it is "truly" dual or not, because I can't identify any positivistic meaning in that question, so it is meaningless. The brain and the mind/soul are as dual as hardware and software, and this is not an analogy, because that is what they are.

The details of the hardware are largely irrelevant, so long as the (stochastic) algorithm is more or less the same. A simulation of a mental system is positivistically indistinguishable from the system itself, so any arguments to the contrary, like Searle's Chinese Room, are wrong (and essentially fraudulent, since I don't think Searle is blind to the fact that the argument is wrong).

Identifying mind with software, and brain with hardware, one can ask about the particular software that encodes our understanding and knowledge, and to what extent it is dependent on senses.

Senseless consciousness

It is possible to consider the entire Earth as a closed system: the computations in all the living things on Earth get only a meager input from extraterrestrial sources. Nevertheless, evolution can proceed on Earth, and there are evolving subparts which have understanding, namely us.

There is no need for external stuff in order to understand internal things. One could imagine a senseless AI computer with an internal understanding of mathematics, which it arrives at by simulating 1000 mathematicians debating in its head. The process does not require external input.
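A trivial sketch of the point (the Goldbach-style checks below are an illustration of mine, nothing like simulating 1000 mathematicians, but the principle is the same): a program with no input channel at all can still generate and verify mathematical facts entirely internally.

    # A closed computation with no input: it generates and checks arithmetic
    # facts internally. Nothing here reads a sensor, a file, or a network.

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    # Internally derived "knowledge": every even number in range is a sum of two primes.
    knowledge = {}
    for n in range(4, 200, 2):
        knowledge[n] = next((p, n - p) for p in range(2, n) if is_prime(p) and is_prime(n - p))

    print(len(knowledge), "facts derived with zero external input")
    print(knowledge[100])  # (3, 97)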

But such a computer would be separated from the rest of the world. It wouldn't be able to get inputs to modify its internal evolution of programs, so it helps to have an internet connection.

Knowledge, understanding and senses

If you have senses, your knowledge is derived partly from your senses: all your knowledge of the outside world comes from the senses. But your knowledge of human faces and how they are supposed to look, or of smiles and frowns, is hard-wired; it comes in the package, already preprogrammed in the ROM. You don't need any sensory input to know what a smile is.

This kind of knowledge is not developed by external interactions, even though it is knowledge about sensory things. There is also hard-wired ethical knowledge.

It is possible to imagine a closed system which generates several interacting people, each with their own sensory environment, limited only by the computational power of the simulation. Such a system can generate internal knowledge.
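Here is a toy version of such a closed system (the agents and the "facts" are illustrative inventions of mine): a sealed simulation in which several agents each observe a different part of an internal world and, by talking only to each other, end up with knowledge that never crossed the simulation boundary.

    # A closed simulation: agents share a sealed world, each sees part of it,
    # and knowledge spreads purely by internal communication.
    import random

    random.seed(0)
    world = {f"fact_{i}": random.randint(0, 9) for i in range(12)}  # internal to the simulation

    class Agent:
        def __init__(self, name, visible_keys):
            self.name = name
            self.known = {k: world[k] for k in visible_keys}  # its private "sensory environment"

        def tell(self, other):
            other.known.update(self.known)

    keys = list(world)
    agents = [Agent(f"agent_{i}", keys[i::3]) for i in range(3)]  # disjoint views of the world

    # No input crosses the simulation boundary; everything below is internal.
    for a in agents:
        for b in agents:
            if a is not b:
                a.tell(b)

    print(all(agent.known == world for agent in agents))  # True: full knowledge, generated internally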

But there is also external knowledge, which comes from the senses. There is no mystery here: the brain is not analogous to a computer, it is a computer.


Comment thread

I copied and pasted all the comments, so that people can see if they merit deletion, or if they are inappropriate in any way:

Is this a seriously meant analogy? Because I don't get it: 1) What has Turing (his Turing machine model?) to do with networked computers? 2) In which sense can the computational theory of mind be used against philosophers? It seems to me that Hilary Putnam and Jerry Fodor, two major proponents of CTM, are philosophers. 3) I see in which way a theory of mind might be used to explicate the notion of rationalism - but what's it got to do with the empiricism/rationalism divide? – DBK Apr 1 at 23:51

You might be largely on target here, but the answer is too cryptic to tell. Could you try to explain the position succinctly and clearly in addition to (or instead of) giving an analogy? Analogies are like humans in that all are imperfect. – Rex Kerr Apr 2 at 1:26

@DBK: The computational theory of mind was created by Turing and the logical positivists, and since philosophers today universally reject positivism, they are intellectually bankrupt. You can't credit them with this idea. It isn't an analogy--- the brain is a (stochastic) computer and the mind is the software the brain is running. This is what they are. No analogies. There is internal processing, which can deduce stuff about Plato's realm, the realm of software, and there is external input, which gives you data about the particular computations in the world. – Ron Maimon Apr 2 at 4:38

@RexKerr: It isn't an analogy. An analogy is "the brain is like a switchboard", because it isn't a switchboard, it is like a switchboard. But the brain is a computer, and it is universal up to size limitations (the size of the brain computation dwarfs any digital computer by many many orders of magnitude). – Ron Maimon Apr 2 at 4:39

Well, there's a lot of correct stuff in there, with an unfortunate sprinkling of stuff that is either unfounded or just plain wrong (we don't know what flies compute, so it's an inappropriate example for door-locking; the computation is pulled out of a hat; etc.). Even though it's not succinct, I guess that's worth +1. – Rex Kerr Apr 2 at 5:02

@RexKerr: Nothing in there is wrong. Please stop saying nonsense. If you think something is wrong, tell me what it is, and also, please don't upvote if you think it is wrong. While we don't know exactly what flies compute, we know the approximate size of the computation (and we also know approximately what their cues are and how they respond in real time, due to arena experiments conducted by Dickinson). – Ron Maimon Apr 2 at 5:52

@RonMaimon - Phrases like "internal gigabytes" and "software and hardware" for the brain do not acknowledge the vast difference in implementation that requires different reasoning (mind is software?!); we are more like a self-modifying FPGA, if anything. Also, ten orders of magnitude increase in computational power is either an abuse of the words "near future" or oblivious to physical limits. Passing the Turing test is not equivalent to human-level understanding. You were talking about "some prediction as to what will happen", which we don't know flies do; we know they learn associations. – Rex Kerr Apr 2 at 6:25

@Rex Kerr: It does not require different reasoning, because all computation is universal--- any one implementation can simulate another. Ten orders of magnitude is 60 years of Moore's law, but more realistically, harnessing RNA computation. The only reason the brain is so big is because it has the memories encoded in RNA molecules. The data in the brain is discrete bits, despite superficial ideas that physical systems can compute with continuous quantities (they can't, because of noise), and it is manipulated in discrete stochastic events, so that it defines a self-modifying stochastic computation. – Ron Maimon Apr 2 at 7:34

Passing the Turing test is exactly identical to human level understanding, for a logical positivist, and there is no way to pass a Turing test (with a good interrogator) without this. The Turing test itself is the epitome of positivism regarding the mind. – Ron Maimon Apr 2 at 7:36

@RonMaimon - There is no compelling evidence that memory is stored in RNA as opposed to protein (e.g. extracellular matrix). RNA absolutely doesn't do the computation. Also, if you can't show it a picture and ask, "Who is the most important person in the room," and get it right, the computer is missing a very important component of being human. The test doesn't even cover that. – Rex Kerr Apr 2 at 19:25

@RexKerr: RNA absolutely does do the computation, in coordination with proteins, but the data storage, the RAM, is almost 100% RNA. This is not something people other than me believe (at least not since the 1950s), but I have no doubts anymore, after the experimental evidence for RNA function in brain compiled by John Mattick in 2008. Proteins can't store enough data, and their data does not couple easily to electrochemical potentials. The Turing test is philosophical--- it allows you to say "get out a pencil and paper and make a 1000 by 1000 grid, and fill out spots in this pattern", etc. – Ron Maimon Apr 2 at 23:57

Even if you are not allowed to do that, you can query internal imagery--- imagine a box on top of the crate.... ok ... What is the color of the crate? ... is it indoors? Is the box bigger than the crate? What color is the box? etc. If the imagery isn't there in the computer's head, you will smell it within 2 minutes. If you ever did an over-the-internet chat, you would see how easy it is to learn another person's internal world from transmitted written data. – Ron Maimon Apr 2 at 23:59

-1 Your expanded question isn't an improvement. 1) You throw into the mix some known proposals and unsubstantiated claims, then you go on to "translate" them into a private philosophical language of yours which is really hard to follow; finally you draw bold conclusions. I would advise you to cite uncontroversial references and adopt more standard terminology. 2) If your sources are controversial (as every novel scientific results should be), please use them as such. To state controversial claims as 'the only possible/sensible view' doesn't make them less controversial. … – DBK Apr 3 at 0:27

… 3) Please moderate the tone of your answers. Also, if your statement "The following theory is the only reasonable one (meaning: I do not admit other ideas are reasonable, and I never will)" is not just a deliberate provocation, you're probably posting in the wrong Q/A site. 4) You use the term "positivistic" a lot, and with pride. As someone devoting quite some time to investigating actual logical empiricism, I would advise you to consider a rather important "positivist" insight regarding admissible philosophical problems: "solvable in principle" does not mean "already solved". – DBK Apr 3 at 0:27

PS: My aim is not to discourage you from contributing to this site; your answers are indeed valuable! But please keep in mind that this is a community-driven Q/A platform, not a personal blog. As convinced as you might be of your philosophical positions, this does not absolve you from making your case and giving your arguments. Also, your fondness for certain philosophical positions does not make it less probable that other users have other reasonable positions themselves. – DBK Apr 3 at 0:43

@RonMaimon - Maybe you're confused because RNA is localized to synapses to help maintain proteins required for long-term memory. The initial computation is electrochemical (see "spike-timing dependent plasticity" (STDP)), then protein-based but requiring protein synthesis (see "long-term potentiation" (LTP)), and then finally the nucleus gets involved to help stabilize things (see "immediate early genes" (IEGs)). You are aware, are you not, that the certainty that people profess to have about their beliefs has very little correlation with whether they're true? Y'might want to ponder that. – Rex Kerr Apr 3 at 5:32

@RexKerr: It is not that the RNA is localized in the synapse for protein synthesis--- this is just a speculation believed by most researchers. There is protein synthesis, but the memories themselves are stored in RNA or RNA-protein complexes, not in proteins or potentiations. This is a personal idea of mine, which I am certain of because it gives correct back-of-the-envelope estimates of memory capacity, unlike anything else. It explains the sheer mass of RNA in the brain, and the observation that it is mostly noncoding. I will write a paper about it in coming months, if you need a source. – Ron Maimon Apr 3 at 6:59

@DBK: I am participating in this site only because I feel compelled to defend the neglected positivists, and to expose the fraudulent thinking and fraudulent work which dominates the field. Unfortunately, there is no nice way to say "this work is fraudulent, and this position is untenable." You just have to say it, and give a cogent argument. I hope I have done so above. The arguments I gave above are mostly completely unoriginal; they just never seem to make headway in the morass of the philosophy literature. – Ron Maimon Apr 3 at 7:01

@RonMaimon - Since we don't know in what form memories are stored--in particular, how compressed they are compared to raw sensory data--any "back of the envelope" computation is either going to be uncertain by many orders of magnitude or be pure fantasy. For example, in Maple I can type 100!; and get the correct answer. Do you know how long the equivalent Turing machine program is? If you don't know which is used in the brain, how can you possibly make sensible estimates of encoding? Also, most RNA is from introns but is rapidly degraded: ncbi.nlm.nih.gov/pubmed/19103666 – Rex Kerr Apr 3 at 14:29

@RexKerr: The degradation rate is not important if it is copied, and nobody knows what the RNA in the brain is--- it is not coding RNA for sure, but "introns" is usually stuff that is cut out of coding RNA, and it is certainly not that. It's the memories. The paper you link is speculating, like most papers in neuroscience. The back of the envelope calculation is for completing a computational task, like identifying a cup, in 1/10 of a second, when the computation rate in neurons is insufficient, because the spike-rate is 1000 hz at best. – Ron Maimon Apr 3 at 17:56

... one can do better with smaller brains. Fruit-flies have brains of about 100,000 cells, many graded responding, and you can estimate the information flows in such a brain and compare with behavior. If you ignore RNA memory, the estimate is wrong. The back of the envelope just gives crude order of magnitude, but the difference in RAM content is order 10^10, or ten orders of magnitude, and you can't mistake one for the other. Please remember--- almost all neuroscientists are full of crap regarding this--- they missed the most important property of the brain. – Ron Maimon Apr 3 at 17:58

@RonMaimon - The fastest object identification in humans is ~200 ms (and ~400 ms is the typical pathway), of which ~50 ms is eaten up by photoreceptor response--so yes, it's surprisingly fast. But it's massively parallel. You don't need many cycles then--see e.g. Hinton's visual processing stuff. You must be making similarly foolish assumptions about flies; the tasks flies do in labs (olfactory avoidance learning, stripe fixation, place learning, etc..) all in principle only require a few kb of memory. See my Maple vs. Turing machine analogy--you have made a major blunder of this type. – Rex Kerr Apr 3 at 18:33

@RexKerr: I have not made any blunder. Parallelism doesn't help, especially in regards to the fact that if you take away the stimulus, the brain still remembers that it saw a cup. Don't argue that RNA is not the memory agent--- this will be definitively shown/disproved by experiments in coming years, there is absolutely no way that we won't know for sure. I don't wait for the results, I predict that it will be RNA, and I am certain in the prediction, because of processing gaps that I can fill. But you shouldn't take my word for it, google RNA memory to see 1957 work which was ignored. – Ron Maimon Apr 3 at 18:55

@RonMaimon - Parallelization is everything. Google "Steinbuch learnmatrix" for examples of parallel memory architectures able to store order n/log n distinct memories (with encoding capacity per memory on the order of (log n)^2 bits) in the structure of synapses between two sets of n neurons (requires one burst of APs to retrieve). I don't doubt that RNA may be useful in maintenance of synaptic strength, but most of the rest doesn't need to wait for experiments; enough relevant ones have already been done. I'm happy to read about more, once they're available. – Rex Kerr Apr 3 at 19:43

@RexKerr: n/log n distinct memories revived for an instant. n/log n distinct states is not enough; it's not even enough to have n/log n distinct bits (2 to the power n/log n states). You need 2 to the power of about $10^9 n$ states, where n is the number of cells (this is about 10^(10^20) distinguishable states). I am intimately familiar with connectionist models; they are completely wrong in both the order of magnitude of computation and the order of magnitude of response times. Only RNA works, and without RNA, neuroscientists underestimate the number of bits by 10 orders of magnitude. – Ron Maimon Apr 3 at 20:12

@RonMaimon - Why on earth do you need so many states? Those sorts of capacities would allow you to look up what a single photoreceptor did 25 years ago. There's no evidence that we have the capability to do anything like that. Most of us are very good at synthesizing what is important out of masses of irrelevant stuff, and we tend to be bad at synthesizing out one high-apparent-entropy pattern from another. You can elevate computational demands arbitrarily high if you ask for capabilities that we don't actually possess. – Rex Kerr Apr 3 at 20:31

@Rex Kerr: You are wrong--- this number of states is not large. The number of states in your laptop is roughly 10^(10^10), and this is not enough to run the software of 5 years from now. You are getting confused by the logarithmic nature of RAM in bits. You, like everyone else, have very little intuition for the quantity of RAM demanded by a brainlike computation, because the inputs and outputs are in a space with a smaller number of states, you can falsely imagine that the same internal computation can be done with a vastly reduced number of states, on the order of the inputs and outputs. – Ron Maimon Apr 4 at 0:28

@RonMaimon - I know exactly how much memory you need to represent 10^(10^20) states. That's why I chose the example I did. Anyway, given that you've made up what a "brainlike computation" is--you're not using any of the standard ideas, obviously, since you insist that RNA has to do it--I obviously can't refute that it requires oodles of state. I assume you're brute-forcing some computation because you don't believe an acceptable heuristic exists, and then ignoring all the evidence that not only does biology not do it that way, but it has found a suitable heuristic. – Rex Kerr Apr 4 at 0:59