With few exceptions, the AI research community completely overlooks the "rest of the body" when it comes to thinking about intelligent systems and focuses too much on the brain.
The amount of computing going on in the peripheral nervous system is staggering - and when you look at HOW and WHERE this computing works with effectors and sensors you realize how much of intelligence is reliant on those systems being there.
Brains are interesting - but they actually don't do all that much when it comes to the majority of how people interact with the real world, and frankly you don't need that much (physical mass of) brain to be intelligent.
And yet I can go a-chopping anywhere but the brain without cognitive deficit, but even a little scraping of the cortex has a notable effect. The brain is fed and maintained by the body, and as such is vulnerable to the body's failures, but such a connection doesn't exactly break down the difference between mind and body.
This is a perfect example of what I'm talking about. You are framing this in terms of "cognitive deficit", which is specifically a (poorly defined) measure from high-level capability tests focused on evaluating brain function.
Intelligence is not simply the theoretical capability to do higher reasoning in a structured test in my opinion - it's the functional capability to actually prove you can do higher reasoning through demonstration.
You can't just go "a-chopping" anywhere and have the same functional capabilities. If I chop off your thumbs, you are significantly less functionally intelligent in a practical test: if my intelligence test requires you to zip up your pants as a step, you would do much more poorly without your thumbs than someone with 40 fewer IQ points.
Think differently - if you can't prove you're more intelligent by actually DOING something then you aren't more intelligent.
On one hand, I appreciate the way this perspective smooths out a lot of pointless argument in favor of an observable truth. On the other, I worry it falls prey to the streetlight effect. Sure, capacity to do things in an absolute sense is a useful thing to measure, but we already have lots of words for it; 'capacity to do things' is a bit wordy, but 'ability' as in 'disability' isn't too far off.
Similarly, even if we don't like the words we use to describe 'ability', why steal 'intelligence'? We still need a word for the pretty obvious cluster of things relating generally to cognition, information crunching, and problem solving.
There are many axes on which Stephen Hawking and I differ (pre-death, of course). I, for instance, can go jogging, or speak without machine assistance, or zip up my pants. Stephen Hawking could understand advanced mathematics and come up with physics breakthroughs. Am I smarter than Stephen Hawking? Are we even tied?
It seems evident that there are at least two types of ability that are at least somewhat decoupled from each other. Some tasks involve both, some tasks involve one, and some tasks involve the other. Certainly we see correlations between them, and it would be unwise to completely discount one when considering the other, but grouping them under one heading needlessly confuses the issue, especially when that heading is generally understood to refer to one of the types of ability specifically.
It seems clear that the word we use for the axis on which Stephen Hawking had me beat is 'intelligence'. Now, that doesn't mean the definition is set in stone whatsoever. If you want to use the same sequence of letters to describe a kind of fruit, you can do so. But regardless of whether you give it a name or not, that axis, that real empirical grouping of ability, still exists. And while that grouping exists, and while a word is commonly used to refer to it, there is little reason to use that word to refer to something else, except to transfer some of the associations with the grouping to that other thing. No matter what, it confuses the issue.
Regardless, how we choose to define intelligence actually has little to do with the relevance of the body to the development of AI. Computers already have lots of ways to interact with the world, from a myriad of sensors to motors to screens. There is no problem with the statement "intelligence is highly dependent on the body", except the potential confusion that I noted above. There is, however, a problem with the statement "cognition is highly dependent on the body". The problem with that statement is that it is demonstrably false. Most of the body doesn't do any sort of informational computation except the simple control systems needed to handle the local area. Those control systems are fascinating (every joint has a little PID loop with incredibly clever ways of essentially integrating and deriving!), but hardly beyond the understanding or evaluation of AI researchers. So we shouldn't expect some secret of better AI in the body. We shouldn't expect it in the brain either, but certainly not the body.
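(If anyone wants to see how little machinery such a loop is: here's a toy PID controller in Python, gains entirely made up, just to show what the "integrating and deriving" amounts to.)

    # Toy PID controller - illustrative only, the gains are made up.
    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, setpoint, measurement, dt):
            error = setpoint - measurement
            self.integral += error * dt                   # "integrating"
            derivative = (error - self.prev_error) / dt   # "deriving"
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # e.g. driving a joint angle toward a target:
    pid = PID(kp=1.2, ki=0.1, kd=0.05)
    command = pid.step(setpoint=90.0, measurement=85.0, dt=0.01)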
edit: Having actually commented I now see how pointlessly long and confusing my comment is. Sorry about that, I'm having trouble actually translating my thoughts into words here.
>And yet I can go a-chopping anywhere but the brain without cognitive deficit, but even a little scraping of the cortex has a notable effect.
Is this not obviously false? Cut out someone's heart, lungs, stomach, liver, kidneys, etc. and they will surely have a "cognitive deficit" in the form of death. (Assuming no transplant is used.)
A-chopping anywhere but the brain without nontrivial cognitive deficit would have perhaps been better, but a little less pithy. The next sentence admits as much.
I don't understand the meaning of this response. You seem to be suggesting that all memories are in the limbs. I don't think that's what was being suggested here.
Trying to stay within what I think you are saying: don't forget that if you amputate some insect's legs, they will keep flicking around for a while, even though there is no connection to the brain. Surely the insect doesn't suddenly forget how to move its now-missing leg. But the leg does seem to have its own capacity to remember how to jump or whatever, even without the brain. Until it runs out of energy, of course.
You can cite your body. Before you get offended, really, just examine it as a system, and try to explain how you can have a conscious experience without any sensory input.
Even with a lame comparison to computers: a machine also needs a lot of stuff besides the CPU to put that CPU to work.
Also, if the mind is fully integrated with the body, how do you explain seemingly inconsistent states that seem to work just fine? E.g., people with ALS, or quadriplegic, or severely injured or mutilated. If the mind can work perfectly without a perfectly abled body, where's this mind-body connection? Also, where's such a connection in a comatose brain with a completely functional body? Maybe I misunderstood what this mind-body connection is supposed to be.
Sensory deprivation is interesting, but you already have a conscious experience when you enter the tank. It even reinforces that your mind is tied to your body.
I think the second sentence is opinion, not something that could be objectively tested. Last sentence is mostly true. Sick people are much less happy than when they are healthy.
Sure but think about what happened. 50+ years ago, researchers figured out some aspects of a neuron, simulated a network of grossly simplified neurons, and found out they could do useful things. Much of modern NN stuff is just following that trajectory.
I don't think many people seriously believe that artificial neurons are in any way comparable to a real neuron, much less believe that an ANN is comparable to what goes on in the human body. Maybe in some very limited cases like the visual cortex, but even then I think most people would admit that it's a poor model valid only to a 1st approximation.
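To make "grossly simplified" concrete, here's what the standard artificial neuron actually is - a toy sketch, nothing more:

    import numpy as np

    def artificial_neuron(x, w, b):
        # The entire "neuron": a weighted sum of inputs, plus a bias,
        # pushed through a nonlinearity. No dendritic computation,
        # no spike timing, no neurotransmitters.
        return np.tanh(np.dot(w, x) + b)

    x = np.array([0.5, -1.0, 2.0])   # inputs
    w = np.array([0.1, 0.4, -0.2])   # learned weights
    print(artificial_neuron(x, w, b=0.05))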
That said, there is still merit in pushing the current approach further while other researchers continue to try to understand how biology implements intelligence and consciousness.
I don't know a single luminary in AI who seriously considers the whole-of-body approach in their work.
Pretty much everyone talks/reasons specifically about the brain alone, and never about how it works holistically with sensors and effectors.
For example, in computer vision, nearly all of the biggest work assumes a 2D (RGB or grayscale) matrix as the starting point. It never makes any assumption about how that image is generated. Only in the LIDAR world is the sensor really considered, and even there everyone is trying to jam LIDAR returns into a 2D matrix.
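To put it concretely, here's roughly where every mainstream vision pipeline starts (a sketch; the shapes are just common conventions):

    import numpy as np

    # The near-universal starting point: an H x W x 3 array of intensities.
    # How photons became these numbers (optics, exposure, demosaicing,
    # gamma correction...) is invisible to the model.
    image = np.zeros((224, 224, 3), dtype=np.float32)

    # Even LIDAR often gets flattened into the same shape: a "range image"
    # with one depth value per (row, column), as if it were just another camera.
    lidar_range_image = np.zeros((64, 1024), dtype=np.float32)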
For most AI tasks I can think of - image labeling, NLP - doesn't the majority of that happen in the brain? Do we process language in our peripheral nervous system?
But it's also pretty obviously just a convolution, so not exactly a big unknown. It's super neat, and it makes sense that it would be in the eye, but at the end of the day the interesting processing is done in the brain.
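("Just a convolution" here means something like a center-surround receptive field, which is roughly a difference of Gaussians. A toy sketch, assuming SciPy:)

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def center_surround(image, sigma_center=1.0, sigma_surround=3.0):
        # Rough model of a retinal ganglion cell's receptive field: a narrow
        # excitatory center minus a wide inhibitory surround (difference of
        # Gaussians), applied across the whole image as a convolution.
        return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

    image = np.random.rand(128, 128)
    response = center_surround(image)  # responds to local contrast, not uniform brightness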
Of course it does - physical phenomena must have a biological transducer to interact with [1]
The structure of these transducers is critically important: they gate/filter the types of interaction with the physical world, and that ultimately bounds what humans can reason about. They are doing transformations and "pre-processing", if you like, to compress real-world signal into something that can be interpreted by the other systems in the body.
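The cochlea is the classic example - sound is decomposed into a limited set of frequency bands before anything downstream ever sees it. A crude, made-up analogy in code:

    import numpy as np

    def ear_like_frontend(signal, sample_rate, n_bands=32, fmin=20.0, fmax=20000.0):
        # Crude analogy to cochlear transduction: the raw pressure wave is
        # compressed into energies in a fixed set of frequency bands.
        # Everything outside [fmin, fmax] is never transduced at all - the
        # rest of the system can't reason about what the sensor throws away.
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
        edges = np.geomspace(fmin, fmax, n_bands + 1)  # roughly log-spaced, like the ear
        return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                         for lo, hi in zip(edges[:-1], edges[1:])])

    sr = 44100
    t = np.arange(sr) / sr
    bands = ear_like_frontend(np.sin(2 * np.pi * 440 * t), sr)  # energy concentrates near 440 Hz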
How do you suggest that AI research should incorporate that? Most modern AI research isn't even brain-inspired anymore; the origins of ANNs are brain-inspired, but most SOTA approaches don't really seem to be.
Start from first principles in the physical world. Do more work on what kind of processing we can do at the edge of the sensor and work our way up from there.
For example, build a system that learns to have reflexes. That is, one that has processing at or near a sensor and can work collaboratively with other sensors to learn (not be explicitly programmed) to take action based on input, without a central processing system.
I would argue that if you can build a complex enough physical reflex learning system, then you have enough of the building blocks for a human level system.
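As a toy illustration of the kind of thing I mean (the learning rule here is made up, purely for illustration):

    import random

    class ReflexNode:
        # A sensor wired straight to an actuator, with a purely local learning
        # rule. No central controller: the node adjusts its own gain using only
        # feedback available at the sensor itself.
        def __init__(self):
            self.gain = 0.1  # how strongly sensor input drives the actuator

        def act(self, stimulus):
            return self.gain * stimulus  # e.g. withdrawal speed

        def learn(self, stimulus, harm, lr=0.01):
            # Made-up correlation rule: if harm co-occurs with stimulus,
            # strengthen the reflex; otherwise let it slowly decay.
            self.gain += lr * (stimulus * harm - 0.1 * self.gain)

    node = ReflexNode()
    for _ in range(1000):
        stimulus = random.random()
        harm = 1.0 if stimulus > 0.7 else 0.0  # strong stimuli cause damage
        node.act(stimulus)
        node.learn(stimulus, harm)
    print(node.gain)  # the reflex strengthened with no central processing anywhere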