Mental phenomena don’t map into the brain as expected (quantamagazine.org)
174 points by theafh on Aug 24, 2021 | 95 comments


If you looked at how a computer's memory is organized, a rough, non-programmer idea of how it maps wouldn't be correct. Big visible things like windows or the start menu aren't stored in a single place, etc.

It seems remarkable that people don't want to imagine there is the rough equivalent of many, many software layers between the level of the neuron and the level of "reason", "emotions", "self", "consciousness", and so on. I think this comes about because the "sense of self" is a process that reflexively takes you as primary and indivisible. A framework that views the self as generated by lower-level processes violates this.


Or, you know, the analogy of brain as a computer with computerlike memory management simply might not be 100% useful all the time.

> people don't want to imagine there is the rough equivalent of many, many software layers between the level of the neuron and the level of "reason", "emotions", "self", "consciousness", and so on

It's not that people don't want to imagine, it's that you don't need to imagine that. You don't need to force how the brain functions into the model of a modern computer, though doing so might simplify things and be popular. It's often a helpful analogy/metaphor, and for people who know a lot about computers it often allows them to prognosticate and theorize using elaborate analogies built on top of more analogies, but it's equally helpful to not insist that the thing must be the analogy.

For instance, when I talk to a good friend who is a neuroscientist and active in the development of prosthetics with neural interfaces, he is far more skeptical (though healthily so) of the computational model than any person I speak to in software engineering or the tech industry more broadly. That's likely because the analogy and model can be built upon with further computer analogies, which people who work with computers love. If we were using an automotive model to describe how the brain functions, I'm sure we would have many automotive mechanics theorizing on the nature of cognition too.


> It's not that people don't want to imagine, it's that you don't need to imagine that.

It seems like you're hijacking the discussion for your pet issue. My point isn't about how good the computer analogy is in general; it really isn't. You should consider that maps of brain function began quite a while ago, before the start of the 20th century (though accelerated by WWI). There, the analogy was the machine, and the mapping of the brain followed functional units in machines. And if you consider the point I make (which pretty much echoes the article), it's really a counter-example: the multi-layer organization of software shows that a system doesn't necessarily have to follow naive physical functional units, especially ones we naively perceive. That's it; there's nothing here forcing the computer analogy.


I agree with joe!

Neuroscientists love to reify a chunk of brain as responsible for function X. They have done this for 160 years. Only Karl Lashley’s work called “localization” into question, but his work was swept aside in the Mountcastle-Hubel-Wiesel era of big neuroscience.

Now we do toy experiments using optogenetics of a single inbred strain of mouse and delude ourselves into thinking that we are achieving understanding of a highly complex system.

I’ve worked in this field for 40 years and we are not even asking the right questions.

It is a pity neuroscientists do not know more about analog computing. Can a neuroscientist understand an op amp? Probably not.

To share the harsh light—can a CS expert in AI understand how to get to general AI? Probably not unless, like D. Hassabis, you have a solid background in neuroscience.


On one level that’s a reasonable take; on the other, simply having enough data is a prerequisite to coming up with the right questions. Astronomers collected literally centuries of data to build up ever more complex epicycle models before ellipses became a clearly better fit to the data.

IMO, neuroscience simply needs that foundational data and current theory is largely pointless.


Current theory isn't pointless. Current theory straight up doesn't exist.

There is no (non-crackpot) theory of mind yet. Most high-quality research on the mind (from a non-computer-science angle) comes from where it breaks down (schizophrenia, autism), since that's where the money is.

Research into AI and neural nets and stuff may change that, but as far as I'm aware, an actual model of how thoughts exist doesn't really exist.


Modern neuroscience is collectively a theory of the mind. It just isn’t a complete bottom-up model.


The hard problem of consciousness is an impassable obstacle given the set of tools that we have and, arguably, ever can have. If we grant that there is some level of physical description (chemical, atomic, sub-atomic, whichever) at which consciousness best adheres, what do we use to make the connection? Symbolic equations and theorems don't cut it, and that's pretty much all we've got. Physical systems are fully described by the collection of their measurable properties (positions, momenta, charges, etc)--there's no way to connect things going on "outside" with subjective experience.

What would an operator which turns a physical state into conscious experience spit out?

C|state> = ????

Can't write down e.g. "The perception of a red apple on a table". Red? Table? Just symbols. Consciousness and word-symbols are just too far apart.


> The hard problem of consciousness is an impassable obstacle given the set of tools that we have and, arguably, ever can have

I've come to the same conclusion, after reading and thinking about it for 20 years. I no longer search for answers in books, papers, or threads. It's worse than not being able to find any new insights. I never found any insight that goes beyond describing the problem, or defending the existence of the problem. I no longer expect to see any progress on it in my lifetime.


It is more complex than just a one-word description, but that’s an argument from absurdity.

Let’s suppose we build a teleport device which can make identical copies. As in: you’re standing on pad X, then someone else is standing on pad Y who looks identical to you, and you both respond identically.

Now, whatever that machine reads as your mind state is your actual mind state. We don’t need to make an actual copy to do exactly what you said was impossible.


To clarify, you can’t really capture an image with words but a camera can capture an image as symbols. It’s just that a person looking at those symbols can’t see the image. The same is presumably true of a mind.


Well, the brain is such a rich source of data that unless you know what sort of data to collect, it seems like you'd be at a loss to understand things.

Perhaps what's needed is data-finding tools along with theory.




Regarding whether a neuroscientist could understand an op amp: https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...


I was agreeing with you. And we are arguing from the same side. My comment was directed more at how the meta-opinion on Hacker News generally struggles to step outside of the computational model, or into other possible theories of mind (or memory) developed by philosophers like Wittgenstein, Dennett, or Hacker; the result is usually forcing the computer analogy in ways that assume a kind of blunt physicalism, like the discrete parts of a computer, and often nonsense, as you described. There is often disbelief expressed at the idea that you could use anything but software or computer analogies to describe how the brain, or the mind, functions. The assumption is so strong that people feel forced into the analogy simply because they also understand computers.


But brains do, in fact, compute things!

There's no reason to think that they compute things the same way silicon "computers" do -- that they're arranged in a von Neumann architecture, or something. But it is true that they perform computation somehow, and therefore are subject to a similar set of constraints, and share similar goals (efficiency, persistence, etc.) with modern silicon hardware. (Considering the differences is also insightful; e.g., our wetware is a much noisier environment than silicon, and likely requires a different approach to error correction.)

This is a useful perspective, for reasons that GP pointed out. We know of many different physical arrangements that are conducive to computing: e.g., Turing machines, von Neumann machines, and RNNs are all Turing complete (in principle), and all look very different. So we should question our assumptions about how the brain is organized. Why should we think that, say, sadness "lives" in some physical location in the brain? Does your email "live" somewhere on your computer? (In some sense yes, in some sense no...)

Is it not equally plausible that the brain implements a very large RNN? And if it does, should we be surprised that if we try to physically locate, say, "sadness", we might be grasping at straws? In the absence of experimental evidence (and even in the presence of it, if flawed assumptions are driving the sorts of experiments we conduct), both seem plausible to me.

Which is just a long-winded way to say, I think there is some value in questioning these assumptions. (Not blindly swallowing others, just pushing on why we have the ones we do.)
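
To make the RNN point concrete, here's a minimal sketch (Python; the sizes and weights are arbitrary and untrained, an illustration rather than a model of any real circuit). Every update mixes the entire hidden state, so whatever the network "remembers" about an input is smeared across all units rather than parked at an address:

  import numpy as np

  rng = np.random.default_rng(0)
  n = 64  # number of hidden units (arbitrary)

  # Fixed random weights: a sketch of the architecture, not a trained model.
  W_h = rng.normal(scale=0.3, size=(n, n))  # hidden -> hidden connections
  w_x = rng.normal(scale=0.3, size=n)       # input  -> hidden connections

  def step(h, x):
      # One Elman-style update: the new state mixes ALL old units with the input.
      return np.tanh(W_h @ h + w_x * x)

  h = np.zeros(n)
  h = step(h, 1.0)      # present a stimulus once
  for _ in range(5):    # then let the network run with no further input
      h = step(h, 0.0)

  # The stimulus trace has no address; it is smeared over the whole vector:
  print(np.sum(np.abs(h) > 1e-6), "of", n, "units carry part of the trace")

Ask "which unit stores the stimulus?" of a system like this and the question itself is malformed, which is the worry about asking where sadness lives.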


> I think this comes about because the "sense of self" is a process that reflexively takes you as primary and indivisible

Only if you reduce the sense of self to ego function, and a western one at that.

Self is a much more complicated process. And good models of it already acknowledge functional and content heterogeneity, e.g., subpersonalities, embodiment, aspirations, narratives, personhood, boundary problems, etc. Self exists because it solves these problems, and doing so was adaptive.

> It seems remarkable that people don't want to imagine there is the rough equivalent of many, many software layers between the level of the neuron and the level of "reason", "emotions", "self", "consciousness", and so on

This is a false dichotomy. We already know we have a grab bag of specialized accelerator “hardware”, but also soft/firm-ware layers that glue things together. That’s the whole point of the article: you can’t encode autobiographical memories without the hippocampus, but that’s not the only thing the hippocampus does, nor is it the only thing required to encode those memories.

And people have already upped the ante on this; Cartesian reductionism of trapping “computation” upstairs is also wrong. Cognition requires an embodied and embedded agent. It is not even a mere “brain-thing”.


This reminds me of a nice paper (https://www.biorxiv.org/content/10.1101/055624v2.full.pdf), "Could a Neuroscientist Understand a Microprocessor?", exploring the (nonsensical) results we would get if we applied currently used neuroscience approaches to analyze an Atari console running a simple game.


Similarly, there may be a layer below neurons -- microtubules. Something that many neuroscientists and AI researchers don't want to consider seriously, primarily because we're very limited in our ability to examine, interact with, and model that layer.

It also throws off by orders of magnitude the effort that would be required to fully model an organism's neural behavior.


There may be that layer, but is there evidence for that layer? And I'm not talking about a certain physicist's hypothesis about microtubules and free will.


There is evidence that each neuron performs calculations equivalent to a small neural net: https://www.sciencedirect.com/science/article/abs/pii/S08966....


Sir Roger Penrose was investigating non-computability versus computability in brain behavior. I am not sure he even mentions free will.

This highlights some of the unfortunate tensions in this debate, where people may feel obliged to hold to a literal materialism along with a computational theory of mind, and vigorously shoot down anything that threatens to unsettle that somewhat dogmatic position, for fear of strengthening the position of fundamentalist religious believers. In short, it is primarily a political sentiment, not a scientific one.

We should be able to follow where evidence and rational argument leads without being held hostage to such concerns which are strictly irrelevant to the debate.


You are exactly right. I’m a full-time geek neurogeneticist. I find most neuroscience models of brain function too neat and simple. My pinned tweet at @robwilliamsiii is one idea that hackers and CS types may enjoy. In brief—where is the clock? Where are the many levels of the stack that you mention?


>Big visible things like windows or the start menu aren't put in a single place, etc.

Sure they are - in fact, they're rendered to linear memory blocks of pixels.


OK, let's strain this analogy even more!

In memory, sure, the image is in a block (although not really, it's composited (not composted) then shipped over a wire).

But the functionality is basically everywhere. Similar in minds. We can see images "trip" localized circuits when they are recognized, but the comprehension and processing of the scene is muuuuuch more complicated.

Similarly, where is the little 8bit block that handles a click on the start button, and why God Almighty is it so _far_ from the start button image!
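
To spell the analogy out, here's a toy sketch (Python; every name and size is invented for illustration). The button's pixels end up contiguous in a linear framebuffer, but the code that reacts to a click on those pixels lives somewhere else entirely, reached only through a hit test:

  import numpy as np

  # A linear framebuffer: H x W grayscale pixels in one contiguous block.
  H, W = 48, 64
  framebuffer = np.zeros((H, W), dtype=np.uint8)

  # "Windows" are drawn off-screen, then composited into the framebuffer.
  button = np.full((8, 20), 200, dtype=np.uint8)  # the start-button image
  bx, by = 0, 40                                  # its on-screen position
  framebuffer[by:by + 8, bx:bx + 20] = button     # the compositing step

  # The button's behavior is nowhere near its pixels: it's a function object
  # in a different region of memory, reached through a hit test.
  def on_start_click():
      print("open start menu")

  hit_regions = [((bx, by, 20, 8), on_start_click)]

  def click(x, y):
      for (rx, ry, rw, rh), handler in hit_regions:
          if rx <= x < rx + rw and ry <= y < ry + rh:
              handler()

  click(5, 43)  # lands on the button -> "open start menu"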


> it's composted

(This is an amusing idea, but I think "composited" was the word.)


Ha! Thank you


An interesting paper is "Could a neuroscientist understand a microprocessor". The idea is to apply neuroscience-style analysis to the 6502 processor running programs such as Space Invaders and see if you can find out anything interesting about the processor. They tried a bunch of different approaches which gave a bunch of data but essentially nothing useful about the processor's structure or organization. The point is that if these techniques are useless for figuring out something simple and structured like the 6502, they're unlikely to give useful information about the brain.

https://journals.plos.org/ploscompbiol/article?id=10.1371/jo... (Click "download PDF", it's open access.)


Please note that merely seeing some place in the brain activate in a functional MRI task does NOT necessarily mean that that location is either necessary, sufficient or even involved in representing information relevant to that task. Functional MRI amplifies small global signals related to arousal, and if arousal changes during a task then these arousal-related signals can propagate over much of the brain. And even something as simple as an eye movement can be correlated with global changes in arousal. (A similar problem occurs with attention.) Unfortunately many of the most common modeling and analysis methods used in fMRI have no way to distinguish these rather uninteresting arousal-related changes from those that are actually informative about task-specific processes. The bottom line is that whenever you read about any fMRI result, you should ask yourself whether that could be a mere artifact of changes in arousal (or attention), and if so you should find out what was done to address this potential confound.


Sorry I should have clarified this comment was addressed at the claims in the article that most of the brain is activated even for trivial tasks...


The article begins by analogy to cartography, in which the First Law of Geography[1], "everything is related to everything else, but near things are more related than distant things," applies to the space defined by the surface of the Earth. The brain is smaller than a pixel in a typical satellite photo but contains the information to build the satellite, put it into orbit, and communicate with it via radio. Why would anyone believe that analogy should hold? Having read Damasio[2] ages ago, this appears to be looking for keys under a streetlamp simply because the light is better.

[1] https://en.m.wikipedia.org/wiki/Tobler%27s_first_law_of_geog...

[2] https://www.goodreads.com/en/book/show/125777.The_Feeling_of...


The experimental reason is that damage to most areas of the brain causes consistent and remarkably narrow impairment, e.g., aphasia.

The reason you should predict localization just from first principles is latency. The bulk of your brain is wiring (white matter) networking the cortex to itself, but even so, signals can make many round trips within a small patch of cortex in the time it takes one signal to cross the brain. Neurons are large and slow, and the more distant the neighbors they have to interact with, the larger they have to be.
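
Here's a back-of-the-envelope version of the latency argument in Python. All the numbers are assumed round figures for illustration; real conduction speeds span roughly 1-100 m/s depending on fiber caliber and myelination:

  # Rough comparison: a local cortical hop vs. a cross-brain hop.
  # Every figure below is an assumed ballpark, not a measurement.
  local_distance_m = 0.001   # ~1 mm, within a cortical patch
  cross_distance_m = 0.15    # ~15 cm, across the brain
  local_speed_mps = 1.0      # thin, slow local axons
  cross_speed_mps = 10.0     # faster myelinated long-range fibers
  synapse_delay_s = 0.0005   # ~0.5 ms per synaptic relay

  local_hop = local_distance_m / local_speed_mps + synapse_delay_s
  cross_hop = cross_distance_m / cross_speed_mps + synapse_delay_s

  print(f"local hop: {local_hop * 1e3:.1f} ms")  # ~1.5 ms
  print(f"cross hop: {cross_hop * 1e3:.1f} ms")  # ~15.5 ms
  print(f"local hops per cross hop: {cross_hop / local_hop:.0f}")  # ~10

Under those assumed numbers, a small patch gets roughly ten local exchanges done in the time one signal crosses the brain, which is the pressure toward keeping tightly coupled computation physically close.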

Does that mean that the regions of the brain kindly sort themselves into exactly the high-level categories we want them to be in? Absolutely not! Concepts like "attention" or "memory" might not even make sense when looking at how the brain is actually implemented. But regions of the brain probably perform specific tasks.


Probably need to consider I/O too: whatever is doing the front-end processing of vision is probably not too far away from the end of the optic nerves, otherwise you'd have a routing nightmare.


Actually that's sorta not the case, and it is indeed a routing nightmare. Most vision processing is actually done in the far rear, with the signals traveling all the way from the front to the back. However, there are two big reasons why this isn't as bad as you'd think:

- The data from the eyes is already lightly processed. The way the neurons hook into the rods and cones is basically running a few convolutions all at once, and the result is what's sent back (see the sketch after this list)

- Most sensory info gets centralized in the LGN before it's routed to the various parts of the brain, so it's about the same hop it would have to go through anyway
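
To put the "few convolutions" remark in code: a classic first-order model of a retinal ganglion cell is a center-surround (difference-of-Gaussians) filter convolved over the image. A minimal sketch, with invented sizes and parameters:

  import numpy as np
  from scipy.signal import convolve2d

  def gaussian_kernel(size, sigma):
      ax = np.arange(size) - size // 2
      xx, yy = np.meshgrid(ax, ax)
      k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
      return k / k.sum()

  # Difference of Gaussians: a narrow excitatory center minus a wide
  # inhibitory surround -- a rough model of a ganglion cell's receptive field.
  dog = gaussian_kernel(9, 1.0) - gaussian_kernel(9, 2.5)

  image = np.random.rand(32, 32)  # stand-in for the retinal image
  retinal_output = convolve2d(image, dog, mode="same")
  # Uniform regions roughly cancel; edges and spots survive. Something like
  # this "lightly processed" signal is what travels down the optic nerve.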

We actually still have a lot of questions about what exactly is going on in the middle bit. It seems like structures deep in the brain have duplicate versions of all the sense-processing and motor-controlling centers. What do they do? Well, reflexes probably, and maybe somehow motivation? Reward is tucked in there somewhere too, or so we think.


My favorite writings on the topic of consciousness are by the philosopher Daniel Dennett, with books like "Consciousness Explained". He provides such fun thought experiments, and brings together so much science - it's a treat!


Never read that, I will look into it. One of the books I love is "The Mind's I" by Dennett and Hofstadter. Easily one of the best books I have ever read.


We've known this for over thirty years, first based on lesion studies and then neuroimaging. For instance, we saw that language didn’t map to one region but depended on how language was being used, and that word meanings rely on sensory vs. more abstract brain regions (e.g., cannon vs. cannoli vs. carve).

Think of features with distributed representations, patterns of connectivity re-using bits in endless ways. Another oversimplification is that neurons only represent 1s and 0s. The true computational power is in every state in between and in the strength of connections between processing units.
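
A toy sketch of that last point (Python, with made-up numbers): even a single artificial unit carries information in a graded output and in its connection strengths, not in an on/off state.

  import numpy as np

  def unit(inputs, weights, bias):
      # The output is a graded value in (0, 1), not a binary 1 or 0, and the
      # weights (connection strengths) do most of the computing.
      return 1.0 / (1.0 + np.exp(-(np.dot(inputs, weights) + bias)))

  inputs = np.array([0.2, 0.9, 0.4])
  weights = np.array([1.5, -0.7, 2.0])  # arbitrary connection strengths

  print(unit(inputs, weights, bias=-0.5))      # graded output, ~0.49
  print(unit(inputs, weights * 2, bias=-0.5))  # same inputs, stronger connections -> ~0.61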


The original title might have been OK, since the gist of the article is that what we call "mental phenomena" may not map at all. Just like somebody can't find "beauty" in an ANN's parameters.


If anyone is interested in the "deep" topics such as consciousness, materialism, quantum physics, religion, etc. I highly recommend the "Closer to Truth" series.

https://www.youtube.com/channel/UCl9StMQ79LtEvlrskzjoYbQ

I recently stumbled across this and it has some fascinating episodes where he interviews many top people in each field. Michio Kaku, Roger Penrose, Paul Davies, religious leaders, etc etc.


I suggest you also watch some videos by Joscha Bach.


Are the neurons outside of the head considered to be part of the brain?


Generally not. Which is why the nervous system of cephalopods seems so alien — their processing is performed in a more distributed fashion.

Sometimes the brain and spinal column are considered together in our case, as some of our survival reactions are governed by neurons in our spine, but that does not really match how cephalopods distribute their processing (something that is presumably necessary, or at least helpful/efficient, for controlling the flexibility of their limbs).


The top level partonomy for the nervous system is usually as follows.

  nervous system
  -> central nervous system
     -> brain
     -> spinal cord
  -> peripheral nervous system


Is "brain" equated to "nervous system"?


Tangential to article topic:

When I first read "The Man Who Mistook His Wife for a Hat" it gave me such a profound understanding of the unpredictability of not only brain injury, but mental illness and the fragility of human nature, that I thoroughly believe it should be required reading for late primary schoolers or early high-schoolers to help guide them through their life's journey of interacting with the full spectrum of humanity they may encounter.

A handbook for empathy.


Mechanistic solutions are never meant to solve high-level problems, but they very often make the fundamental bricks that more advanced solutions will rely upon at a later stage.


True. But mechanistic “solutions” can also be used as a crutch that allows us to avoid asking the right hard questions. I started my career as an electrophysiologist studying a big chunk of the thalamus referred to as a visual information “relay” nucleus. In 40 years I have never seen anyone question this “relay” function seriously. I am reminded of the phrase from The Princess Bride: “I don’t think that word means what you think it means”. “Relay” is a crutch to mask ignorance. No collection of complex circuitry—one million neurons in this case—is just a relay. It is also an important processor, but probably in a domain invisible to a neuroscientist recording from one or even 1000 neurons simultaneously.

My vote is that the “relay” is actually a timebase corrector for noisy retinal input mosaics that have their own quirky dynamics and temporal noise.


Mindboggling how brains have a blind spot for the thought of being an idea.

Just as mindboggling as the mind's reluctance to think of itself as a brain.


> Just as mindboggling as the mind's reluctance to think of itself as a brain.

A bit of an understatement in my experience...very often, minds get rather emotionally agitated when encountering the idea that the reality they perceive is a representation of reality, implemented by a brain (as opposed to being reality itself).


>implemented by a brain

Which is an idea. Which is a point in the mirror that it avoids facing. Same as its mirror can't face the opposite.

Wherever the difference between them may be, if there even is one.

;-)


> Which is an idea.

An idea which seems to be overwhelmingly axiomatic.

> Which is a point in the mirror that it avoids facing.

Are you thinking of the phenomenon where, when people look directly into their own eyes in a mirror, or directly into the eyes of another human being, something "weird" seems to happen?


Ah, no, sorry.

What I meant was that what we call the mind has a problem with focusing on itself and consciousness as the objects of its study, questioning itself in the process.

I used the metaphor of the double mirror to make my point about this problem.

A natural-historical analytic tool used to gather food, hunt prey, find shelter, mating partners, and allies.

Not only in a biological environment, but also in a social one. Some argue the social one is even the more important.

In short, to survive primarily by modeling its environment in relation to its own goals.

Very late, this tool of the body and its needs presents itself in these models as a problem to be explored.

The tool processes itself.

This is revolutionary because it gives it incredible new possibilities and freedom. And, it is as dangerous as any revolution. Sawing off the branch on which it sits.

That is one mirror; the other is what is called the reality outside of the mind. The problem of what it is that the mind tries to represent in its models.

Personally, I decided for myself against a dualistic structure of reality and a fundamental difference between mind and matter.

So the origin of that dualism is in the mind. A perspective which leaves two problems.

First, I still do not know what reality outside the model is, and second, what does it mean that, with the development of the mind, the universe became dualistic, because dualism has ever since been part of it.

Infinite regress, infinite mirror.


Agreed (assuming I understand you)...although, I've never really properly understood what people mean by dualism.


Not being a native English speaker or writer, I can't judge whether I made myself understood with my ramblings, har har.

HN is something of an experimental or practice field that I've been using lately to express ideas in a foreign language.

Sometimes thoughts thrive on happy little accidents in communication.

As I understand it, dualism refers to the Manichean notion (sic) that mind and matter are two categorically different things that somehow interact.

With mind being the superior or preferable of the two phenomena.


Very D. Hofstadter ;-)


Thank god, I'm not him.

I couldn't stand my haircut in the mirror.


My brain is interested in how it thinks, so this made me click.


Much of current cognitive science is neo-phrenology: geography = function. This article shows that's too simple.


The article shows that it's harder than you'd think to draw a world map if you don't know the names of the countries. Localization to, at the very minimum, specific circuits isn't really in doubt.


Which basically means the brain doesn’t think the way the brain thinks the brain thinks.

Aside from the horrible phrasing, it's quite funny to think about, leading us to:

My brain thinks it is funny to think about how the brain doesn’t think the way the brain thinks the brain thinks.

In the same way that atoms don’t exactly work the way the atoms think the atoms work since, as Niels Bohr said, "A physicist is just an atom's way of looking at itself."

Reminds me of some comment I heard somewhere, about how the brain and the body don’t come with an instruction manual, and how it makes sense from an evolutionary perspective: we so far apparently haven’t needed to know how they work to use them well enough to survive.


I was annoyed by the phrasing of the submission, which could have been "The way the brain thinks is counter-intuitive" (don't assume I don't know what you are going to tell me, or, conversely, that I spent even one second thinking about how the brain thinks), but we probably would not have had the pleasure of reading your comment with that title.


I'm glad I'm not the only one annoyed by the "it doesn't work the way _you_ think" phraseology. How dare they assume how I think? :)

It's a pet peeve, I guess.


The title of this post somewhat reminds me of a Jira task title.


You are speaking about the editorialized title "Mental Phenomena Don’t Map Into the Brain as Expected", right? Well, yes, I would have titled a bug report exactly this way now that you say it.

For reference, the original title is "The Brain Doesn’t Think the Way You Think It Does", this is the one I reacted to.


For me it is more mundane: there were times when people thought of human health in terms of 4 elements. [Much] later they discovered germ theory, which uses more useful concepts.

Using "memory", "perception" to describe how the brain works is like using 4 elements to describe how our bodies function.


Except the four element theory was rendered irrelevant by the germ theory. I doubt that understanding the brain will render memory and perception irrelevant.

Robots literally have memory and perception (we know because we built them), so clearly these are real phenomena that exist in the real world.

It seems to me unlikely that we are totally mistaken in conceptualizing ourselves in terms of these known real phenomena.


My intuition is that it isn't totally mistaken, more so that it is a massive simplification.

Whereas we describe the brain as having "memory", as if that were something statically stored and simply "retrieved" (somehow), it seems to me that this overlooks that the device doing this is also running an entire virtual model of simulated reality. It can not only remember things, but replay them, change variables and play them again, play them in reverse, manufacture completely fictional scenarios, run (as its default) a custom modified variation of "actual" reality that it finds more pleasing (this one has gotten lots of news coverage in the last few years), read the contents of realities running inside other minds (tens of millions if it so chooses), see into the future of "actual" reality, all sorts of different things.

"Brains have memory" is a bit of an understatement of what they actually do.


> "Brains have memory" is a bit of an understatement of what they actually do.

...but, I don't think anyone thinks that "brains have memory" is a complete statement of what they actually do. It's not a complete statement of what computers do either.


> ...but, I don't think anyone thinks that "brains have memory" is a complete statement of what they actually do.

Perhaps. This topic is a bit of a hobby for me, and I haven't really encountered much discussion that explicitly gets into the fine details and distinctions of the matter with respect to the human mind, as opposed to the plentiful discussion on the matter when it comes to computers.

A noteworthy distinction between the two is that computers are man-made (such discussions are a necessary component of the development process), whereas the mind is made by ~nature, which required no such discussion.

If you can point me to any in-depth discussions on this distinction, or even any keywords that one might google, I would appreciate it.


Four element theory is on approximately the same epistemological plane as modern personality psychology. In some ways it's more useful than psychology, because it has a straightforward logic in terms of metaphor and relationships between parts, whereas psychology is a mess. Of course, it has very little to do with how the brain works, but psychology also has very little to do with how the brain works (it's more about the use of language to describe and regulate behavior), so it's not a big deal.


We build robots the way that we think things are supposed to work. It may turn out that our design is fundamentally limited.


Sure. But let’s take the average computer user’s analogy from another comment [1].

A random, average smartphone user can use their device every day, and accomplish a great number of things, without knowing how it actually works at all.

For all they care, it could be powered by tiny fairies running in wheels like hamsters, and the Internet and cell networks could be irascible spirits connecting all smartphones together through a worldwide mycelium network.

Knowing how smartphones and computers work is not necessary for them to accomplish what they set out to.

In the same way, if we actually needed a deeper understanding of our own inner workings to gain a noticeable survival edge, evolution would probably have taken care of that, the same way it has endowed nearly all of us with at least a basic survival instinct: by eliminating from the gene pool most individuals who don't feel any need to eat food or drink water.

[1]: https://news.ycombinator.com/item?id=28292418


Newton's theory of gravity and mechanics and such did explain things, just not as accurately as general & special relativity did. But we still use Newton's formulas today as a good-enough approximation in many cases.

I have a feeling concepts like memory and perception will still be used in the future, even when we figure out the equivalent of general relativity for the brain, especially since they are higher-level summation phenomena, like personality, or like higher-level languages vs. asm.


Funny, I began reading Yudkowsky's "Map and Territory"; in the preface there is this sentence:

> For if you can't trust your brain, can you trust anything else?

Which basically means, if the brain can't trust the brain, can the brain trust another brain?

Yes slightly tortured, but only to fit :D


Haha! Nice one!

We are starting to slide into Cartesian doubt here.


For me a key insight is that it's not necessarily possible for a brain to understand itself. Obviously there's a certain "capability floor" that a brain needs to have in order to understand a system of some complexity. A rat's brain is capable of understanding some systems, but it's definitely not sufficiently capable to understand a rat's brain. A human brain can understand various systems that are too complex for rats, but it's also finite in its capabilities - and it's not totally certain that the inherent complexity of how human brains work is small enough to be understandable by human brains; perhaps it would take something more (e.g. brains augmented with tech, or biologically enhanced, or something totally alien) to properly understand how our brains work.


I see this a lot, but I don't think it's accurate. It's definitely not possible for a brain to completely understand itself, as in know the state of every neuron and how all of those states will evolve over time, but that absolutely doesn't preclude a very complete understanding.

The more we learn about the brain, the more it seems it is very friendly to a certain degree of abstraction. If it wasn't, it wouldn't be able to turn sense data into world models so well!


That’s what models are for. Many systems are too complex to be fully understood as a whole.

But by modelling them, we can still understand why they behaved the way they have, and predict how they will behave.

There is no need for any one individual to fully grasp a process for it to be modelled.

It really is the whole point of modelling.


“If our brains were simple enough for us to understand them, we'd be so simple that we couldn't.”


Hmm, but perhaps we could manage to understand the brain and behavior of a fruitfly or even a mouse. The “self-conscious” tier is trivial recursion. Hofstadter has this right in “I Am A Strange Loop”.


The brain is a piece of soggy bacon that lives in a shell of bone, has its own membrane that is like the gut lining, cleans itself by power-washing itself (and shorting itself out, essentially; REM sleep is basically the side effect of this process), depends on glucose for its special mitochondria[1], and your humanity is literally a thin layer of meat-paint on top of millions of years of evolution...

... that is doing scientific research on itself, and then reading it and understanding the strange symbols on the screen.

[1]: All your organs have unique metabolic signatures, but they, generally, can all run on purely fatty acids; the brain requires about 30-40g of dietary glucose to run optimally, as fatty acids do not cross the BBB quickly enough.


It doesn't require dietary glucose; the liver will manufacture it as needed from fats and proteins.


The upper limit of what is considered healthy gluconeogenesis falls slightly short of what the brain needs: the brain needs ~120g of glucose a day, and gluconeogenesis leaves you about 30-40g short of that.

30-40g is almost nothing; it could be, say, a cup of frozen berry mix, mixed right into 3/4 of a cup of plain real Greek yogurt, with some turmeric, ginger, and cinnamon mixed in.


Can you point to some sources on that? I always want to learn more about diet and how things work in the body.


It can also subsist on ketone bodies - if fat breakdown is high enough due to a lack of glucose, from fasting or from depleted stores such as glycogen.


Also, it's possible to subsist and survive without food for quite a while.


To me this is the most interesting thing. A bunch of organic matter evolved to the point where it is trying to understand itself. How crazy is that?


I know your ending question is a figure of speech, but since we are imagining it simultaneously, it's more or less by definition not crazy at all.


Poetry for nerds. (Thank you)


Something something diagonalization proof something something.


how much?


> Recent work has found, for instance, that two-thirds of the brain is involved in simple eye movements; meanwhile, half of the brain gets activated during respiration.

Talk about misleading... obviously these two tasks alone don't account for 116% of the brain's power. The neocortex is all the same part of the brain--yes it can loosely be delineated into segments, but it is pretty pointless to say the whole brain is involved when really the neocortex is just distributing its inputs along its length in the process of searching for the appropriate cortical columns.


It's only misleading if you're stuck in a reductionist mindset where every piece only has one job and combines linearly together with other pieces.


I'm confused. That's what the article is about. The quote you chose is 1/4 of the way in and then they spend the rest of the article explaining why those details are misleading. I don't understand what you're objecting to.


You have made it misleading by taking it out of the context of the article which explains how that works.



