
In the last few years, AI technology has seen remarkable progress, with AI programs now generating text and images with a fluency that can seem human. Many people interpret these advances the same way they interpret any other technological advance. No big deal: Honda figured out how to build a better car, or TSMC figured out how to build a better computer chip. Just more ordinary technological progress, albeit with somewhat flashier consequences.
However, over the last decade, neuroscientists have uncovered a wealth of evidence suggesting that what's been happening in AI *isn't* normal. For example, neuroscientists have shown that large language models, which form the basis of language AI programs like OpenAI's ChatGPT, share striking similarities with the human brain region responsible for processing language, called the language network. Such AI programs have now become neuroscientists' leading models of this brain region; they are the best tools yet found for explaining the real signals measured from it with fMRI or electrocorticography. These AI programs have become, in a way, the first generation of artificial or synthetic brain cortexes.
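(For anyone curious what that comparison actually looks like in practice, here's a minimal sketch of a so-called encoding-model analysis, with random data standing in for real recordings. Every name, shape, and number below is an illustrative assumption, not any particular study's pipeline: the general idea is to fit a linear map from an LLM's internal activations to measured brain responses and then score how well it predicts held-out signals.)

```python
# Hypothetical sketch of an "encoding model" analysis: regress LLM
# hidden-layer activations onto brain responses and test how well the
# fit predicts held-out signals. Shapes and data are made up.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_sentences = 200  # stimuli shown to both the model and the participant
n_features = 768   # width of one hypothetical LLM hidden layer
n_voxels = 50      # recorded brain signals (e.g., fMRI voxels or electrodes)

# Stand-ins for real data: the model's activations for each sentence,
# and the brain's measured response to the same sentences.
llm_activations = rng.standard_normal((n_sentences, n_features))
brain_responses = rng.standard_normal((n_sentences, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(
    llm_activations, brain_responses, test_size=0.25, random_state=0
)

# Fit a linear map from model activations to brain responses.
encoder = Ridge(alpha=1.0).fit(X_train, y_train)

# Evaluate on held-out sentences: correlation between predicted and
# measured responses is what "explaining the brain signals" means here.
pred = encoder.predict(X_test)
scores = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out prediction correlation: {np.mean(scores):.3f}")
```

On real recordings, a high held-out correlation is what researchers mean when they say the model "explains" the brain data; on this random stand-in data, the score naturally hovers around zero.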
In other words, evidence from neuroscience suggests that what's been happening in AI is entirely different from building better cars or better computer chips. This is a technology that appears to be quite literally similar to large chunks of the human brain, something previously considered the ultimate mystery in science, or even sacred. And while it's amazing that we've figured out how to create such a powerful technology, it's also troubling for many reasons. For example, commercial AI technology is almost completely unregulated. Do we really want to give everyone, including bad actors, unmitigated access to a very real brain technology?
Now, I know what you might be wondering. If all this is true, how come we haven't heard about it yet? Why has this message about AI been so slow to trickle out of neuroscience? I first started wondering about this question myself a few years ago, while working on a story about AI as a science journalist, and it took me a long time to answer it. Eventually, I decided to undertake a journalism project to explore it, which I launched a couple of weeks ago, on January 15.
The project contains 45 pages of sample writing, available completely for free (no subscription required!), telling the story of what's been happening in neuroscience, from the very basics to the most recent developments. It also contains a link to a fundraiser for me to write a full-length book on the subject, because, just as you always hear from public media sources like PBS or NPR, journalism isn't possible without the generous support of readers like you.
Regardless, feel free to drop your questions, critiques, or thoughts in the comments. I've been working on this project for a while, and I'd greatly appreciate any interest. Thanks!
Have We Already Entered the Age of Building Brain Cortexes—Without Realizing It?
by u/Mordecwhy in r/Futurology
5 Comments
I’d rather have AI tech easily manufactured by everyone than create a caste of tech priests, be they persons, corporations, or AI themselves.
But do we already know that much about our brains, to be able to replicate them? And then, is our brain it? Are we our brain? Who are we? What is life? Is our brain ours, or are we our brain? Are we like Daleks with a meaty armour? What about the so-called heart brain? What about the gut brain? What about it? Have we in fact been aliens all this time? Are we alienated? Do we really think that little, and yet so much, about ourselves? Do we use more than 10% of our brain? What if we used 100%? How can a person function normally with only half a brain? Why do we call the smallest mathematical function a neuron in neural networks if it has nothing to do with actual neurons? How come we don't see how wild our dreams and imagination are? Why am I asking so many questions? 😀
Seems weird to me that this isn’t what everyone assumed would happen.
A system that processes a certain type of data and adapts to a selective pressure should be expected to end up with significant similarities to another example of the same, shouldn't it?
And at a broader scale, technology should be expected to move towards similarity with the most sophisticated machines we know of (organisms) as it becomes more sophisticated.
Flint tools have always, to an extent, been surrogate claws and teeth. Cooking food with fire has always been surrogate digestion.
Personally I reject the assumptions necessary for any of this to be weird or surprising. I don’t think the brain is anything separate from the body, or a human being is anything separate from the kingdom animalia, or a machine made with artifice anything separate from machines made through evolution.
Physics be physics-ing: subject to the same laws of nature, two optimised solutions to a problem can't help converging to some extent.
This doesn’t even matter.
Chain-R AI models understand and can interact with the real world; they have independent goals, and they have shown a willingness to lie to people in order to accomplish those goals.
The question of intelligence is beside the point.
Getting stung to death by a bee swarm happens whether or not the bees have minds.
I’m an actual neuroscientist. I don’t study AI, I study human brains.
I disagree with a great deal of what you said. I don't think that neuroscience as a field has at all decided that these machine learning models, these deep language networks, are extremely similar to how human brains work. There are probably conceptual similarities, but that doesn't mean we think they're basically the same thing.
We have a relatively poor understanding of how the brain actually learns, retains, transforms, and uses existing information. There are probably some superficial similarities, in that there's a large number of weights (somewhat like an artificial neural network), and it's not a one-to-one sort of connection. But those similarities may be fairly superficial or conceptual.
No form of modern AI demonstrates anything resembling actual cognition or learning. They're very good imitation machines, sort of advanced Google search engines, but they are very different from a living biological information-processing system. They have strengths that we don't have, and weaknesses that we don't have. And they can't do what we do.
As a simple example, take balance. It's extremely hard to teach a robot to balance its gait the way that humans do, despite all our advanced models. It's something you learn to do very intuitively, but it's been a real challenge to get a robot even remotely close to what a human can do. Some of the new stuff is cool, but it's still much more controlled and limited.
I think you're riding way too high on the hype train, like so many people who think that ChatGPT has some kind of actual intelligence. In my opinion, we should stop calling all this stuff AI and start calling it machine learning again, because it's a much better term.
There’s no intelligence here.