Cortical Labs’ CL1 platform uses 800,000 living human neurons on a chip, forming a real-time closed-loop interface with software — in this case, learning to play Doom. The neurons show signs of adaptive learning that silicon alone can’t replicate, and the potential efficiency gains over traditional AI are significant. But no existing regulatory framework covers this technology. The same biological similarity that makes these neurons valuable is what makes the ethical questions so urgent — nobody is asking whether they can suffer, or what a future of sentient biological computers actually looks like.
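The "closed-loop interface" described above can be pictured as a read → decode → act → encode → stimulate cycle. The sketch below is not Cortical Labs' actual API — every function and name here (`decode_action`, `encode_feedback`, `ToyGame`, and the toy rules inside them) is hypothetical — it only illustrates the general shape of such a loop, where spiking activity is decoded into game actions and game feedback is encoded back as stimulation.

```python
# Hypothetical sketch of a closed-loop neurons-on-a-chip game interface.
# None of these functions correspond to a real API; they only illustrate
# the read -> decode -> act -> encode -> stimulate cycle.

def decode_action(spike_counts):
    """Map per-electrode spike counts to a game action (toy rule)."""
    return "turn_left" if spike_counts[0] > spike_counts[1] else "turn_right"

def encode_feedback(hit_target):
    """Map game feedback to a stimulation pattern (toy rule):
    a predictable stimulus for success, a noisy one for failure."""
    return [1.0, 1.0] if hit_target else [0.3, 1.7]

def closed_loop_step(spike_counts, game):
    action = decode_action(spike_counts)   # read + decode neural activity
    hit = game.step(action)                # act in the game environment
    return encode_feedback(hit)            # stimulus to deliver next cycle

class ToyGame:
    """Stand-in for the game environment."""
    def step(self, action):
        return action == "turn_left"       # pretend left was the right call

stimulus = closed_loop_step([12, 7], ToyGame())
print(stimulus)
```

Running this one step decodes the higher-spiking electrode as "turn_left", scores it as a hit, and returns the "reward" stimulus for the next cycle.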
irascible_Clown on
This explains why, in a lot of sci-fi movies and books, ships of the future are sometimes living ships.
Ulthanon on
Someone more clever than I noted that it's probably a bad idea to grow a disembodied brain, hook it up to a virtual hell, and give it guns.
TF-Fanfic-Resident on
> The neurons show signs of adaptive learning that silicon alone can’t replicate
Hopefully this is just a bridge to better AI rather than a sign of a fundamental limit of silicon and metal chips. Are there any reasons to think it might be a fundamental issue with inorganic materials?
somethingworthwhile on
I do not like this. How many brain cells is too many? Like, where do we draw the line between sentience and meat? If we duplicate a GPU with pig brain cells, is that ethically unambiguous? Yuck!
napkin41 on
This is… so cool… and not disturbing or depressing at all… can’t wait to… see more applications?
ainanenane on
Whether they suffer or not depends on what difficulty they have to play Doom on
RichardDr on
the interesting part nobody is focusing on is the efficiency gap. 800,000 neurons playing doom in real time vs the billions of parameters and megawatts of power it takes for a silicon AI to do the same thing badly. biological neural networks are still orders of magnitude more energy efficient than anything we can build
the ethical debate is valid but kind of premature here – 800k neurons is roughly the brain power of a fruit fly. nobody is losing sleep over fruit fly suffering. the real question is what happens when they scale this up by 3-4 orders of magnitude and start approaching mouse-brain complexity. that is where the ethics get genuinely uncomfortable
the practical angle though: if biological compute turns out to be meaningfully better at certain tasks than silicon, the economic incentive to scale it will be enormous regardless of ethical consensus. same pattern as every other technology – capability runs ahead of regulation. the time to figure out the rules is now while it's still playing doom, not after someone builds a biological coprocessor that runs wall street
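The efficiency gap mentioned above can be put in rough numbers. All the figures below are assumptions for illustration (a human brain at roughly 20 W across roughly 86 billion neurons; one modern GPU at roughly 300 W), not measurements from the CL1 — and the result covers only the neurons' metabolic cost, not the incubation and life-support hardware a real system needs:

```python
# Back-of-envelope energy comparison. Every constant here is an
# assumed round figure, not a measured spec.
BRAIN_POWER_W = 20.0      # rough whole-brain power draw (assumption)
BRAIN_NEURONS = 86e9      # rough neuron count in a human brain (assumption)
CHIP_NEURONS = 800_000    # neurons on the CL1, per the article
GPU_POWER_W = 300.0       # rough power draw of one modern GPU (assumption)

watts_per_neuron = BRAIN_POWER_W / BRAIN_NEURONS
chip_biology_watts = watts_per_neuron * CHIP_NEURONS  # ~0.2 milliwatts

print(f"~{chip_biology_watts * 1e6:.0f} microwatts for 800k neurons")
print(f"~{GPU_POWER_W / chip_biology_watts:.0e}x gap vs one 300 W GPU")
```

Under these assumptions the 800k neurons come out around six orders of magnitude more energy-efficient than a single GPU, which is the scale of gap the comment is pointing at.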