This needs more tests. It looks like the current results are a combination of how brain cells naturally filter the experience of sound and AI on top of that.
It looks like the brain actually does recognize voices as different, but we need an AI to read this from the brain. I'm curious how much better this performs than pure AI alone (rough sketch of what I mean below).
I'd also like to know how the brain got exposed to sound, because in real life an organic microphone is an ear. Is it a brain with ears?
Even if it's not better than plain AI voice recognition, sending experiences through neural matter and using AI to analyze the way it responds will teach us a lot about how the brain actually works.
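To make that comparison concrete, here's a toy sketch of what I'm picturing (nothing to do with the actual study's setup; the data and the "biological filter" stage are completely made up): a pure ML classifier on raw features vs. the same classifier reading out a fixed, untrained nonlinear stage that stands in for the brain cells' natural filtering.

```python
# Hypothetical sketch, not the study's method: compare a "pure AI" classifier
# against one that first pushes features through a fixed, untrained nonlinear
# stage (a stand-in for the brain cells' filtering), then decodes with ML.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Fake "audio feature" dataset: 500 clips, 40 features, 2 speakers (all synthetic).
X = rng.normal(size=(500, 40))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=500) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: "pure AI" trained directly on the features.
pure_ai = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("pure AI:", accuracy_score(y_test, pure_ai.predict(X_test)))

# Stand-in for the biological stage: a fixed random nonlinear projection that
# is never trained, only "read out" by the downstream classifier, roughly like
# recording how the cells respond and decoding that response with ML.
W = rng.normal(size=(40, 200))
def brain_like_filter(x):
    return np.tanh(x @ W)

readout = LogisticRegression(max_iter=1000).fit(brain_like_filter(X_train), y_train)
print("filter + AI readout:",
      accuracy_score(y_test, readout.predict(brain_like_filter(X_test))))
```

The interesting question is whether the real biological stage adds useful structure the readout can exploit, or whether the downstream ML is doing all the work either way.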
Maybe I misunderstood, but it seems they just used the brain cells as a microphone and the voice recognition was done by a machine learning algorithm?