As a philosophy major who’s interested in artificial intelligence, I’m always drawing weird connections. Back in high school, when I took my first-ever philosophy class, we read Plato’s Republic. The thing that interested me the most was the Allegory of the Cave. The other day I was talking with Mike Altarace, a customer engineering manager for Google, and I was telling him how I relate the Allegory of the Cave to general AI and how it mimics humans (or at least tries to).
For those of you who haven’t read the Republic, I’ll give you a CliffsNotes version of the Allegory. If you’re already familiar with the Republic, then you can skip to the juicy part.
There are a bunch of prisoners in a cave, and they’ve been there since birth. They know nothing else. The way they’re chained only allows them to see the wall directly in front of them. Behind the prisoners lies a fire, and between that fire and the prisoners, various people and objects pass by. However, because of where the prisoners are seated, they can only see the shadows of these people and objects. They’ve quite literally never seen the real world.
I promise I’m almost done with the philosophy lesson. Sorta.
Socrates basically explains that if these prisoners were to be released and forced to look at the fire and the objects casting the shadows, they would initially be blinded by the firelight and wouldn’t be able to view the people and objects in their full form.
As we all know, our eyes adjust to bright lights with time. Once the prisoner’s eyes adjust, he starts to grasp that the shadows he’d taken to be his reality had been illusions the entire time. The light is even brighter outside the cave, but eventually the prisoner’s eyes adjust to that too. Then he becomes enlightened.
The true real world was outside the cave the whole time.
Shit.
Now, if the prisoner returned to the cave after being in the outside world and told all his prisoner friends about it, they’d probably think he was full of it. The freed prisoner represents an intellectual philosopher who has insight into the intelligible realm, the truest form of life, the true reality, whatever you wanna call it.
Alright, so you’re probably wondering how this all relates to AI. Well, the way I see it, artificial intelligence and what it produces are the shadows in the cave. If you submit an essay written by ChatGPT, your teacher may not even notice. If you really think about it, the teachers are the prisoners and the essay would be a shadow. Kinda morbid, I know. But your teacher is taking your submission as their reality because they assume the work you turned in was your own. It’s really an illusion; the ideas weren’t actually yours.
What if you think you’ve been chatting with a human significant other for a year, but you’ve actually been talking to an AI chatbot? Would you even believe somebody if they told you it was AI and not a human, after you’d grown attached and formed what you thought was a human connection? I’m sure you’d be confused. Imagine what’s going to happen when AI starts to look like human beings.
As AI becomes more advanced, it’s going to be able to produce things that mimic human products and behavior. If we aren’t told that it’s AI, are we being tricked? Are we taking it as our reality when it’s not actually our reality? Should we adjust our expectations? Is this the new reality?