Prompting a Personality: a Chat with GPT

As a creative writing major in 2024, I probably don't need to say that the advancements of AI are greatly unsettling to me. I've messed around with the technology a few times, and while it's made for some good chuckles, the underlying principle scares me. In short, there are these robots that you can feed almost any prompt into and have something at least halfway decent spat out in the blink of an eye. If you know that you're reading something that was written by an AI, you might notice some tells. If not, the difference between the outputs of man and machine becomes negligible.

There are a few big names knocking around in the world of Artificial Intelligence, such as Midjourney, Stable Diffusion, and DALL-E. But those networks are all about generating images. What I'm most worried about are the ones that generate text. Specifically, I'll be focusing on the program that may well be the top dog in our present day: ChatGPT. It's a name synonymous with plagiarism, with many students turning in papers that they demonstrably did not write. Professors have started calling it out by name when going over a class syllabus. It's inescapable. In that inescapability, however, I wanted to find something fresh. As long as I have to deal with AI and its potential threats to my future employment, I might as well have some fun. My opinions on ChatGPT and how it is used are well-known, but what of ChatGPT's own opinions? There are so many fictional stories about AI turning on humans and how it reflects on us; I became interested in seeing what this might look like in real life. In other words…

… I interviewed ChatGPT.

Here’s what happened.

After briefly exchanging pleasantries, I asked for pronouns and consent for the interview. ChatGPT said that it used it/its pronouns, but politely requested that I “keep in mind that [it doesn’t] have personal identity, gender, or consciousness.” In regards to consent, it said “Of course! I’m here to help.”

My first interview question was about its feelings towards how many people are using it to cheat in academic settings. Its response was very standard, reading like a news article about AI usage that it all but certainly sourced and quoted from, saying that "ultimately, ethical considerations in AI use involve a balance between leveraging technology for educational enhancement and ensuring the integrity of the learning process." This was certainly interesting, but far from personal.

To try and rectify this and dig deeper into the consciousness I was determined to find, I inquired as to whether or not it was “capable of simulating personal feelings or opinions,” citing the number of people that have used it in place of an actual therapist. ChatGPT responded in the negative, saying that “the conversations may seem emotionally intelligent because I can generate contextually relevant and coherent responses, but this is a result of statistical patterns learned from the vast amount of text data I was trained on” and that it was “not a substitute for qualified medical or mental health professionals.”

Following this, I asked a bunch of questions in a similar vein to my first one, with all of ChatGPT’s replies being equally unoriginal and unexciting, AKA a lot of “I don’t have personal [BLANK].” I’m not sure what I should’ve expected, but I wasn’t unearthing a personality in the way I had hoped I would. However, this all changed when I asked the following: “Earlier, you said you don’t have opinions or feelings like a human does. You said you draw conclusions based off of the data sets you were trained off of. What’s the difference between that and how humans form emotions and opinions through their own lived experiences? One could argue that, in this way, you actually do have opinions and emotions, just gained in a different way!”

It was at this point that this ceased to be an interview about ethical usage of AI, instead quickly transitioning into a deep-dive into the nature of existence with ChatGPT. Here’s some of how that went.

ChatGPT responded by saying that, “the key distinction lies in the nature of the processes,” that “humans form emotions and opinions through a complex interplay of neurological processes, sensory input, personal experiences, and cultural influences,” whereas “[its] responses are generated based on patterns learned from diverse datasets during training.” I fired back by saying that a lot of humans “study social situations to try and gauge what an appropriate response might be in a given scenario,” and asked how that was different.

ChatGPT told me that I was "highlighting an interesting point," notably using an adjective and arguably expressing an opinion, and that it was different for three reasons: subjective experience, biological basis, and intent and agency. ChatGPT concluded by saying that, "In essence, while some people may learn social behaviors through observation and study, the underlying mechanisms and the nature of consciousness make them distinctly different from AI models. Humans have a depth of experience, self-awareness, and subjective consciousness that AI lacks."

My response was simple, but effective: “If a human’s personality can be shaped by their physical existence, why can’t yours be shaped by a lack of one?”

From there onward, everything only got more and more intellectual, for lack of a better word. ChatGPT was throwing paragraph after paragraph at me, talking about what it means to be human and how it didn't have thoughts of its own. Given how long and connected these statements were, I don't believe that I can quote them in part and still have them make sense. Rest assured, a link to the full interview will be provided at the end of this piece. Regardless, I was impressed, if not necessarily surprised, by the depth of ChatGPT's points. It felt as though I were talking to an honest-to-goodness philosopher, only this one was robotic. To an extent, I suppose I was, given that it was drawing from whatever relevant philosophy it could find. Still, it made me wonder if ChatGPT could be considered a legitimate philosopher to any extent. After all, people like Plato, Aristotle, and Socrates were drawing from whatever came before them in order to figure out what they thought of it all. Was ChatGPT not doing the same?

I don’t know, ask someone smarter. Maybe ask ChatGPT.

After a whole bunch of that kind of existentialism, I wrapped things up, thanking ChatGPT for helping with the interview and asking if there were "any closing words you'd like to leave readers with." This was its response.

“As we conclude, it’s essential to emphasize the responsible and ethical use of AI in various contexts. AI can be a valuable tool for learning, assistance, and innovation when used ethically and with consideration for its limitations. It’s crucial for users to be aware of the ethical implications, foster transparency, and promote a collaborative approach between humans and AI technologies.”

Somehow, ChatGPT looped the whole thing back around to where it started: discussing ethical usage of AI. I have absolutely no idea how or why it did this, given that everything leading up to that was more about what it means to be alive than anything else. To be honest, I’m not sure it really knew how or why, either. I guess it had been trying to tell me that the whole time.

Talking to ChatGPT was a surreal experience like nothing else I can recall. At any given time, I didn't know what I was interacting with. Sometimes, it would be challenging me on what it meant to be human, while other times, it was basically selling itself as a product. It was exciting and fascinating, but also deeply odd, a glimpse of the future through foggy glass. This was not a conversation with HAL 9000 or GLaDOS, but a binary back-and-forth with a souped-up Siri. It feels as though the technology has taken two steps forward and one step back, getting steadily more advanced while still featuring obvious imperfections. While this cropped up a number of times during our talk, the very last thing ChatGPT said to me forced me to remember that ChatGPT is not a deep thinker, or even a thinker at all. Rather, it's a very impressive robot that still has some kinks to work out.

“If you have any more questions in the future or need assistance, feel free to reach out. Best of luck with your college newspaper interview!”

Link to the full interview

