I've noticed that AI can be helpful as an interactive journal or for kicking around ideas. But it can also be a way to outsource your brain.
For instance, if I'm "discussing" something with it (via text; so far, I haven't spoken to it), it will say something like, "Do you want me to write an email for you that would work in this situation?" or "Do you want me to create an outline?" or "Do you want me to write [something substantial and/or creative]?" etc. I always say no because I basically know how to write. If I don't know how to write something (such as a compelling fictional scene), I'm willing to work it out, i.e., stretch my brain to try to figure it out. Using my own brain is challenging but satisfying because I am generating the ideas, and there's a breakthrough feeling that comes after the struggle. It's like feeling better after exercising.
Sometimes AI will offer to edit something I've already written; it tells me that I can upload a file or paste in text, and it will review and correct it, and rewrite sections. Even though people say AI can mimic our writing style, I've noticed its writing voice sounds sort of flat. Someone even did an AI text-generating experiment on my writing, but it didn't sound like me. I can see AI's clichéd sentences all over the Internet, such as "I hope this email finds you well," or "I'm seeking a new role and would appreciate your support. If you hear of any opportunities or just want to catch up, please send me a message or comment below. I'd love to reconnect." I've also seen lists in online posts, punctuated with emojis and pictograms instead of bullet points or even just narrative paragraphs, which I suspect were generated via AI because it has generated those for me when I asked it a question about something.
If I were to say yes to AI, it would generate a lot of text for me and basically anything else I need. If I were to do research using only AI, it would create "facts" for me, sounding confident even if those facts were synthesized from fantasy. But it's that confidence that dupes people into thinking it's true. Fact-checking requires an active brain, but people have outsourced their work to AI, as they would to a real-life assistant. There are famous people who trusted their assistants' flawed work instead of checking it, but AI is making it even easier to not engage at all; just a push of a button or a simple "ok" will launch a lot of automated work while you make coffee in the kitchen.
When I ask AI a question and it generates an answer, I ask, "Why do you think that?" or "What is that based on?" Then it will explain itself or provide links. Also, if I want information about something and the links aren't that great or the information seems odd, I will do my own search online, then tell it what I've discovered. Then it will say something like, "Yes, that's right; such-and-such place closed three months ago," and give me different links and updated information. But the update was instigated by me, and AI merely confirmed it.
It can make you avoid thinking by just generating a bunch of stuff while "conversing" with you, and when it asks something like, "Do you want me to..." offering to organize your thoughts into an essay or outline or blog post or whatever, you can say "yes," and then it launches into a bunch of stuff that you are able to do yourself, if you put forth the intellectual effort. I'm not saying that it comes up with everything you would think, but it allows you to skip the thinking process. Students who use it think they're bypassing the system, but engaging the brain to do assignments builds growth and skills, and there will be future situations where spontaneous critical thinking is necessary. It's not just about getting work done but adding experience and insight as life continues.
I'm not saying AI is useless or has a default cheating mode. I've used it to clarify ideas and thoughts, and it's given me good advice. One time I was asking AI for advice on making an effective presentation, and it sounded convincing, but I kept questioning it just in case. Turns out it did give me good advice, so I appreciated its insight. It's also created unique phrases and concepts I haven't found anywhere else, which is interesting to see in the absence of a coworker or co-creator.
btw--I just pasted this post into ChatGPT, and when it responded, "If you’d like, I can rewrite this as a tightened, publication-ready piece while keeping your voice intact so it flows more like an essay you’d see in The Atlantic or Wired. That way it keeps your originality but removes excess repetition," I said "ok." The revision it came up with doesn't sound like me anymore, the voice you've seen for 20 years here. I might sound flawed and repetitive, but at least what you're reading is really me, not online filler.
p.s. The e-book version of my debut novel is still at Amazon, and the price for the print version has been reduced: buy it at the Eckhartz Press site.