Confidence is a funny thing. I struggled with it as a child. I didn’t believe I was good enough to excel in school or athletics. Even so, I was a terrific partner in pop-culture trivia.
Growing up to be a Grumpy Designer came with a boost to my self-worth. I discovered a talent for design, development, and writing. That other people enjoyed my work made me feel capable. I also started caring less about what other people think.
However, I still feel self-conscious when in the presence of a confident person. You know, the type who could sell anything to anyone. I tend to doubt myself in those situations. Perhaps that intimidation factor is why these people often rise to high-powered positions, but I digress.
These days, I don’t have to leave my house to feel that same sense of insufficiency. All I have to do is fire up an artificial intelligence (AI) app. It will respond with a level of confidence we mortals can only dream of.
Here’s why I find AI’s self-assured attitude to be troubling. Scary, even.
Confident Answers Don’t Always Mean Accurate Results
A well-tuned AI model can say anything while keeping a straight face. OK, it has no face (yet). But hear me out. This thing could tell you that the moon is made of cream cheese and really mean it. The worrisome part is that some people will trust the answer without question.
I see this brazenness when asking AI for coding advice. I’ll share a buggy code snippet or request a new one from scratch. The tool generates code and an explanation of how it works. How thoughtful!
Surprisingly, it turns out that code generated in less than a minute isn’t always reliable. Testing reveals that AI’s output doesn’t necessarily work the first time: it either reproduces the same issue or introduces new ones.
Pointing out these issues humbles the robot, to a degree. The go-to response seems to be, “Oh, that’s right! I forgot to account for x, y, and z. Here’s how to fix it.”
That magnetic personality springs back to life, with an answer just as confident as the first one. The cycle continues until AI gets it right or I give up.
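To make this concrete, here’s the sort of subtle slip I mean. The snippet below is my own hypothetical reconstruction of a classic WordPress mistake a confident assistant can make (it’s not the output of any particular tool): running a secondary query loop without cleaning up afterward.

```php
<?php
// Hypothetical AI-style snippet: list the five most recent posts.
function grumpy_recent_posts() {
	$query = new WP_Query( array( 'posts_per_page' => 5 ) );

	$output = '<ul>';
	while ( $query->have_posts() ) {
		$query->the_post();
		$output .= '<li>' . esc_html( get_the_title() ) . '</li>';
	}
	$output .= '</ul>';

	// The line a confident first answer often forgets. Without it, the
	// global post data stays hijacked and other loops on the page break:
	wp_reset_postdata();

	return $output;
}
```

Nothing here is exotic. It’s exactly the kind of omission that runs without complaint and quietly breaks something else on the page.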
None of this means AI isn’t helpful. However, the tone of its responses gives humans a false sense of security. Large language models (LLMs) can convince us in ways most people can’t, and that discourages critical thinking.
The impact is already being felt. Future generations that grow up with this omnipresent technology stand to lose even more of that skill. For them, AI’s answer may be the only one they ever see.
Can AI Take It Down a Notch?
I understand the predicament AI companies are in. From a marketing perspective, an app that provides answers to difficult questions must demonstrate competence. A confident tone is one way to establish trust with users.
There doesn’t seem to be an easy alternative. A sheepish response, for instance, won’t give off the same positive vibes. Can you imagine Google’s Gemini replying with, “This probably won’t work, but try it anyway”?
I don’t believe enlarging those tiny disclaimers will help, either. So, what’s the solution? There are a few things that might help (notice my semi-confident tone).
Do a Better Job of Citing References
AI is not magic. It does not conjure knowledge from the cosmos. It crawls websites (sometimes a little too much). However, it often does a poor job of citing its sources.
Some apps add clickable footnotes, while others seem to offer no visible references. That’s not good enough.
Sources could be better integrated into AI’s responses. For example, instead of saying:
“Be sure to escape the output of your WordPress plugin.”
It could be tweaked to:
“According to official WordPress documentation, plugin authors should escape everything from untrusted sources.”
This step accomplishes two things. First, it credits the source of information (the polite thing to do). Second, it invites users to dig deeper into the subject matter.
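For the curious, here’s roughly what that documentation points at. This is a minimal sketch of my own (the function name is invented), using WordPress’s real escaping helpers:

```php
<?php
// Escape untrusted output before it reaches the page, as the
// WordPress plugin handbook advises.
function grumpy_author_link( $url, $label ) {
	// esc_url() sanitizes the href value; esc_html() neutralizes any
	// markup hiding in the link text.
	return '<a href="' . esc_url( $url ) . '">' . esc_html( $label ) . '</a>';
}
```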
Tell Us What Could Go Wrong
Since LLMs are trained on gobs of data, I’m willing to bet they can account for worst-case scenarios. They could use this information to keep users safe when experimenting with code, cooking, or other potentially dangerous activities.
Like the coffee cup that warns you of the hot liquid inside, AI apps could provide a safety checklist with their responses. Reminders to back up your website, for instance, might save someone from a disaster.
It’s a common technique for writers and other content creators. We’re not supposed to assume the reader knows everything. Thus, informing them of potential dangers is part of being a responsible resource.
AI should be held to the same standards as its human counterparts.
Ask Follow-up Questions to Provide the Best Answer
There’s more than one way to accomplish something. And some subjects are controversial or have gray areas. That’s the reality of our world. AI should strive to notify us of those instances without taking sides.
When using AI to write code, I have noticed occasions where it provides multiple approaches. It’s helpful, as I can use the one that works best for me. This should become the rule, not the exception.
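As a quick illustration (my own example, not any tool’s output), WordPress alone offers more than one legitimate way to fetch recent posts:

```php
<?php
// Approach 1: get_posts() returns a plain array of WP_Post objects.
$recent = get_posts( array( 'numberposts' => 5 ) );

// Approach 2: WP_Query provides the full loop, plus finer control
// over pagination and template tags.
$query = new WP_Query( array( 'posts_per_page' => 5 ) );
```

Neither is wrong; the better choice depends on context. That’s precisely the nuance a confident, one-size-fits-all answer glosses over.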
I realize it might be difficult for an app to do this without proper context. We humans aren’t always clear about what we want. So, why not ask us a few follow-up questions? Or turn responses with multiple options into a choose-your-own-adventure experience?
This promotes a more conversational approach to tasks and encourages critical thinking. It may even lead to better results.
AI Doesn’t Have to Be So Smug
AI answers our queries with robust confidence, regardless of its accuracy. It’s a concerning trend, given how quickly these tools are being adopted. Humans who blindly follow AI’s advice will inevitably be in for a rude awakening.
I can’t see this as a positive for either AI companies or users. Trust is a key component for growth. Users will flock to an app they trust and shun ones that provide bad information. Just like other consumer products, reliability matters for AI’s ultimate success.
Perhaps the answer is for these models to trade a bit of convenience for a more trustworthy process. We humans do it all the time. Maybe these machines could learn a thing or two from us.