Should I set up a personal AI agent to help me with my daily tasks?
—Seeking help
In general, I think relying on any type of automation in one's daily life is dangerous when taken to extremes and potentially alienating even when used in moderation, especially when it comes to personal interactions. An AI agent that organizes my to-do list and gathers online links for further reading? Fabulous. An AI agent that automatically messages my parents every week with a quick life update? Horrible.
The strongest argument against weaving more generative AI tools into your daily routine, however, remains the environmental impact these models continue to have, both during training and when generating results. With all of this in mind, I delved into the WIRED archives, published at the glorious dawn of this mess we call the Internet, to find more historical context for your question. After a bit of searching, I came away convinced that you probably already use AI agents every day.
AI agents, or God forbid "agentic AI," is the buzzword of the moment for tech leaders touting their recent investments. But the concept of an automated assistant dedicated to performing software tasks is far from new. Much of the discourse around "software agents" in the 1990s mirrors the current debate in Silicon Valley, where tech executives are promising an incoming flood of generative AI-powered agents trained to perform tasks online on our behalf.
“One problem I see is that people wonder who is responsible for an agent’s actions,” reads a WIRED interview with MIT professor Pattie Maes, originally published in 1995. “In particular, things like agents spending too much time on a machine or buying something you don’t want on your behalf. The agents will raise many interesting questions, but I am convinced that we will not be able to live without them.”
I called Maes in early January to find out how her views on AI agents have changed over the years. She's more optimistic than ever about the potential of personal automation, but she's convinced that "extremely naive" engineers aren't spending enough time grappling with the complexities of human-machine interaction. In fact, she says, their recklessness could trigger another AI winter.
“The way these systems are currently built is optimized from a technical perspective and an engineering perspective,” she says. “But they are not at all optimized for human design problems.” She points out that AI agents are still easily fooled and still fall back on biased assumptions, despite improvements in the underlying models. That misplaced confidence leads users to trust AI-generated responses when they shouldn’t.
To better understand the other potential pitfalls of personal AI agents, let’s divide this nebulous term into two distinct categories: those that feed you and those that represent you.
Feeders are algorithms armed with data about your habits and tastes that comb through swaths of information to find what's relevant to you. Sounds familiar, doesn't it? Any social media recommendation engine filling a timeline with personalized posts, or the incessant ad tracking that shows me those mushroom candies for the thousandth time on Instagram, could be considered a personal AI agent. In the '90s interview, Maes mentioned another example: a news-gathering agent developed specifically to bring back the stories she wanted. That sounds a lot like my Google News landing page.