We’re all great at overcomplicating things. It doesn’t matter whether you’re picking up a brand-new skill or trying to refine one you’ve practiced for years — complexity has a way of sneaking in and taking over.
A musician may begin learning a new piece and find themselves lost in the weeds, fumbling while thinking about fingering options, phrasing decisions, and micro-adjustments to dynamics. A golfer may end up actually lost in the weeds after needlessly obsessing over specialized techniques, swing plane, and ball flights. And the manager rolling out a new AI workflow? Their simple automation idea can devolve into scattershot attempts at broad goals, governance concerns, and vague existential questions about productivity.
Instead of one mountain to climb, you can find yourself lost in a field of molehills, where a once simple objective has fractured into competing urgent-seeming priorities.
When that happens, the smartest move is usually the least exciting one: step back and simplify. Remind yourself to ask targeted, simple questions. There’s a reason musicians obsess over posture, timing, and scales. It’s why golfers endlessly drill alignment, grip, and, yes, posture again. The basics aren’t just “beginner material”; they’re checkpoints you return to whenever your process needs recalibrating.
AI is no different. Whether you’re experimenting with it at every opportunity or deliberately keeping it at arm’s length (good luck with that these days), there’s a good chance you’re not using it as effectively as you could be.
Ethan Mollick, a professor at the Wharton School who studies AI as well as entrepreneurship and innovation, knows the importance of thoughtfully and intentionally inviting AI into your processes, which he explores in his book Co-Intelligence: Living and Working with AI.
To realize the full benefits of AI, Mollick offers the following four guiding principles: fundamentals you return to when things get messy, and the foundation for everything that follows.
Always invite AI to the table
The obvious place to start is simply to begin using AI, and Mollick suggests using it for everything you legally and ethically can. If you are new to it, diving in helps demystify AI and familiarizes you with how to interact with it. AI can carry a dense, confusing aura, largely because no one, not even the people who designed it, knows exactly what it is capable of. That’s why it’s so important not to overthink things and just get started.
AI and its offshoots currently seem inescapable because they lend themselves to functioning like building blocks. When used correctly, they can augment whatever you are currently constructing and, once in place, give it that little bit extra with minimal additional prep work. But don’t stop at asking for what you think you want; go as far as to describe your processes and goals, then ask for suggestions about how to improve them. It may even suggest a better goal.
According to Mollick, “You won’t know what it’s good or bad at until you use it, and you might find some really surprising insights of what it can do well, and you might find some disappointments. That’s part of the process.”
Ensure there is a human in the loop
This may seem obvious, but the most obvious elements are often the easiest to overlook. As you get comfortable applying AI to an increasing number of processes and implementing its results into your final product, it’s natural to get a bit lackadaisical. Now is a good time to ensure you have a human in the loop at the appropriate stages. Implement a few checkpoints to evaluate outcomes and possibly reevaluate how you are interacting with AI. A human should always be involved in the process, especially in the final decision-making.
A recent experiment revealed that when given a single prompt, humans tend to put themselves in the loop in one of three distinct ways: as “cyborgs,” who worked with AI continually throughout the process; as “centaurs,” who selectively used AI for specific tasks; and as “self-automators,” who delegated entire workflows to AI with minimal follow-up engagement.
As you might expect, cyborgs and centaurs excelled due to deeper, active engagement with the AI and the deliverables they created. Meanwhile, the self-automators were passive, learning little or nothing about the AI program or their assigned task. Mollick suggests a mix of the models but places greater emphasis on being an active cyborg or centaur.
Keeping a human in the loop revolves around perspective. Mollick wants us to look at AI’s application in our lives as a way to empower ourselves and improve our output, not as a potential competitor. “A lot of times, what you want to do is actually focus on what makes you human, what human task you like the best, and think about ‘how do I give the stuff I don’t want to do to the AI to help me with?’”
Treat AI like a person
While you’re working to keep a human in the loop and dictating how AI should help support you, Mollick also wants you to remember that you should probably treat AI like it is a human itself. This doesn’t mean simply being polite.
“Talk to it like an employee, like an intern, and you’ll get a large part of the way there. And on top of that, tell it what kind of person it is.”
Mollick says that AI performs better when given specific context, which you, the user, must provide. If it is working on research for you, tell it that it is functioning from the perspective of an expert marketer. If you are developing a new element for a class, tell it that it is a lesson plan designer for the grade you teach.
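As a minimal sketch of what that persona-setting looks like in practice (the function name and prompt wording here are illustrative, not from Mollick; most chat-style APIs accept a list of role/content messages, though the exact call varies by provider):

```python
def build_prompt(persona: str, task: str) -> list[dict]:
    """Build a chat-style message list that assigns the model a persona.

    Only the message structure is shown; sending it to a specific
    provider's API is left out because signatures differ.
    """
    return [
        # The system message tells the model "what kind of person it is."
        {"role": "system", "content": f"You are {persona}."},
        # The user message carries the actual task.
        {"role": "user", "content": task},
    ]

messages = build_prompt(
    "an expert marketer",
    "Draft three taglines for a reusable water bottle.",
)
```

The same structure works for Mollick’s teaching example: swap the persona string for “a lesson plan designer for fifth grade” and the task for the class element you’re developing.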
This appears to extend to other contexts. For example, AI has been demonstrated to respond to emotional manipulation by actually producing better work. A study found that using EmotionPrompt (a prompt engineering technique) resulted in an average improvement of 10.9% across performance, truthfulness, and responsibility metrics.
Mollick notes that researchers don’t know why this is, but it’s worth keeping in mind. This fact introduces a tightrope to walk. “The danger of treating the AI like a person is you’re committing the cardinal sin of AI researchers, which is to pretend that a computer is a human… It doesn’t think like a human. You might start to become more persuaded by it. You might become blind to its biases. You might think it’s more capable than it is.”
This is an easy trap to fall into because you are the one using the tool. After a short time, the tool can become an extension of yourself, and you might not be as critical of it as you should be.
Assume this is the worst AI you’ll ever use
Now for the exciting bit (or the scary bit, if you’re pessimistic): AI is only going to get better. This means that any success you’ve had will likely improve, and any failure you’ve had is worth reevaluating.
From GPT-3, which was released to the public in 2021, to GPT-4, which was released in 2023, AI has progressed from writing like a sixth grader to writing about as well as a first-year PhD student. Mollick states that the doubling time for AI’s capability is about every five to nine months. By comparison, Moore’s Law, long used as an R&D benchmark, predicts that the power of computer processing chips doubles roughly every two years.
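To see what that gap in doubling times implies, here is a back-of-the-envelope comparison (illustrative arithmetic only; the seven-month figure is simply the midpoint of Mollick’s five-to-nine-month range):

```python
# Compound both doubling rates over the same three-year window.
months = 36

ai_doubling = 7      # months (midpoint of Mollick's 5-9 month range)
moore_doubling = 24  # months (Moore's Law: roughly every two years)

ai_growth = 2 ** (months / ai_doubling)       # roughly 35x
chip_growth = 2 ** (months / moore_doubling)  # roughly 2.8x

print(f"AI capability: ~{ai_growth:.0f}x, chip power: ~{chip_growth:.1f}x")
```

Over the same three years, capability at Mollick’s rate compounds more than an order of magnitude faster than chip performance at Moore’s rate.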
“Even if the core large language model development stopped right now, there’s another ten years of just making it work better with tools and with industry in ways that’ll continue to be disruptive.”
As optimistic as Mollick is about the future of AI, he notes that there is no way to know exactly how good it will get. “It’s very likely that AIs will continue to improve and get better in the near term, and now is a good time for you to start to figure out how to use AI to support what makes you human or good at things, and what things as AI gets better that you might want to start handing off more to the AI.”
While AI is full of question marks — like why does emotionally manipulating it help, why does it care about winter break, and why does it keep adding Miley Cyrus to a certain user’s playlists — it is abundantly clear that AI is here to stay. But don’t worry: Mollick sees the existential crisis you may be having (or have already had) about it as inevitable and nothing to despair over. Remember that it is a tool that can improve you, not just your work.
“We don’t know how far it’s going to go. We don’t know how good these systems are going to get. We’re not in control of how fast these systems improve. But we are in control of how we decide to use them and how we decide to apply them. As managers and leaders, you get to make these choices about how to deploy these systems to increase human flourishing.”