I’ve been wrestling with the question of where this is all going: AI is becoming more capable, seemingly by the minute, replacing more and more tasks and, ultimately, whole professions. It is impacting industries so fast that we can barely account for the changes.
Just a few days ago, Figma announced a host of new AI features for its design tools that will allow app makers to become more effective than ever before but also threaten to make designers increasingly obsolete. As a product designer who employs and oversees other designers, I felt the need to answer this question for myself. Am I going to become obsolete or superhuman? And how much can I learn from what’s now happening to designers and developers about what might become of other professions?
What AI Automates First, and Does It Matter Ultimately?
There have been plenty of studies on the question of which jobs are likely to be automated first. This Brookings Institution report finds that better-paid, better-educated workers are the ones most at risk. A friend reminded me of this very poignant tweet from author Joanna Maciejewska:
“You know what the biggest problem with pushing all-things-AI is? Wrong direction.
I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.”
This resonated with a lot of people, myself included. But when I think about it more deeply, it looks like only a short-term situation. AI is developing faster in high-data spheres like written language, images, and code, which are plentiful online. But it is also making tremendous inroads in low-data areas like driving, healthcare, agriculture, and predictive maintenance. A/C repair jobs might be safe in the short term. But in the long term, especially once we have human-level AI, these systems could simply be trained to do any task that a human can. Soon AI will be able to do art and writing, and ALSO do your laundry and dishes. In fact, in purely practical terms, it could probably do anything that you or I can do.
Are There Any (Relatively) Safe Jobs?
Given my belief that in matters of pure competence AI will soon surpass humans at essentially every task, a safe job, for me, is one that AI cannot do not for lack of competence, but for a more fundamental reason. As far as I can tell, there are only a few such reasons and, therefore, only a few relatively safe job categories in the coming years:
Jobs of Ownership - These jobs are associated with owning capital, intellectual property, or other exclusive rights enforceable by law. AI cannot own property or intellectual property, so it will likely end up serving the people who do, and serving them quite well. These jobs include:
Investor
Business Owner
Founder
Patent-holding inventor
Famous Artist
Jobs of Trust - These jobs are granted based on an institution’s trust in the character and judgment of a person, not necessarily on how much work they do. These include:
Board Member
Treasurer
Most top-level corporate executive (“Head of”) jobs will likely survive AGI because the company needs at least one person to supervise and evaluate that function, even if 99.999% of the work is done by AI.
Jobs of Human Connection - These jobs depend on the ability to foster a sense of connection, belonging, inspiration, and love in others (real or pretend). These include:
Political / union / community leader
Community managers
Therapist / Coach
Spiritual leader
Sports / Music / TV / Comedy star
Sex worker
What Happens to Other Jobs?
This is a big question, because I think this category includes the vast majority of jobs in existence today. Here are some things I think we know:
AI will allow low-skill workers to compete more effectively, increasing supply.
AI will allow high-skill workers to work faster and more efficiently and to scale to many more projects, increasing supply.
AI will allow clients and employers to get increasingly decent results without any human labor at all by using AI tools, shrinking demand.
AI will allow companies to do everything that needs doing with a lot fewer people, shrinking demand.
Talented people are increasingly likely to give up looking for a job in a shrinking job market and start their own businesses, increasing both supply and demand.
Where I think these trends lead is a market where the supply of human intelligence, talent, and work for pay keeps increasing, while demand for the same drops sharply. This is bad news for folks working in jobs of expertise where Ownership, Trust, or Connection is not a primary concern, and I think the market will start orienting toward these three poles.
The Poles of a Human Economy
As AI increasingly provides a more practical alternative to hiring a human, humans will be forced to think up other reasons employers and clients should pay for their services. If we go back to the three poles I discussed above, we can imagine these trends:
Ownership - More people may start businesses or become investors or inventors of solutions involving AI. This will allow them to benefit from the increasing competence of AI through their ownership stake in a particular AI solution.
Trust - More people may start positioning themselves as trustworthy supervisors and guides for AI and for institutions relying heavily on AI. These positions would depend on aligned values and a combination of knowledge and character.
Connection - More people may opt to build a unique brand, community, or other type of emotional connection with their customers or employers.
I think that in all the above cases, trust in your brand, whether you are an individual or a business, will be crucial. People will have to know what you stand for and love it. Otherwise, they will almost certainly be able to get a better deal elsewhere.
If you have not 100 but 10,000 to-do list apps to choose from, you will choose the one that means the most to you personally. If you have not 100 but 1,000 AI apps that can do the work of an employee, you will think twice about a person’s character and what their humanity adds to your business before you even consider hiring them.
The Utopia Scenario
The world will never run out of problems or challenges. There will always be another product, another project, another goal to be achieved. And that’s the good news.
Imagine a world where each person invests their time in one or more problems that interest them. Supported by any number of expert AIs as employees, each human could become akin to the CEO of their own global enterprise. Friends can join forces and work together, but they don’t have to. People with similar values can choose to organize, work together, or compete.
Imagine a world where every product category has thousands of tiny companies offering ever-cheaper and more personalized solutions. Where the choice is ultimately driven not by pure function (since the bar on that will be super high) but by values, style, artistic merit, and feelings of alignment and belonging.
If you don’t want to work, you won’t have to. Either society will provide a basic income, or, easier still, each newborn could one day be provided with an AI in charge of generating income for them.
There is definitely a huge disruption coming to all of our lives. But if we foster a culture of belonging, connection, trust, and ownership, we might just survive it, incidentally creating the first-ever truly human economy.
Getting It Together
In the end, we might look back at these early years of AI and realize they were the most crucial to setting AI on the right path.
If path-dependence is a thing in aligning AI and humanity, and I think it is, we have to get it together politically, socially, and psychologically now more than ever.