Originally published in Carroll Capital, the print publication of the Carroll School of Management at Boston College.


If you've read the headlines about artificial intelligence, you might believe it will turn us all into horses. Automobiles, of course, changed horses from essential laborers to luxury purchases in just a few years. AI, doomsayers predict, will do something similar to us humans. It'll take our jobs and leave us to fill niche roles.

Professors at Boston College's Carroll School of Management who study AI call such predictions overblown. Yes, AI will revolutionize the workplace, and, yes, some kinds of jobs will disappear. The McKinsey Global Institute, for example, has estimated that activities accounting for 30 percent of hours currently worked in the United States could be automated by 2030. But Carroll School scholars argue that people who learn to use AI to increase their productivity could end up better off. As they see it, AI-adept folks will be able to work faster and smarter.

"I don't think our real concern right now is about overall job loss," says Sam Ransbotham, a professor of business analytics. "What's going to happen is you're going to lose your job to someone who's better at using AI than you are, not to AI itself."

How do you become an AI ace? It's doable for many people, says Ransbotham, who's also the host of the podcast Me, Myself, and AI. You don't have to become an expert, just the most knowledgeable person in your office.

With curiosity and diligence, most anyone can learn enough to figure out how to apply AI on the job, he says. The way to start is with play. Go online and play around with ChatGPT, OpenAI's chatbot. Try, say, having it write first-draft emails or memos for you. (But fact-check anything you use: ChatGPT and other large language models can sometimes offer up "hallucinations," information that sounds plausible but is false.)

"AI tools are accessible to the masses," Ransbotham says. "That's an interesting change. Most people don't play with Python code." He uses AI to generate the background images for slides in his presentations. "For me, images on slides fall into the good-enough category. I want my computer code to be awesome, but the images I use on slides can just be good enough."

In speaking of "good-enough slides," Ransbotham was alluding to the peril of leaning too heavily on AI: what he calls the "race to mediocrity. ... You can use an AI tool to get to mediocre quickly," he explains. ChatGPT, for example, can give you a draft of an email or memo in seconds. But its prose will be generic, lacking color and context, because ChatGPT "averages" the prose it finds on the web. Stop there, and you'll end up with average prose.


Business Analytics Professor Sam Ransbotham

Another way to tool up on AI is to read and listen. Plenty of established publications, like Wired and Ars Technica, as well as newer ones, like Substack newsletters by Charlie Guo and Tim Lee, cover AI. Ditto for podcasts like Ransbotham's. As you explore, understand that, despite the hype, the technology still has real limitations, says Sebastian Steffen, an assistant professor of business analytics. "I tell my students that ChatGPT is great for answering dumb questions," he says. "For factual questions, it's quicker than Wikipedia."

But AI can't make judgments, which is often what work entails. Your boss may ask you to help formulate strategy, allocate staff time and resources, or determine whether a worrisome financial indicator is a blip or the beginning of something bad. Facts can inform those decisions, but facts alone won't make them.

Steffen cautions that it may take several decades before we really understand how to use AI and the best ways to incorporate it into our workplace routines. That's typical of big technological rollouts. Even AI's inventors may not see the future as clearly as they claim. "Alfred Nobel invented dynamite to use in mining, but other people wanted to use it for bombs," he says. That troubled Nobel, a Swedish chemist, and was one of the reasons he funded the Nobel Prizes.

Even in an AI world, humans will still likely have plenty to do, says Mei Xue, associate professor of business analytics. "Think about doctors: we still need someone to touch the patient's belly" to get subtle information that sensors miss, she says. Robots can move pallets in warehouses, but they haven't learned bedside manner. Xue says humans will likely continue to fill roles that require "talking to clients, meeting with customers, reading their expressions, and making those personal connections; we can gather subtle impressions that AI can't."

AI can't tell whether the crinkles at the corner of someone's eyes are from a smile or a grimace. So soft skills will still be rewarded. Brushing up on those may pay off.

Even in humdrum workplace communications, like those endless emails and memos, there will likely be a continuing role for us humans, Xue says. "What's unique with us humans is personality, originality, compassion: the emotional elements." ChatGPT can generate jokes, but it can't know your coworkers or clients and what will resonate with them.


Similarly, you can let AI write your cover letters for jobs or pitches to clients. But you might fail to stand out, Xue says. ChatGPT "is searching for what's available on the internet and putting together what's best based on probability," she explains. "For now, it can't provide originality."

Xue adds that one can find the need for a human touch, or voice, in unexpected places. "This weekend I was listening to some books on an app in Chinese. I found they offered two types of audiobooks: one read by a real person and one by an AI voice. I didn't like the AI readings. They sounded fine but had a perfect voice. When you have a real person read, you feel the emotion and uniqueness."

Teachable Moment

The Carroll School gives professors three options for using AI as a tool.

By Lizzie McGinn

With the launch of ChatGPT in fall 2022, many educators feared that AI would completely upend academic integrity, a concern that many Carroll School faculty initially shared. "At first [the reaction was] 'we have to stop this menace,'" says Jerry Potts, a lecturer in the Management and Organization Department. Still, a handful of professors started making a compelling case: AI wasn't going anywhere; instead, the Carroll School would have to rethink how to use it academically. By the following fall, three new policy options had been presented: professors could completely prohibit AI, allow free use with attribution, or adopt a hybrid of the two.

Some faculty members, like Potts, have fully embraced AI as an educational tool. In his graduate-level corporate strategy class, one project tasks students with pitching a business plan for a food truck with only 30 minutes to prepare. Potts has found that while AI often helped with organizing the presentations, it was humans who came up with the most creative ideas overall. Bess Rouse, associate professor of management and organization and a Hillenbrand Family Faculty Fellow, opted for a hybrid AI approach and allows it only for specific class assignments. In one case, she instructed students to use ChatGPT in preparing for peer reviews, which minimized the awkwardness of critiquing other students' work.

"There is less concern that this will be the ruination of teaching," says Ethan Sullivan, senior associate dean of the undergraduate program. "We've instead pivoted to how AI complements learning." For his part, Potts is optimistic. He says that if professors stay on top of this technology and adapt their courses accordingly, "we should be able to take critical thinking to another level."


Tim Gray is a contributing writer for the Carroll School of Management.

Illustration by David Plunkert.