ChatGPT — the artificial intelligence (AI) language model that has taken the world by storm — recently passed, in many cases with flying colours, 12 of 15 MBA modules put to it by South African business schools in an average time of nine minutes per paper.
The results are hardly surprising. ChatGPT has already done well in an MBA exam at the University of Pennsylvania's Wharton School, a Stanford Medical School final and several legal exams at US universities.
Harvard University has been quick to embrace the new technology, announcing its plans to introduce the first ChatGPT-powered teaching assistant next year, while the White House has just penned a blueprint for an AI Bill of Rights.
At the other extreme, some public schools in New York and Seattle have prohibited the use of ChatGPT on all devices and networks, while Russia, China, North Korea, Cuba, Iran, Syria and Italy have all banned its use outright.
Of course, students are typically two steps ahead of the authorities when it comes to being early adopters of technology. With plagiarism and cheating already creating a headache for universities, especially online providers, the rapid advance of AI has created a real dilemma for business schools.
On the one hand, the deans canvassed for this story realise they need to embrace AI to understand it better so that they can maximise its benefits in education and research. On the other, they know they have to police its use by students to prevent them from gaming the system, while recognising that anti-cheating software will probably be outpaced by the speed at which generative AI is evolving.
The solution appears to be that business schools will have to improve the way they engage and assess students. For MBAs, which are all about promoting higher-order learning and original thinking, this can only be a good thing.

“The answer is to embrace it, but judiciously and with a learning mindset,” says Shamim Bodhanya, a senior research fellow at Rhodes Business School and MD of the Leadership Dialogue, a private consultancy that is coaching academics on how to use ChatGPT.
He admits that AI poses serious risks and dangers — including that it can perpetuate racial and gender bias — but feels that universities should be putting basic AI guidelines in place so that academics know it’s OK to start working with it.
Formal institutional responses to AI are still in their infancy in most parts of the world but South African universities are starting to adopt AI policies.
Wits Business School (WBS) recently released senate standing orders on AI. They focus less on AI-detection tools and more on shifting testing away from high-stakes, summative assessments to more continuous and diverse assessments that include oral examinations, student presentations and other face-to-face activities.
Rhodes Business School has also just issued a policy on AI. “We’ve adopted a values-based approach (where we educate the student on the pros and cons of AI) combined with checks and balances as well as penalties and sanctions to make sure our academic project isn’t compromised,” explains school director Owen Skae.
“Ultimately, we need to embrace AI so we can understand what it can and can’t do,” he adds. “It’s a pipe dream to think we can avoid it, but we still must ensure that the academic rigour that we stand for is maintained.”

Passing the test
Skae participated in the MBA study by André Calitz of Nelson Mandela University and Margaret Cullen of the Nelson Mandela University Business School, in which ChatGPT passed 12 MBA exam papers from 2022, provided and marked by four local business schools.
The AI scored 51% overall, but Skae gave it only 4% for his paper on management accounting. It seems ChatGPT, while good at generating fact-based text, fares less well with numerical analysis. For instance, it is unable to parse Excel spreadsheets or produce graphs and images without the additional boost of plug-ins, though these are readily available. However, its latest iteration, GPT-4, easily overcomes many of these shortcomings.
Sam Altman, the CEO of ChatGPT creator OpenAI, said in a well-publicised interview last year that his firm would help devise ways to identify ChatGPT plagiarism. However, he warned that complete detection was impossible, and said schools should avoid relying on plagiarism-detection tools.
“Generative text is something we all need to adapt to,” Altman said. “We adapted to calculators and changed what we tested for in math class, I imagine. This is a more extreme version of that, no doubt, but also the benefits of it are more extreme as well.”
Destroying a nation does not require the use of atomic bombs or the use of long-range missiles. It only requires lowering the quality of education and allowing cheating in the examinations by the students
— Sign at the entrance to Unisa
The most revealing part of the Calitz and Cullen paper, “ChatGPT: The New MBA Student in Your Class”, is the comments from the markers. ChatGPT’s best score was 78% for economics, followed by 75% for marketing management. The examiner of the latter paper noted: “If you can google the answer, ChatGPT is spot on.”
The AI model scored 65% on the organisational behaviour module, with the lecturer saying its answer to the first question on the paper “is probably the best answer I ever received on this particular question”.
Other lecturers pointed out that while many of ChatGPT’s answers to straightforward theoretical questions were of textbook quality, it was sometimes less proficient at interpreting case studies.
And when asked to predict an outcome or express an opinion, it issued a disclaimer, stating that as an AI it is not able to predict the future or form an opinion. However, experts say that with a few changes in prompts, or the use of plug-ins, this limitation can easily be overcome.
“This is food for thought and perhaps makes me appreciate that we have sit-down, in-person exams,” said one examiner. “On the other hand, it also offers me the opportunity to reflect on whether I am asking the right sort of questions.”
Another examiner said: “For any theoretical-based assessment or online questionnaire we will have to think of innovative ways to truly test a delegate’s knowledge [so as] not [to] just get an AI-generated report.”
Indeed, Calitz and Cullen recommend that, to ensure students are doing their own work, educators should avoid assignments that don’t require critical, original thinking or creativity in the way knowledge is applied.

Prepare for an ‘arms race’
In addition to changing the way they assess students, most universities are turning to proctored exams, secure online assessments and anti-plagiarism software, such as Turnitin, to prevent the unsanctioned use of AI.
However, Ahmed Shaikh, MD of Durban-based Regent Business School, has found that Turnitin’s AI detector has a low rate of accuracy, especially when scanning for plagiarism in academic assignments for MBA modules.
“In general, digital scamming has taught us that no matter how we try to minimise the overall impact of a rip-off, the ingenuity of scammers always comes up with an alternative scam,” he says.
Bodhanya thinks there’s going to be an “arms race” between advances in AI and the ability of AI-detection software to keep up — a race that the latter will lose. As such, he thinks the whole way of educating young people needs to change.
He recommends that business schools:
- Use the flipped-classroom approach, in which students are encouraged to use AI to prepare for class at home but, once in the classroom, must apply that knowledge in engaging and debating with their peers.
- Change the way they assess students: written essays and multiple-choice tests should be supplemented with more creative forms of assessment such as oral engagements and in-class discussion. Online institutions can use supervised online discussion rooms.
- Set deliberate booby traps and red herrings by inserting false facts into assignments that robots won’t detect but a human should. However, Bodhanya cautions there may be ethical problems with this approach.
- Embrace AI fully: tell students they must use it in completing their assignments but must also show how they constructed their AI prompts, cite all the AI resources used and critique the AI’s output.
Marko Saravanja, chair of Regenesys Business School, believes AI is here to stay and will continue to evolve. The business school has decided not to fight the technology but rather to embrace it to deepen the learning experience for students.
The problem is not cheating students but educators who are not creative and innovative or technologically competent enough to create assessments that are cheat-proof
— Marko Saravanja
At the same time, Regenesys uses Turnitin to detect AI-generated writing. It is also exploring similar products, such as GPTZero, and has updated its academic policies to make AI-inspired academic dishonesty a punishable offence of the highest level.
But the best solution, Saravanja believes, is to use a mix of assessment methods, from individual proctored assessments to team-based applied projects.
“Our MBA students do projects in teams and give an oral presentation of their projects to the assessors, which provides us with an opportunity to engage directly with the students, ask questions, assess their in-depth understanding and explore if it is their own work or AI-generated,” he explains.
“The problem is not cheating students but educators who are not creative and innovative, or technologically competent enough, to create assessments that are cheat-proof.
“I believe that AI will run the world in our lifetime. Those who embrace it will become masters. Those who don’t will become slaves.”
WBS information systems lecturer Mitchell Hughes is “absolutely in favour” of using AI in MBA programmes and stresses the importance of creating a culture of digital and AI literacy at the business school.
He believes that if used with positive intent to enrich teaching, learning and the student experience, AI has “tremendous potential for good”.
“It’s obviously important to acknowledge and do our best to mitigate the threats, particularly in terms of ethics and privacy,” he says. “But on balance the potential benefits far outweigh the challenges. It is therefore a phenomenon to be embraced, not denied or shut down.”

Working with AI
But given the risks, why is it necessary to embrace AI at all? Why not just pull up the drawbridge? Surely it’s better to be safe than sorry?
Interestingly, none of the business schools canvassed for this piece believe this is the right approach. AI offers just too many benefits and is already widely diffused into all sectors, not just academia.
Shaikh believes that AI is a major technological revolution that cannot be shied away from; academia will just have to adapt to its use.
“We are very aware that we need to fundamentally rethink ways in which we teach, so we are thinking about integrating generative AI tools mindfully into our core curriculum for the MBA,” he says.
This will be shaped by the school’s policy on AI. Shaikh expects that it will require academics to attend workshops on the basics of teaching with AI and on course redesign, as well as on how to partner with students in determining how AI is used in the classroom.
There are several generative AI tools on the market specifically designed for use in education.
They include Gradescope, an AI tool that enables students to assess each other while providing feedback; Fetchy, an AI platform for educators that can create lesson plans, enhance teaching methods and help with time management; Dragon Speech Recognition, which can transcribe up to 160 words per minute; Cognii, a virtual learning assistant; and Carnegie Learning’s various platforms, which mimic human tutors.
It’s a pipe dream to think we can avoid it, but we still must ensure that the academic rigour that we stand for is maintained
— Owen Skae
Other generative tools include AlphaCode and GitHub Copilot, which generate code, and Google’s chatbot, Bard, which, like ChatGPT, produces conversational text.
Many academics are already using AI tools to create referencing lists and systems and to do scoping reviews to find seminal articles — a once laborious, paper-based process involving library cards, heavy books and dusty archives.
“Today we are overwhelmed with information,” says Skae. “So if AI can save you time [on the drudge work] which you can rather spend comparing and contrasting, AI can enhance your knowledge. But if it allows you to get away with shallow thinking … well, that’s an inherent danger we have to guard against.”
Perhaps the best reason for not shutting the door on AI is that, as Calitz and Cullen conclude in their paper, students will graduate into a world full of generative AI programmes and will need hands-on experience to navigate their use in the workplace.
Caitlin Ferreira, MBA programme director at the University of Cape Town’s Graduate School of Business, agrees. While stressing that the main aim of an MBA is to create individuals who deliver work and ideas that offer uniquely human insights and higher-order solutions, she believes MBAs also have an important role to play in preparing students to navigate a world in which AI will play an increasingly important role.
“If we closed our eyes and pretended that AI tools didn’t exist, we would be doing our students and ourselves a disservice,” she says. “Instead, teaching students how to use these tools … in a way that augments human capabilities … sets students up for future success.”
However, she adds that it’s equally important for students to learn how not to use AI.

“We need to teach students to critique information and ensure that they’re aware of misinformation and potential security and data privacy concerns when using AI tools,” she says.
Milpark Business School director Segran Nair agrees that it is part of the role of a business school to ensure prospective business leaders know how AI works and what its potential is in their respective fields.
He has seen first-hand how the phrasing of AI prompts can result in different responses, some completely inaccurate. “This is why students need to be coached in terms of how they use AI,” he says. “Academic staff can use it to heighten critical thinking on MBA programmes, to show how fake information and false facts are presented, and what the importance of referencing and the correct prompting for information is.”
Nair also points out that a tool like ChatGPT can be very useful for helping students phrase concepts and polish language.
“Students often come from disadvantaged backgrounds where English instruction was poor,” he says. “Using a programme to help them phrase and put together assignments can be very beneficial in assisting students battling language barriers.”
On the other hand, the downside observed by some academics is that, with the rise of AI-generated text and speech-recognition software, students’ own writing skills are already being compromised.
Gordon Institute of Business Science (Gibs) deputy dean Louise Whittaker believes that when AI is used as a tool to assist writing and thinking, rather than to replace thinking, it can be “very powerful”.
For instance, some Gibs lecturers are actively asking students to get AI to generate text that they then have to critique to produce more refined, applied answers.
But while aware of these positive applications, she makes a crucial point: “MBAs are supposed to teach leadership and personal mastery skills as well as technical skills. AI cannot replace or even teach these.”
The bottom line is that AI is here to stay, and business schools need to adapt quickly to this new terrain. AI can certainly expedite teaching and learning but it can also expedite cheating. And while there are ways to ensure cheating is minimised, none is foolproof.
Even the most ardent AI enthusiasts accept that it presents risks. And for all its capabilities, don’t expect AI to produce original thinking or provide a shoulder to cry on. Well, not yet, anyway.
Nelson Mandela University researchers asked ChatGPT to come up with its own list of the pros and cons of using ChatGPT in education.
It said the pros of using ChatGPT in education are:
- Availability: ChatGPT can provide 24/7 support to students, which can be useful for students in different time zones or with varying schedules;
- Personalisation: ChatGPT can provide individualised responses to students based on their queries, making learning more personalised and interactive;
- Speed: ChatGPT can provide instant answers to student questions, helping to keep the learning process fast paced and efficient; and
- Convenience: ChatGPT can be accessed from anywhere with an internet connection, making it convenient for students who are on the go or have limited access to traditional educational resources.
It said the cons of using ChatGPT in education are:
- Accuracy: ChatGPT is only as accurate as the data it has been trained on, and there is a risk of students receiving incorrect information if the model has not been trained on the specific topic in question;
- Limited creativity: ChatGPT is limited to providing answers based on existing data and may not be able to generate new ideas or approaches to solving problems;
- Lack of human interaction: While ChatGPT can provide instant answers, it lacks the personal interaction and feedback that students receive from a human teacher; and
- Dependence on technology: ChatGPT relies on technology to function and if the technology fails or is unavailable, students may be unable to access educational resources.
— ChatGPT, according to ChatGPT