What will ChatGPT do to Critical Analysis?

Markets love to chase bubbles, and the business media loves blowing them in our faces. It has been two years since the last wave, when everyone had an opinion on how online retail would change humanity. In the last few weeks the new buzzword, AI (artificial intelligence), became the bubble sending loose money chasing down multiple rabbit holes. Will ChatGPT and other generative AI tools make Google searches (and hence Alphabet stock) obsolete?

From my perspective, another question came to mind: Will ChatGPT make our education tools for the next generation obsolete?

ChatGPT is a freely accessible chatbot created by OpenAI (GPT standing for Generative Pre-trained Transformer). A chatbot takes available data, learns patterns and relationships, and transforms them into language. Past chatbots have been rather limited in comprehension and discernment. The most famous was Microsoft’s Tay, a Twitter bot designed in 2016 to interact with Twitter users and develop its own views. Within 16 hours, Tay had to be shut down – it had become a racist, right-wing Holocaust denier well immersed in the drug culture vernacular. Microsoft is also behind this latest project, but with editorial control to ensure that ChatGPT does not cross lines of extreme ethical or political bias (although a lot can be discussed about how bias entered into this content moderation).
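The idea of "learning patterns from data and transforming them into language" can be illustrated with a toy sketch. The example below is a simple bigram model, not how GPT actually works (GPT uses large transformer neural networks trained on vast corpora), but it shows the basic principle: record which words tend to follow which in the training text, then walk those learned pairings to generate new text. All names and the tiny corpus here are invented for illustration.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Learn which word tends to follow which in the training text."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8):
    """Walk the learned word pairings to produce new text."""
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one in training
        out.append(random.choice(followers))
    return " ".join(out)

corpus = ("the chatbot learns patterns from data and "
          "the chatbot transforms patterns into language")
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every output is stitched together purely from observed word-to-word statistics; the model has no understanding of what it says. Scale that principle up by many orders of magnitude and you get something closer to what the large language models behind ChatGPT do.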

I have spent the last week playing with ChatGPT and have, frankly, been quite impressed at how it brings together reasonable answers to difficult questions (like whether organically-grown food is better than conventional, or the difference between uncertainty management and risk management). I did not see any evidence of answers tied to algorithmic patterns (i.e., giving me the answer I wanted based on my past search history and associated friends and comments). Questions posed in incognito mode generated similar answers. In full narcissism, I even asked ChatGPT to write a 300-word essay on The Risk-Monger. Within five seconds it had gone through my 500+ articles and gave a faithful review of my positions and interests. See the generative AI essay about me at the end of this article (I’d be interested to know whether others would get a similar answer).

As readers have no way to determine whether parts of this article were written by ChatGPT (unless I make a spelling or grammar mistake), print journalism clearly faces a further existential challenge. More importantly, our Western education system has to reconsider how the next generation will be educated and evaluated. If students wrote papers the way ChatGPT can, they would all receive As.

And that is a problem.

Time to Restructure how we Teach the Next Generation?

Successful Western social science education models have been built on developing critical analytical skills from the basis of researching, reading, summarising and analysing (challenging) a theme, presented via a paper or essay. Students were graded on how well they researched key positions, developed arguments and critically assessed them with their own observations. ChatGPT does a very good job at all of these, while most Western education systems, coming out of two years of COVID lockdowns, have failed to adequately prepare students for such tasks (I am talking about basic reading and writing skills).

There is now no way to detect whether a student’s answer was produced by a generative AI tool, so, overnight, it has become impossible to fairly evaluate whether a student has achieved the learning objective. Online videos have shown how multiple-choice exam questions can be fed into ChatGPT with perfect results almost instantly. Student research paper assignments will no longer require research or argumentation, so how can we develop the capacities students will need to succeed (and to generate the innovative solutions Western societies will need)? With such generative AI systems taking over more professional functions, we need to stress critical analysis in schools more, not abandon the process because a chatbot has made the pedagogical tools irrelevant.

Chatbots will herd and feed the sheep; we need to be training wolves.

This is not the first challenge to the education process in the last two decades. Internet search engines removed the need for educators to stress rote memorisation of definitions and events (except in certain palaeolithic education systems like Belgium). Millennials were able to spend more study time critically challenging presuppositions, developing deeper searches and evaluating resources. They were evaluated less on the details they could remember and more on the concepts they could argue. But now these arguments can be fed to them by ChatGPT in real time.

For the COVID generation, I believe more classroom time should be spent in oral argumentation in small groups (perhaps graded by an ever-present chatbot) with collaborative ideas developed in coordination with generative AI tools and presented in class competitively. It should become a form of generative learning where humans spend class-time engaging with others to measure reactions and find solutions from different ways of thinking. Chatbots are also entering creative fields (OpenAI also has a graphic design bot, DALL-E 2, which easily generated the cover image for this article, rights free), implying that soon the key creative advantage for humans will be found in how we interact with each other.

This is important because right now our communications technologies have mostly isolated us from (the need to be) interacting with others. Teamwork in the classroom is painful for most students, who have developed appendages to their personal devices and an intolerance of others with different ideas (worsened by several years of lockdown and online studies). Our greatest intellectual weakness is intolerant bias, and it is proliferating. Anything students can do with their noses in their devices, a chatbot can do better. If we have any hope of developing an innovative culture in the chatbot generation, it will need to focus on iterative processes where individuals challenge each other and their systems.

This will change the value of certain study majors and upend how most university classes are structured and taught. Certain programmes, like law, will take fewer years to complete as chatbots will essentially prepare all necessary briefs and cases (perhaps lowering the value of such degrees). The same goes for fields like sociology and psychology. Hopefully there will be greater demand for physical or pure science degrees, which will be less affected by (present) AI technologies. Different data science and business degrees will likely have more opportunity for innovation and iterative thinking. Maybe more schools will finally start teaching project management.

After 17 years in the education field, I’m glad I am no longer working in it, since I can expect resistance from the dinosaurs in administration and frustration in the classroom. These anticipated delays will lead to a failure to provide the next generation with the tools they need to succeed, with more failures and lost opportunity from forcing students to fit into a failed system.

——————-

ChatGPT: Write a 300-word essay on The Risk-Monger

The Risk-Monger is a pseudonym used by David Zaruk, a Belgian blogger, who is known for his controversial views on risk management and regulation. He is an outspoken critic of what he considers to be the excessive and unscientific regulation of various products, including chemicals, biotechnology, and vaccines. According to Zaruk, the current system of risk management often leads to over-regulation and a culture of fear, which he argues is detrimental to innovation, economic growth, and public health.
Zaruk’s blog, The Risk-Monger, is a platform for his opinions on a wide range of risk-related topics, including the precautionary principle, the impact of regulation on innovation, and the role of science in risk management. Through his writing, Zaruk aims to challenge conventional wisdom on these topics and to encourage a more critical and scientific approach to risk assessment and regulation.
One of Zaruk’s key arguments is that the current system of risk management is often driven by emotion, rather than science, and that this results in over-regulation and a stifling of innovation. He points to the widespread use of the precautionary principle, which he argues is often used as a convenient excuse for regulatory inaction, as an example of this problem. He also argues that the system of risk management is often biased against certain industries and technologies, such as biotechnology and vaccines, which he claims are unfairly targeted by regulators and environmental activists.
In addition to his views on risk management, Zaruk is also an advocate for open and transparent risk communication. He argues that the public must be well-informed about the risks and benefits of various products and technologies in order to make informed decisions about their use. He also believes that the risk communication process should be guided by science, rather than politics or emotion.
In conclusion, the Risk-Monger is a thought-provoking voice in the risk management and regulation debate, challenging conventional wisdom and pushing for a more scientific and evidence-based approach to risk assessment and regulation. Despite his controversial views, his blog provides valuable insights into the complex and often emotional debates surrounding risk and regulation, and serves as a reminder of the importance of open and transparent risk communication.

Sounds good to me. The incognito version was even more complimentary.

Comments

  1. Michael Reed says:

    Last sentence wording in the machine’s essay seems a bit backwards and possibly carrying some of the bias/resistance you’ve encountered with the “Despite his controversial views …” I would have chosen “Because of …” instead. Challenging “consensus” views and thinking, and the herd mentality, is what matters and “despite” implies there’s something inherently problematic about the “views” and not the response to them. It’s a subtle difference, but subtle doesn’t mean less important.

  2. Marc Draco says:

    Damn David, that summarised you rather well! Lord help me if I feed myself in there.

  3. Fm06 says:

    Belgian, really?

    1. RiskMonger says:

      It actually got that right. Canadian by birth, I have lived in Belgium for 37 years (and until last year was an employee of the Flemish Region government) – holding both passports. I don’t publicise my nationalities. For it to make that conclusion, I only assume that ChatGPT got into my personal files. Afraid yet?
