An artificial intelligence (AI) chatbot that can pass an exam prepared by a Wharton business school professor. A diabetes drug that helps celebrities (including Elon Musk) lose massive amounts of weight with seemingly no effort. ChatGPT and the drug Ozempic, respectively, are locked in a battle to be the most exciting panacea of 2023. But are they really panaceas? And if they were, should we trust them?
What was old is new again
Neither OpenAI's ChatGPT (Chat Generative Pre-trained Transformer) nor Ozempic is exactly new. The current version of the underlying model, GPT-3, is an update to GPT-2, which was released in 2019. Ozempic (and other similar semaglutide drugs) has been around for years: it was first approved by the US Food and Drug Administration for the treatment of type 2 diabetes in 2017, and in 2021 semaglutide was approved for chronic weight management.
But what exactly is ChatGPT? When asked to describe itself, ChatGPT responded:
ChatGPT is an AI language model developed by OpenAI, capable of generating human-like text from the input provided to it. The model is trained on a large corpus of text data and can generate answers to questions, summarize long texts, write articles, and much more. It is typically used in conversational AI applications to simulate human-like conversations with users.
At first glance, these two miracle products seem irresistible. Who doesn’t want an AI chatbot writing content or code for them? Who doesn’t want to lose weight without feeling hungry? ChatGPT and Ozempic are two completely different things, but they both appeal to one of humanity’s most basic instincts: getting something for nothing. And precisely for this reason, I believe we should not trust either of them.
Cheaters will cheat with ChatGPT
I’m not a big fan of Taylor Swift, but she puts it well: “fakers gonna fake.” In the same way, cheaters will cheat with OpenAI’s ChatGPT. Cheaters have always been able to cheat on tests and essays, whether by ordering old essays by mail (before the internet existed) or, today, in any of the millions of digital ways available. In that sense, ChatGPT is nothing more than a novel tool that will allow cheaters to cheat at an impressive scale.
I may sound old-fashioned, but I think cheating is wrong. So do academic institutions, most of which have honor codes and ethics policies. If you use ChatGPT to write an essay or create any other work and submit it as your own, you are cheating. Cheating is wrong, and besides, the person you hurt most when you cheat is yourself. Ultimately, whether at work or in life, cheaters will lose out because they lack the skills that cheating prevented them from learning.
There’s a reason we need ethical AI
I am not a doctor, so I cannot comment as an expert on the long-term effects of off-label use of Ozempic. But I am a data scientist with a PhD, and I can assure you that ChatGPT is being positioned extremely aggressively as an assistant. Technologists have always talked about the value of AI in helping and augmenting the human race, not replacing or dulling it. ChatGPT does not help or enhance human creativity; it only returns a recombination of the data on which it was trained. That does not equate to intelligence. Many instances have been reported of ChatGPT making errors under a guise of authority.
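The point that a language model only recombines patterns seen in its training data can be illustrated with a toy sketch. This is a deliberately simplistic bigram model, not OpenAI's actual architecture (ChatGPT uses a large neural network); it exists only to show how statistical text generation rearranges its training corpus:

```python
import random
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict:
    """Map each word to the list of words that follow it in the training text."""
    words = corpus.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model: dict, start: str, length: int, seed: int = 0) -> str:
    """Generate text by repeatedly sampling a word that was seen after the current one."""
    random.seed(seed)
    output = [start]
    for _ in range(length):
        candidates = model.get(output[-1])
        if not candidates:
            break  # the model has never seen anything follow this word
        output.append(random.choice(candidates))
    return " ".join(output)

corpus = "the model learns patterns and the model repeats patterns it has seen"
model = train_bigram_model(corpus)
print(generate(model, "the", 5))
```

Every word this sketch can emit came from its corpus; it can surprise you with new orderings, but it cannot produce an idea the training data did not contain.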
The reality is that neither ChatGPT nor any other AI has consciousness. FICO currently uses generative AI to produce synthetic training data for robustness and scenario testing, and Robotic Process Automation (RPA) in certain customer interactions in areas such as fraud case management and collections. These RPAs are built ethically, explainably, responsibly, and deterministically, qualities that are absolutely essential when using AI to address any financial question or case that affects people. If not carefully managed, AI-based interactions and decisions can quickly become unresponsive, misguided, unpredictable, and dishonest.
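To make the contrast concrete, here is a minimal sketch of what deterministic synthetic-data generation for robustness testing can look like. This is not FICO's actual system; every field name, value range, and the `synthetic_transactions` function are invented for illustration. The key property is the fixed seed: the same inputs always produce the same data, so the test scenario is reproducible and auditable:

```python
import random

def synthetic_transactions(n: int, fraud_rate: float = 0.05, seed: int = 42) -> list[dict]:
    """Generate n synthetic transactions with a known fraud rate for stress testing.

    Deterministic: the same (n, fraud_rate, seed) always yields the same batch,
    so a test scenario can be reproduced and audited exactly.
    """
    rng = random.Random(seed)  # local RNG; no hidden global state
    rows = []
    for i in range(n):
        is_fraud = rng.random() < fraud_rate
        rows.append({
            "txn_id": i,
            # Hypothetical pattern: fraud skews toward large, late-night transactions.
            "amount": round(rng.uniform(500.0, 5000.0) if is_fraud
                            else rng.uniform(1.0, 300.0), 2),
            "hour": rng.choice([2, 3, 4]) if is_fraud else rng.randint(8, 22),
            "label": int(is_fraud),
        })
    return rows

batch = synthetic_transactions(1000)
print(sum(r["label"] for r in batch), "synthetic fraud cases out of", len(batch))
```

Unlike a free-form chatbot, a generator like this has fully specified, inspectable behavior, which is what makes it suitable for testing models that touch people's finances.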
Put ChatGPT in perspective
I find ChatGPT an amusing toy. It is effective at finding and correcting errors in computer code. Schools are even trying to find ways to teach with it rather than blocking students’ access to it. But ChatGPT is also a dangerous digital “drug” for the mind which, if used in a contraindicated way, will weaken people’s creative intelligence. It reminds me that Steve Jobs forbade his own children from using iPads.
Furthermore, without audits and regulatory constraints, ChatGPT cannot be safely used in customer-related decisions. I do not support the use of ChatGPT at FICO because it cannot be audited, interpreted, or explained; it is not the right technology for our company or for making financial decisions that affect customers. And I’m not the only one who thinks so. Even Sam Altman, the CEO of OpenAI, agrees: “Currently, it is a mistake to rely on ChatGPT for important tasks. The system is a glimpse of the future and, in terms of robustness and reliability, there is still a lot of work to be done,” he said. Clear enough.
The world will adapt to ChatGPT as it has to every technology. Universities will modify their curricula and the way they administer exams; I even predict a return to handwritten and oral exams. As for me, I will continue to write articles based on my own ideas, without using a chatbot.
Scott Zoldi: At FICO, Scott has been responsible for the creation of more than 120 analytical patents. He serves on two boards of directors, Software San Diego and the Cyber Center of Excellence, and received his Ph.D. in theoretical and computational physics from Duke University.