Artificial intelligence has come a long way in the last decade. Machine learning algorithms can now do many things previously thought impossible for a machine, including generating original works such as novels, screenplays, and even music. AIs can diagnose tumors, read maps, and play games, often faster and more accurately than the best humans. But one thing has consistently eluded these systems: humor. That is, understanding and telling jokes.
Google’s new algorithm is special: it is powerful enough not only to understand a joke it has never seen before, but also to explain eloquently what makes the joke work. This is much harder than it sounds, and it is a big step toward building algorithms that can understand human language, and perhaps human nature itself.
Large Language Models (LLMs) like GPT-3 have enabled chatbots to produce eerily accurate imitations of human communication, so much so that it is often hard to tell what is machine-generated and what was typed by a person. The new model developed by Google Research is known as the Pathways Language Model, or PaLM.
Input: What is the difference between a zebra and an umbrella? One is a striped animal related to horses, another is an object you use to keep rain from falling on you.

Model output: This joke is an anti-joke. The joke is that the answer is obvious, and the joke is that you expected a funny answer.
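The input/output pair above follows a simple prompting pattern: the joke is presented as text and the model completes an explanation. As a rough illustration (the exact prompt wording Google used is not public, and `build_joke_prompt` is a hypothetical helper), such a prompt could be assembled like this:

```python
def build_joke_prompt(joke: str) -> str:
    """Assemble an 'explain the joke' prompt of the kind shown above.
    The wording here is illustrative, not Google's actual prompt."""
    return (
        "I will explain this joke.\n"
        f"Joke: {joke}\n"
        "Explanation:"
    )

prompt = build_joke_prompt(
    "What is the difference between a zebra and an umbrella? "
    "One is a striped animal related to horses, another is an "
    "object you use to keep rain from falling on you."
)
print(prompt)
```

The model's completion after "Explanation:" is what gets reported as its understanding of the joke.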
According to Google, PaLM is a 540-billion-parameter model capable of grasping complex concepts and relationships that were previously thought to be beyond the reach of computers. At the moment, the PaLM system can scale training across 6,144 chips, using a mix of English and multilingual datasets drawn from books, Wikipedia, web documents, conversations, and even GitHub code.
Their goal is to open a new chapter in AI by having a single system that can solve virtually any type of problem or task, instead of training thousands of individual algorithms each designed for a narrow task. But perhaps the most surprising examples show how the model can recognize and interpret humor, even humor specifically designed to unnerve the listener.
Here are other examples:
Can distinguish between cause and effect
Pathways is not an AI that tells jokes; that is just one of its many capabilities. It is essentially a giant natural language model that can distinguish between cause and effect and can make sense of combinations of concepts in their appropriate context. Understanding and explaining jokes is simply a great way to demonstrate that ability, because humor often involves saying one thing but meaning another.
Human communication has no obvious, clear-cut rules, so a conventional AI algorithm cannot capture the depth and richness of human language: you can never feed the machine enough examples to describe every possible communication scenario.
The same AI can also solve simple math problems, explaining its reasoning step by step, using a technique Google calls “chain-of-thought prompting.” It can also write new code from a simple text prompt, translate code from one language to another, and fix compiler errors in existing code.
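Chain-of-thought prompting works by showing the model a worked example whose answer spells out its intermediate reasoning, so the model imitates that step-by-step style on a new question. A minimal sketch, with an illustrative exemplar (the problem wording is not taken from Google's examples):

```python
# One worked example whose answer spells out the reasoning steps.
EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def chain_of_thought_prompt(question: str) -> str:
    """Prepend the worked exemplar so the model is nudged to reason
    step by step before stating a final answer."""
    return EXEMPLAR + f"Q: {question}\nA:"

print(chain_of_thought_prompt(
    "A cafeteria had 23 apples. It used 20 and bought 6 more. "
    "How many apples do they have?"
))
```

The model then completes the final "A:" with its own reasoning chain, and the last sentence of that chain is taken as the answer.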
But its most impressive feature is its natural language understanding and generation. Not only can it distinguish cause and effect and understand conceptual combinations, as evidenced by its take on comedy, it can even guess a movie from an emoji.
Pretty impressive what Google AI has announced with examples of its new Pathways Language Model (PaLM): “reason” with cause and effect, “understand” emojis and translate them to movies, find synonyms, reason about counterfactuals, and more
— Antonio Ortiz (@antonello) April 5, 2022
Ethical issues
Many researchers and tech ethicists have criticized Google and other companies for their use of large language models, including Dr. Timnit Gebru, who was forced out of Google’s AI Ethics team in 2020 after co-authoring an unapproved paper on the subject.
In the article, Gebru and her co-authors described these large models as “inherently risky” and harmful to marginalized people, who are often underrepresented in the design process. Despite being “state of the art,” GPT-3 in particular has a history of bigoted and racist responses, from casually using racial slurs to associating Muslims with violence.
“In fact, most language technology is built first and foremost to meet the needs of those who already have the greatest privileges in society,” the authors explained.
All in all, PaLM is well on its way to taking AI to the next level: bridging the gap between machine learning and human learning. There is still much work to be done, especially in improving ethical considerations and data sources to mitigate potential biases that can lead to toxic stereotypes and other undesirable outcomes.