What is GPT-4?
GPT-4 is a state-of-the-art machine for creating text. Such is its skill at producing text, however, that it can easily be mistaken for an ability to understand and reason about the world. For instance, give GPT-4 a question from a US bar examination and it will write an essay that displays legal knowledge; give it a medicinal molecule and ask for derivatives and it will seem to apply biochemical expertise; ask it for a joke about a fish and it will appear to have a sense of humour – or at least a good memory for bad puns (“what do you get when you cross a fish and an elephant? Swimming trunks!”).
Is it the same as ChatGPT?
The two are related, but not the same: ChatGPT is the car, while GPT-4 is the engine. It is a powerful general technology that can be shaped to a number of different uses. You may already have experienced it, because it has been powering Microsoft’s Bing Chat for the past five weeks. And GPT-4 can be used to power more than just chatbots. For instance:
- Duolingo has integrated a version of it into its language-learning app, where it can explain where learners went wrong, rather than simply telling them the correct answer;
- Stripe is using the tool to monitor its chatrooms for malicious actors; and
- Be My Eyes, a company that makes assistive technologies, is using a modified version of GPT-4 to build explainers that simplify complex topics.
GPT-4 outperforms its predecessors on a range of technical challenges, and it also has significantly stronger ethical guardrails built into the system. To illustrate this point, OpenAI recently published a paper full of examples where GPT-3 gave potentially dangerous answers that GPT-4 guards against. The system was also given to third-party researchers to see whether they could get GPT-4 to behave like a sinister AI from the movies. Although it failed at most of those tasks, the researchers did manage to use it to persuade a human worker via Taskrabbit.
AI Race Disrupts Education Firms and Raises Ethical Questions
Artificial intelligence (AI) is raising moral questions for many of its developers, who have been warning of this reality for decades. Whether an AI system should behave ethically may sound like a simple yes-or-no question, but understanding what ethical behaviour means in practice is anything but straightforward: AI systems can be taught the rules, but they can also be taught to break them. The issue has become pressing as the AI race has begun to disrupt education firms and the UK’s No. 10 has gone on record about the “existential” consequences of AI.
Data protection laws have also been a hot topic recently, with UK watchdogs warning chatbot developers and OpenAI’s leaders calling for regulation to prevent AI from causing far-reaching harm to humanity. Meanwhile, UK schools have told media outlets that they feel “bewildered” by AI and do not trust tech firms, and Italy’s privacy watchdog has moved to protect users by banning ChatGPT over data breach concerns.
As AI chatbots become more advanced, it is getting harder for people to spot phishing emails, according to experts. In a related warning, experts have said that elections in both the UK and the US are at risk of AI-driven disinformation. Yet for all the ethical questions that come with AI, Neil Tennant of the Pet Shop Boys has said that AI songwriting should not be considered a sin.