Ever since OpenAI launched its popular ChatGPT, the AI language model has received praise from users, and although it's supposed to make our lives easier, the platform hasn't been without its critics.
After investing $10 billion in OpenAI, Microsoft has been leveraging the AI platform over the past few weeks, from experimenting with ChatGPT to control robots and drones to preparing the introduction of GPT-4.
The Windows maker announced during a hybrid information event that it will reveal GPT-4 at its AI in Focus event on March 16th. Although much is yet to be revealed about the new AI model, it will reportedly offer "different possibilities," positioning it to compete with Google's and Meta's text-to-video platforms.
Andreas Braun, CTO of Microsoft Germany, said in an announcement at an AI kickoff event on March 9th, "We will introduce GPT-4 next week, there we will have multimodal models that will offer completely different possibilities – for example, videos."
Marianne Janik, CEO of Microsoft Germany, also spoke at the event, emphasising the importance of harnessing AI's potential and adding that it's about performing repetitive tasks in new ways: "We in Germany still have a lot of legacy in our companies" and "keep old treasures alive for years".
Microsoft's CEO will be present when GPT-4 is unveiled and showcased to the public, and we could be witnessing "the future of AI" this week.
Microsoft’s demonstration of GPT-4 will include how the AI can be used in productivity apps such as Word, Outlook and Teams.
Since language models can recognise, summarise and generate text from knowledge gained from large datasets, Microsoft might be looking to utilise OpenAI's DALL-E tool, which creates images from natural-language descriptions, in GPT-4.
Holger Kenn, chief technologist of business development at Microsoft Germany, explained what multimodal AI can do, saying it can translate text into images, music and video. He added that Microsoft products already have responsible AI built into them and that "millions of queries can be mapped into the APIs" using the cloud.
While we may have to wait until March 16th to learn what GPT-4's parameter count could be, the largest GPT-3.5 model has 175 billion parameters, while Meta's AI language model has 65 billion.
Both Google and Meta are still working on their respective text-to-video generators, and if Microsoft's GPT-4 is what it seems, it could also generate short videos with sound (which both Google's and Meta's tools lack) that are free of blur and distorted animations.