Google announced that its AI chatbot, Bard, is now available in 180 countries, positioning it as a direct rival to ChatGPT, the chatbot built by Microsoft-backed OpenAI. The move reflects Google's push to embed generative AI across its most important products, including its flagship search engine.
At Google's annual developer conference, CEO Sundar Pichai described how the company is rethinking its services with generative AI at the centre. Microsoft has moved quickly to ship similar technology, and Google is now following suit, even as concerns mount about how AI will reshape society and displace jobs.
Google demonstrated how generative AI will enhance Gmail, photo editing, and online productivity tools, among other products. The company said it would deploy AI responsibly and keep its work aligned with societal norms.
Google removed Bard's waiting list and made it generally available in English, with support for 40 more languages planned. The company also announced browser extensions that bring AI features to apps such as Gmail and Maps, enabling capabilities like auto-completing text in emails and generating image ideas from available content.
But the rapid growth of generative AI raises concerns about misuse, such as spreading misinformation through voice clones, deepfake videos, and convincing written messages. Some experts have called for caution and even a pause in the development of powerful AI systems until their safety can be assured.
Notably, Microsoft has opened its OpenAI-powered tools to the public, using image and video processing to enhance the Bing search engine and the Edge browser. Debate over AI's risks and ethical implications continues.
Despite the risks, both Google and Microsoft are pressing ahead with generative AI integration, betting that it can improve user experiences and transform many industries. As these technologies mature, strict ethical guidelines and responsible development will be essential to address potential harms and ensure AI serves the good of society.