Google announced its new AI model, Gemini, on Dec. 6, claiming it surpasses OpenAI’s GPT-4, particularly in advanced math and specialized coding.
Gemini is multimodal, designed to understand and combine various types of information, and comes in three versions, Ultra, Pro, and Nano, to cater to different needs.
Although Google’s benchmark tests show Gemini Ultra outperforming GPT-4 on most academic benchmarks, critics and social media users have raised doubts about the testing methodology and Google’s marketing tactics.
Critics have accused Google of "misleading" promotion and "cherry-picking" examples in Gemini’s favor, with some suggesting that the comparison used an outdated version of GPT-4.
Users on social media have begun conducting their own informal tests, comparing Gemini (via Google’s Bard tool) with GPT-4, with mixed results.