Early this morning, Microsoft Research open-sourced Phi-4, the strongest small-parameter model to date. Microsoft first showcased Phi-4 on December 12 last year; with only 14 billion parameters, it performs exceptionally well. It outperformed OpenAI's GPT-4o on the GPQA graduate-level assessment and the MATH benchmark, as well as the top open-source models Qwen 2.5-14B and Llama-3.3-70B. On the American Mathematics Competitions (AMC) test, Phi-4 scored 91.8, surpassing well-known open- and closed-source models such as Gemini Pro 1.5, GPT-4o, Claude 3.5 Sonnet, and Qwen 2.5, with overall performance comparable to the 405-billion-parameter Llama-3.1. (AIGC Open Community)
Microsoft has open-sourced Phi-4, its strongest small model, which surpasses GPT-4o and is available for commercial use.