Inspur has turned its hand to AI, claiming it has produced a text-and-code-generating machine-learning model superior to OpenAI's GPT-3 – and that it did so using significantly fewer GPUs.
Inspur’s model is called Yuan 1.0 and produces text in Chinese. The Chinese server maker says the model has 245.7 billion parameters (GPT-3 has 175 billion), claims it can pass a Turing test, and reckons it can beat humans at an idiom-reading comprehension task.
Yuan 1.0 can also fool most of the people most of the time, apparently. Inspur claims humans who reviewed dialogues, news articles, poems, and couplets produced by the model could distinguish them from human-penned text “less than 50 per cent of the time.”
A pre-publication paper explains Yuan 1.0 in considerable detail. The model drew on five terabytes of samples and was trained using 2,128 GPUs – rather fewer than the 10,000 used to train GPT-3 (to be fair, Inspur hasn't offered apples-to-apples GPU comparison info).
China’s government has made increased use of AI an economic priority, and sets great store by the technology’s potential to improve services for its citizens. News that Inspur has developed a very powerful model is therefore welcome in the Middle Kingdom.
Yuan 1.0’s debut may be less well-received elsewhere. Nicolas Chaillan – the Pentagon’s first chief software officer, who quit the job after branding it “probably the most challenging and infuriating of my entire career” – recently offered his opinion that China’s AI development capabilities have outpaced the USA’s.
That situation, he opined, means China will achieve military superiority within 15 to 20 years.
Microsoft may also worry it’s backed a dud, as it secured an exclusive licence for GPT-3 and plans to use it across its product line. GPT-3 still has the advantage of speaking English, but Redmond has aspirations to do better in China. ®