Tencent has introduced a new benchmark, ArtifactsBench, that aims to fix current problems with testing creative AI models.
Ever asked an AI to build something like a simple webpage or a chart and received something that works but has a poor user experience? The buttons might be in the wrong place, the colours might clash, or the animations feel clunky. It’s a common problem, and it highlights a huge challenge in the world of AI development: how do you teach a machine to have good taste?
For a long time, we’ve been testing AI models on their ability to write code that is functionally correct. These tests could confirm the code would run, but they were completely “blind to the visual fidelity and interactive integrity that define modern user experiences.”
This is the exact problem ArtifactsBench has been designed to solve. It’s less of a test and more of an automated art critic for AI-generated code.
Getting it right, like a human would
So, how does Tencent’s AI benchmark work? First, an AI is given a creative task from a catalogue of over 1,800 challenges, from building data visualisations and web apps to making interactive mini-games.
Once the AI generates the code, ArtifactsBench gets to work. It automatically builds and runs the code in a safe and sandboxed environment.
To see how the application behaves, it captures a series of screenshots over time. This allows it to check for things like animations, state changes after a button click, and other dynamic user feedback.
Finally, it hands over all this evidence – the original request, the AI’s code, and the screenshots – to a Multimodal LLM (MLLM) that acts as a judge.
This MLLM judge isn’t just giving a vague opinion; instead, it uses a detailed, per-task checklist to score the result across ten different metrics, including functionality, user experience, and even aesthetic quality. This ensures the scoring is fair, consistent, and thorough.
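The scoring step described above can be sketched in a few lines. This is a minimal illustration, not Tencent’s implementation: the metric names, the judge stub, and the simple averaging are all assumptions for the sake of the example.

```python
# Sketch of a checklist-based scoring step, assuming an MLLM judge that
# returns a 0-10 score for each metric on a per-task checklist.
# The metric names and judge interface are illustrative, not the actual
# ArtifactsBench API.

METRICS = [
    "functionality",
    "user_experience",
    "aesthetics",
    # ...the real benchmark scores ten metrics in total
]

def mock_mllm_judge(task, code, screenshots):
    """Stand-in for the multimodal judge: returns one score per metric."""
    return {metric: 8.0 for metric in METRICS}  # placeholder scores

def score_artifact(task, code, screenshots, judge=mock_mllm_judge):
    """Aggregate the judge's per-metric checklist into a single score."""
    checklist = judge(task, code, screenshots)
    return sum(checklist.values()) / len(checklist)

overall = score_artifact(
    task="build an interactive bar chart",
    code="<generated code>",
    screenshots=["frame_0.png", "frame_1.png"],  # captured over time
)
print(overall)  # → 8.0 with the placeholder judge
```

Swapping `mock_mllm_judge` for a real multimodal model call is the only change a working harness would need; the aggregation logic stays the same.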
The big question is, does this automated judge actually have good taste? The results suggest it does.
When the rankings from ArtifactsBench were compared to WebDev Arena, the gold-standard platform where real humans vote on the best AI creations, they matched up with a 94.4% consistency. This is a massive leap from older automated benchmarks, which only managed around 69.4% consistency.
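Consistency between two leaderboards is typically measured by pairwise agreement: for every pair of models, check whether both rankings order them the same way. The sketch below uses plain pairwise agreement, which may differ from the exact statistic the paper reports, but it shows the idea.

```python
# Pairwise ranking consistency between two leaderboards.
# Each ranking is a list of model names, best first.
from itertools import combinations

def pairwise_consistency(ranking_a, ranking_b):
    """Fraction of model pairs ordered the same way by both rankings.

    Only models present in both rankings are compared.
    """
    common = set(ranking_a) & set(ranking_b)
    pos_a = {model: i for i, model in enumerate(ranking_a)}
    pos_b = {model: i for i, model in enumerate(ranking_b)}
    pairs = list(combinations(sorted(common), 2))
    if not pairs:
        return 1.0  # nothing to disagree about
    agree = sum(
        (pos_a[x] < pos_a[y]) == (pos_b[x] < pos_b[y]) for x, y in pairs
    )
    return agree / len(pairs)

# Two leaderboards that disagree on one of three pairs:
automated = ["model-A", "model-B", "model-C"]
human_votes = ["model-A", "model-C", "model-B"]
print(pairwise_consistency(automated, human_votes))  # 2 of 3 pairs agree
```

A score of 1.0 means the two leaderboards order every pair identically; the 94.4% figure means the automated judge and human voters agreed on the vast majority of pairs.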
On top of this, the framework’s judgments showed over 90% agreement with professional human developers.
Tencent evaluates the creativity of top AI models with its new benchmark
When Tencent put more than 30 of the world’s top AI models through their paces, the leaderboard was revealing. While top commercial models from Google (Gemini-2.5-Pro) and Anthropic (Claude 4.0-Sonnet) took the lead, the tests unearthed a fascinating insight.
You might think that an AI specialised in writing code would be the best at these tasks. But the opposite was true. The research found that “the holistic capabilities of generalist models often surpass those of specialized ones.”
A general-purpose model, Qwen2.5-Instruct, actually beat its more specialised siblings, Qwen2.5-Coder (a code-specific model) and Qwen2.5-VL (a vision-specialised model).
The researchers believe this is because creating a great visual application isn’t just about coding or visual understanding in isolation; it requires a blend of skills.
The researchers highlight “robust reasoning, nuanced instruction following, and an implicit sense of design aesthetics” as examples of these vital skills. These are the kinds of well-rounded, almost human-like abilities that the best generalist models are beginning to develop.
Tencent hopes its ArtifactsBench benchmark can reliably evaluate these qualities and thus measure future progress in AI’s ability to create things that are not just functional but that users actually want to use.
See also: Tencent Hunyuan3D-PolyGen: A model for ‘art-grade’ 3D assets

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Published on The Digital Insider at https://is.gd/dC8NBH.