Wan 2.7 vs Wan 2.6: Complete Comparison 2026

Lynne
Mar 24, 2026 in Information

TL;DR: Core Takeaways

  • WAN 2.7 has evolved from a "generation tool" into a "creative system." Features like instruction editing, first-and-last frame control, and 9-grid input allow creators to move beyond repetitive "gacha-style" prompting.
  • For content creators, the biggest change isn't just a boost in image quality, but a workflow shift from "Generate → Filter → Restart" to "Generate → Edit → Iterate."
  • The systematic accumulation of prompts and generation experience is the hidden barrier to mastering the WAN series models and the key to setting creators apart.

Why This Article Is Worth 5 Minutes of Your Time

You’ve likely seen plenty of WAN 2.7 feature comparison tables by now. First-and-last frame control, 9-grid image-to-video, instruction editing... these features look great on paper. But honestly, a feature list doesn't solve the core question: How do these things actually change the way I make videos every day?

This article is for content creators, short-video operators, and brand marketers who are currently using or planning to try AI video generation tools. We won't just repeat the official changelog; instead, we’ll break down the practical impact of WAN 2.7 on daily workflows through 5 real-world creative scenarios.

A bit of background data: AI video generation volume grew by 840% between January 2024 and January 2026, and the global AI video generation market is expected to reach $18.6 billion by the end of 2026 [1]. 61% of freelance creators use AI video tools at least once a week. You aren't just chasing a trend; you are keeping pace with the iteration of industry infrastructure.

The Core Shift of WAN 2.7: From "Gacha" to "Director"

The key to understanding WAN 2.7 isn't about how many new parameters were added, but how it changes the relationship between the creator and the model.

In WAN 2.6 and earlier versions, AI video creation was essentially a "gacha" process. You wrote a prompt, clicked generate, and prayed the result met your expectations. A creator on Reddit using the WAN series admitted: "I use first-frame input, generate only 2-5 second clips at a time, use the last frame as the input for the next segment, and adjust prompts as I go" [2]. While this frame-by-frame relay method is effective, it is incredibly time-consuming.

The combination of several new capabilities in WAN 2.7 pushes this relationship from "gacha" toward "directing." You are no longer just describing what you want; you can define the start and end points, modify existing clips using natural language, and use multi-angle reference images to constrain the generation direction. This means iteration costs are drastically reduced, and creators have significantly more control over the final output.

In short: WAN 2.7 isn't just a better video generator; it is becoming a video creation and editing system [3].

5 Real Scenarios: What WAN 2.7 Can Do for Creators

Scenario 1: Say Goodbye to "Restarting" — Use Instruction Editing to Iterate Videos

This is the most transformative capability of WAN 2.7. You can send an existing video along with a natural language instruction to the model—such as "change the background to a rainy street" or "change the coat color to red"—and the model returns the edited result instead of generating a new video from scratch [4].

For creators, this solves a long-standing pain point: previously, if you generated a video you were 90% happy with, you had to regenerate the entire thing just to fix that remaining 10%, often losing the parts you liked in the process. Now, you can edit video as if you were editing a document. An analysis by Akool points out that this is exactly where professional AI video workflows are headed: "Fewer prompt lotteries, more controllable iterations" [5].

Pro Tip: Treat instruction editing as a "refinement" phase. First, use text-to-video or image-to-video to get a base clip that is directionally correct, then use 2-3 rounds of instruction editing to fine-tune details. This is much more efficient than repeated regenerations.
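For pipeline-minded creators, here is a minimal sketch of what an instruction-editing call could look like in an automated workflow. The endpoint URL, model identifier, and field names are placeholder assumptions, not documented WAN 2.7 parameters; swap them for whatever your provider actually publishes.

```python
import requests

# Hypothetical endpoint and field names -- replace with your provider's real API.
API_URL = "https://api.example.com/v1/video/edit"
API_KEY = "YOUR_API_KEY"

def edit_video(video_url: str, instruction: str) -> str:
    """Send an existing clip plus a natural-language instruction and
    return the URL of the edited result."""
    payload = {
        "model": "wan-2.7",          # assumed model identifier
        "video_url": video_url,      # the clip you already generated
        "instruction": instruction,  # e.g. "change the coat color to red"
    }
    resp = requests.post(API_URL, json=payload,
                         headers={"Authorization": f"Bearer {API_KEY}"},
                         timeout=300)
    resp.raise_for_status()
    return resp.json()["output_video_url"]  # assumed response field

# The "refinement" loop: a couple of small edits instead of full regenerations.
clip = "https://example.com/base_clip.mp4"
for step in ["change the coat color to red",
             "change the background to a rainy street"]:
    clip = edit_video(clip, step)
```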

Scenario 2: First-and-Last Frame Control — Giving Narratives a "Script"

WAN 2.6 already supported first-frame anchoring (where you provide an image as the first frame of the video). WAN 2.7 builds on this by adding last-frame control, allowing you to define both the start and end points of a video while the model calculates the motion trajectory in between.

This is huge for creators making product showcases, tutorials, or narrative shorts. Previously, you could only control "where it starts"; now, you can precisely define the complete arc from "A to B." For example, in a product unboxing video: the first frame is the sealed box, the last frame is the product fully displayed, and the unboxing action in the middle is automatically completed by the model.

WaveSpeedAI's technical guide mentions that the core value of this feature lies in "constraint as a feature." Giving the model a clear endpoint forces you to think precisely about what you actually want, and this constraint often yields better results than open-ended generation [6].
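As a rough illustration of how that A-to-B constraint might be expressed in a request, the payload below follows the same hypothetical pattern as the Scenario 1 sketch; the field names (`first_frame_url`, `last_frame_url`, `duration_seconds`) are assumptions, not confirmed WAN 2.7 parameters.

```python
# Hypothetical request body for first-and-last-frame generation.
# Field names are placeholders; check your provider's documentation.
payload = {
    "model": "wan-2.7",
    "prompt": "the box is unsealed and the product is lifted out and displayed",
    "first_frame_url": "https://example.com/sealed_box.jpg",   # where the clip starts
    "last_frame_url": "https://example.com/product_hero.jpg",  # where it must end
    "duration_seconds": 5,
}
# The model is left to plan the unboxing motion between the two anchor frames.
```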

Scenario 3: 9-Grid Input — Multi-Angle References in One Step

This is the most innovative architectural feature in WAN 2.7. Traditional image-to-video only accepts a single reference image. WAN 2.7's 9-grid mode allows you to input a 3×3 image matrix, which could be multi-angle photos of the same subject, keyframes of a continuous action, or different variations of a scene.

For e-commerce creators, this means you can feed the model front, side, and detail shots of a product all at once, ensuring no "character drift" when the video switches angles. For animators, you can use a sequence of key poses to guide the model in generating smooth action transitions.

Note: The computational cost of 9-grid input is higher than single-image input. If you are running high-frequency automated pipelines, you need to factor this into your budget [4].
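Depending on the provider, the nine references may be accepted as separate uploads or as a single pre-composed 3×3 sheet. If yours expects the latter, a small Pillow script like this one (file names and tile size are placeholders) can assemble the grid:

```python
from PIL import Image

# Nine reference images: e.g. front / side / detail shots of one product,
# or nine key poses of one action. File names here are placeholders.
paths = [f"ref_{i}.jpg" for i in range(9)]

TILE = 512  # per-tile resolution; adjust to your provider's limits
grid = Image.new("RGB", (TILE * 3, TILE * 3))

for idx, path in enumerate(paths):
    tile = Image.open(path).convert("RGB").resize((TILE, TILE))
    row, col = divmod(idx, 3)
    grid.paste(tile, (col * TILE, row * TILE))

grid.save("nine_grid_input.png")  # upload this as the 9-grid reference
```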

Scenario 4: Integrated Character + Voice Reference — Easier Virtual Influencers

WAN 2.6 introduced video generation with voice references (R2V). WAN 2.7 upgrades this to joint reference of subject appearance + voice direction, anchoring both character looks and vocal characteristics in a single workflow.

If you are creating virtual influencers, digital human talking heads, or serialized character content, this improvement directly reduces pipeline steps. Previously, you had to handle character consistency and voice matching separately; now, they are merged into one step. Discussions on Reddit confirm this: one of the biggest headaches for creators is "characters looking different between different shots" [7].

Scenario 5: Video Re-creation — One Asset, Multiple Platforms

WAN 2.7 supports re-creation based on an existing video: preserving the original motion structure and rhythm while changing the style, replacing the subject, or adapting it to a different context.

This is extremely valuable for creators and marketing teams who need multi-platform distribution. A high-performing video can quickly generate variations in different styles for different platforms without starting from zero. 71% of creators say they use AI to generate initial drafts and then refine them manually [1]; the video re-creation feature makes this "refinement" stage much more efficient.

The Overlooked Hidden Barrier: Prompt and Experience Management

After discussing the new capabilities of WAN 2.7, there is one issue that is rarely discussed but has a massive impact on a creator's long-term output quality: How do you manage your prompts and generation experience?

A Reddit user sharing AI video creation tips mentioned: "Most viral AI videos aren't generated by one tool in one go. Creators generate a lot of short clips, pick the best ones, and then polish them with editing, upscaling, and audio syncing. Treat AI video as parts of a workflow, not a one-click finished product" [8].

This means that behind every successful AI video, there are countless prompt experiments, parameter combinations, failures, and successes. The problem is that most creators leave this experience scattered across chat histories, notebooks, and screenshot folders, making it impossible to find the next time they need it.

Enterprises use an average of 3.2 AI video tools simultaneously [1]. When you switch between WAN, Kling, Sora, and Seedance, each model has a different prompt style, parameter preferences, and best practices. Without a systematic way to accumulate and retrieve this experience, you are starting from scratch every time you switch tools.

This is exactly where YouMind can help. You can save the prompts, reference images, generation results, and parameter notes from every AI video generation into a single Board (Knowledge Space). The next time you encounter a similar scenario, you can search for them or let AI retrieve your previous experience for you. With the YouMind Chrome extension, you can clip great prompt tutorials or community shares with one click instead of copying and pasting manually.

Example Workflow:

  1. Create a "WAN Video Creation" Board in YouMind.
  2. After each video generation, save the prompt, parameter settings, and results (screenshots or links) as an asset (a minimal local record format is sketched after this list).
  3. Use tags to distinguish scenario types (Product Showcase / Narrative Short / Social Media / Tutorial).
  4. After accumulating 20-30 records, search for "Product Unboxing First-and-Last Frame" directly in the Board, and AI will help you find your most effective previous prompt combinations.
  5. Use the Audio Pod feature to turn your research notes into a podcast for easy review during your commute.
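If you also want a machine-readable log alongside the Board, a minimal sketch like the one below works as a starting point; the field choices and file name are just a suggestion, not a YouMind or WAN requirement.

```python
import json
import datetime
from pathlib import Path

LOG = Path("wan_prompt_log.jsonl")

def record(prompt: str, params: dict, result_url: str, tags: list, notes: str = "") -> None:
    """Append one generation attempt as a JSON line for later search."""
    entry = {
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "prompt": prompt,
        "params": params,      # e.g. {"model": "wan-2.7", "mode": "first_last_frame"}
        "result": result_url,  # link or local path to the generated clip
        "tags": tags,          # e.g. ["product-showcase", "first-last-frame"]
        "notes": notes,        # what worked, what to change next time
    }
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

def search(tag: str) -> list:
    """Return all logged attempts carrying a given tag."""
    lines = LOG.read_text(encoding="utf-8").splitlines()
    return [e for e in (json.loads(line) for line in lines) if tag in e["tags"]]
```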

It should be noted that YouMind does not currently integrate direct API calls for the WAN model (the video generation models it supports are Grok Imagine and Seedance 1.5). Its value lies in the asset management and experience accumulation phase, rather than replacing your video generation tools.

A Realistic Look: Current Uncertainties of WAN 2.7

Amidst the excitement, there are a few practical issues to keep in mind:

Pricing has not been announced. 9-grid input and instruction editing will almost certainly be more expensive than standard image-to-video. Multi-image input means higher computational overhead. Don't rush to migrate your entire pipeline until pricing is finalized.

Open-source status is unconfirmed. Historically, some versions of the WAN series were released as open-source under Apache 2.0, while others were API-only. If your workflow relies on local deployment (e.g., via ComfyUI), you’ll need to wait for official confirmation on the 2.7 release format [4].

Prompt behavior may change. Even if the API structure is backward compatible, WAN 2.7's instruction-following tuning means the same prompt might produce different results in 2.6 vs. 2.7. Don't assume your existing prompt library will migrate seamlessly; treat 2.6 prompts as a starting point, not a final draft [4].

Quality improvements require real-world testing. The official descriptions mention improvements in clarity, color accuracy, and motion consistency, but these need to be tested with your own actual assets. General benchmark scores rarely reflect edge cases in specific workflows.

FAQ

Q: Are WAN 2.7 and WAN 2.6 prompts interchangeable?

A: They are likely compatible at the API structure level, but behavior is not guaranteed to be identical. WAN 2.7 has undergone new instruction-following tuning, so the same prompt might produce different styles or compositions. It is recommended to do A/B testing with your 10 most-used prompts before migrating.
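If you want to run that comparison systematically, a sketch like the one below records the outputs side by side; `generate` here is a hypothetical helper you would wire to your actual video API client.

```python
import csv

def generate(model: str, prompt: str) -> str:
    """Hypothetical helper: call your provider's API and return the clip URL."""
    raise NotImplementedError("wire this to your actual video API client")

top_prompts = [
    "product unboxing, sealed box to full display, soft studio light",
    "character walks through a rainy neon street, handheld camera",
    # ...add your 10 most-used prompts here
]

with open("wan_ab_test.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["prompt", "wan-2.6", "wan-2.7"])
    for p in top_prompts:
        writer.writerow([p, generate("wan-2.6", p), generate("wan-2.7", p)])
```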

Q: What type of content creators is WAN 2.7 suitable for?

A: If your work involves character consistency (serialized content, virtual influencers), precise motion control (product showcases, tutorials), or requires local modifications to existing videos (multi-platform distribution, A/B testing), WAN 2.7's new features will significantly boost efficiency. If you only generate occasional single short videos, WAN 2.6 is likely sufficient.

Q: How do I choose between 9-grid image-to-video and regular image-to-video?

A: These are independent input modes and cannot be mixed. Use 9-grid when you need multi-angle references to ensure character or scene consistency. When the reference image is clear enough and you only need a single perspective, regular image-to-video is faster and cheaper. 9-grid has higher computational costs and is not recommended as a default for all scenarios.

Q: With so many AI video generation tools, how do I choose?

A: Current mainstream options include Kling (strong price-performance), Sora (strong narrative control), Veo (top-tier quality but expensive), and WAN (good open-source ecosystem). It is better to choose one or two tools to use deeply based on your core needs than to try everything superficially. The key is not which tool you use, but building a reusable system of creative experience.

Q: How can I systematically manage AI video prompts and generation experience?

A: The core is building a searchable experience library. Record the prompt, parameters, result evaluation, and improvement directions after each generation. You can use YouMind's Board feature to collect and retrieve these assets, or use Notion or other note-taking tools. The focus is on developing a recording habit; the tool itself is secondary.

Summary

The core value of WAN 2.7 for content creators isn't just another image quality upgrade; it’s the shift of AI video creation from "generate and pray" to a controllable workflow of "generate, edit, and iterate." Instruction editing lets you change videos like documents, first-and-last frame control gives narratives a script, and 9-grid input makes multi-angle references a one-step process.

But tools are only the starting point. What truly separates creators is whether you can systematically accumulate experience from every creation: how to write the best prompts, which parameter combinations suit which scenarios, and what lessons failed cases teach. The speed at which you accumulate this tacit knowledge determines your ceiling with AI video tools.

If you want to start systematically managing your AI creative experience, you can register for YouMind for free to try it out. Create a Board and put your prompts, reference materials, and generation results in it. Your future self will thank you on your next project.

References

[1] 75 AI Video Statistics: What Marketers Need to Know (2026)

[2] Reddit: AI Video Generating Tools Discussion

[3] WAN 2.7 Coming Soon: A Major Upgrade to 2.6

[4] WAN 2.7 vs WAN 2.6: Feature Differences and Upgrade Decisions

[5] WAN 2.7 Preview: Better Quality, Motion, and Control Than Ever Before

[6] WAN 2.7 First-and-Last Frame Control: A Builder's Guide

[7] Reddit: In your opinion, what is the current best video generator?

[8] Reddit: My honest review after using AI video tools in my creative workflow for 6 months

