What I Want to See From AI in 2026: Labels, Better Phone Features and a Plan for the Environment



In 2025, AI brought us new models that were far more capable at research, coding, video and image generation, and more. AI models could now use huge amounts of compute power to “think,” which helped them deliver more complex answers with greater accuracy. AI also got some agentic legs, meaning it could go out onto the internet and do tasks for you, like plan a vacation or order a pizza.

Despite these advancements, we might still be far off from artificial general intelligence, or AGI. This is a theoretical future when AI becomes so good that it’s indistinguishable from (or better than) human intelligence. Right now, an AI system works in a vacuum and doesn’t really understand the world around us. It can mimic intelligence and string words together to make it sound like it understands. But it doesn’t. Using AI daily has shown me that we still have a ways to go before we reach AGI.


As the AI industry reaches monstrous valuations, companies are moving quickly to meet Wall Street's demands. Google, OpenAI, Anthropic and others are pouring trillions of dollars into training and infrastructure to usher in the next technological revolution. While the spending might seem absurd, if AI truly does upend how humanity works, the rewards could be enormous. At the same time, as revolutionary as AI is, it constantly messes up and gets things wrong. It's also flooding the internet with slop content, such as amusing short-form videos that may be profitable but are seldom valuable.

Humanity, which stands to either benefit or suffer from AI, deserves better. If our survival is literally at stake, then at the very least, AI could be substantively more helpful, rather than just a rote writer of college essays and a generator of nude images. Here are all the things that I, as an AI reporter, would like to see from the industry in 2026.

It’s the environment

My biggest, most immediate concern around AI is the impact massive data centers will have on the environment. Before the AI revolution, the planet was already facing an existential threat due to our reliance on fossil fuels. Major tech companies stepped up with initiatives saying they’d aim to reach net-zero emissions by a certain date. Then ChatGPT hit the scene.




Between AI's massive power demands and Wall Street's insatiable need for profitability, data centers are turning back to fossil fuels like methane gas to keep the GPUs humming. Those are the chips that perform the complex calculations needed to string words and pixels together.

There’s something incredibly dystopian about the end of the planet coming at the hands of ludicrous AI-generated videos of kittens bulking up at the gym. 

Whenever I get an opportunity, I ask companies like Google, OpenAI and Nvidia what they’re doing to ensure AI data centers don’t pollute the water or air. They say they’re still committed to reaching emissions targets but seldom give specific details. I suspect they aren’t quite sure what the plan is yet, either. Maybe AI will give them the answer?

At the very least, I'm glad that the US is reconsidering nuclear energy. It's an efficient and largely pollution-free energy source. It's just a bit sad that market demands, and not politicians fighting to protect the planet, are what will bring nuclear back. At least the US can take inspiration from Europe, where nuclear energy is more common. The frustrating part is that a new plant takes five or more years to build.

I want my phone to be smarter

For the past three years, smartphone makers such as Apple, Samsung and Google have been touting new AI features in their handsets. Often, these presentations show how AI can help edit photos or clean up texts. Even so, consumers have been underwhelmed by AI in smartphones. I don't blame them. People turn to smartphones for quality snaps, communication or social media. These AI features feel more like extras than must-haves.

Here's the thing: AI has the capability to fix many pain points in smartphone usage. The technology is way better at things like vocal transcription, translation and answering questions than past "smart" features. The problem is that for AI to do these things well, it requires a lot of computing power. And when somebody is trying to use speech-to-text, they don't have time to wait for their audio to be uploaded to Google's cloud so that it can be transcribed and beamed back to their phone. Even if the process takes 10 seconds, that's still too long in the middle of a back-and-forth text chain.


Local AI models can run on-device to handle these sorts of quick tasks. The problem is that they still aren't capable of getting it right every time. As a result, things can feel haphazard, with quality transcriptions working only some of the time. I'm hoping that in 2026, local AI on phones gets to the point where it just works.

I also want to see local AI models on phones that can be more agentic. Google has a feature on Pixel phones called Magic Cue. It can automatically pull from your email and text data and intuitively add Maps directions to a coffee date. Or if you’re texting about a flight, it can automatically pull up the flight information. This kind of seamless integration is what I want from AI on mobile, not reimagining photos in cartoon form. 

Magic Cue is still in its early stages, and it doesn't always work, or work the way you'd expect. If Google, OpenAI or other companies can figure this out, that's when I think consumers will really start to appreciate AI on phones.

Is this AI?

Scrolling through Instagram Reels or TikTok, whenever I see something truly charming, funny or out of the ordinary, I immediately rush to the comments to see if it’s AI. 

AI video models are becoming increasingly convincing. Gone are the wonky movements, 12-fingered hands and uncannily perfect, centered shots. AI videos on social media now mimic security camera footage and handheld videos, and added filters can obscure the AI-ness of a video.

I'm tired of the guessing game. I want both Meta and TikTok to straight-up declare whether an uploaded video was made with AI. Meta does have systems in place to try to determine if an upload was made with generative AI, but they're inconsistent. TikTok is also working on AI detection. I'm not entirely sure how the platforms can do so accurately, but it'd certainly make life on social far less of a puzzle.

Sora and Google do add watermarks to AI-generated videos. But these are getting easier to evade, and many people are using Chinese AI models, such as Wan, to generate videos. While Wan does add a watermark, people can find ways to download those videos without it. It shouldn't be incumbent upon a few people in the comments section to point out whether a video is AI or not. (There are even subreddits where users poll one another to try to discern whether a video is AI.)

We need clarity.

I’m tired of the constant guesswork. C’mon, Meta and TikTok — what’s the point of all the billions in AI investment? Just tell me if a video on your platform is AI. 


