In 2025, AI brought us new models that were far more capable at research, coding, video and image generation and more. AI models could now use heavy amounts of compute power to "think," which helped deliver more complex answers with greater accuracy. AI also got some agentic legs, meaning it could go out onto the web and do tasks for you, like plan a trip or order a pizza.
Despite these advancements, we're still far off from artificial general intelligence, or AGI. This is a theoretical future in which AI becomes so good that it's indistinguishable from (or better than) human intelligence. Right now, an AI system works in a vacuum and doesn't really understand the world around us. It can mimic intelligence and string words together to make it sound like it understands. But it doesn't. Using AI daily has shown me that we still have a ways to go before we reach AGI.
As the AI industry reaches monstrous valuations, companies are moving quickly to meet Wall Street demands. Google, OpenAI, Anthropic and others are throwing trillions at training and infrastructure costs to usher in the next technological revolution. While the spending might seem absurd, if AI does truly upend how humanity works, then the rewards could be monumental. At the same time, as revolutionary as AI is, it constantly messes up and gets things wrong. It's also flooding the internet with slop content, such as amusing short-form videos that may be profitable but are seldom helpful.
Humanity, which will be either the beneficiary or the victim of AI, deserves better. If our survival is truly at stake, then at the very least, AI could be substantively more useful, rather than just a rote writer of school essays and a generator of nude images. Here are all the things that I, as an AI reporter, want to see from the industry in 2026.
It's the environment
My biggest, most immediate concern around AI is the impact massive data centers will have on the environment. Before the AI revolution, the planet was already facing an existential threat because of our reliance on fossil fuels. Major tech companies stepped up with initiatives saying they'd aim to reach net-zero emissions by a certain date. Then ChatGPT hit the scene.
With the massive power demands of AI, along with Wall Street's insatiable appetite for profitability, data centers are turning back to fossil fuels like methane gas to keep the GPUs humming, the chips that perform the complex calculations that string words and pixels together.
There's something incredibly dystopian about the end of the planet coming at the hands of ludicrous AI-generated videos of kittens bulking up at the gym.
Whenever I get the chance, I ask companies like Google, OpenAI and Nvidia what they're doing to ensure AI data centers don't pollute the water or air. They say they're still committed to reaching their emissions targets but seldom give specific details. I suspect they aren't quite sure what the plan is yet, either. Maybe AI will give them the answer?
At the very least, I'm glad the US is reconsidering nuclear power. It's an efficient and largely pollution-free energy source. It's just a bit sad that it's market demands that'll bring back nuclear, not politicians fighting to protect the planet. At least the US can take inspiration from Europe, where nuclear power is more common. It's just frustrating that it takes five or more years to build a new plant.
I want my phone to be smarter
For the past three years, smartphone makers such as Apple, Samsung and Google have been touting new AI features in their handsets. Typically, these demos show how AI can help edit photos or clean up texts. Even so, customers have been underwhelmed by AI in smartphones. I don't blame them. People turn to smartphones for quality snaps, communication or social media. These AI features feel more like extras than must-haves.
Here's the thing: AI has the capability to fix many pain points in smartphone usage. The technology is way better at things like voice transcription, translation and answering questions than older "smart" features. The problem is that for AI to do these things well, it requires a lot of computing. And when somebody is trying to use speech-to-text, they don't have time to wait for their audio to be uploaded to Google's cloud so it can be transcribed and beamed back to their phone. Even if the process takes 10 seconds, that's still too long in the middle of a back-and-forth text exchange.
Local AI models are available to run on-device for these sorts of quick tasks. The problem is that the models still aren't capable of getting it right every time. As a result, things can feel haphazard, with quality transcriptions working only some of the time. I'm hoping that in 2026, local AI on phones can get to a point where it just works.
I also want to see local AI models on phones become more agentic. Google has a feature on Pixel phones called Magic Cue. It can automatically pull from your email and text data and intuitively add Maps directions to a coffee date. Or if you're texting about a flight, it can automatically pull up the flight info. This kind of seamless integration is what I want from AI on mobile, not reimagining photos in cartoon form.
Magic Cue is still in its early stages, and it doesn't work all the time or as you'd expect. If Google, OpenAI or other companies can figure this out, that's when I feel users will really start to appreciate AI on phones.
Is this AI?
Scrolling through Instagram Reels or TikTok, anytime I see something truly charming, funny or out of the ordinary, I immediately rush to the comments to see if it's AI.
AI video models have become increasingly convincing. Gone are the wonky movements, 12 fingers and perfectly centered shots with uncanny perfection. AI videos on social media now mimic security camera footage and handheld videos, and added filters can obscure the AI-ness of a video.
I'm tired of the guessing game. I want both Meta and TikTok to straight-up declare whether an uploaded video was made with AI. Meta actually does have systems in place to try to determine if something uploaded was made with generative AI, but it's inconsistent. TikTok is also working on AI detection. I'm not entirely sure how the platforms can do this accurately, but it'd certainly make life on social media far less of a puzzle.
Sora and Google do have watermarks for AI-generated videos. But these are getting easier to evade, and many people are using Chinese AI models, such as Wan, to generate videos. While Wan does add a watermark, people can find ways to download the videos without it. It shouldn't be incumbent upon a few people in the comments section to delineate whether a video is AI or not. (There are even subreddits that survey users trying to discern if a video is AI.)
We need clarity.
I'm tired of the constant guesswork. C'mon, Meta and TikTok: What's the point of all the billions in AI investment? Just tell me if a video on your platform is AI.