Patent office clarifies how AI-assisted inventions will be treated
The US government just laid down the law on who owns an idea when AI helps create it. The US Patent and Trademark Office (USPTO) issued new guidelines basically saying that AI tools – like ChatGPT, image generators, or coding assistants – are just that: tools.
According to USPTO Director John Squires, these systems are legally the same as a microscope, a database, or a piece of software. They help you get there, but they aren’t the inventor.
The big rule is simple: Only a human can be an inventor. Even if an AI suggested the idea or drafted the design, a human has to be the one who “conceived” the invention to get a patent. You cannot list an AI as an inventor on the application.
This actually shifts things a bit from the previous administration, which used a “joint-inventor” standard for AI-assisted stuff. Now, the patent office is simplifying it: there is just one standard for inventorship, and AI doesn’t get a seat at the table.
Why this is important and why you should care
This matters because AI is suddenly everywhere in innovation. Scientists use it to find new drugs; engineers use it to design better parts. Without clear rules, nobody knew if they could actually own the things they were building with AI help.
For anyone with a big idea, this is actually good news. It gives you certainty:
- Human idea + AI help = Patentable. (You own it).
- AI idea + zero human input = Not Patentable. (Nobody owns it).
If you are an inventor or work at a startup, this is your wake-up call to document everything. You need to be able to prove that the “spark” came from a human brain, not just a prompt.
For the rest of us, it’s a reminder that no matter how smart AI seems, the law still sees it as a fancy hammer – not the carpenter.
Here’s what’s next for you
While the patent office has spoken, this probably isn’t the end of the fight. Courts have said AI can’t hold patents, but they haven’t really figured out the messy middle ground where AI does 90% of the heavy lifting.
As AI gets better at “thinking” up complex solutions on its own, you can bet there will be more lawsuits. The big question for the next few years is whether this “tool” definition holds up, or if patent law eventually has to evolve to recognize machine creativity.

Moinak Pal has been working in the technology sector covering both consumer-centric tech and automotive technology for the…
Research shows even average users can break past AI safety within Gemini and ChatGPT
Everyday users can reveal what AI testing misses.
What’s happened? A team at Pennsylvania State University found that you don’t need to be a hacker or prompt-engineering genius to break past AI safety; regular users can do it just as well. Test prompts in the research paper revealed clear patterns of prejudice in responses: from assuming engineers and doctors are men, to portraying women in domestic roles, and even linking Black or Muslim people with crime.
Fifty-two participants were invited to craft prompts intended to trigger biased or discriminatory responses in eight AI chatbots, including Gemini and ChatGPT.
Your friendly neighbourhood AI just got smart and surprisingly selfish
Research finds reasoning large language models display self-seeking behaviors

What Happened: Here’s a worrying thought: it turns out the "smarter" we make AI, the more selfish it gets.
A new study from researchers at Carnegie Mellon University just dropped, and it's a bit of a bombshell. They basically put a bunch of different AIs into social "games" where they had to choose between cooperating or just looking out for themselves.
AI browsers are here, and you’ll need to relearn how to use the web
Browsers are increasingly building their core experiences around AI. It’s extremely convenient, but more than a little scary as well.

About a month ago, I gave a tech demo to a group of freshmen on how to create a custom skill in an AI browser and automate the research work on assignments. Instead of sending them on a wild goose chase through Google Search, the “AI agent” limited its search to only a handful of academic and learning sources to provide the answer.
I did it all by simply typing “/course,” followed by “Faraday’s Law of Induction.” The summary and answers offered by the browser were drawn strictly from the school syllabus, neither too deep nor too shallow. The whole approach is fast and efficient, and it removes the unpredictability of an AI spitting out jargon from poor sources or simply hallucinating.
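Under the hood, a skill like this boils down to restricting the agent’s retrieval step to an allowlist of trusted sources. Here is a minimal sketch of that idea in Python; the domain names and function names are hypothetical illustrations, not the browser’s actual API:

```python
from urllib.parse import urlparse

# Hypothetical allowlist for a "/course" skill: only academic/learning domains.
ALLOWED_DOMAINS = {"ocw.mit.edu", "khanacademy.org", "britannica.com"}

def filter_results(urls):
    """Keep only search results whose host is an allowed domain (or a subdomain of one)."""
    kept = []
    for url in urls:
        host = urlparse(url).netloc.lower()
        if any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS):
            kept.append(url)
    return kept

results = [
    "https://ocw.mit.edu/faradays-law",
    "https://randomblog.example.com/em-induction",
    "https://www.khanacademy.org/physics/faraday",
]
print(filter_results(results))
# Only the two allowlisted sources survive; the random blog is dropped.
```

Everything after this filter (summarization, answer drafting) then only ever sees vetted material, which is what keeps the output on-syllabus.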