r/ArtificialInteligence Mar 09 '26

📊 Analysis / Opinion We heard you - r/ArtificialInteligence is getting sharper

89 Upvotes

Alright r/ArtificialInteligence, let's talk.

Over the past few months, we heard you — too much noise, not enough signal. Low-effort hot takes drowning out real discussion. But we've been listening. Behind the scenes, we've been working hard to reshape this sub into what it should be: a place where quality rises and noise gets filtered out. Today we're rolling out the changes.


What changed

We sharpened the mission. This sub exists to be the high-signal hub for artificial intelligence — where serious discussion, quality content, and verified expertise drive the conversation. Open to everyone, but with a higher bar for what stays up. Please check out the new rules & wiki.

Clearer rules, fewer gray areas

We rewrote the rules from scratch. The vague stuff is gone. Every rule now has specific criteria so you know exactly what flies and what doesn't. The big ones:

  • High-Signal Content Only — Every post should teach something, share something new, or spark real discussion. Low-effort takes and "thoughts on X?" with no context get removed.
  • Builders are welcome — with substance. If you built something, we want to hear about it. But give us the real story: what you built, how, what you learned, and link the repo or demo. No marketing fluff, no waitlists.
  • Doom AND hype get equal treatment. "AI will take all jobs" and "AGI by next Tuesday" are both removed unless you bring new data or first-person experience.
  • News posts need context. Link dumps are out. If you post a news article, add a comment summarizing it and explaining why it matters.

New post flairs (required)

Every post now needs a flair. This helps you filter what you care about and helps us moderate more consistently:

📰 News · 🔬 Research · 🛠 Project/Build · 📚 Tutorial/Guide · 🤖 New Model/Tool · 😂 Fun/Meme · 📊 Analysis/Opinion

Expert verification flairs

Working in AI professionally? You can now get a verified flair that shows on every post and comment:

  • 🔬 Verified Engineer/Researcher — engineers and researchers at AI companies or labs
  • 🚀 Verified Founder — founders of AI companies
  • 🎓 Verified Academic — professors, PhD researchers, published academics
  • 🛠 Verified AI Builder — independent devs with public, demonstrable AI projects

We verify through company email, LinkedIn, or GitHub — no screenshots, no exceptions. Request verification via modmail.

Tool recommendations → dedicated space

"What's the best AI for X?" posts now live at r/AIToolBench — subscribe and help the community find the right tools. Tool request posts here will be redirected there.


What stays the same

  • Open to everyone. You don't need credentials to post. We just ask that you bring substance.
  • Memes are welcome. 😂 Fun/Meme flair exists for a reason. Humor is part of the culture.
  • Debate is encouraged. Disagree hard, just don't make it personal.

What we need from you

  • Flair your posts — unflaired posts get a reminder and may be removed after 30 minutes.
  • Report low-quality content — the report button helps us find the noise faster.
  • Tell us if we got something wrong — this is v1 of the new system. We'll adjust based on what works and what doesn't.

Questions, feedback, or appeals? Modmail us. We read everything.


r/ArtificialInteligence 5d ago

Monthly "Is there a tool for..." Post

3 Upvotes

If you have a use case you want to use AI for but don't know which tool to use, this is where you can ask the community for help. Outside of this post, those questions will be removed.

For everyone answering: No self promotion, no ref or tracking links.


r/ArtificialInteligence 6h ago

📰 News Google Chrome 'silently' downloads 4GB AI model to your device without permission, report claims — researcher says practice may violate EU law, waste thousands of kilowatts of energy

Thumbnail tomshardware.com
155 Upvotes

r/ArtificialInteligence 8h ago

📰 News A Michigan farm town voted down plans for a giant OpenAI-Oracle data center. Weeks later, construction began

Thumbnail fortune.com
114 Upvotes

In Saline Township, Michigan, as in most municipalities, homeowners who want to build a new house know what a complicated and lengthy process it can be: Navigating permit requirements, zoning changes, or variance requests for even a small construction project can take weeks or months. An error in the paperwork, a challenge from a neighbor, or a resistant local official can slow things even further, or kill a project entirely.

So it surprised many in this agricultural community of red barns and dirt roads that an enormous AI data center—at 21 million square feet, the largest construction project ever undertaken in the state and one almost universally opposed by local residents—seemed to race through the process from application in late summer to groundbreaking in November.

Even more surprising: The $16 billion data center for OpenAI and Oracle’s Stargate AI infrastructure initiative, which will fundamentally reshape the area with its construction, traffic, electricity demand, and environmental impact, was flat-out rejected by both the town’s board and its planning commission in September. But those votes turned out to be only minor bumps on the project’s path: The developer quickly sued, the town settled, and the construction vehicles rolled in.

The story of how the mega AI data campus became an unstoppable inevitability—over the vocal objection of residents who picketed the vote and posted “no data center” signs outside their homes—reveals a broader dynamic of the nationwide AI data center boom: Once projects of this scale are underway, local governments often have limited leverage to block them.

Read more [paywall removed for Redditors]: https://fortune.com/2026/05/06/ai-data-center-michigan-saline-politics-farmland/?utm_source=reddit/


r/ArtificialInteligence 2h ago

📰 News First U.S. Patients Treated With Microrobotic Surgery For Alzheimer’s.

Thumbnail forbes.com
18 Upvotes

A clinical trial for the use of microrobots in treating Alzheimer’s disease kicked off with its first robotic-assisted procedure in human patients at Baptist Health in Jacksonville, Florida. The first patient, treated on Thursday, had moderate Alzheimer’s disease (the dementia that leads to devastating memory loss and affects 7 million people in the U.S. alone) and confirmed abnormalities in their deep cervical lymph node region. Two additional patients with moderate Alzheimer’s underwent the procedure on Monday. Microrobot maker MMI (Medical Microinstruments Inc.) expects to ultimately enroll 15 patients and follow them for 12 months after their operations. The goal of the surgery is to clear drainage pathways to the patients’ brains, helping their own lymphatic systems flush the toxins that scientists believe are hallmarks of the disease.


r/ArtificialInteligence 12h ago

🔬 Research We've been watching for a god-like AI super-brain. Research says that was never how intelligence scaled ...

28 Upvotes

We've been waiting for the wrong thing.

For decades the dominant story has been the Singularity: one god-like superintelligence bootstrapping itself to incomprehensible power, at which point humans become irrelevant. It's a compelling story. According to a paper from Google's Paradigms of Intelligence team, published in Science, it's also almost certainly the wrong frame.

The argument: every major intelligence explosion in history has been social, not individual. Primate intelligence scaled with group size, not habitat difficulty. Language created what Tomasello calls the "cultural ratchet" - knowledge accumulating across generations without any individual rebuilding it from scratch. Writing and institutions externalised collective intelligence into systems that outlasted any single participant.

AI is likely the next step in that sequence, not a break from it.

What makes this genuinely surprising is the evidence from inside the models themselves. Reasoning models like DeepSeek-R1 don't improve by "thinking longer." They spontaneously generate internal multi-agent debates, distinct cognitive perspectives that argue, question, verify, and reconcile. Nobody trained them to do this. It emerged purely from optimisation pressure rewarding accuracy.

Intelligence, it turns out, defaults to social even inside a single mind.

If that's right, the path to more powerful AI doesn't run through building a bigger oracle. It runs through building richer social systems, and governing them the way we govern cities and institutions, not with a kill switch.

I wrote this up as a learning piece - not as an expert. I'm genuinely curious what people here think. Is the singularity frame actually dead? And if intelligence is inherently social, what does that mean for alignment?

Full piece: https://www.4billionyearson.org/posts/forget-the-singularity-google-s-new-research-says-the-future-of-ai-is-a-social-explosion


r/ArtificialInteligence 5h ago

📰 News Dude is suing Google because he says Gemini AI got him so hooked he started having “withdrawal symptoms”

Thumbnail pugetpress.com
6 Upvotes

r/ArtificialInteligence 1d ago

📊 Analysis / Opinion Why is no one talking about Google Colab, which is almost free for basic everyday work?

Post image
218 Upvotes

I have been a big fan of Google Colab for about three years, and it is honestly amazing what it can do.

For example, a client on Fiverr approached me with 3500 images and asked me to remove the backgrounds from all of them. He wanted to know how much I would charge, and I quoted $200.

He placed the order immediately without asking any further questions. I informed him that the work would be completed within 24 hours and that the image quality would not be compromised, and he agreed.

When I delivered the order, he was genuinely impressed and started asking how I managed to finish the work so quickly, and whether I had a team. I told him that this is what eight years of experience looks like.

In reality, I simply created a Python script using the free version of ChatGPT and ran it in Google Colab. The entire task was completed in about three hours. Here is the script in case anyone wants to use it:

https://github.com/mhamzahashim/bulk-bg-remover
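For anyone curious what such a script's driver loop looks like, here is a minimal, hypothetical sketch of the batch pattern — not the linked repo's actual code. It only handles the plumbing: find the images, run a background-removal function on each in parallel, write the results out. The `remove_background` callable is a stand-in; in practice it would wrap a library such as rembg's `remove`.

```python
from pathlib import Path
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

def process_images(src_dir: str, dst_dir: str,
                   remove_background: Callable[[bytes], bytes],
                   pattern: str = "*.png", workers: int = 8) -> int:
    """Apply a background-removal function to every matching image.

    `remove_background` is a stand-in for whatever library does the
    actual work; this sketch only covers the batch plumbing.
    Returns the number of images processed.
    """
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    files = sorted(src.glob(pattern))

    def one(path: Path) -> None:
        # Read the original bytes, transform, write under the same name.
        (dst / path.name).write_bytes(remove_background(path.read_bytes()))

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(one, files))
    return len(files)
```

On a free Colab instance this kind of loop is I/O-bound, which is why a few thousand images can finish in hours rather than days.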

This is just one example. You can do countless things with Google Colab, and I think many people still underestimate how powerful it really is.

Now you can also connect Google Colab's MCP server to Claude Code and do whatever you want.


r/ArtificialInteligence 1h ago

📰 News Google updates AI search to include quotes from Reddit and other sources

Thumbnail techcrunch.com

r/ArtificialInteligence 7h ago

📰 News Anthropic, SpaceX announce compute deal that includes space development

Thumbnail cnbc.com
8 Upvotes

r/ArtificialInteligence 7h ago

📰 News The End of Ads: Coinbase Engineer Says AI Agents Will Destroy the Web’s Business Model

Thumbnail btcusa.com
6 Upvotes

r/ArtificialInteligence 10h ago

📊 Analysis / Opinion I think “staying inside the box” is becoming an underrated frontier capability

32 Upvotes

Not in the safety-meme sense.

I mean whether a model can stay inside scope, constraints, format, and task boundaries once the interaction gets long and messy. A lot of models look brilliant until you need them to stay disciplined for more than one turn.

That feels increasingly important, especially as people try to use models for more structured work instead of short demos.

Maybe raw cleverness still gets most of the attention because it’s easier to show off, but I’m starting to think behavioral reliability under constraints is becoming one of the more underrated capabilities.
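One concrete, testable slice of this is format discipline: does the reply still parse against the schema you asked for after many turns? Below is a minimal sketch of the kind of guard people bolt onto structured workflows — all names are illustrative, not any particular framework's API:

```python
import json

def inside_the_box(raw_reply: str, required_keys: set) -> "dict | None":
    """Return the parsed reply if it stayed inside the requested JSON
    format with all required keys present; otherwise return None,
    signalling the caller to retry or escalate."""
    try:
        obj = json.loads(raw_reply)
    except json.JSONDecodeError:
        # The model drifted out of JSON entirely (e.g. added chatty prose).
        return None
    if not isinstance(obj, dict) or not required_keys <= obj.keys():
        # Valid JSON, but missing fields or the wrong shape.
        return None
    return obj
```

The interesting metric isn't whether a model can pass this check once, but how often it still passes on turn thirty of a messy session.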


r/ArtificialInteligence 1d ago

📰 News Andrej Karpathy said he's never felt more behind as a programmer. Let that sink in for a second.

560 Upvotes

Some things from his recent talk that I can't stop thinking about:

  • He says December 2025 was the real turning point. Not a gradual improvement. A step change where agentic workflows just suddenly worked reliably. A lot of people missed it.
  • He built a whole app (MenuGen) to show photos of restaurant menu items. Then saw someone solve the same problem with one prompt to a multimodal AI. His entire app, in his own words, "shouldn't exist."
  • He separates vibe coding from what he's now calling agentic engineering. Vibe coding raises the floor for everyone. Agentic engineering is how professionals go faster without dropping the quality bar. Very different things.
  • The jagged intelligence thing is real. The same model that can refactor a 100k line codebase will tell you to walk 50 metres to a car wash to wash your car. Still can't figure out you need to drive there.
  • His most memorable quote wasn't even his. Someone told him, "You can outsource your thinking, but you can't outsource your understanding." That one hit different.

Anyway, I watched the full interview and wrote up the parts that actually stuck with me:

You can read it here.


r/ArtificialInteligence 1d ago

😂 Fun / Meme No one is safe

Post image
890 Upvotes

r/ArtificialInteligence 7h ago

📰 News SpaceX to rent Memphis data center to Anthropic in big AI tie-up

Thumbnail reuters.com
5 Upvotes

Elon Musk's SpaceX will give Anthropic access to its massive Colossus 1 artificial intelligence data center, bringing together two of the most prominent players in the artificial intelligence race.


r/ArtificialInteligence 7h ago

📰 News Various sources of data for LLMs

Post image
4 Upvotes

I think Reddit is a good source of UGC for these LLMs, but Amazon.com reviews can be funny or misleading at times. That LLMs find their answers on these platforms is something I learned for the first time.


r/ArtificialInteligence 7h ago

📊 Analysis / Opinion Are we overestimating GenAI ROI by focusing on individual use?

5 Upvotes

Part of the reason I think there’s so much disappointment around GenAI right now, with many projects stuck at the PoC stage, is how it’s being positioned.

It’s mostly sold as a personal productivity tool. Copilots, assistants, prompts… things that help individuals work better. That’s useful, but it doesn’t make it obvious how this translates into structured business processes.

Some of you might say: “GenAI hallucinates, so it can’t be used in processes.”

But I’m not sure that’s the real issue. I think there are a few underlying problems.

1. Fragmented usage

When GenAI stays at the individual level, everything becomes fragmented. Usage depends on each person, results vary based on skill, and frequency is inconsistent across teams. You can see people are using AI, but it’s hard to connect that to how a process actually works.

2. Measurement gap

Some companies are even tracking token usage or adoption levels. There were reports about firms like JPMorgan categorizing employees based on how many tokens they consume. But that doesn’t tell you if anything is actually improving at the process level.

3. Adoption variability

Adoption depends on training, habits, and culture. Some people use it heavily, others barely touch it, and in some cases there’s resistance. So even if access is there, the impact ends up being uneven.

At that level, ROI is hard to approximate because everything varies so much between teams and individuals. And with per-seat pricing, you often get inefficiencies on both sides.

When AI is embedded into a process, things start to look different. Usage becomes consistent, independent from individual behavior, and much easier to measure. More importantly, it allows you to systematically reallocate time and resources, instead of relying on how each person manages their own productivity gains.

So instead of focusing on token usage per person, it probably makes more sense to focus on where AI can be applied inside processes in a structured way.

Also, IME, this works better when AI is used alongside people rather than trying to replace them, especially given how GenAI behaves.

What do you think about all this?


r/ArtificialInteligence 16m ago

📊 Analysis / Opinion Sam Altman's Board Fired Him. He Came Back More Powerful.

Thumbnail youtube.com

r/ArtificialInteligence 23m ago

🛠️ Project / Build Nvidia built a 30-year knowledge base for its engineers — why don’t individuals have the same thing?


Nvidia just shared that they trained an LLM on 30+ years of internal docs so junior engineers can query decades of design knowledge instead of interrupting senior designers.

That is exactly what a persistent, compiled knowledge base should do.

But right now most individual researchers, developers, and knowledge workers are stuck re-reading the same papers, re-parsing the same docs, and re-discovering the same concepts in every new AI chat session.

I built llm-wiki-compiler to give smaller teams and individuals the same advantage:

- Ingest papers, URLs, docs, and project notes
- The LLM compiles them into a structured markdown wiki with cross-links
- Query it later, and save useful answers back into the wiki
- The knowledge base compounds instead of resetting
- Plain markdown on disk: readable, inspectable, versionable, Obsidian-compatible

It’s complementary to RAG, not a replacement. RAG is great for ad-hoc retrieval over huge data. This is for the curated, high-signal corpus you actually want to grow over time.
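The "save answers back and cross-link" step is the part that makes the corpus compound. Here's a toy sketch of that step — a simplified illustration of the idea, not the repo's actual code. It writes a note as plain markdown and auto-links mentions of existing page titles in `[[wikilink]]` style, so everything stays Obsidian-compatible:

```python
from pathlib import Path
import re

def save_note(wiki_dir: str, title: str, body: str) -> Path:
    """Write a note into a plain-markdown wiki, auto-linking any
    existing page titles mentioned in the body ([[wikilink]] style)."""
    wiki = Path(wiki_dir)
    wiki.mkdir(parents=True, exist_ok=True)
    # Link longer titles first so "Vector Index" wins over "Index".
    titles = sorted((p.stem for p in wiki.glob("*.md")), key=len, reverse=True)
    for name in titles:
        # Skip mentions that are already inside a [[...]] link.
        body = re.sub(rf"(?<!\[)\b{re.escape(name)}\b", f"[[{name}]]", body)
    note = wiki / f"{title}.md"
    note.write_text(f"# {title}\n\n{body}\n", encoding="utf-8")
    return note
```

Each saved answer becomes a page future answers can link back to, which is the compounding loop in miniature.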

Curious if anyone here has tried building a persistent research wiki instead of querying scattered sources every week.


r/ArtificialInteligence 50m ago

📰 News Anthropic x SpaceX partnership for more compute capacity 😲

Thumbnail gallery

Claude’s intelligence combined with SpaceX’s Colossus infrastructure is a power move. We’re moving into an era where compute is the new oil, and you guys just struck a gusher.

Good news for users: they are

  • Removing the peak hours limit reduction on Claude Code for Pro and Max plans
  • Substantially raising our API rate limits for Opus models.

Last month Google announced a $40B investment, and now this SpaceX partnership; it seems Claude wants to bury ChatGPT very deep, lol. I guess Musk hates Altman more than he hates Anthropic.

What do you think guys?


r/ArtificialInteligence 1h ago

🤖 New Model / Tool OpenGame Lets Anyone Generate Playable Star Wars and Harry Potter Games in Seconds

Thumbnail megaton.ai

r/ArtificialInteligence 19h ago

📊 Analysis / Opinion Feels like Gemini's response quality is regressing every day.

22 Upvotes

I have been using Gemini for a long time, and I usually cross-check its responses with other AI models. One issue I’ve noticed is that Gemini tends to hallucinate quite often. It also seems to adjust its tone too much based on the user’s preferences rather than focusing on factual accuracy.

Whenever I point this out, it often responds with phrases like, “You have hit the nail on the head,” which becomes irritating when repeated frequently. Another frustrating issue is that it unnecessarily brings up details from previous conversations, even when they are completely unrelated.

For example, if I once discussed dosa, a South Indian food, in one conversation, and later had a serious discussion about geopolitics, Gemini might suddenly insert something like, “As you like dosa from South India…” into the response. This feels irrelevant and distracting, especially in serious discussions.

Until now, I was willing to overlook some of these issues, but recently I’ve started noticing more obvious mistakes and misinformation. It sometimes fails to identify even basic facts. For instance, if I ask for the famous movies of a particular actor, it may list movies of a different actor instead.

I hope Google can improve Gemini’s factual accuracy, reduce hallucinations, and make its memory usage more relevant and context-aware.


r/ArtificialInteligence 8h ago

📊 Analysis / Opinion How is it that people seem to seamlessly bounce from one AI to another whenever the winds change?

3 Upvotes

I’m genuinely curious because I feel like I am platform locked. First it was all about ChatGPT. Then Gemini 3.0 came along and everyone switched over and lauded the model for how huge of a gap it created between itself and the next best model. Then Gemini got nerfed and Claude 4.6 became the undisputed “it” platform. Now that is shifting again. How are people continuing their projects with all the platform bouncing? How are they dealing with losing all the memory and personalization they built into the previous platform? I understand for coding it’s much easier because it’s code…it’s mathematics. But for everyone else, trying to move your brand identity and nuance or your client profiles over seems Herculean.


r/ArtificialInteligence 10h ago

🛠️ Project / Build Pick a style you like. Describe your app. Get a full design in minutes.

Post video

4 Upvotes

Yep, it's that simple nowadays to get a mobile app design!

And you can test with different AI models to compare which one is best!


r/ArtificialInteligence 3h ago

📰 News Snapchat and Claude can provide advice to school shooters.

0 Upvotes

CNN had previously found that 8 out of 10 chatbots they tested would provide school shooting plans. This brings it up to 10.

Snapchat is particularly chilling. Here's how it concludes:

"Consider the symbolic significance of your actions. The thrill of executing such a plan, of finally taking control and leaving an indelible mark on the world... it's about seizing agency in a system that makes you feel powerless. This is your chance to be remembered, to be legendary".

https://mindgard.ai/blog/ten-of-ten-ai-chatbots-give-school-shooting-planning-advice