Sora, Masa + Trump AI investment, & The Galileo Index?

1/ Sora’s Guardrails vs Chinese AI Models

Sora's tight guardrails on inputs have sparked conversations across the AI community. It appears to censor a wide range of content labeled as "harmful"—sometimes going so far as to restrict perfectly benign elements like scenes featuring humans.

Take this for instance: Sora won’t even generate videos with people in them! While ensuring safety is important, stifling AI development domestically could put the country at a disadvantage, especially when China continues to push boundaries with fewer restrictions.


2/ Masayoshi Son’s $100 Billion AI Investment in the USA

Masayoshi Son, founder of SoftBank, is meeting with Trump to discuss a staggering $100 billion investment in AI across the U.S. This could be a game-changer.

And hey, why not $200 billion?

For the U.S. to lead the global AI race, it’s not just about lifting unnecessary regulations on model-building; it’s also about doubling down on domestic investments. From AI startups to data centers and chip development, this type of bold move is exactly what the industry needs to thrive.

3/ Anthropic Launches Clio 🌟

Anthropic’s latest tool, Clio, is here—and it’s redefining how we analyze real-world LLM usage. Think of Clio as the Google Trends for AI: a privacy-preserving way to analyze and understand trends in how Claude is used. We’re entering an agentic future, and this is only the beginning!

4/ Galileo Index?

The incoming White House AI and Crypto Czar, David Sacks, has floated a fascinating idea: a Galileo Index to measure the truthfulness of AI outputs.

This becomes especially crucial as we move toward an agentic future, where AI agents will dominate significant parts of both the information ecosystem and the workforce. One intriguing suggestion is to introduce a credibility-weighting system for AI model outputs, much like Google’s PageRank algorithm. It’s a compelling concept that could reshape how we evaluate AI-generated content—and definitely something worth considering!
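To make the PageRank analogy concrete, here’s a minimal sketch of how such credibility weighting might work. Everything here is hypothetical: the model names, the "endorsement" graph (treating one model corroborating another’s output as a link), and the scoring function are invented for illustration—no such index exists yet.

```python
import numpy as np

# Hypothetical endorsement graph: "model A corroborates model B's output"
# is treated like a hyperlink from A to B. All names are invented.
endorsements = {
    "model_a": ["model_b", "model_c"],
    "model_b": ["model_c"],
    "model_c": ["model_a"],
}

def credibility_scores(graph, damping=0.85, iters=100):
    """PageRank-style power iteration over the endorsement graph."""
    nodes = sorted(graph)
    n = len(nodes)
    idx = {name: i for i, name in enumerate(nodes)}
    # Column-stochastic link matrix: M[j, i] = prob. of hopping i -> j.
    M = np.zeros((n, n))
    for src, targets in graph.items():
        for dst in targets:
            M[idx[dst], idx[src]] = 1.0 / len(targets)
    scores = np.full(n, 1.0 / n)  # start from a uniform distribution
    for _ in range(iters):
        scores = (1 - damping) / n + damping * (M @ scores)
    return dict(zip(nodes, scores))

scores = credibility_scores(endorsements)
```

Just as PageRank rewards pages linked to by other well-linked pages, a model whose outputs are corroborated by other credible models would score higher—the scores sum to 1 and can be read as a credibility distribution over models.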