OpenAI's Sora soars
The biggest news this week was – unsurprisingly – all about AI. We're all watching a tectonic shift from within as it happens.
This was yet another week of all the major players in AI – Google, Meta and OpenAI in particular – falling over themselves trying to one-up the others with announcements of splashy new advances. OpenAI came out as the clear winner in terms of attention, however.
OpenAI’s massive splash in generative AI video
OpenAI revealed Sora, an AI model that can generate realistic-looking video based on text prompts.
While it isn't the first to market with a text-to-video model, the results shared by the company and by CEO Sam Altman have been truly impressive (though not without the occasional odd artifact that tips over into uncanny valley territory).
OpenAI is upfront about the model's current limitations. To wit, from the company:
“The current model has weaknesses. It may struggle with accurately simulating the physics of a complex scene, and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark.
The model may also confuse spatial details of a prompt, for example, mixing up left and right, and may struggle with precise descriptions of events that take place over time, like following a specific camera trajectory.”
Sora is being released to red teamers and a limited number of creative professionals to assess its risks and its potential uses in the creative industry. Basically, that means you're not getting access just yet, even if you happen to be a paying OpenAI subscriber.
Google debuts Gemini 1.5
Google dropped its version 1.5 update for Gemini Pro, which is its next-gen mid-sized generative AI model. The headline feature here is a massive 1 million token context window, which allows for really large data sets as input, including “hundreds of pages of text, entire code repos and long videos.”
That dwarfs every other competitor's context window at the moment, including Anthropic's Claude 2.1 (200,000 tokens) and OpenAI's GPT-4 Turbo (128,000 tokens).
A context window that big probably doesn’t sound that interesting to consumer users of GPT-style products, but it’s a massive unlock for research and business use and could represent another step-change in terms of what’s possible for working with tools like these.
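To make the scale concrete, here's a minimal sketch of the kind of thing a 1 million-token window enables: handing an entire (smallish) code repo to the model in a single request. This is an assumption-laden illustration, not an official example; it presumes Google's google-generativeai Python SDK, a model identifier along the lines of "gemini-1.5-pro-latest", and placeholder values for the API key and repo path.

```python
# Sketch only: feed a whole repository to Gemini 1.5 Pro in one prompt.
# Assumptions: google-generativeai SDK, "gemini-1.5-pro-latest" model name,
# placeholder API key and repo path. Check Google's current docs before use.
import pathlib

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-pro-latest")  # assumed model id

# Concatenate every Python file in a hypothetical local repo into one corpus.
repo = pathlib.Path("path/to/repo")
sources = [
    f"# File: {path.relative_to(repo)}\n{path.read_text(errors='ignore')}"
    for path in sorted(repo.rglob("*.py"))
]
prompt = (
    "Here is an entire code repository. Summarize its architecture and "
    "flag anything that looks buggy.\n\n" + "\n\n".join(sources)
)

# Verify the whole thing actually fits inside the ~1M-token window before sending.
total = model.count_tokens(prompt).total_tokens
print(f"Prompt is roughly {total:,} tokens")
if total < 1_000_000:
    response = model.generate_content(prompt)
    print(response.text)
```

The point isn't the specific calls; it's that "put the whole repo (or deposition, or video) in the prompt" becomes a workable pattern instead of an elaborate chunking-and-retrieval exercise.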
YC’s call for startups is a bellwether
Y Combinator issued its latest 'request for startups' this week, which is often a solid indicator of which trends are gaining interest among the people writing checks for entrepreneurs.
The list hasn't been updated in toto since 2018, so the fact that it has been now is telling, and a reflection that the market is sending very different signals than it was then about what needs investment and startup focus.
Overall, there are 20 categories on the new RFS list, including machine learning for robotics/simulation; defense tech; U.S. manufacturing onshoring; space; climate; spatial computing; developer tools spun out of internal tooling; and more.
A lot of these echo where we were seeing reader interest and appetite flow in the background at TechCrunch over the last few years.
Sam Altman owns OpenAI’s ‘corporate’ venture fund
OpenAI has a corporate VC – but it’s structured a little differently than most: Sam Altman owns it.
The fund has already made investments, including in AI video editing tool Descript and in Harvey, a startup building custom LLMs for large law firms.
It seems at the moment like most of what links the 'OpenAI Startup Fund' to OpenAI proper isn't structure, but rather a handshake agreement between Altman and the company he was temporarily deposed from.
OpenAI continues to be a stunningly odd example of governance, one that will probably make for a fun case study for future MBAs.
That's the biggest news this week, and if you noticed that AI is the dominant topic, you're clueing in to something I hear more and more: if you're not doing AI in tech, you're probably not doing anything that matters.
I shared this recently via social, but I think Nvidia CEO Jensen Huang put it best when I spoke to him back in 2017 – “AI is just the modern way of doing software.”