Notes from Human(X) Day 1
Attending the first-ever Human(X) conference, which focuses on AI in production and application rather than at the research or frontier level, is a wake-up call
There are a lot of signs that a technology or industry has ‘arrived,’ meaning its impact has become both far-reaching and irreversible. One is when it earns enough attention to make a large, lavish and well-crafted industry event viable. Human(X), a new conference happening right now in Vegas (where I am as I write this), is that event for AI – loosely defined and broadly applied.
The show is put together by a team that includes the creators of several other very successful trade shows, including Money20/20, HLTH and Shoptalk – and that combined experience shows in everything from the quality of the programming to the fit and finish to how smoothly the whole thing runs. Having helped organize and run TechCrunch Disrupt for 10 years, I can safely say these folks have got it right in many ways.
The other key ingredient for a successful show is, of course, appetite from the market, and AI has that in abundance. What’s more impressive, and what makes a show like this so challenging to pull off, is that ‘AI’ is about as loose a connective tissue as you could choose as the organizing principle for a centralized, thematic event. It touches every industry, and its adherents focus on everything from the physical hardware and infrastructure that make it possible, to the UX of consumer-focused products, to the knock-on workforce transformation that follows its uptake.
From my perspective, Human(X) is doing a good job of both delineating and connecting the various disparate streams that play across AI. It’s definitely a good sign that people I’ve talked to on the ground have repeatedly cited FOMO for cross-programmed events and discussions as one of their main complaints about the show so far – there’s clearly a lot of value in a cross-discipline, cross-industry approach.
Switching to my main takeaways from day one, they fall into two categories:
Human replacement in job functions is a hot topic, and seems to have crossed the Rubicon from controversial to clearly desirable, at least for this crowd of mostly business and technical executive leaders
There’s a lot of buzz around the potential of latent data: how to uncover existing stores of it, how to capture more of it, and how to assess and quantify its value
First, regarding human replacement, it’s come up as a topic in most of the sessions I’ve managed to tune in to. Yet there’s a general optimism about how it will impact the workforce and job opportunities. OpenAI Chief Product Officer Kevin Weil, for instance, talked a lot about our historical tendency to adapt to the introduction of massively impactful new technologies that wipe out entire job categories or professions.
Even in conversations where the emphasis is on human augmentation rather than replacement, if you pull the thread it’s clear that the aim is to replace the productivity of ten people with the enhanced productivity of one. Past major technology shifts have of course been about value creation, but AI seems to be the one in which human headcount reduction (at the individual company level) is becoming unambiguously the goal.
Reading that back, it sounds like it contains an implied value judgment: it does not. I don’t think that, in isolation, job creation or elimination by AI is inherently good or bad. Humans are, collectively and individually, resilient and resourceful, and as much as broad AI deployment carries many potential negative unintended consequences, there are also many unseen or unpredictable positive outcomes, including the creation of entirely new job categories, modes of productivity and maybe even socio-economic organizational models.
On the second point, latent data is indeed an extremely exciting and interesting opportunity, especially for the tech, VC and startup scene. The potential for existing, untapped data sets to create new value in businesses whose primary growth trajectories are stagnant or negative is significant. Plus, the ways in which we might capture new, unexploited repositories of data that could enable new types of AI models and applications are very exciting, in terms of what they mean for new venture formation and new data-capture tech.
This was a major point of discussion in the panel I hosted on building AI infrastructure that’s suited for C-suite adoption, which included Honeycomb.io CEO and founder Christine Yen, Google Head of AI Developer Assistant/Agent Bin Ni, Roam Home founder (and August Home founder previously) Jason Johnson and McKinsey Senior Advisor Brian Goffman. Once the recorded version of that is available I’ll definitely share it here.
I also spoke to David Cox, VP, AI Models at IBM Research, and we had a great discussion about IBM’s latest Granite 3.2 enterprise-oriented foundation models, DeepSeek and its broader implications, open source and what that means for AI, and much more. Likewise, I’ll be sharing that when available.
More to come from days 2 and 3, and stay tuned for thoughts on Cerebras and a few announcements they made at the show, along with takeaways from an illuminating chat with Cerebras founder and CEO Andrew Feldman.