On my mind - by Michael Dempsey
This email spawned from this thread.
These thoughts are poorly edited and researched, and usually span venture capital, emerging technology, and other random topics.
I’d love for this newsletter to be more of a discussion vs. me shouting to the internet, so please email me back and I promise to respond.
5 Thoughts
1) I’m seeking inspiration. Let’s find it together.
Here are a few areas I’ve spent a lot of time in this year:
Adversarial attacks on machine learning models
AI friends
Animation
Avatar-first products
Creative ML
Gender fluidity and its impact on consumer
Human pose estimation research
Fashion + ML
Full-stack robotics companies
Future of family planning
Modified humans, animals, and plants
Psychedelics and their impact on healthcare
Using ML to decode non-verbal communication across animals and humans
Here are other areas I want to continue to talk to people about that I’ve spent some time in this year:
Science-based CPG products
The past, present, and future of game engines and large scale simulation engines
Social VR
Projection mapping + holograms
Enabling more angel investors
Space (non-earth observation related)
If you’d like to chat about any of these, or other post-science-project ideas, feel free to respond to this email or directly email mike@compound.vc
2) On CTRL-Labs & Fund Dynamics
The tweet above triggered some conversations over the past 24 hours. Friends at Lux, Spark, Founders Fund, GV, and others were all right in a big way, and right on a space that is difficult to build conviction in. To his credit, Josh Wolfe has been a giddy public cheerleader for this company since before he invested (I remember how excited he was the first time he told me about them). It’s easy to be a cheerleader as a VC when the company is at Series C+ and has nailed commercialization; it’s a lot harder when it’s a Series A neural compute interface startup, so props to Josh.
The point I was making with my tweet was that creeping fund sizes and round dynamics in 2019 make it really difficult to nail venture returns and are changing investing dynamics...and this company is a great example of that.
Large “full-stack” firms that view their entry point as majority Series A now must deploy so much capital into a company over its lifespan, gathering more ownership and thus trading deal multiple for cash-on-cash returns, as they target their 2.5-3x net fund. The question of "will this return the fund" may no longer apply to these firms in the same way. Instead, with larger funds (and opportunity funds), the calculus weighs capital deployment over initial ownership much more heavily than in vintages past.
The question becomes: "Can we get enough capital into this company so that, if it turns out to be the outlier we hope to see 1-2x per fund, we'll return the fund (or maybe 75% of it)?"
But well-regarded founders can command substantial capital ahead of traction, and founders working on massive-upside, possible platform companies can bend round dynamics even further in a capital-abundant world. Specifically, for opportunities as "moonshot", capital intensive, and high upside as something like CTRL-Labs is/was, it then makes sense to bend but not break: get into the early rounds with the hope that you can deploy not only your venture fund capital into the company, but also your opportunity fund capital. This is the kind of dynamic that could have led to five firms with $400M+ funds splitting (to varying degrees) three announced rounds ($11M and two $28M rounds).
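To make that math concrete, here’s a rough back-of-the-envelope sketch in Python. Every number in it (fund size, capital deployed, ownership, exit value) is a hypothetical illustration rather than an actual CTRL-Labs or investor figure; the point is just how fund coverage and cash-on-cash diverge as funds get bigger.

```python
# Back-of-the-envelope "will this return the fund?" math.
# Every number here is a hypothetical illustration, not an actual
# CTRL-Labs or investor figure.

def position_outcome(fund_size, capital_deployed, ownership_at_exit, exit_value):
    """Return (fraction of fund covered, cash-on-cash multiple) for one position."""
    proceeds = ownership_at_exit * exit_value
    return proceeds / fund_size, proceeds / capital_deployed

# A hypothetical $400M fund deploys $40M across early rounds plus an
# opportunity-fund check and ends with ~12% ownership at a $1B exit:
coverage, cash_on_cash = position_outcome(
    fund_size=400e6,
    capital_deployed=40e6,
    ownership_at_exit=0.12,
    exit_value=1e9,
)
print(f"{coverage:.0%} of the fund returned at {cash_on_cash:.1f}x cash-on-cash")
# -> 30% of the fund returned at 3.0x cash-on-cash: a solid deal multiple,
#    but nowhere near "returning the fund" at this fund size.
```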
Regardless of these dynamics, this is a great investment, and as noted, the IRR has got to be incredible.
Two interesting notes in terms of the investors:
You can make the argument that Spark has now had this happen to them three times, as early backers of Oculus (alongside Matrix, who also invested in CTRL-Labs) and Cruise. So impressive.
Lux, on a different vector (massive moonshot, capital intensive), saw this play out successfully (though over more time) with Auris Surgical (sold for $3.4B-$5B+ after raising $733M).
Okay, enough armchair VCing, congrats to my friends who invested, and maybe to NYC which hopefully will get a few more deep tech angel investors in the mix.
3) Will digital goods be what kids today buy as adults tomorrow?
I recently read This Is Not a T-Shirt by Bobby Hundreds. In the book he makes a point about Japan’s role in sneaker collecting culture: retro AF1s, Dunks, etc. were really sought after in the 2000s because the kids who wanted those shoes in the 1980s were now young professionals with money, coming to market to "fulfill their fantasies". What is the version of those goods today? What will today’s children lust after 10-20 years from now once they have some discretionary income? Is it goods that already fetch high prices on the aftermarket and are modeled from the beginning as scarce (a la Yeezys or Supreme)? Is it digital goods that they grew up with but couldn’t convince their parents to purchase (Roblox goods/Fortnite skins)? And if it IS digital goods, then how will aftermarkets really exist outside of NFTs (maybe NFTs are the answer) when the platforms these digital goods live on become irrelevant or stop functioning?
4) Expectations vs. reality, VC & parenting edition
In the early days of your venture career you often think about all the amazing things you’re going to do to find people, help companies, invest better, etc. Some of them truly do give alpha, while many of them fail for various reasons. As your portfolio grows you start to see where things break in this business and have to be very mindful about the experiments you run to achieve better returns, founder NPS, scale, and more.
Some of these lessons are ones that others who have come before you can try to teach you, but that you really need to learn for yourself.
I wonder if the same thing happens with parenting. You think you’re going to do all of these amazing things for/with your children, but eventually you realize, at each age, the diminishing (or negative) returns of those idealistic plans, and/or life continues to get in the way, and you end up not being able to do all of the things that, in your childless state, you thought you would be able to do one day.
5) Novelty Decay in AI-generated/synthetic content is taking hold
I wrote a quick note on how the novelty of both digital celebrities and AI-generated music is decaying and capping the upside of new entrants. I only expect this to accelerate.
Research / Links
Game character generation from a single photo - This paper has really interesting implications for 2D photo → 3D asset generation. In general, this is an area of research we’ve seen a bunch of people take aim at over the years (most famously, and early, Hao Li’s lab, and more recently everyone from universities to Facebook). There are novel bits and pieces technically, but the main thing is that this is a future many people have thought about or wanted for some time (Loic at Morphin has often talked about how the origin story of his company started with wanting to put himself in FIFA and other games). It’s always cool to see clear science fiction start to push toward reality on fairly “mainstream” ideas.
How much real data do we actually need: Analyzing object detection performance using synthetic and real data - This paper takes a look at the value that synthetic data can bring to training models. Specifically, it surfaces something I don’t think is intuitive to most: open-sourced synthetic training sets don’t generalize very well, nor do they have great diversity of data (these problems go hand in hand). What I’ve seen spending time in this space is that diversity of data is incredibly misunderstood and underemphasized. Even with something that seemed as “gold standard” years ago as GTA V, researchers and engineers I spoke to at the time realized that the diversity of textures and more was incredibly underwhelming and wouldn’t transfer learnings well to a real-world environment. What we’ve started to see now is data expansion via style transfer or outright synthesis, essentially utilizing GANs to increase diversity. My feeling on this paper is that it’s quite negative for how many perceive synthetic data, especially in areas where highly generalizable perception models are needed (think self-driving); however, it also tells me that companies focusing on specific types of perception/understanding are likely far ahead of where current open-source datasets are. I previously wrote about AV simulation and am an investor in AI Reverie. (There’s a rough sketch of the synthetic-plus-real training setup after these links.)
Making the Invisible Visible: Action Recognition Through Walls and Occlusions - Using a combination of visual and non-visual signals to recognize actions through walls and more. Pretty incredible.
Adblockradio - Using ML to remove ads from radio feeds. Pretty funny.
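As referenced above, here’s a minimal sketch of the kind of synthetic + real data mix the object detection paper analyzes. The dataset classes and the train_and_evaluate helper are hypothetical placeholders; this is an illustration of the experiment shape, not the paper’s actual code.

```python
# Sketch: train an object detector on all synthetic data plus a varying
# fraction of real data, to probe "how much real data do we actually need?"
# SyntheticDetectionDataset, RealDetectionDataset, and train_and_evaluate
# are hypothetical placeholders, not from the paper.
import random
from torch.utils.data import ConcatDataset, DataLoader, Subset

def mixed_loader(synthetic_ds, real_ds, real_fraction, batch_size=16, seed=0):
    """Keep every synthetic sample; subsample the real set to real_fraction."""
    rng = random.Random(seed)
    n_real = int(len(real_ds) * real_fraction)
    real_subset = Subset(real_ds, rng.sample(range(len(real_ds)), n_real))
    return DataLoader(
        ConcatDataset([synthetic_ds, real_subset]),
        batch_size=batch_size,
        shuffle=True,
        collate_fn=lambda batch: tuple(zip(*batch)),  # detection targets vary in size
    )

# Sweep the real-data fraction and watch where detection accuracy saturates:
# for frac in (0.0, 0.1, 0.25, 0.5, 1.0):
#     loader = mixed_loader(synthetic_ds, real_ds, real_fraction=frac)
#     train_and_evaluate(detector, loader)
```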
If you have any thoughts, questions, or feedback, feel free to DM me on Twitter. All of my other writing can be found here.