On My Mind - #9

Narratives & Pseudosecrets, Serendipity in VC is BS, on the *mint* meme, Mobile ML research

On my mind - by Michael Dempsey

This email spawns from this thread.

These thoughts are often poorly edited and researched and usually span areas of venture capital, emerging technology, and other random thoughts.

I’d love for this newsletter to be more of a discussion vs. me shouting to the internet, so please email me back and I promise to respond.

Ask - We’re hiring at Compound. If you’d like to work with me or know someone who would be a good fit, please send them my way!


1) Narratives & Pseudosecrets

I wrote a post discussing the importance of narrative building as well as how companies build their narrative around both TAM expansion and sequencing to an ultimate future via what I call Pseudosecrets. I’ve spent a lot of time explaining this phenomenon or principle to founders over the past few years and finally decided to write out my thoughts in long-form.

2) Serendipity in Venture Capital is BS

I also wrote some thoughts on the history of networks within VC, the current state of venture capital, and ideal investing models at the seed stage.

3) Fortnite x Star Wars + VR UIs


There will be a lot of hot takes surrounding the Fortnite experience, as always. But while many people were *mindblown* by the in-game experience, if you’ve spent time in VR it likely felt quite familiar. Watching an experience on a flat screen inside a 3D, free-roaming world is sub-par, but it has been a well-known UI/UX choice for VR developers and is in some ways considered a killer app today. Unlike Fortnite, however, in VR you are immersed via head tracking, hand tracking, and a first-person view. Ultimately, it felt a bit odd to me watching my character jump around in front of a flat digital screen in this large world.

Where Fortnite did innovate on the UI was the ability to “focus” with a right click, which pulled the user into watching/following whatever Fortnite deemed most important. This is something VR developers should adopt more often, especially in storytelling-centric experiences.

Related to that, the Fortnite x Star Wars experience was cool because of the IP, the live nature, and the mechanics surrounding user voting, but it also was the first meaningful in-game experience that didn’t progress the Fortnite story. While the metaverse story (read more about this in my Narratives post above) seems to be the goal, and a profitable one at that, I hope that after a season reset that vaults multiple game mechanics, Fortnite continues to innovate on gameplay, and not just on becoming one of the world’s largest native advertising platforms.

4) On the “f**king mint” meme


Disclaimer: I’m a millennial sniping at Gen Z trends here, so take it with a grain of salt.

I’ve been pretty fascinated by the “f**king mint” meme (shout out to trying to avoid spam filters) that has grown on TikTok. Basically, the point of the meme is to go around and say self-deprecating/embarrassing/not-so-great things about yourself, your life, and/or all of your friends, and be somewhat OK with it.

I think this meme speaks to something a little more specific going on within Gen Z, which is self-comfort, open expression, and reassurance (notice I didn’t say confidence) that millennials perhaps adopted about a decade later in age than this cohort of teenagers has. It’s a small, potentially overfit signal I’ve noticed as we’ve continued to look at other related thesis areas, but one that I think the meme perfectly embodies and could bleed through to other behaviors and purchasing decisions.

5) Mobile compute ML research is underserved. But does it matter?

Countless research papers are continually published pushing the limits of what machine learning can do across a myriad of use cases. The underlying issue with much of this research, however, is the compute required to make something possible, let alone reproducible.

One could argue that the job of most research labs is to figure out whether something is possible, and that compute will eventually catch up to make things production-ready at the commercial layer. But I’m not sure we’ve seen this happen as quickly as we’d like as an industry, largely because the clout comes not from the efficiency of algorithms but from their power.

One area I’d love to see increased publishing and experimentation on is ML at the edge, specifically within mobile phones. If we believe that these mobile supercomputers will eventually turn from bundled interface + compute into possibly compute-centric devices (think post-AR hardware), then we should also be pushing the boundaries of what types of algorithms we can efficiently run.

Examples of compelling mobile-centric ML research I’ve seen recently include this paper which tackles real-time monitoring of drivers via mobile phones, as well as one on mobile action recognition. I hope we see more in the future.

6) AI generated art novelty decay happened even faster than I thought

Katsuwaka of the Dawn Lagoon (2019), created by Obvious Art. Courtesy of Sotheby’s.

Two months ago I wrote about novelty decay in AI-generated art and more. Specifically I said:

The novelty of “this was made by AI” or “this is digital” will continue to exist, but at an increasingly decaying premium as time goes on. The novelty premium of our favorite artists’ lives may only compound as we see them grow, change, and we build deeper personal connections to them, their tribes, and their ups and downs.

It looks like that decay could have happened significantly faster than even I anticipated, given the recent mediocre performance of AI-generated art. I’d wager that this decay in value comes partially from the market tiring of the group (Obvious), as well as from the way in which this art is generated. Once we see a meaningfully different technical approach to art generation, perhaps we will see another pop in prices. With that said, I don’t believe we’ll see lasting value unless a larger artist is able to manage both the story surrounding the process and their own personal journey, alongside their pieces.

Research / Links

  • A Mobile Manipulation System for One-Shot Teaching of Complex Tasks in Homes - This is a really interesting paper (and video) that walks through a team using VR to train a mobile, in-home robot on specific tasks. The success rate ends up being around 85% across the board, which is not good enough (though the ability to correct errors pushes it to 99.6%). The other problem is time to completion, which is 20x slower than a human on average (and some tasks were 100x slower!). I imagine the eventual future of an in-home robot will draw on some form of imitation learning that maps humans more closely to the robot (or vice versa) so that learned tasks can be scaled through more human teaching. Not sure when that future will come though.

  • When equity factors drop their shorts (article here): On the lack of value short trading positions create.

  • Generating High-Resolution Fashion Model Images Wearing Custom Outfits - Generating fashion images from scratch in high resolution. I've spoken to a few different people about this task of generating stock fashion images with GANs. Over the past six months we've seen an incredible pace of progress as full-sized human body synthesis has improved drastically, and we can now combine pose-understanding models and GANs to enable new types of synthesis. What has been funny is people doubting that this pace will continue in this field specifically. While in some industries I'm a hardcore skeptic of tech people automating or innovating from the outside, when it comes to fashion and stock imagery (and other non-scalable, image-related practices within e-commerce), after spending a little time with fashion industry founders, I'm beginning to believe more strongly that this innovation may come from outside the industry vs. an operator from within.

  • Neural Voice Puppetry: Audio-driven Facial Reenactment - Take audio, push it to deepfake. Pretty cool.

  • Generating Animations from Screenplays - Multiple researchers continue to try to crack this utopian vision of inputting text and outputting realistic 3D scenes. Very few have been able to do it with high degrees of freedom, high quality of art, and support for emotion. The argument to be made is that we can speed up the initial work and go from creation -> tuning, or have a more granular storyboarding pipeline if it’s automatic, but many artists just get annoyed by poor implementations of animation that they are then forced to re-do vs. create.

If you have any thoughts, questions, or feedback feel free to DM me on twitter. All of my other writing can be found here.

On My Mind - # 8

Seeking inspiration, CTRL-Labs & Fund Dynamics, Digital goods of tomorrow

On my mind - by Michael Dempsey


5 Thoughts

1) I’m seeking inspiration. Let’s find it together.


Here are a few areas I’ve spent a lot of time in this year:

  • Adversarial attacks on machine learning models

  • AI friends

  • Animation

  • Avatar-first products

  • Creative ML

  • Gender fluidity and its impact on consumer

  • Human pose estimation research

  • Fashion + ML

  • Full-stack robotics companies

  • Future of family planning

  • Modified humans, animals, and plants

  • Psychedelics and their impact on healthcare

  • Using ML to decode non-verbal communication across animals and humans

Here are other areas I want to continue to talk to people about that I’ve spent some time in this year:

  • Science-based CPG products

  • The past, present, and future of game engines and large scale simulation engines

  • Social VR

  • Projection mapping + holograms

  • Enabling more angel investors

  • Space (non-earth observation related)

If you’d like to chat about any of these or other post-science project ideas, feel free to respond to this email or directly email mike@compound.vc

2) On CTRL-Labs & Fund Dynamics

The tweet above triggered some conversations over the past 24 hours. Friends at Lux, Spark, Founders Fund, GV, and others were all right in a big way, and right on a difficult space to build conviction on. To his credit, Josh Wolfe was a giddy public cheerleader for this company since before he invested (I remember how excited he was the first time he told me about them). It is easy to be a cheerleader as a VC when the company is at Series C+ and has nailed commercialization; it's a lot harder when it’s a Series A neural compute interface startup, so props to Josh.

The point I was making with my tweet was that creeping fund size and round dynamics in 2019 make it really difficult to nail venture returns and change investing dynamics...and this company is a great example of that.

Large “full-stack” firms that view their entry point as majority Series A now must deploy so much capital into a company over its lifespan, gathering more ownership and thus trading deal multiple for cash-on-cash returns, as they target their 2.5-3x net fund return. The dynamics of "will this return the fund" may no longer necessarily apply to these firms. Instead, with larger funds (and opportunity funds), the question weighs capital deployment over initial ownership far more than in vintages past.

The question becomes: "Can we get enough capital into this company so that if it has the outlier company that we hope to have 1-2x/fund, we'll return the fund (or maybe 75% of it)?"
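That back-of-envelope is simple arithmetic. A minimal sketch (the numbers are illustrative and the function name is mine, not any firm's actual model):

```python
def fund_returned_fraction(capital_deployed, cash_on_cash, fund_size):
    """Fraction of the fund one position returns: (capital in x multiple) / fund size."""
    return capital_deployed * cash_on_cash / fund_size

# Illustrative: a $400M fund that gets $40M into a company across rounds
# returns the whole fund on a 10x outcome...
print(fund_returned_fraction(40e6, 10, 400e6))  # → 1.0
# ...while the same 10x on a single $5M check returns only 12.5% of it.
print(fund_returned_fraction(5e6, 10, 400e6))   # → 0.125
```

The deal multiple is identical in both cases; only the capital deployed changes, which is the trade of deal multiple for cash-on-cash returns.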

But well-regarded founders can command substantial capital ahead of traction, and founders working on massive-upside, possible platform companies can further bend round dynamics in a capital-abundant world. Specifically, for opportunities as "moonshot", capital-intensive, and high-upside as something like CTRL-Labs is/was, it makes sense to bend but not break: get into the early rounds with the hope that you can deploy not only your venture fund into the company, but also your opportunity fund capital. This is something that could have led to five firms with $400M+ funds splitting (to varying degrees) three announced rounds ($11M and two $28M rounds).

Regardless of these dynamics, this is a great investment and as noted, the IRR has got to be incredible.

Two interesting notes in terms of the investors:

  1. You can make the argument that Spark has now had this happen to them 3 times, as early backers of Oculus (alongside Matrix, who also invested in CTRL-Labs) and Cruise as well. So impressive.

  2. Lux on a different vector (massive moonshot, capital intensive) saw this play out successfully (though over more time) with Auris Surgical (sold for $3.4B-$5B+, raised $733M).

Okay, enough armchair VCing, congrats to my friends who invested, and maybe to NYC which hopefully will get a few more deep tech angel investors in the mix.

3) Will digital goods be what today’s kids buy as adults tomorrow?


I recently read This Is Not a T-Shirt by Bobby Hundreds. In the book he makes a point about Japan's role in sneaker-collecting culture. Retro AF1s, Dunks, etc. were really sought after in the 2000s because the kids who wanted these shoes in the 1980s were now young professionals with money and were coming to market to "fulfill their fantasies". What is the version of those goods today? What will today’s children lust after 10-20 years from now once they have some discretionary income? Is it goods that already fetch high prices on the aftermarket and are modeled from the beginning as scarce (a la Yeezys or Supreme)? Is it digital goods that they grew up with but couldn’t convince their parents to purchase (Roblox goods/Fortnite skins)? And if it IS digital goods, then how will aftermarkets really exist outside of NFTs (maybe NFTs are the answer) when the platforms these digital goods live on become irrelevant or stop functioning?

4) Expectations vs. reality, VC & parenting edition

In the early days of your venture career you often think about all the amazing things you’re going to do to find people, help companies, invest better, etc. Some of them truly do give alpha, while many of them fail for various reasons. As your portfolio grows you start to see where things break in this business and have to be very mindful about the experiments you run to achieve better returns, founder NPS, scale, and more.

Some of these lessons are ones that others who have come before you can teach you, but many you really need to learn for yourself.

I wonder if the same thing happens with parenting. You think you’re going to do all of these amazing things for/with your children, but eventually you realize the diminishing (or negative) returns these idealistic thoughts have at each age, and/or life continues to get in the way of some of these ambitions, and you end up unable to actually do all of the things that, in your childless state, you thought you would one day do.

5) Novelty Decay in AI-generated/synthetic content is taking hold


I wrote a quick note on how the novelty of both digital celebrities and AI-generated music is decaying and capping the upside of new entrants. I only expect this to accelerate.

Research / Links

  • Game character generation from a single photo - This paper has really interesting implications surrounding 2D photo → 3D asset generation. In general this is an area of research we’ve seen a bunch of people take aim at over the years (most famously, and early, Hao Li’s lab, and more recently everyone from universities to Facebook). There are novel bits and pieces technically, but the main thing is that this is a future that many people have thought about or wanted for some time (Loic at Morphin has often talked about how his origin story of his company started by wanting to put himself in FIFA and other games). Just always cool to see clear science fiction start to push forward towards reality on fairly “mainstream” ideas.

  • How much real data do we actually need: Analyzing object detection performance using synthetic and real data - This paper looks at the value synthetic data can bring to training models. Specifically, it surfaces something I don't think is intuitive to most: the open-sourced synthetic training sets don't generalize very well, nor do they have great diversity of data (these problems go hand in hand). What I've seen spending time in this space is that diversity of data is incredibly misunderstood and underemphasized. Even with something as seemingly "gold standard" years ago as GTA V, researchers and engineers I spoke to at the time realized that the diversity of textures and more was incredibly underwhelming and wouldn't transfer learnings well to a real-world environment. What we've started to see now is data expansion via style transfer or outright synthesis, essentially utilizing GANs to increase diversity. My read on this paper is that it is quite negative for how many perceive synthetic data, especially in areas where highly generalizable perception models are needed (think self-driving); however, it also tells me that companies focusing on specific types of perception/understanding are likely far ahead of where current open-source datasets are. I previously wrote about AV simulation and am an investor in AI Reverie.

  • Making the Invisible Visible: Action Recognition Through Walls and Occlusions - Using a combination of visual and non-visual signals to recognize actions through walls and more. Pretty incredible.

  • Adblockradio - Using ML to remove ads from radio feeds. Pretty funny.

If you have any thoughts, questions, or feedback feel free to DM me on twitter. All of my other writing can be found here.

On My Mind - #7

Here's my investment memo template

On my mind - by Michael Dempsey

This email spawns from this thread. The process for this will evolve but as you'll see, some thoughts are random, and most are unfiltered or poorly edited. Either way, let me know what you like, don't like, or want to talk about more.

1 Thought

Spoiler alert: There was interest, so see below for my memo template and high-level thoughts.

1) Thoughts on Investment Memos

I’ve talked a lot about investment memos in the past and how I believe we should be more open/transparent about the overall structure. At times, I’ve even shared the exact internal memo with other friendly investors who were doing diligence for a round I was leading. 

A summary of my thoughts:

How I Use Memos

I’d say ~70% of the time when I write a memo, I end up offering to make an investment in the company, so sometimes a memo goes nowhere but into my archives.

I usually start my memo after meeting #2 with a company. I’ve found it’s very helpful for figuring out what I have left to understand about the business and which areas I should push on, and it separates the company from the investment (i.e., a good company doesn’t always mean a good investment, often due to price).

When I have expertise overlap with my partners on an investment, I’ll also use the memo to bring them up to speed.

My memos traditionally run long for seed (8–10 pages on average) and have a separate attachment of all of the notes I’ve taken while meeting with the company. If we’ve had phone/video calls, those notes run multiple pages and include direct quotes from the founder. If our meetings have been in person, I usually distill my notes after the fact, as I don’t love taking handwritten notes and I don’t want to have electronics out in person.

For deals that move quickly, I almost always still try to write a deeper memo within weeks after we invest, to have that record of thinking.

The Actual Memo - CLICK HERE

Here is my investment memo template in google doc form (note: we don’t use a standardized memo at Compound, each partner has their own version/structure). 

In a future post, if there is interest, I’ll walk through various parts of the memo. Please email or tweet at me all specific questions and I’ll do my best to answer them all in this post.

For now, here are the opening parts broken down:


This is straightforward. For information about the “required exit for RTF” section read my post on this equation.
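I won't reproduce the linked post's exact equation here, but a common back-of-envelope version of "required exit to return the fund" (RTF) looks roughly like this. The numbers are illustrative and the function names are my own, not the memo's:

```python
def ownership_at_exit(initial_ownership, dilution_per_round, future_rounds):
    """Initial stake diluted through future financings (assuming no pro-rata follow-on)."""
    return initial_ownership * (1 - dilution_per_round) ** future_rounds

def required_exit_for_rtf(fund_size, ownership):
    """Exit value at which this single position returns the entire fund."""
    return fund_size / ownership

# Illustrative: a $100M fund buying 10% at seed, diluted ~20% per round
# over three future rounds, holds ~5.1% at exit...
stake = ownership_at_exit(0.10, 0.20, 3)
# ...so the company needs to exit near $2B for the position to return the fund.
print(required_exit_for_rtf(100e6, stake))
```

The point of running the math backwards like this is that it separates the company from the investment: if the required exit is implausible for the market in question, a good company may still not be a good investment at that price.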


I try my best to understand the key risks within a given business, both in going from Seed to A and in becoming a fund-returning investment. The biggest learning as I’ve done more investing is that for many risks, there are no mitigations.

Sometimes founders are first-time founders who have never managed a budget before, others have been at big companies and you worry about their ability to move quickly and efficiently, sometimes they have to build really expensive and high-quality engineering teams. Often for these more amorphous risks, the *honest* mitigation is just a leap of faith, or maybe a small proof point in their past or that came through a reference we did on the founders. In most of these cases I will just write “N/A”.

Other common risks are (but not limited to):

  • Concern about the founding team’s ability to fundraise and the market dynamics within the industry — As an investor often investing in highly technical businesses that won’t have major proof points at Series A, it’s unfortunately important for us to think about follow-on capital dynamics: Have any other VCs made investments in competing companies (essentially boxing them out for follow-on)? Is there a deep pool of possible strategic investors (these can be helpful for early commercialization or less valuation-sensitive follow-on)? Is there a specific love for this type of technology from international, non-VC investors?

  • Raising enough capital to hit a meaningful inflection point (this can be both technical and execution risk roped into one) — I think about investing as giving a company enough money and help to reach an inflection point of value so that it can raise more money (eventually leading to a successful exit). It’s kind of sad when put that way, but that’s the most capitalistic view of what we do as seed investors (along with partnering with great founders, helping people achieve dreams, and whatever else we want to pander about on Twitter to make ourselves feel/look better). For some businesses, related to the first bullet, it’s actually unclear *what* the right inflection point will be. My job as a lead seed investor is to help our founders know these metrics at a market level and at an individual investor level. Unlike in more well-known areas like enterprise SaaS, where people generally have an idea of the growth rate and ARR required to raise a strong A (until a hot company blows those expectations out of the water), the areas I invest in are less clear. For robotics, some Series A investors will want to see lots of pilots and customer data paired with LOIs, some just want to see a scalable MVP and an elite team, and others think they want to do Series A robotics investing but really don’t, and won’t get comfortable unless another firm outbids them.

  • Market Size — I admittedly don’t think too deeply about this nor (despite my hedge fund background) do I care to do a finance-style analysis on TAM, especially not in the businesses that I often am investing in. I have a general view of “can you build a $100M+/year business over the next 4–7 years, or not” and that’s about it. Projecting TAMs for companies that are solving previously unsolved problems is guesswork at best.

  • First-time founders — Again, I worry about this from specific skillset angles like ability to manage capital or inspire people to take less salary to join their company, but there isn’t much mitigation. For a few companies a mitigation has been on the acqui-hire value of the people based on their unique skillsets, but this often decays within industries over time.

More case-specific examples of mitigations include:

  • Simulation is built in-house — In AV-first simulation this was a key risk: I was concerned that behaviors from players like Waymo and Zoox, which had built fully-featured simulation software stacks, could continue to proliferate to any of the long-lasting AV companies over the next few years, as these businesses built up early revenue before expanding into other verticals.
    Mitigation: “Tech-heavy OEMs like Tesla, GM, Ford may be able to attract software talent, but with the projected rise of auto OEMs + Tier 1s, many won’t be able to hire the software expertise to build. The key factor here is the delta in time between proliferation of AV teams (and thus the business that can be built) vs. when someone solves AV and we see industry contraction.”

  • Slowed market expansion means fast traction will come from large customers early on, which may be hesitant to implement at scale — This was for a company working in a massive but highly technical industry, selling infrastructure to those building companies within that industry. My concern was that in this industry I had repeatedly seen companies go from proof-of-concept to pilot, but very few had expanded into scaled recurring revenue, leading to a bunch of bridge rounds to nowhere as everyone waits for the market to mature.
    Mitigation (anonymized): (Strategic Investor) is investing $xM into this round and plans to pilot and then scale new tech product across multiple business lines. The company has shown an ability already on old product to break through procurement at large Fortune 1000 customers.

Deal Structure/Offer Context

The structure is pretty simple: How much are we investing, what is the valuation, what is the total price, and if there are any additional terms (is another VC fund or studio getting warrants for some reason? Is the liquidation preference non-market? Will the founders hold legal responsibility post an acquisition?)

The context is what matters far more: Who else is competing for this investment? What is it that we feel we must write check size-wise to win? Where in the process are the founders and what do they value most? Will we need to potentially bridge this company so should we hold reserves between seed and A?

There are many variables here that lead to where in our investment range we invest (both on the founder side and our side).

If you have any thoughts, questions, or feedback feel free to DM me on twitter. All of my other writing can be found here.

On My Mind - #6

Animation is eating the world | Commoditization in machine learning | Starting your career in Consumer VC | ML + Animals

On My Mind - by Michael Dempsey


Two self promotion thoughts this time!

I wrote about animation (#1).

I was on Erik Torenberg’s podcast (#5).

Main Thoughts

1) Animation is Eating The World

I wrote something on the history of animation, the future of animation, and how different technological breakthroughs have had profound effects at various points in time. This newsletter is focused on animation because I wrote something really long about it. My main view is that animation is vastly undervalued, underappreciated, and on the brink of a new explosion of content that is incredibly valuable to many stakeholders. This piece is the result of lots of research, conversations, and copious notes. It’s a long read, but one that I think has lessons that apply to multiple industries within tech broadly, consumer specifically, and of course media.

The website I built for the piece is slow to load, but I think visually important to read on, so please wait the full ~10-20 seconds for it to load. Again, it’s a long read so feel free to use the table of contents to skip around and read what interests you.

Also please share it!

2) We have both heavily overestimated and underestimated the commoditization curve in machine learning.

We underestimate ML in certain areas and drastically overestimate it in others, leading to the deaths of companies whose innovation was commoditized (object-recognition-as-a-service companies), and of others that bet on a commoditization curve that never came (Jibo). This has made it incredibly difficult for founders in the ML space to understand what investors want, and for investors to understand what is truly defensible and won’t be pushed down to a near-free, general-purpose horizontal model.

We as investors say hand-wavy things like “ability to acquire proprietary datasets” for some vertical ML applications, but an increasing number of areas have shown that this isn’t as advantageous or defensible as some believe. I’m incredibly interested in reading an updated take on Google’s One Model to Learn Them All paper.

Increasingly, I’ve distilled my view of defensibility down to an elite ability to adapt (and then expand) research into a commercializable product. While this sounds simple, it requires a pretty complex blend of research-brain and production-brain founders, which is rare.

Related - It's been fascinating to see how the "minimum viable implementation" of ML can stay strong for a really long time, while 10x-better implementations don't cause any excitement. Specifically, I'm thinking of neural style transfer and how applications like Prisma took early steps at this and wowed consumers (for a brief time). Now we're seeing increasingly scalable or transferrable, bleeding-edge research related to this that few people care about or notice.

3) Is specializing in Consumer early in your VC career a bad idea?

I originally thought the best advice for new VCs was to not do consumer, but now I'm not 100% sure. It may be smart to do consumer because you'll get a super fast feedback loop (though the ability to sift signal from noise as to why something worked in the short term but didn't generate a long-term valuable business is tough). My early thought as to why you shouldn't do consumer (specifically consumer social or digital consumer businesses) was that the failure rate is just significantly higher/faster, with little process for or understanding of failures vs. wins, and often a difficult-to-support investment thesis if the company fails.

It could be dominant to do consumer though as a junior VC if you can capitalize on a few hot, early deals and then leverage that into a more stable, longer-term role, before your consumer company burns out or sells in a premium acqui-hire.

Within non-digital consumer (think consumer brands/CPG and even consumer healthcare products), we’ve seen an ability to build single-digit-$M/year businesses at a faster rate, but it’s still incredibly difficult to break past that $10-$20M/year ceiling. I haven’t heard an incredibly compelling vision, even from the elite consumer investors, as to how to distinguish those types of companies from capped-upside companies. There’s an argument to be made that for D2C businesses you’ll have a lower failure rate, so you won’t have to burn as much capital making mistakes early on (though in reality, a 1x isn’t too different from a 0x when it comes to burning capital). The issue is, you will also probably have lower upside, so if you’re doing these investments at the scale of a traditional technology VC firm, your results may not be as valued vs. a firm built around the dynamics of D2C businesses. And the lower failure rate today (paired with larger funds that need to pour $$ into companies) can lead to overpriced early rounds on small signs of traction, as full-stack VCs (more on them later) see an avenue to quickly putting tens of millions of dollars to work on CAC.

On the consumer digital side, many firms seem to be sticking to the playbook of wanting to play the call-option game (write tiny seed checks into companies out of a $250M+ fund with the hopes of leading their A), but I don't know if that really works in a highly competitive series A+ environment.

This may all be a moot discussion though, as I’m not sure being a horizontal “consumer investor” means the same thing anymore, or is even possible at an individual level. What I mean by this is that post-Facebook/Twitter/Zynga/Snap we had a rush of people wanting to find the next mobile consumer social win, which led to bets on companies like Houseparty/Meerkat, Secret, Whisper, Peach, and more (I don’t think any of these companies returned a fund, except maybe a Discord, which could but is a fundamentally different product). Now, however, the explosive consumer companies have looked more like Uber/Lyft, Bird/Lime, Hims/Ro, Allbirds, and some others I’m forgetting. I’m not sure it’s the same profile of investor (or even a common thread between companies) that will see and get excited enough by all of those at seed to lead. Bird/Lime will feel complex and capital intensive to an investor who loves Hims/Ro’s ability to instead spend their VC money on digital acquisition. Allbirds’ upside will feel capped, or its moat weak, compared to Hims/Ro’s recurring nature and ability to go incredibly horizontal. Hims/Ro will feel like a regulatory risk down the road compared to Allbirds’ responsible brand and clear ability to become the next big shoe company, etc.

Back to the point about call-options: I'm confident that having a mandate to spray and pray at consumer seed is the worst of both worlds. Many full-stack firms have started to do this, and while you may get some early founder face time with a now-hot Series A consumer deal, I'd be surprised if it leads to a meaningful win rate at A rounds (does anyone want to share data?).

There are a lot of dynamics at play here that I’ve written about horribly above, so let’s get back to the core question: is doing consumer early in your VC career dominant? The answer is, it depends. A less cop-out answer: if you’re a new GP who will now be measured on the economics/returns you bring into the fund, and you have two funds to prove it, maybe not. But if you’re a junior person looking to parlay a brand elevation into a better role (or learn quickly and get out of venture), maybe so.

4) Applying deep learning to identify patterns in animal behavior could lead to new understandings of how they work.

This paper digs into detecting pain in horses via DL. You can start to see a slope where we end up being able to better understand animal health and preferences through patterns, similar to how we can visually identify these things in humans with enough pattern recognition (although thus far computer vision and deep learning have proven quite poor at truly understanding emotion from visual cues alone). Nothing concrete here, but it's an interesting rabbit hole to go down in academia, and an even more interesting future to imagine where we can both communicate with animals and have a much deeper understanding of them.

5) I did my first podcast since 2015. I need to be better at this.

I went on Erik’s podcast (listen here) and discussed a bunch of things I care about, including machine learning, robotics, family planning/women’s health, animation, gender fluidity, and more. I definitely spoke too fast and used the word “like” too much, but it was a lot of fun. We didn’t go aggressively deep into each topic, but if you have any thoughts or want to discuss further, feel free to email me or tweet at me and I’d be happy to!

Other papers/things I’ve read

  • Generating 3D models of clothing from 2D images - There are multiple obvious use cases for this research, but the maybe under-the-radar one is for avatars and digital celebrities. One of the bigger issues people like CJW (creator of Shudu) have talked about is creating digital clothing. Miquela has historically had similar problems, and thus their approach has often been to not digitally recreate the clothing at all.

  • A dataset for facial recognition in cartoons - Here are some thoughts I tweeted related to this. TLDR is that while these results actually weren’t great, it’ll be interesting to see more datasets emerge surrounding cartoons, as I wrote about 4 issues ago.

  • FaceSpoof Buster - This paper is another in a string that I have continued to read and catalogue related to either tricking facial recognition systems, or identifying the various types of spoofs within them. If you’re interested in this topic I’d recommend reading through some of the related works within this paper.

If you have any thoughts or feedback feel free to DM me on twitter. All of my other writing can be found here.

On My Mind #5

Small dataset ML, All VCs short startups, We're in a restaurant bubble

5 things on my mind - by Michael Dempsey

What an intermission. We’re back. Hopefully every 2 weeks. This email spawns from this thread. The process for this will continually evolve but as you'll see, some thoughts are random, and most are unfiltered or poorly edited. Either way, let me know what you like, don't like, or want to talk about more.

5 Thoughts

1) The over-intellectualization of thought. A tweetstorm.

“Someone had to say it”. Here.

2) We may be in a restaurant group bubble because unlike technology, restaurants aren’t nearly as scalable or profitable.

A decade ago there were significantly fewer restaurant groups with multi-geo expansion goals. Similar to startups, in restaurants we now have poster-children and idols for new restaurateurs with large ambitions, we have lots of capital that has flooded into private markets to fund this expansion, and we have clear trends that people can coalesce around (analogous to enabling technologies).

Zuckerberg (web 2.0) could be viewed as Danny Meyer, who made tons off of successfully scaling Shake Shack (early displacement of fast food) and some off of USHG (dirty secret here: Shake Shack Yelp reviews outside of NYC are significantly worse than in its hometown). Related read on USHG here.

Evan Spiegel (mobile) could be viewed as the Sweetgreen team, who scaled the next main platform, QSR, via trends of healthy, premium, narrative-driven food in a post-McDonald’s/Chipotle world.

David Chang of Momofuku fame fits somewhere between the two generations. He was responsible for pushing a certain aesthetic and lust for international, full-service cuisine in the US, while also being tempted into other paths that failed (Lucky Peach, Ando, Maple). Maybe in an unfair world, he’s Ev Williams (Twitter -> Medium)? The difference being that Chang is now on a capitalistic, hell-bent path toward expanding Fuku (his fried chicken sandwich concept).

These idols, capital sources, and strong trends have now created a moderate bifurcation of restaurateurs. Either you are a sole proprietor with maybe a sister location within your city, or you’re someone who was this but, now with a solid Michelin mention or a NYTimes, Infatuation, or Eater review, is scaling to 2-5 more restaurants and thinking about how quickly you can move to LA, SF, NYC, Miami, or (god forbid) Las Vegas.

Modern restaurateurs have looked at this niche consumer trend, and the general rejection of chains, with a view that their concept scales. The issue is, many don’t. They don’t scale because of supply chain economics, because of operational consistency, and because in each city they move into there’s someone just like them (or about to be) to compete with.

Restaurants don’t feel like they should be a power-law industry, but I don’t think returns will be as evenly distributed as many fast-expanding restaurateurs believe. There is a glut of restaurants that work in core markets at high prices and continue to try to expand rapidly on full service without understanding the complexity. This will lead to a graveyard of formerly great restaurants that will scale back if they’re lucky, or die.

3) Small dataset machine learning is more important than you know.

One of the big areas of machine learning I've been focusing on has been small dataset tools. The dirty secret within many creative ML models is that the scale and cleanliness of the data required are remarkably high. For example, one of the earliest papers on generating anime faces features a dataset of close to 50k images, all with varying (but closely cropped) head poses. More recently, Nvidia’s StyleGAN paper used a new dataset of 70k images (FFHQ) in order to have more variation, leading to better diversity and quality of generation. What we've started to see now is people experimenting with new forms of transfer learning, transferring a pre-trained model onto a new domain. This paper showed it with as few as 25 new images applied to a previously trained model. I expect this trend to only increase, as few practitioners have the budget or time to properly curate a dataset. In addition, many emerging use cases may need a dataset size that is traditionally impossible to gather.
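As a rough illustration of why starting from a pre-trained model helps in the small-data regime, here's a minimal sketch. It swaps the actual GAN setup for a linear model (all dimensions, noise levels, and the ridge penalty are hypothetical stand-ins, not numbers from the papers above): pretrain on a large source dataset, then fit only a small penalized correction on 25 target examples, versus training from scratch on those same 25 points.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20  # feature dimension (hypothetical)

# "Pretrain" on a large, clean source dataset.
X_src = rng.normal(size=(5000, d))
w_src = rng.normal(size=d)
y_src = X_src @ w_src + 0.1 * rng.normal(size=5000)
w_pre, *_ = np.linalg.lstsq(X_src, y_src, rcond=None)

# Target domain: a nearby task with only 25 noisy labeled examples.
w_tgt = w_src + 0.02 * rng.normal(size=d)
X_new = rng.normal(size=(25, d))
y_new = X_new @ w_tgt + 0.3 * rng.normal(size=25)

# "Fine-tune": keep the pretrained weights and fit only a ridge-penalized
# correction — the linear analogue of freezing a network and tuning its head.
lam = 100.0
resid = y_new - X_new @ w_pre
delta = np.linalg.solve(X_new.T @ X_new + lam * np.eye(d), X_new.T @ resid)
w_finetuned = w_pre + delta

# Baseline: train from scratch on the same 25 points.
w_scratch, *_ = np.linalg.lstsq(X_new, y_new, rcond=None)

# Compare on held-out target data.
X_test = rng.normal(size=(2000, d))
y_test = X_test @ w_tgt
mse_finetuned = float(np.mean((X_test @ w_finetuned - y_test) ** 2))
mse_scratch = float(np.mean((X_test @ w_scratch - y_test) ** 2))
```

With 25 samples and 20 parameters, the from-scratch fit badly overfits, while the fine-tuned model inherits most of its structure from pretraining and only has to learn a small correction, which is the whole appeal of these small-dataset transfer approaches.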

4) Remember shorting is capped upside, unlimited downside. Also, all VCs sorta short startups implicitly.

Friends know I am fairly active in shorting stocks in my personal public market portfolio and often enjoy talking about that more than longs. Being able to identify faults in public companies, when nearly every external pressure has pushed them to rise in value over the past decade, is a valuable skillset and a fascinating thought process. I think the stance some in VC take that we are long-only investors is overly literal, and is either a mis-evaluation of what exactly our job is or a marketing ploy.

As venture dollars have flowed, and very few startups are *truly* one-of-a-kind from a business model perspective, venture investors are forced to place bets on outperformance and underperformance across categories, and because we largely operate with the belief that we are investing in duopolistic markets, our companies are often near zero-sum. Thus, while we are only allocating dollars to long positions, we implicitly are making decisions based on being short other businesses.

With respect to public market shorts, the one thing to realize when taking a financial position is that upside is capped in shorts (a stock can only fall 100%, but can grow infinitely). This means two things.
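The asymmetry is easy to see in the arithmetic of a short position. A quick sketch with made-up prices (not any particular trade):

```python
def short_pnl(entry_price, exit_price, shares):
    """P&L of a short position: you profit when the price falls below entry."""
    return (entry_price - exit_price) * shares

# Best case: the stock goes to zero, and you keep 100% of the proceeds...
best = short_pnl(100.0, 0.0, 10)          # the maximum possible gain
# ...but a stock can rise without bound, so losses are uncapped.
bad = short_pnl(100.0, 500.0, 10)         # a 5x move against you, and it can keep going
```

Going long inverts this: the loss is capped at the capital deployed, and the gain is unbounded, which is the shape VC portfolios are built around.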

First, most people shouldn’t short stocks. Markets trend up over time and there’s tons of literature as to why holding and participating in key rally days drastically impacts returns.

Second, sometimes it's dominant in the short to mid-term to take a market-neutral stance (a less correlation-reliant version of a pairs trade) so that you are only making an implicit competitive bet. An example in practice would be going long Uber and short Lyft at the same time, constantly re-balancing the positions as they develop, so that your financial success is tied merely to Uber outperforming Lyft, not to Uber ultimately winning.
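A minimal sketch of the dollar-neutral version of that trade, with hypothetical names and returns (this ignores borrow costs, rebalancing, and correlation drift, which matter in practice):

```python
def market_neutral_pnl(capital, long_ret, short_ret):
    """P&L of a dollar-neutral pair: half the capital long one name,
    half short the other. Profit depends only on the long leg
    outperforming the short leg, not on market direction."""
    long_leg = (capital / 2) * long_ret      # gains when the long rises
    short_leg = -(capital / 2) * short_ret   # gains when the short falls
    return long_leg + short_leg

# Hypothetical selloff: both names fall, but the long name falls less.
# The relative bet was right, so the pair is profitable anyway.
pnl = market_neutral_pnl(100_000, -0.10, -0.25)
```

Note that the same position loses money if both names rally but the short leg rallies harder, which is exactly the "outperformance, not winning" framing above.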

5) Construction sites feel like the next battlefield for robotics after factory floors.

There are so many startups in this space.

If you have any thoughts or feedback feel free to DM me on twitter. All of my other writing can be found here.
