On My Mind - # 8

Seeking inspiration, CTRL-Labs & Fund Dynamics, Digital goods of tomorrow

On my mind - by Michael Dempsey

This email spawns from this thread.

These thoughts are poorly edited and researched and usually span areas of venture capital, emerging technology, and other random thoughts.

I’d love for this newsletter to be more of a discussion vs. me shouting to the internet, so please email me back and I promise to respond.

5 Thoughts

1) I’m seeking inspiration. Let’s find it together.


Here are a few areas I’ve spent a lot of time in this year:

  • Adversarial attacks on machine learning models

  • AI friends

  • Animation

  • Avatar-first products

  • Creative ML

  • Gender fluidity and its impact on consumer

  • Human pose estimation research

  • Fashion + ML

  • Full-stack robotics companies

  • Future of family planning

  • Modified humans, animals, and plants

  • Psychedelics and their impact on healthcare

  • Using ML to decode non-verbal communication across animals and humans

Here are other areas I want to continue to talk to people about that I’ve spent some time in this year:

  • Science-based CPG products

  • The past, present, and future of game engines and large scale simulation engines

  • Social VR

  • Projection mapping + holograms

  • Enabling more angel investors

  • Space (non-earth observation related)

If you’d like to chat about any of these or other post-science project ideas, feel free to respond to this email or directly email mike@compound.vc

2) On CTRL-Labs & Fund Dynamics

The tweet above triggered some conversations over the past 24 hours. Friends at Lux, Spark, Founders Fund, GV, and others were all right in a big way, and right on a difficult space to build conviction in. To his credit, Josh Wolfe was a giddy public cheerleader for this company since before he invested (I remember how excited he was the first time he told me about them). It is easy to be a cheerleader as a VC when a company is at Series C+ and has nailed commercialization; it's a lot harder when it's a Series A neural compute interface startup, so props to Josh.

The point I was making with my tweet was that creeping fund size and round dynamics in 2019 make it really difficult to nail venture returns and change investing dynamics...and this company is a great example of that.

Large “full-stack” firms that view their entry point as majority Series A now must deploy so much capital into a company over its lifespan, gathering more ownership and thus trading deal multiple for cash-on-cash returns, as they target their 2.5-3x net fund. The dynamics of "will this return the fund" may no longer necessarily apply to these firms. Instead, with larger funds (and opportunity funds), the question weighs capital deployment over initial ownership much more strongly than in vintages past.

The question becomes: "Can we get enough capital into this company so that, if it becomes the outlier we hope to have 1-2x per fund, we'll return the fund (or maybe 75% of it)?"
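The back-of-envelope version of that question can be sketched with made-up numbers (the fund size, ownership percentage, and function name here are illustrative assumptions, not figures from any of the firms discussed):

```python
# Hypothetical "return the fund" arithmetic with illustrative numbers.
def required_exit(fund_size, ownership_at_exit):
    """Exit value at which a single position returns the entire fund."""
    return fund_size / ownership_at_exit

fund = 400e6        # a hypothetical $400M fund
ownership = 0.15    # assumed 15% ownership at exit, after dilution

print(required_exit(fund, ownership) / 1e9)  # ~2.67 -> roughly a $2.7B exit needed
```

Deploying more capital to hold more ownership at exit shrinks the required outcome, which is exactly the pressure that pushes large funds toward heavy follow-on deployment.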

But well-regarded founders can command substantial capital ahead of traction, and those founders working on massive-upside, possible platform companies can further bend round dynamics in a capital-abundant world. Specifically, for opportunities that are as "moonshot", capital-intensive, and high-upside as something like CTRL-Labs is/was, it then makes sense to bend but not break: get into the early rounds with the hope that you can deploy not only your venture fund into the company, but also your opportunity fund capital. This could explain how five firms with $400M+ funds ended up splitting (to varying degrees) three announced rounds ($11M, and two $28M rounds).

Regardless of these dynamics, this is a great investment and as noted, the IRR has got to be incredible.

Two interesting notes in terms of the investors:

  1. You can make the argument that Spark has now had this happen to them 3 times, as early backers of Oculus (alongside Matrix, who also invested in CTRL-Labs) and Cruise. So impressive.

  2. Lux on a different vector (massive moonshot, capital intensive) saw this play out successfully (though over more time) with Auris Surgical (sold for $3.4B-$5B+, raised $733M).

Okay, enough armchair VCing, congrats to my friends who invested, and maybe to NYC which hopefully will get a few more deep tech angel investors in the mix.

3) Will digital goods be what kids today buy as adults tomorrow?


I recently read This Is Not a T-Shirt by Bobby Hundreds. In the book he makes a point about Japan's role in sneaker collecting culture. Retro AF1s, Dunks, etc. were really sought after in the 2000s because the kids who wanted these shoes in the 1980s were now young professionals with money and were coming to market to "fulfill their fantasies". What is the version of those goods today? What will today's children lust after 10-20 years from now once they have some discretionary income? Is it goods that already fetch high prices on the aftermarket and are modeled from the beginning as scarce (a la Yeezys or Supreme), or is it digital goods that they grew up with but couldn't convince their parents to purchase (Roblox goods/Fortnite skins)? And if it IS digital goods, then how will aftermarkets really exist outside of NFTs (maybe NFTs are the answer) when the platforms these digital goods live on become irrelevant or stop functioning?

4) Expectations vs. reality, VC & parenting edition

In the early days of your venture career you often think about all the amazing things you’re going to do to find people, help companies, invest better, etc. Some of them truly do give alpha, while many of them fail for various reasons. As your portfolio grows you start to see where things break in this business and have to be very mindful about the experiments you run to achieve better returns, founder NPS, scale, and more.

Some of these lessons are ones that others who have come before you can try to teach you, but you really need to learn them for yourself.

I wonder if the same thing happens with parenting. You think you’re going to do all of these amazing things for/with your children, but eventually you realize at each age the diminishing (or negative) returns that these idealistic thoughts have, and/or life continues to get in the way of some of these ambitions, and you end up not being able to actually do all of the things that in your childless state you thought you would be able to do one day.

5) Novelty Decay in AI-generated/synthetic content is taking hold


I wrote a quick note on how the novelty of both digital celebrities and AI-generated music is decaying and capping the upside of new entrants. I only expect this to accelerate.

Research / Links

  • Game character generation from a single photo - This paper has really interesting implications surrounding 2D photo → 3D asset generation. In general this is an area of research we’ve seen a bunch of people take aim at over the years (most famously, and early, Hao Li’s lab, and more recently everyone from universities to Facebook). There are novel bits and pieces technically, but the main thing is that this is a future that many people have thought about or wanted for some time (Loic at Morphin has often talked about how his origin story of his company started by wanting to put himself in FIFA and other games). Just always cool to see clear science fiction start to push forward towards reality on fairly “mainstream” ideas.

  • How much real data do we actually need: Analyzing object detection performance using synthetic and real data - This paper takes a look at the value that synthetic data can bring to training models. Specifically, it realizes something that I don't feel is intuitive to most, which is that open-sourced synthetic training datasets don't generalize very well, nor do they have great diversity of data (these problems go hand in hand). What I've seen spending time in this space is that diversity of data is incredibly misunderstood and underemphasized. Even with something seen as the "gold standard" years ago, GTA V, researchers and engineers I spoke to at the time realized that the diversity of textures and more was incredibly underwhelming and wouldn't transfer learnings well to a real-world environment. What we've started to see now is data expansion via style transfer or outright synthesis, essentially utilizing GANs to increase diversity. My feeling on this paper is that it is quite negative for how many perceive synthetic data, especially in areas where highly generalizable perception models are needed (think self-driving); however, it also tells me that companies focusing on specific types of perception/understanding are likely far ahead of where current open-source datasets are. I previously wrote about AV simulation and am an investor in AI Reverie.

  • Making the Invisible Visible: Action Recognition Through Walls and Occlusions - Using a combination of visual and non-visual signals to recognize actions through walls and more. Pretty incredible.

  • Adblockradio - Using ML to remove ads from radio feeds. Pretty funny.

If you have any thoughts, questions, or feedback feel free to DM me on twitter. All of my other writing can be found here.

On My Mind - #7

Here's my investment memo template

On my mind - by Michael Dempsey

This email spawns from this thread. The process for this will evolve but as you'll see, some thoughts are random, and most are unfiltered or poorly edited. Either way, let me know what you like, don't like, or want to talk about more.

1 Thought

Spoiler alert: There was interest, so see below for my memo template and high-level thoughts.

1) Thoughts on Investment Memos

I’ve talked a lot about investment memos in the past and how I believe we should be more open/transparent about the overall structure. At times, I’ve even shared the exact internal memo with other friendly investors who were doing diligence for a round I was leading. 

The summary of my thoughts is:

How I Use Memos

I’d say ~70% of the time when I write a memo, I end up offering to make an investment in the company, so sometimes a memo goes nowhere but into my archives.

I usually start my memo after meeting #2 with a company, and I've found it's very helpful in figuring out what I have left to understand about the business and what areas I should push on, and in separating the company from the investment (i.e. a good company doesn't always mean a good investment, often due to price).

At times where I have expertise overlap with my partners on an investment, I also will use the memo to bring them up to speed.

My memos traditionally run long for seed (8–10 pages on average) and have a separate attachment of all of the notes I've taken as I've been meeting with the company. If we've had phone or video calls, those notes run multiple pages and include multiple direct quotes from the founder. If our meetings have been in person, I usually distill my notes after the fact, as I don't love taking handwritten notes and I don't want to have electronics out in person.

For deals that move quickly, I almost always still try to write a deeper memo within weeks after we invest, to have that record of thinking.

The Actual Memo - CLICK HERE

Here is my investment memo template in google doc form (note: we don’t use a standardized memo at Compound, each partner has their own version/structure). 

In a future post, if there is interest, I’ll walk through various parts of the memo. Please email or tweet at me all specific questions and I’ll do my best to answer them all in this post.

For now, here are the opening parts broken down:

Summary:

This is straightforward. For information about the “required exit for RTF” section read my post on this equation.

Risk/Mitigations:

I try my best to understand the key risks within a given business, both in going from Seed to A and in becoming a fund-returning investment. The biggest learning as I've done more investing is that for many risks, there are no mitigations.

Sometimes founders are first-time founders who have never managed a budget before, others have been at big companies and you worry about their ability to move quickly and efficiently, sometimes they have to build really expensive and high-quality engineering teams. Often for these more amorphous risks, the *honest* mitigation is just a leap of faith, or maybe a small proof point in their past or that came through a reference we did on the founders. In most of these cases I will just write “N/A”.

Other common risks are (but not limited to):

  • Concern about the founding team's ability to fundraise and the market dynamics within the industry — As an investor often investing in highly technical businesses that won't have major proof points at Series A, it's unfortunately important for us to think about follow-on capital dynamics: Have any other VCs made investments in competing companies (essentially boxing them out for follow-on)? Is there a deep pool of possible strategic investors (these can be helpful for early commercialization or less valuation-sensitive follow-on)? Or even: is there a specific love for this type of technology from international, non-VC investors?

  • Raising enough capital to hit a meaningful inflection point (this can be both technical and execution risk roped into one) — I think about investing as giving a company enough money and help to reach an inflection point of value so that they can raise more money (eventually leading to a successful exit). It's kind of sad when put that way, but that's the most capitalistic view of what we do as seed investors (along with partnering with great founders, helping people achieve dreams, and whatever else we want to pander about on twitter to make ourselves feel/look better). For some businesses, related to the first bullet, it's actually unclear *what* the right inflection point will be. My job as a lead seed investor is to help our founders know these metrics at a market level, and at an individual investor level. Unlike in more well-known areas like enterprise SaaS, where people generally have an idea of the growth rate and ARR required to raise a strong A (until a hot company blows those expectations out of the water), the areas I invest in are less clear. For robotics, some Series A investors will want to see lots of pilots and customer data paired with LOIs, some just want to see a scalable MVP and an elite team, and others think they want to do Series A robotics investing but really don't, and won't get comfortable unless another firm outbids them.

  • Market Size — I admittedly don’t think too deeply about this nor (despite my hedge fund background) do I care to do a finance-style analysis on TAM, especially not in the businesses that I often am investing in. I have a general view of “can you build a $100M+/year business over the next 4–7 years, or not” and that’s about it. Projecting TAMs for companies that are solving previously unsolved problems is guesswork at best.

  • First-time founders — Again, I worry about this from specific skillset angles like ability to manage capital or inspire people to take less salary to join their company, but there isn’t much mitigation. For a few companies a mitigation has been on the acqui-hire value of the people based on their unique skillsets, but this often decays within industries over time.

More case-specific examples of mitigations include:

  • Simulation is built in-house — In AV-first simulation this was a key risk: I was concerned that behaviors from players like Waymo and Zoox, which had built fully-featured simulation software stacks, could continue to proliferate to any of the long-lasting AV companies over the next few years, as these businesses built up early revenue before expanding into other verticals.
    Mitigation: “Tech-heavy OEMs like Tesla, GM, Ford may be able to attract software talent, but with the projected rise of auto OEMs + Tier 1s, many won't be able to hire software expertise to build. The key factor here is the delta in time between proliferation of AV teams (and thus the business that can be built) vs. when someone solves AV and we see industry contraction.”

  • Slowed market expansion means fast traction will come from large customers early on which may be hesitant to implement at scale. — This was for a company working on a massive, but highly technical industry and selling infrastructure to those building companies within that industry. My concern was that in this industry I had repeatedly seen companies be able to go from proof-of-concept to pilot but very few had expanded into scaled recurring revenue, leading to a bunch of bridge rounds to nowhere as everyone waits for the market to mature.
    Mitigation (anonymized): (Strategic Investor) is investing $xM into this round and plans to pilot and then scale the new tech product across multiple business lines. The company has already shown an ability, with its old product, to break through procurement at large Fortune 1000 customers.

Deal Structure/Offer Context

The structure is pretty simple: How much are we investing, what is the valuation, what is the total price, and if there are any additional terms (is another VC fund or studio getting warrants for some reason? Is the liquidation preference non-market? Will the founders hold legal responsibility post an acquisition?)

The context is what matters far more: Who else is competing for this investment? What is it that we feel we must write check size-wise to win? Where in the process are the founders and what do they value most? Will we need to potentially bridge this company so should we hold reserves between seed and A?

There are many variables here that lead to where in our investment range we invest (both on the founder side and our side).

If you have any thoughts, questions, or feedback feel free to DM me on twitter. All of my other writing can be found here.

On My Mind - #6

Animation is eating the world | Commoditization in machine learning | Starting your career in Consumer VC | ML + Animals

On My Mind - by Michael Dempsey

This email spawns from this thread. The process for this will evolve but as you'll see, some thoughts are random, and most are unfiltered or barely edited. Either way, let me know what you like, don't like, or want to talk about more.

Two self promotion thoughts this time!

I wrote about animation (#1).

I was on Erik Torenberg’s podcast (#5).

Main Thoughts

1) Animation is Eating The World

I wrote something on the history of animation, the future of animation, and how different technological breakthroughs have had profound effects at various points in time. This newsletter is focused on animation because I wrote something really long about it. My main view is that animation is vastly undervalued, underappreciated, and on the brink of a new explosion of content that is incredibly valuable to many stakeholders. This piece is the result of lots of research, conversations, and copious notes. It's a long read, but one that I think has lessons that apply to multiple different industries across tech broadly, consumer specifically, and of course media.

The website I built for the piece is slow to load, but I think visually important to read on, so please wait the full ~10-20 seconds for it to load. Again, it’s a long read so feel free to use the table of contents to skip around and read what interests you.

Also please share it!

2) We have both heavily overestimated and underestimated the commoditization curve in machine learning.

We underestimate ML in certain areas and drastically overestimate it in others, leading to deaths of companies innovating due to the commoditization curve (object recognition as a service companies), and others dying betting on the commoditization curve that never came (Jibo). This has made it incredibly difficult for founders in the ML space to understand what investors want, and for investors to understand what is truly defensible and won’t be pushed down to a near free, general-purpose but horizontal model.

We as investors say hand wavy things like “ability to acquire proprietary datasets” for some vertical ML applications but increasingly areas have shown that this isn’t as advantageous or defensible as some believe. I’m incredibly interested in reading an updated take on Google’s One Model to Learn Them All paper.

Increasingly I've distilled my view of some defensibility as an understanding of, and elite ability at, research adaptation (and then expansion) into a commercializable product. While this sounds simple, it requires a complex blend of research brain and production brain that is rare in founders.

Related - It's been fascinating seeing how the "minimum viable implementation" of ML can stay strong for a really long period of time, where 10x implementations don't cause any excitement. Specifically, I'm thinking of neural style transfer and how applications like Prisma took early steps at this and wowed consumers (for a brief time). Now we're seeing increasingly scalable or transferable, bleeding-edge research related to this that few people would care about or notice.

3) Is specializing in Consumer early in your VC career a bad idea?

I originally thought the best advice for new VCs is to not do consumer, but now I'm not 100% sure. It may be smart to do consumer because you'll get a super fast feedback loop (though ability to sift signal vs. noise as to why something worked in short-term but didn't generate a long-term valuable business is tough). My early thought as to why you don't do consumer (specifically consumer social or digital consumer businesses) is because the failure rate is just significantly higher/faster, with little process or understanding of failures vs. wins, and often a difficult to support investment thesis if the company fails.

It could be dominant to do consumer though as a junior VC if you can capitalize on a few hot, early deals and then leverage that into a more stable, longer-term role, before your consumer company burns out or sells in a premium acqui-hire.

Within non-digital consumer (think consumer brands/CPG and even consumer healthcare products), we’ve seen an ability to build single-digit $M/year businesses occur at a faster rate, but it’s still incredibly difficult to break past that $10-$20M/year ceiling. I haven’t heard an incredibly compelling vision from some of the elite consumer investors as to how to tell those types of companies apart from capped-upside companies. There’s an argument to be made that for D2C businesses you’ll have a lower failure rate, so you won’t have to burn as much capital making mistakes early on (though in reality, a 1x isn’t too different from a 0x when it comes to burning capital). The issue is that you will also probably have lower upside, so if you’re doing these investments at the scale of a traditional technology VC firm, your results may not be as valued or matter as much vs. a firm built around the dynamics of D2C businesses. And the lower failure rate today (paired with larger funds that need to pour $$ into companies) can lead to overpriced early rounds with small signs of traction, as full-stack VCs (more on them later) can see an avenue to quickly putting tens of millions of dollars to work on CAC.

On the consumer digital side, many firms seem to be sticking to the playbook of wanting to play the call-option game (write tiny seed checks into companies out of a $250M+ fund with the hopes of leading their A), but I don't know if that really works in a highly competitive series A+ environment.

This may all be a moot discussion though, as I’m not sure being a horizontal “consumer investor” means the same thing anymore or is possible at an individual level. What I mean is that post-Facebook/Twitter/Zynga/Snap we had a rush of people wanting to find the next mobile consumer social win, which led to bets on companies like Houseparty/Meerkat, Secret, Whisper, Peach, and more (I don’t think any of these companies returned a fund, except maybe a Discord, which could, but is a fundamentally different product). Now, however, the explosive consumer companies have looked more like Uber/Lyft, Bird/Lime, Hims/Ro, Allbirds, and some others I’m forgetting. I’m not sure it’s the same profile of investor (or even a common thread between companies) that will see and get excited by all of those at seed enough to lead. Bird/Lime will feel complex and capital intensive to an investor who loves Hims/Ro’s ability to instead spend their VC money on digital acquisition. Allbirds’ upside will feel capped, or its moat weak, compared to Hims/Ro’s recurring nature and ability to go incredibly horizontal. Hims/Ro will feel like a regulatory risk down the road compared to Allbirds’ responsible brand and clear ability to become the next big shoe company, etc.

Back to the point about call-options, I will say that I'm confident that having a mandate to spray and pray at consumer seed is probably the worst of both worlds. Many full-stack firms have started to do this and while you may get some early founder face time with a now-hot series A consumer deal, I'd be surprised if it leads to a meaningful win-rate at A rounds (does anyone want to share data?).

There are a lot of dynamics at play here that I’ve written horribly about above, so let’s get back to the core question of is doing consumer dominant early on for your career in VC? The answer is, it depends. A less cop out answer could be: If you’re a new GP who is now going to be measured on the economics/returns you bring into the fund and you have 2 funds to prove it, maybe not. But if you’re a junior person looking to parlay a brand elevation into a better role (or learn quickly and get out of venture), maybe so.

4) Applying deep learning to identify patterns in animal behavior could lead to new understandings of how they work.


This paper digs into detecting pain in horses via DL. You can start to see a slope where we end up being able to better understand animal health and preference through patterns, similar to how we can identify these things visually in humans with enough pattern recognition (although thus far computer vision and deep learning have proven quite poor at truly understanding emotion from visual cues alone). Nothing concrete here, but an interesting rabbit hole to go down in academia, and an even more interesting future to imagine where we both communicate with animals and have a much deeper understanding of them.

5) I did my first podcast since 2015. I need to be better at this.

I went on Erik’s podcast (listen here) and discussed a bunch of things that I care about including machine learning, robotics, family planning/women’s health, animation, gender fluidity, and more. I definitely spoke too fast and used the word “like” too much but it was a lot of fun. We didn’t go aggressively deep into each of the topics but if you have any thoughts or want to further discuss, feel free to email me or tweet at me and happy to!

Other papers/things I’ve read

  • Generating 3D models of clothing from 2D Images - There are multiple obvious use-cases for this research, but the maybe under-the-radar one is for use on avatars and digital celebrities. One of the bigger issues people like CJW (creator of Shudu) have talked about is creating digital clothing. Miquela has historically had similar problems, and thus their approach has often been to not digitally recreate the clothing at all.

  • A dataset for facial recognition in cartoons - Here are some thoughts I tweeted related to this. TLDR is that while these results actually weren’t great, it’ll be interesting to see more datasets emerge surrounding cartoons, as I wrote about 4 issues ago.

  • FaceSpoof Buster - This paper is another in a string that I have continued to read and catalogue related to either tricking facial recognition systems, or identifying the various types of spoofs within them. If you’re interested in this topic I’d recommend reading through some of the related works within this paper.

If you have any thoughts or feedback feel free to DM me on twitter. All of my other writing can be found here.

On My Mind # 5

Small dataset ML, All VCs short startups, We're in a restaurant bubble

5 things on my mind - by Michael Dempsey

What an intermission. We’re back. Hopefully every 2 weeks. This email spawns from this thread. The process for this will continually evolve but as you'll see, some thoughts are random, and most are unfiltered or poorly edited. Either way, let me know what you like, don't like, or want to talk about more.

5 Thoughts

1) The over-intellectualization of thought. A tweetstorm.

“Someone had to say it”. Here.

2) We may be in a restaurant group bubble because unlike technology, restaurants aren’t nearly as scalable or profitable.

A decade ago there were significantly fewer restaurant groups with multi-geo expansion goals. Similar to startups, in restaurants we now have poster children and idols for new restaurateurs with large ambitions, we have lots of capital that has flooded into private markets to fund this expansion, and we have clear trends that people can coalesce around (analogous to enabling technologies).

Zuckerberg (web 2.0) could be viewed as Danny Meyer, who made tons off of successfully scaling Shake Shack (early displacement of fast food), and some off of USHG (dirty secret here: Shake Shack Yelp reviews outside of NYC are significantly worse than in its hometown). Related read on USHG here.

Evan Spiegel (mobile) could be viewed as the Sweetgreen team, who scaled the next main platform, QSR, via trends of healthy, premium, narrative-driven food in a post-McDonald’s/Chipotle world.

David Chang of Momofuku fame fits somewhere between the two generations. He was responsible for pushing a certain aesthetic and lust for international, full-service cuisine in the US, while also being tempted into other paths that failed (Lucky Peach, Ando, Maple). Maybe in an unfair world, he’s Ev Williams (Twitter -> Medium)? The difference being that now Chang is on a capitalistic, hellbent path for Fuku (his fried chicken sandwich concept) expansion.

These idols, capital sources, and strong trends have now created a moderate bifurcation of restaurateurs. Either you are a sole proprietor with maybe a sister location within your city, or you were this but now, with a solid Michelin mention or a NYTimes, Infatuation, or Eater review, you’re scaling to 2-5 more restaurants and thinking about how quickly you can move to LA, SF, NYC, Miami, or (god forbid) Las Vegas.

Modern restaurateurs have looked at this niche consumer trend, and the general rejection of chains, with a view that their concept scales. The issue is, many don’t. They don’t scale because of supply chain/economics, because of operational consistency, and because in each city they move into, there’s someone just like them (or about to be) to compete with.

Restaurants don’t feel like they should be a power-law industry, but I don’t think returns will be as evenly distributed as many fast-expanding restaurateurs believe. There is a glut of restaurants that work in core markets at high prices and continue to try to expand rapidly on full service without understanding the complexity. This will lead to a graveyard of formerly great restaurants that will scale back if they’re lucky, or die.

3) Small dataset machine learning is more important than you know.

One of the big areas of machine learning I've been focusing on has been small dataset tools. The dirty secret within many creative ML models is that the scale and cleanliness of the data required is remarkably high. For example, one of the earliest papers on generating anime faces features a dataset with close to 50k images, all with varying (but closely cropped) head poses. More recently, Nvidia’s StyleGAN paper used a new dataset of 70k images (FFHQ) in order to have more variation, leading to better diversity and quality of generation. What we've started to see now is people experimenting with new forms of transfer learning to transfer a pre-trained model onto a new domain. This paper showed that working with as few as 25 new images applied to a previously trained model. I expect this trend to only increase, as few practitioners have the budget or time to properly curate a dataset. In addition, many emerging use-cases may traditionally need a dataset size that is impossible to gather.
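A toy sketch of that head-only transfer idea, assuming nothing from the papers above: a fixed random projection stands in for a frozen pretrained backbone, and only a small linear head is trained on 25 examples. All shapes, labels, and learning rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

W_frozen = rng.normal(size=(64, 16))   # "pretrained" backbone weights, never updated
X = rng.normal(size=(25, 64))          # just 25 new-domain examples, as flat vectors
y = (X[:, 0] > 0).astype(float)        # toy binary labels

def features(x):
    # Frozen feature extractor: the transfer-learning bet is that these
    # features are already useful for the new domain.
    return np.tanh(x @ W_frozen)

def loss(w):
    p = 1 / (1 + np.exp(-(features(X) @ w)))  # sigmoid head
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

w = np.zeros(16)                       # small trainable head, trained from scratch
for _ in range(300):                   # gradient descent on the head only
    p = 1 / (1 + np.exp(-(features(X) @ w)))
    w -= 0.1 * features(X).T @ (p - y) / len(y)

print(loss(np.zeros(16)), loss(w))     # loss drops from ~0.69 as the head fits 25 examples
```

Because the backbone is frozen, only 16 parameters are fit, which is why a handful of examples can suffice; real transfer-learning papers do the same thing at much larger scale.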

4) Remember shorting is capped upside, unlimited downside. Also, all VCs sorta short startups implicitly.

Friends know I am fairly active in shorting stocks in my personal public-market portfolio and often enjoy talking about that more than longs. Being able to identify faults in public companies, when most external pressure has pushed them to rise in value over the past decade, is a valuable skillset and a fascinating thought process. I think the stance some in VC take that we are long-only investors is an overly literal one, and is either a mis-evaluation of what exactly our job is, or a marketing ploy.

As venture dollars have flowed, and very few startups are *truly* one-of-a-kind from a business model perspective, venture investors are forced to place bets on outperformance and underperformance across categories. And because we largely operate with the belief that we are investing in duopolistic markets, our companies are often near zero-sum. Thus, while we only allocate dollars to long positions, we are implicitly making decisions based on being short other businesses.

With respect to public-market shorts, the one thing to realize when taking a financial position is that upside is capped in shorts (a stock can only fall 100%, but can grow infinitely). This means two things.

First, most people shouldn’t short stocks. Markets trend up over time and there’s tons of literature as to why holding and participating in key rally days drastically impacts returns.

Second, it is sometimes advantageous in the short-to-mid term to take a market-neutral stance (a less correlation-reliant version of a pairs trade) so that you are only making an implicit competitive bet. An example in practice would be going long Uber and short Lyft at the same time, constantly re-balancing these positions as they develop so that your financial success is tied merely to Uber outperforming Lyft, not to Uber ultimately winning.
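A rough sketch of that dollar-neutral mechanic (all prices and amounts are hypothetical, chosen only to make the arithmetic visible; this is an illustration, not advice):

```python
def market_neutral_legs(long_px, short_px, capital):
    """Equal dollars long and short: returns shares for each leg."""
    return capital / long_px, capital / short_px

def pair_pnl(long_entry, long_now, short_entry, short_now, long_sh, short_sh):
    # Long leg gains when it rises; short leg gains when it falls.
    return long_sh * (long_now - long_entry) + short_sh * (short_entry - short_now)

# $10k per leg at hypothetical entry prices.
long_sh, short_sh = market_neutral_legs(40.0, 50.0, 10_000.0)

# Both stocks up 10%: the market-wide move cancels and P&L is ~0...
pnl_market_move = pair_pnl(40.0, 44.0, 50.0, 55.0, long_sh, short_sh)

# ...but if the long outperforms (up 10% vs. flat), you capture the spread.
pnl_outperformance = pair_pnl(40.0, 44.0, 50.0, 50.0, long_sh, short_sh)
```

The re-balancing mentioned above means resetting the two legs back to equal dollar exposure as prices drift, so the position stays a bet on the spread rather than on either name outright.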

5) Construction sites feel like the next battlefield for robotics after factory floors.

There are so many startups in this space.

If you have any thoughts or feedback feel free to DM me on twitter. All of my other writing can be found here.

On My Mind # 4

Technology is making us crave the similar, and the unique

5 things on my mind - by Michael Dempsey

I promise I'll get the right cadence down on this thing at some point. This email spawns from this thread. The process for this will evolve but, as you'll see, some thoughts are random, and most are unfiltered or poorly edited. Either way, let me know what you like, don't like, or want to talk about more.

5 Things

1) Technology is making us crave the similar, and the unique.

This turned from a thought into a longer form blog post which you can read in its entirety here.

—-

Over the past two weeks I've read multiple pieces on opposing sides of what we're craving as consumers and in our digital self-expression. They all speak to a duality in how we are both starved for individuality and driven towards homogeneity. These points are communicated to us through some interesting trends in technology, social media, and pop culture. Let me walk you through my sequence of consumption here.

Beauty_GAN (not to be confused with BeautyGAN) is a sparsely documented implementation of a GAN that takes Instagram makeup trends and generates new styles, which Dazed put on Kylie Jenner's face. The resulting imagery feels very cherry-picked (not a rarity, based on my experience with GANs), but the key point of the article is that these datasets and inputs are continually built with human-in-the-loop biases (we'll talk about this again in #4). Put more artistically in the article:

"...Beauty_GAN is like a mirror of popular culture, but the reflection staring back at you might not be what you expected. We teach a machine to see us and what it shows us back is not always what we see ourselves.”

Despite what the GAN mirrors back in terms of how "dystopian" something looks, there are two other related pieces that speak to this homogenization of taking the internet's makeup kingmaker (Kylie) and re-painting the internet's makeup onto her. Or as Dazed wrote:

"One could argue that, of all the beauty imagery we see on Instagram today, Kylie Jenner’s face, her aesthetic, holds the most influence. Every time someone copies her contour or lip liner there’s a further proliferation that happens. She influences what we think of as beautiful, what exists on Instagram. The Beauty_GAN project sees this inputted into a machine, and then lets the machine take over; the machine creates what it thinks is beauty imagery, and then paints it back onto Kylie’s face. And so, the feedback loop closes."

And this feedback loop has proliferated into other forms of celebrities as Telegraph highlighted in this article.

"Lil Miquela, after all, is the ultimate embodiment of homogenised, Instagram-friendly beauty. An ambiguous mix of different ethnicities, with on-trend freckles and a body that can be shaped and moulded depending on the body parts required, she can be everything that consumers desire at any given time. "

But the key here isn't what makes a character like Miquela or Imma.gram compelling. There are time-decaying, first-order points of interest ("is this a robot or a human?!") and the general intrigue of a synthetic being, but then there's the natural, more commonplace feeling among many influencers of "they are kind of like me, but better."

And this close similarity is, I believe, akin to the dopamine rush that gamification experts have long hit on: being partially satiated, but not entirely fulfilled. This phenomenon has been partially described as Selfie Harm. It keeps us wanting more, liking more, swiping more, for something we know we likely can't obtain. But what happens when we can?


When we’re given tools that allow us to have those “on-trend freckles” of Miquela, or the contour and sizing of Kylie’s lips, or the dyed hair of Imma.gram, then what do we crave? Perhaps difference.

This is what I believe we’re seeing in pockets of the internet today. We’re seeing massive share numbers generated by very differentiated and unique AR filters that eschew traditional beauty trends. As one of the creators of these filters says in a Dazed profile:

“These filters can be used in creative new ways that partly break with the expectation of self-depiction on social media…Breaking fixed thought patterns on how we perceive gender and beauty is important and much needed.”

Maybe this is just how we cycle influence. We tire of the popular aesthetic/approach, early movers push towards a new approach, some subset of influencers make the jump while new ones are born, and on and on we go. Or maybe early pockets of culture are at a pivot point of individuality because, for the first time, we don't get to escape the influence and recharge our batteries.

Or as Oliver Sacks put it best:

“(We) have given up, to a great extent, the amenities and achievements of civilization: solitude and leisure, the sanction to be oneself, truly absorbed, whether in contemplating a work of art, a scientific theory, a sunset, or the face of one’s beloved.”

2) Is it worthless or a secret weapon to be elite at investing in non-technical teams?

I'd really like to learn more about investing in non-technical teams. I think this is a skillset that very few people have, and that many would probably argue is useless in 2019. I'd imagine there's a big market inefficiency in pricing these deals, though, if you can nail evaluating a non-technical founder's ability to manage a technical team and/or hire that manager post-raise. The counter-argument would be that they should be able to inspire enough to get a tech lead pre-raise, but that's a privileged argument IMO.

3) Are Creative Technologists ideal early engineers?

In some of the areas surrounding computational creativity, as well as consumer, there's a heavy need for good graphics engineers, 3D artists, and what people are now deeming "creative technologists." It's been amazing to see this cohort of creators emerge with cool projects spanning AR/VR, 3D, and AI/ML, often with a beautiful portfolio of cool freelance work. Many in this space are naturally drawn to these people, but I'm increasingly bearish on long-term freelancers as early hires, as I worry about pace of iteration and their ability to bang their head against a wall over a multi-month horizon vs. jumping between interesting projects/tech proofs of concept every few weeks.

4) Who designs the model matters a lot for structuring industry-specific ML models.

This paper matches influencers and brands. The interesting part is that it tries to match influencers that are most similar to brands. This feels like a fundamental misunderstanding, in the sense that a lot of influencer marketing is amplification but also expansion (especially within micro-influencer categories). They address this a bit by saying that the algorithm could analyze category types in posts, but it shows that humans in the loop are increasingly important for getting domain-specific results from these ML models that can actually be used. In addition, the dataset is very small: I'd love to see this expanded to significantly more than 20 accounts, but focused on just 3-5 categories.

5) This was a long newsletter with just 4 thoughts.

A few papers/links this week

If you have any thoughts or feedback feel free to DM me on twitter. All of my other writing can be found here.
