On My Mind - #6

Animation is eating the world | Commoditization in machine learning | Starting your career in Consumer VC | ML + Animals

On My Mind - by Michael Dempsey

This email spawns from this thread. The process for this will evolve but as you'll see, some thoughts are random, and most are unfiltered or barely edited. Either way, let me know what you like, don't like, or want to talk about more.

Two self-promotion thoughts this time!

I wrote about animation (#1).

I was on Erik Torenberg’s podcast (#5).

Main Thoughts

1) Animation is Eating The World

I wrote something on the history of animation, the future of animation, and how different technological breakthroughs have had profound effects at various points in time. This newsletter is focused on animation because I wrote something really long about it. My main view is that animation is vastly undervalued, underappreciated, and on the brink of a new explosion of content that is incredibly valuable to many stakeholders. This piece is the result of lots of research, conversations, and copious notes. It’s a long read, but one that I think has lessons that apply across multiple industries: tech broadly, consumer specifically, and of course media.

The website I built for the piece is slow to load, but I think visually important to read on, so please wait the full ~10-20 seconds for it to load. Again, it’s a long read so feel free to use the table of contents to skip around and read what interests you.

Also please share it!

2) We have both heavily overestimated and underestimated the commoditization curve in machine learning.

We underestimate ML in certain areas and drastically overestimate it in others. This has led to the deaths of companies whose innovation was commoditized out from under them (object-recognition-as-a-service companies), and of others that bet on a commoditization curve that never arrived (Jibo). It has made it incredibly difficult for founders in the ML space to understand what investors want, and for investors to understand what is truly defensible and won’t be pushed down to a near-free, general-purpose horizontal model.

We as investors say hand-wavy things like “ability to acquire proprietary datasets” about some vertical ML applications, but a growing number of areas have shown that this isn’t as advantageous or defensible as some believe. I’m incredibly interested in reading an updated take on Google’s One Model to Learn Them All paper.

Increasingly, I’ve distilled my view of defensibility down to an elite ability to adapt (and then expand) research into a commercializable product. While this sounds simple, it requires a pretty complex blend of research brain and production brain that is rare in founders.

Related: it's been fascinating to see how the "minimum viable implementation" of ML can stay strong for a really long time, while 10x-better implementations generate no excitement. Here I'm specifically thinking of neural style transfer and how applications like Prisma took early steps with it and wowed consumers (for a brief time). Now we're seeing increasingly scalable, transferrable, bleeding-edge research in this area that few people would care about or notice.
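For context on how compact that minimum viable implementation was: the Gatys-style approach that Prisma-like apps built on boils down to matching Gram matrices of CNN feature maps for style and matching the raw feature maps for content. A rough numpy sketch of those two losses, with random arrays standing in for real CNN features (so this is illustrative only, not a working style-transfer pipeline):

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height, width) feature map from a CNN layer.
    # Channel-by-channel correlations capture "style" while discarding layout.
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(gen_feats, style_feats):
    # Mean squared difference between Gram matrices
    return float(np.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2))

def content_loss(gen_feats, content_feats):
    # Raw feature-map difference preserves spatial content
    return float(np.mean((gen_feats - content_feats) ** 2))

rng = np.random.default_rng(0)
style = rng.normal(size=(64, 32, 32))      # stand-in for style-image features
content = rng.normal(size=(64, 32, 32))    # stand-in for content-image features
generated = 0.5 * style + 0.5 * content    # stand-in for the image being optimized

# The full algorithm minimizes this total by gradient descent on the image pixels
total = content_loss(generated, content) + 1e3 * style_loss(generated, style)
print(total)
```

The 1e3 style weight is an arbitrary choice here; in practice it's the knob that trades off how "painterly" the result looks.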

3) Is specializing in Consumer early in your VC career a bad idea?

I originally thought the best advice for new VCs was to not do consumer, but now I'm not 100% sure. It may be smart to do consumer because you'll get a super fast feedback loop (though the ability to sift signal from noise as to why something worked short-term but didn't generate a long-term valuable business is tough). My early thinking on why you don't do consumer (specifically consumer social or digital consumer businesses) was that the failure rate is just significantly higher/faster, with little process for understanding failures vs. wins, and often a difficult-to-support investment thesis if the company fails.

It could be dominant to do consumer though as a junior VC if you can capitalize on a few hot, early deals and then leverage that into a more stable, longer-term role, before your consumer company burns out or sells in a premium acqui-hire.

Within non-digital consumer (think consumer brands/CPG and even consumer healthcare products), we’ve seen single-digit $M/year businesses get built at a faster rate, but it’s still incredibly difficult to break past that $10-$20M/year ceiling. I haven’t heard an incredibly compelling vision from the elite consumer investors on how to distinguish those types of companies from capped-upside companies. There’s an argument to be made that for D2C businesses, you’ll have a lower failure rate, so you won’t have to burn as much capital making mistakes early on (though in reality, a 1x isn’t too different from a 0x when it comes to burning capital). The issue is, you will also probably have lower upside, so if you’re doing these investments at the scale of a traditional technology VC firm, your results may not be as valued vs. a firm built around the dynamics of D2C businesses. And the lower failure rate today (paired with larger funds that need to pour $$ into companies) can lead to overpriced early rounds on small signs of traction, as full-stack VCs (more on them later) see an avenue to quickly putting tens of millions of dollars to work on CAC.

On the consumer digital side, many firms seem to be sticking to the playbook of wanting to play the call-option game (write tiny seed checks into companies out of a $250M+ fund with the hopes of leading their A), but I don't know if that really works in a highly competitive series A+ environment.

This may all be a moot discussion though, as I’m not sure being a horizontal “consumer investor” means the same thing anymore/is possible at an individual level. What I mean by this is that post-Facebook/Twitter/Zynga/Snap we had a rush of people wanting to find the next mobile consumer social win, which led to bets on companies like Houseparty/Meerkat, Secret, Whisper, Peach, and more (I don’t think any of these companies returned a fund, except maybe a Discord, which could but is a fundamentally different product). Now, however, the explosive consumer companies have looked more like Uber/Lyft, Bird/Lime, Hims/Ro, Allbirds, and some others I'm forgetting. I’m not sure it’s the same profile of investor (or even a common thread between companies) that will see and get excited enough by all of those at seed to lead. Bird/Lime will feel complex and capital-intensive to an investor that loves Hims/Ro’s ability to instead spend their VC money on digital acquisition. Allbirds’ upside will feel capped, or its moat weak, compared to Hims/Ro's recurring nature and ability to go incredibly horizontal. Hims/Ro will feel like a regulatory risk down the road compared to Allbirds’ responsible brand and clear ability to become the next big shoe company, etc.

Back to the point about call-options, I will say that I'm confident that having a mandate to spray and pray at consumer seed is probably the worst of both worlds. Many full-stack firms have started to do this and while you may get some early founder face time with a now-hot series A consumer deal, I'd be surprised if it leads to a meaningful win-rate at A rounds (does anyone want to share data?).

There are a lot of dynamics at play here that I’ve written horribly about above, so let’s get back to the core question: is doing consumer early in your VC career dominant? The answer is, it depends. A less cop-out answer could be: if you’re a new GP who is now going to be measured on the economics/returns you bring into the fund and you have 2 funds to prove it, maybe not. But if you’re a junior person looking to parlay a brand elevation into a better role (or learn quickly and get out of venture), maybe so.

4) Applying deep learning to identify patterns in animal behavior could lead to new understandings of how they work.


This paper digs into detecting pain in horses via DL. You can start to see a slope where we end up being able to better understand animal health and preference through patterns, similar to how we can identify these things visually in humans with enough pattern recognition (although thus far computer vision and deep learning have proven quite poor at truly understanding emotion from visual cues alone). Nothing concrete here, but an interesting rabbit hole to go down in academia, and an even more interesting future to imagine where we both communicate with animals and have a much deeper understanding of them.

5) I did my first podcast since 2015. I need to be better at this.

I went on Erik’s podcast (listen here) and discussed a bunch of things that I care about including machine learning, robotics, family planning/women’s health, animation, gender fluidity, and more. I definitely spoke too fast and used the word “like” too much but it was a lot of fun. We didn’t go aggressively deep into each of the topics but if you have any thoughts or want to further discuss, feel free to email me or tweet at me and happy to!

Other papers/things I’ve read

  • Generating 3D models of clothing from 2D Images - There are multiple obvious use-cases for this research, but the maybe under-the-radar one is for use on avatars and digital celebrities. One of the bigger issues people like CJW (creator of Shudu) have talked about is creating digital clothing. Miquela has historically had similar problems, and thus their approach has often been to not digitally recreate the clothing at all.

  • A dataset for facial recognition in cartoons - Here are some thoughts I tweeted related to this. TLDR is that while these results actually weren’t great, it’ll be interesting to see more datasets emerge surrounding cartoons, as I wrote about 4 issues ago.

  • FaceSpoof Buster - This paper is another in a string that I have continued to read and catalogue related to either tricking facial recognition systems, or identifying the various types of spoofs within them. If you’re interested in this topic I’d recommend reading through some of the related works within this paper.

If you have any thoughts or feedback feel free to DM me on twitter. All of my other writing can be found here.

On My Mind # 5

Small dataset ML, All VCs short startups, We're in a restaurant bubble

5 things on my mind - by Michael Dempsey

What an intermission. We’re back. Hopefully every 2 weeks. This email spawns from this thread. The process for this will continually evolve but as you'll see, some thoughts are random, and most are unfiltered or poorly edited. Either way, let me know what you like, don't like, or want to talk about more.

5 Thoughts

1) The over-intellectualization of thought. A tweetstorm.

“Someone had to say it”. Here.

2) We may be in a restaurant group bubble because unlike technology, restaurants aren’t nearly as scalable or profitable.

A decade ago there were significantly fewer restaurant groups with multi-geo expansion goals. Similar to startups, in restaurants we now have poster children and idols for new restaurateurs with large ambitions, we have lots of capital that has flooded into private markets to fund this expansion, and we have clear trends that people can coalesce around (analogous to enabling technologies).

Danny Meyer could be viewed as the Zuckerberg (web 2.0) of this world: he made tons off of successfully scaling Shake Shack (an early displacer of fast food), and some off of USHG (dirty secret here: Shake Shack's Yelp reviews outside of NYC are significantly worse than in its hometown). Related read on USHG here.

The Sweetgreen team could be viewed as the Evan Spiegel (mobile): they scaled the next main platform, QSR, via trends of healthy, premium, narrative-driven food in a post-McDonald’s/Chipotle world.

David Chang of Momofuku fame fits somewhere between the two generations. He was responsible for pushing a certain aesthetic and lust for international, full-service cuisine in the US, while also being tempted into other paths that failed (Lucky Peach, Ando, Maple). Maybe in an unfair world, he’s Ev Williams (Twitter -> Medium)? The difference being that Chang is now on a hellbent capitalistic path of Fuku (his fried chicken sandwich concept) expansion.

These idols, capital sources, and strong trends have now created a moderate bifurcation of restaurateurs. Either you are a sole proprietor with maybe a sister location within your city, or you were that, but now, with a solid Michelin mention or a NYTimes, Infatuation, or Eater review, you’re scaling to 2-5 more restaurants and thinking about how quickly you can move to LA, SF, NYC, Miami, or (god forbid) Las Vegas.

Modern restaurateurs have looked at this niche consumer trend, and the general rejection of chains, with the view that their concept scales. The issue is, many don’t. They don’t because of supply chain/economics, they don’t because of operational consistency, and they don’t because in each city they move into, there’s someone just like them (or about to be) to compete with.

Restaurants don’t feel like they should be a power-law industry, but I don’t think returns will be as evenly distributed as many fast-expanding restaurateurs believe. There is a glut of restaurants that work in core markets at high prices and continue to try to expand rapidly on full service without understanding the complexity. And this will lead to a graveyard of formerly great restaurants that will scale back if they’re lucky, or die.

3) Small dataset machine learning is more important than you know.

One of the big areas of machine learning I've been focusing on has been small-dataset tools. The dirty secret within many creative ML models is that the scale and cleanliness of the data is remarkably high. For example, one of the earliest papers on generating anime faces features a dataset of close to 50k images, all with varying (but closely cropped) head poses. More recently, Nvidia’s StyleGAN paper used a new dataset of 70k images (FFHQ) in order to have more variation, leading to better diversity and quality of generation. What we've started to see now is people experimenting with new forms of transfer learning to move a pre-trained model onto a new domain. This paper showed this at one point with just 25 new images on top of a previously trained model. I expect this trend to only increase, as few practitioners have the budgets or time to properly curate a dataset. In addition, many emerging use-cases may need a dataset size that is traditionally impossible to gather.
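To make the 25-image regime concrete, here's a purely illustrative numpy sketch of why transfer learning works with so little data: the "pretrained" backbone (simulated here as a frozen random projection, not a real model) stays fixed, so only a tiny linear head needs fitting, and a couple dozen labeled examples can be enough. All data and names below are synthetic/hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Frozen "pretrained" backbone: a fixed random projection standing in for a
# network trained on a large source dataset (a toy stand-in, not a real model)
W_backbone = rng.normal(size=(64, 8))

def extract(x):
    return np.tanh(x @ W_backbone)  # never updated during fine-tuning

# Tiny target dataset: just 25 labeled examples, plus a held-out test set
X_train, X_test = rng.normal(size=(25, 64)), rng.normal(size=(200, 64))
w_true = rng.normal(size=8)  # hidden labeling rule, linear in backbone features
y_train = (extract(X_train) @ w_true > 0).astype(float)
y_test = (extract(X_test) @ w_true > 0).astype(float)

# Fine-tune only a small logistic-regression head on the frozen features
F = extract(X_train)
w, b = np.zeros(8), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))       # predicted class probabilities
    w -= 0.5 * F.T @ (p - y_train) / len(y_train)  # gradient step on the head only
    b -= 0.5 * float(np.mean(p - y_train))

acc = float(np.mean(((extract(X_test) @ w + b) > 0) == (y_test > 0.5)))
print(f"held-out accuracy trained on 25 examples: {acc:.2f}")
```

The same 25 examples would be hopeless for training the 64x8 backbone from scratch; freezing it shrinks the problem to 9 parameters.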

4) Remember shorting is capped upside, unlimited downside. Also, all VCs sorta short startups implicitly.

Friends know I am fairly active in shorting stocks in my personal public-market portfolio and often enjoy talking about that more than longs. Being able to identify faults in public companies, when most external pressure has pushed them to continue rising in value over the past decade, is a valuable skillset and a fascinating thought process. I think the stance some in VC take that we are long-only investors is an overly literal one, and is either a mis-evaluation of what exactly our job is, or a marketing ploy.

As venture dollars have flowed, and very few startups are *truly* one-of-a-kind from a business model perspective, venture investors are forced to place bets on outperformance and underperformance across categories, and because we largely operate with the belief that we are investing in duopolistic markets, our companies are often near zero-sum. Thus, while we are only allocating dollars to long positions, we implicitly are making decisions based on being short other businesses.

With respect to public-market shorts, the one thing to realize when taking a financial position is that upside is capped in shorts (a stock can only fall 100%, but can grow infinitely). This means two things.

First, most people shouldn’t short stocks. Markets trend up over time and there’s tons of literature as to why holding and participating in key rally days drastically impacts returns.
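The capped-upside point is easy to see with made-up numbers: shorting 100 shares at $50 can earn at most $5,000 (the stock goes to zero), while losses grow without bound as the stock runs.

```python
def short_pnl(entry_price, exit_price, shares):
    # A short position profits as the price falls and loses as it rises
    return (entry_price - exit_price) * shares

entry, shares = 50.0, 100
print(short_pnl(entry, 0.0, shares))    # best possible case, stock to zero: +5000.0
print(short_pnl(entry, 150.0, shares))  # stock triples: -10000.0
print(short_pnl(entry, 500.0, shares))  # stock 10x's: -45000.0, with no floor
```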

Second, sometimes it's dominant in the short to mid-term to take a market-neutral stance (a less correlation-reliant version of a pairs trade) so that you are only making an implicit competitive bet. An example of this in practice would be going long Uber and short Lyft at the same time and constantly re-balancing these positions as they develop, so that your financial success is merely tied to Uber outperforming Lyft, not Uber ultimately winning.
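A quick sketch of that construction with hypothetical returns: a dollar-neutral pair profits whenever the long leg outperforms the short leg, regardless of which way the market moves.

```python
def pair_pnl(notional, long_return, short_return):
    # Equal dollars long one name and short the other:
    # P&L depends only on the spread between the two returns
    return notional * long_return - notional * short_return

# Both names sell off, but the long leg falls less -> still a gain
print(pair_pnl(10_000, -0.05, -0.12))  # -500 + 1200 = 700.0
# Both rally, but the long leg underperforms -> a loss despite it rising
print(pair_pnl(10_000, 0.02, 0.08))    # 200 - 800 = -600.0
```

The constant re-balancing back to equal notionals is what keeps the bet purely relative as the two prices drift apart.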

5) Construction sites feel like the next battlefield for robotics after factory floors.

There are so many startups in this space.

If you have any thoughts or feedback feel free to DM me on twitter. All of my other writing can be found here.

On My Mind # 4

Technology is making us crave the similar, and the unique

5 things on my mind - by Michael Dempsey

I promise I’ll get the right cadence down on this thing at some point. This email spawns from this thread. The process for this will evolve but as you'll see, some thoughts are random, and most are unfiltered or poorly edited. Either way, let me know what you like, don't like, or want to talk about more.

5 Things

1) Technology is making us crave the similar, and the unique.

This turned from a thought into a longer form blog post which you can read in its entirety here.

—-

Over the past two weeks I've read multiple pieces that are on opposing sides of what we’re craving as consumers and in our digital self-expression. They all speak to a duality of how we are both starved for individuality and driven towards homogeneity. These points are communicated to us through some interesting trends in technology, social media, and pop culture. Let me walk you through my sequence of consumption here.

Beauty_GAN (not to be confused with BeautyGAN) is a sparsely documented implementation of a GAN that utilizes Instagram makeup trends and then generates new styles, which Dazed put on Kylie Jenner's face. The resulting imagery feels very cherry-picked (not a rarity, based off of my experience with GANs), but the key point of the article is that these dataset sizes and inputs are continually built with human-in-the-loop biases (we'll talk about this again in #4). Put more artistically in the article:

"...Beauty_GAN is like a mirror of popular culture, but the reflection staring back at you might not be what you expected. We teach a machine to see us and what it shows us back is not always what we see ourselves.”

Despite what the GAN mirrors back in terms of how "dystopian" something looks, there are two other related pieces that speak to this homogenization of taking the internet's makeup kingmaker (Kylie) and re-painting the internet's makeup onto her. Or as Dazed wrote:

"One could argue that, of all the beauty imagery we see on Instagram today, Kylie Jenner’s face, her aesthetic, holds the most influence. Every time someone copies her contour or lip liner there’s a further proliferation that happens. She influences what we think of as beautiful, what exists on Instagram. The Beauty_GAN project sees this inputted into a machine, and then lets the machine take over; the machine creates what it thinks is beauty imagery, and then paints it back onto Kylie’s face. And so, the feedback loop closes."

And this feedback loop has proliferated into other forms of celebrity, as the Telegraph highlighted in this article.

"Lil Miquela, after all, is the ultimate embodiment of homogenised, Instagram-friendly beauty. An ambiguous mix of different ethnicities, with on-trend freckles and a body that can be shaped and moulded depending on the body parts required, she can be everything that consumers desire at any given time. "

But the key here isn’t what makes a character like Miquela or Imma.gram compelling. There are time-decaying, first-order interest points of “is this a robot or a human?!” and the general intrigue of a synthetic being, but then there’s the natural, more commonplace feeling among many influencers of “they are kind of like me, but better.”

And this close similarity is, I believe, akin to the dopamine rushes that gamification experts have long hit on: being partially satiated, but not entirely fulfilled. This phenomenon has partially been described as Selfie Harm. It keeps us wanting more, liking more, swiping more, for something that we know we likely can’t obtain. But what happens when we can?


When we’re given tools that allow us to have those “on-trend freckles” of Miquela, or the contour and sizing of Kylie’s lips, or the dyed hair of Imma.gram, then what do we crave? Perhaps difference.

This is what I believe we’re seeing in pockets of the internet today. We’re seeing massive share numbers generated by very differentiated and unique AR filters that eschew traditional beauty trends. As one of the creators of these filters says in a Dazed profile:

“These filters can be used in creative new ways that partly break with the expectation of self-depiction on social media…Breaking fixed thought patterns on how we perceive gender and beauty is important and much needed.”

Maybe this is just how we cycle influence. We tire of the popular aesthetic/approach, early movers push towards a new approach, some subset of the influencers make the jump while new ones are born, and on and on we go. Or maybe early pockets of culture are at a pivot point of individuality because, for the first time, we don’t get to escape and recharge our batteries from the influence.

Or as Oliver Sacks put it best:

“(We) have given up, to a great extent, the amenities and achievements of civilization: solitude and leisure, the sanction to be oneself, truly absorbed, whether in contemplating a work of art, a scientific theory, a sunset, or the face of one’s beloved.”

2) Is it worthless or a secret weapon to be elite at investing in non-technical teams?

I'd really like to learn more about investing in non-technical teams. I think this is a skillset that very few people have, and that many would probably argue is useless in 2019. I'd imagine there's a big market inefficiency in pricing these deals, though, if you can nail evaluating a non-technical founder's ability to manage a technical team and/or hire that manager post-raise. The counter-argument would be that they should be able to inspire enough to get a tech lead pre-raise, but that's a privileged argument IMO.

3) Are Creative Technologists ideal early engineers?

In some of the areas surrounding computational creativity, as well as consumer, there's a heavy need for good graphics engineers, 3D artists, and what people are now deeming "creative technologists." It's been amazing to see this cohort of creators emerge with cool projects spanning AR/VR, 3D, and AI/ML, often with a beautiful portfolio of freelance work. Many in this space are naturally drawn towards these people, but I'm increasingly bearish on long-term freelancers as early hires, as I worry about pace of iteration and their ability to bang their head against a wall over a multi-month time horizon vs. jumping between interesting projects/tech proofs of concept every few weeks.

4) Who designs the model matters a lot for structuring industry-specific ML models.

This paper matches influencers and brands. The interesting part is that it tries to match influencers that are most similar to brands. This feels like a fundamental misunderstanding, in the sense that a lot of influencer marketing is amplification but also expansion (especially within micro-influencer categories). They address this a bit by saying the algorithm could analyze category types in posts, but it shows that humans in the loop are increasingly important for some of these ML models to get good domain-specific results that can actually be used. In addition, the dataset is tiny. I'd love to see this expanded to significantly more than 20 accounts, but focused on just 3-5 categories.

5) This was a long newsletter with just 4 thoughts.

A few papers/links this week

If you have any thoughts or feedback feel free to DM me on twitter. All of my other writing can be found here.

On My Mind - #3

Gender Fluidity, Gen Z, and AI to cheer you up.

5 things on my mind - by Michael Dempsey

Apologies for the gap in sending these. Back to our semi-regularly scheduled programming. This email spawns from this thread. The process for this will evolve but as you'll see, some thoughts are random, and most are unfiltered or poorly edited. Either way, let me know what you like, don't like, or want to dig into more.

5 Things

1) On the rise of gender fluidity and digital identity

I've written a lot about identity, but mostly related to avatars and how they can expand, fragment, and change digital identity. However, when talking to multiple founders in the avatar space, some have lamented having to anchor the creation UX with a gender choice at the top of the funnel (i.e. choose "male" or "female" to begin). Memoji explicitly doesn't mention gender, allowing you to build your character outside of any implicit societal restrictions. It is one of the only products I have seen make this decision.

What is really interesting to me is that these products, often built for Gen Z, could allow for a decreased weight to be placed on gender and how we think about our identities. The internet has always been a special place for self expression, but perhaps when we make users think about how they physically portray their identity digitally, we can capture what actually makes up an identity outside of the first checkbox of gender and second of race. A lot has been written about this, and I admittedly am not the right person to speak at length about this specific topic, but it is on my mind and I thought this article summarized a lot of the impending effects we could see via the rise of gender fluidity in the world over the next few years across both startups and corporates.

2) Parsing the noise of Gen Z

A few weeks ago I tweeted this (above). Based on a few DMs I wanted to elaborate on this thought.

First, I don't think being Gen Z, or spending time with your resident Gen Z advisor, really gives you a massively valuable edge. You at best get a snapshot of behavior, but not something with perspective or that is well-informed about how these cohorts will progress in their behavior. The early usage of technology may have a few gravitational centers (self-obsession, for example), but I'm not sure it's a long-term knowledge advantage.

Second: knowing what will be a fast hit may have been valuable to an investor yesterday, but IMO in today's social climate it isn't interesting to me as a VC. My job isn't to invest in the next $50-$150M FB acqui-hire, and because of the typical profile of Gen Z-focused founders, as well as the dominance of the social platforms, you need something and someone incredibly special to let these things run at a pace that makes the business look bulletproof enough that when Facebook paints a target on your feature set, you'll survive.

That said, this cutthroat social climate also may mean that the past training data that I said I wish I had may be irrelevant in a segment where company dominance has never been so strong.

P.S. - I know about Zebra Intelligence. If you’re a brand trying to understand Gen Z today, use it. #sponconbutnotreally


3) We need more robot-first environments.

While many people (myself included) are excited about the potential for machine learning/computer vision to bring true scalable autonomy to robotics, in the near to mid-term I feel that multiple use-cases could benefit from a re-thinking of the problem. There are multiple interesting reddit threads and articles that discuss the issues of environmental compatibility with certain types of robots.

The most notorious issues are within self-driving cars/autonomous vehicles, where you could constrain the problem to robot-friendly environments. I previously called these Hedge Cases. Some argue that city infrastructure is causing issues for AVs, like this specific thread on the infamous unprotected left turn problem, or more recently Ford discussing a future without traffic lights.

"This is a little like saying that computers will not be able to be used everywhere because not everywhere gets electricity. That's true but it isn't microsoft's problem...Right now SDC's are the ones that need to accommodate current infrastructure because there are so few of them. But in a couple of years it is infrastructure that will need to take them into account."

What’s more interesting to me is in the realm of consumer robotics. There's a lot to unpack here but my main view is that with the massive amounts of new real estate development happening everywhere, it feels like low hanging fruit to bring true automation into the house. Or as another person in this thread put it:

"We could build robot-friendly houses, just as we built dishwasher-friendly houses and laundry-friendly houses."

4) Sadly, at seed, it’s not always great if VCs feel they understand your entire business.

Still thinking through this…


5) AI to cheer you up

I randomly found this github repo and was surprised I haven't seen it experimented with more. The idea of setting a human emotion as the goal for a reinforcement learning model is kind of brilliant. This model essentially uses emojis to seek the goal of a smile from the user. As dystopian as it is, I can't wait for my “cheer me up” AI. For now I'll just use Try Not To Smile.
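I haven't dug into that repo's internals, but the core idea can be sketched as a simple bandit: each emoji is an arm, the reward is whether the user smiled, and the agent learns which emoji earns smiles. Everything below is a simulation with made-up smile probabilities; a real system would get the reward signal from a facial-expression classifier watching the camera.

```python
import numpy as np

rng = np.random.default_rng(7)

emojis = ["😂", "🙂", "🐶", "🤖"]
# Hidden probability that each emoji makes this (simulated) user smile
true_smile_prob = np.array([0.7, 0.4, 0.6, 0.1])

counts = np.zeros(4)
values = np.zeros(4)  # running estimate of smile rate per emoji

for step in range(5000):
    if rng.random() < 0.1:            # explore: try a random emoji
        arm = int(rng.integers(4))
    else:                             # exploit: show the current best guess
        arm = int(np.argmax(values))
    smiled = rng.random() < true_smile_prob[arm]
    counts[arm] += 1
    values[arm] += (smiled - values[arm]) / counts[arm]  # incremental mean

best = int(np.argmax(values))
print("agent's favorite emoji:", emojis[best])
```

An epsilon-greedy bandit is about the simplest possible version of "emotion as a goal"; the repo presumably does something richer, but the reward loop is the same shape.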

A few papers/links this week

If you have any thoughts or feedback feel free to DM me on twitter. All of my other writing can be found here.

On My Mind - #2

5 things on my mind from the week of 12/2

5 things on my mind - by Michael Dempsey

This email spawns from this thread. These will likely evolve but the process will remain the same: Clear out email, filter down to 5 thoughts, send an email. As you'll see, some are random, most are unfiltered or poorly edited. Either way, let me know what you like, don't like, or want to dig into more.

5 Things

1) I want my AI self as a personal advisor

There have been countless stories about people taking the data exhaust we put out on a daily basis in text messages, emails, chat logs, etc. and creating an AI version of a given person. I'm long AI friends and think this comes at scale in some usable form in the next few years, but when the technology improves even further, I'm really interested in the idea of my AI self as an emotionally normalized advisor. I use the term normalized because if my AI self is created from my real self's data exhaust, it will implicitly have some emotional bias. In times when emotions are running high, it feels valuable to hear how my less emotionally charged self would act in a given situation. And while we often have our closest friends/family/confidants to rely on in these times, as we've seen across so many industries, interacting with a bot could remove the emotional issues that lead to under-sharing the full situation. I'd also wager that after some period of time, it'll be a lot harder to tell your AI self, which (theoretically) has all the primary non-voice data about the situation, that it doesn't understand, vs. your friend who only has pieces of information.

2) We're in a window where algorithmically-created products are interesting because of the process of creation.

At the 18:30 mark of this conversation Vijay and I recorded, we talk about how something is interesting just because it is created by a machine and has edges/is slightly off. We recently saw the first large-scale implementation of this with a GAN painting which sold at auction for $432k. Outside of this, I think there are multiple opportunities to experiment in this space, with the restricting element being that these generative goods must make their way into the real world, and not just be digital assets. A few examples:

3) Question Masters and Deep Divers in VC

I've been (over)analyzing VC profiles and strategies across a bunch of vectors recently. One of the recent thoughts that came from a fairly complex diligence process was the difference between the Question Master VC vs. the Deep Diver VC.

  • The Question Master (QM) excels at using questions as ammunition to push the founder to distill all the moving parts of their business from complex → simple terms.

  • The Deep Diver (I know this name is awful but we’re going with it, DD) excels at understanding complex topics about the business in a short period of time based on prior work/experience.

    In which area does each shine?

    • The QM will likely have a dominant filter on a founder's ability to sell (whether to customers, the current fund's partnership, or later-stage VCs) and theoretically will be able to keep a wider aperture on the scope of their investments. The QM also runs far less risk of bias from entrenched thinking or bad past information. It's quite likely that the QM is an optimist who, at the early stage, weighs heavily on founder profile plus pattern matching of more macro trends in the business/category.

    • The DD may be able to invest in companies whose opportunity QMs can't fully appreciate, and may build credibility with the founder more quickly during a competitive process. The DD runs the risk of overestimating their knowledge as an edge in the diligence process, or of drawing incorrect parallels without surfacing them to the founder. The DD also may misjudge a founder's ability to raise follow-on capital.

      I view myself as the latter and often put the onus on myself to reach a complex understanding of the founder and their business/technology. I believe this is best for me (at seed) because it pushes me to understand and relate to a founder and their business on a deeper level than other investors can, in a shorter period of time, and with a lower burden on the founder than the Question Master imposes.

      I make the distinction of seed specifically because at this stage funds compete on axes related to personal fit, ability to help, and often pace. My gut is that as price enters the equation more, and pace less, at Series B+ stages, the QM may become the dominant profile.

    To be clear, I'm acting as if these two VCs can't possibly be one, which is untrue, but discussing gray area outliers isn't helpful for this thought process. In addition, many will argue it’s on the founder to be able to distill their information best. Again, gray area.

4) Are podcasts creating groupthink?

I've noticed a quite acute convergence of thoughts across various social circles recently, and I think podcasts are to blame. While we suffer a form of groupthink within our social circles due to the written/visual media we consume, the internet has provided enough content diversity that we rarely spend the same amount of time reading the exact same things as our peers. Podcasts don't feel like they have reached that point yet.

As podcasts have risen to prominence, they have filled fairly similar timeframes for large groups of people (commuting, working out, etc.) while also becoming a main delivery point of information for non-professional knowledge. Because of the lack of programmatic discovery and diversity within the podcast ecosystem (especially within Apple's main podcast app), I've noticed a convergence of thought on various niche topics across pop culture, finance, tech, sports, and more. People's opinions are always informed by the information they gather, but when a large % of that information comes from the same 5-10 podcasts, it's remarkable how conversations become noticeable regurgitations of what you heard a few days ago on the subway.

5) Digital fashion is coming soon and may be world-positive. (see paper commentary below)

A few papers this week

  • AR Costumes: Automatically Augmenting Watertight Costumes from a single RGB Image - This paper from Disney Research describes using a single RGB image to automatically apply a "watertight" costume to a person in AR. This is pretty interesting both for what our digital avatars could look like in the future and for what could be done with future AR filters. We're just starting to really figure out face tracking in a high-fidelity way without using depth sensors (Pinscreen's work here), but full-body tracking is still on the horizon as people get tired of being restricted to shoulders-up modification. The closest comp among existing consumer apps today would be Octi.

    Despite this lack of progress on the automation side, a digital celebrity named Perl just posted an intriguing video showing a more manual version of this technology that lets Instagram users waste less on fast fashion and instead digitally modify their pictures with single-use outfits. Incredible timing, and something I expect to see productized in the near future.

  • Photo Wake-Up: 3D Character Animation from a Single Photo - This paper from University of Washington and Facebook researchers automatically animates 2D images and brings them to life, which has implications for AR as shown in this supplementary video.

  • Truly Autonomous Machines are Ethical - This paper brought me back to my ethics class in college. It's a compelling (and a bit long) read on various ethical implications and decisions to be made around the treatment, liability, and programming of autonomous robots. I particularly loved this quote: “So, yes, there is risk in attempting to build an autonomous machine, just as there is risk in raising children to become autonomous adults. In either case, some will turn out to be clever scoundrels.”

  • Towards High Resolution Video Generation with Progressive Growing of Sliced Wasserstein GANs

  • Combatting Adversarial Attacks through Denoising and Dimensionality Reduction: A Cascaded Autoencoder Approach - I’m a strong believer that defending ML models against adversarial attacks is a core component of the future of machine learning.
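    The general defense pattern that paper's title describes (denoise an input back toward the clean data manifold before classifying it) can be caricatured in a few lines. Everything below is invented for illustration: the prototypes stand in for a trained autoencoder's reconstruction, and the classifier is a deliberately brittle hand-written rule, not anything from the paper.

```python
# Caricature of the "denoise, then classify" defense pattern: project a
# (possibly adversarially perturbed) input back toward the clean data
# manifold before handing it to the classifier. The prototypes below stand
# in for a trained autoencoder's reconstruction; every number is made up.

PROTOTYPES = {                 # stand-ins for clean training examples
    "cat": [1.0, 0.0, 0.0],
    "dog": [0.0, 1.0, 0.0],
}

def denoise(x):
    """Reconstruct x as its nearest clean prototype (autoencoder stand-in)."""
    def sq_dist(p):
        return sum((a - b) ** 2 for a, b in zip(x, p))
    return min(PROTOTYPES.values(), key=sq_dist)

def classify(x):
    """A brittle rule that leans on a feature (x[2]) an attacker can push."""
    return "cat" if x[0] > x[1] and x[2] < 0.5 else "dog"

def robust_classify(x):
    """The cascaded defense: denoise first, then classify."""
    return classify(denoise(x))

clean = [0.9, 0.1, 0.0]        # near the "cat" prototype
adversarial = [0.9, 0.1, 0.6]  # same input with a perturbation on x[2]

print(classify(clean))               # "cat"
print(classify(adversarial))         # "dog" -- the raw classifier is fooled
print(robust_classify(adversarial))  # "cat" -- denoising strips the attack
```

    The design point is that the denoiser never sees labels; it only pulls inputs back toward what clean data looks like, which is why it can be bolted in front of an unchanged classifier.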

If you have any thoughts or feedback, feel free to DM me on Twitter.
