On My Mind - #9

Narratives & Pseudosecrets, Serendipity in VC is BS, on the *mint* meme, Mobile ML research

On my mind - by Michael Dempsey

This email spawned from this thread.

These thoughts are often poorly edited and researched and usually span areas of venture capital, emerging technology, and other random thoughts.

I’d love for this newsletter to be more of a discussion vs. me shouting to the internet, so please email me back and I promise to respond.

Ask - We’re hiring at Compound. If you’d like to work with me or know someone who would be a good fit, please send them my way!

Thoughts

1) Narratives & Pseudosecrets

I wrote a post discussing the importance of narrative building as well as how companies build their narrative around both TAM expansion and sequencing to an ultimate future via what I call Pseudosecrets. I’ve spent a lot of time explaining this phenomenon or principle to founders over the past few years and finally decided to write out my thoughts in long-form.

2) Serendipity in Venture Capital is BS

I also wrote some thoughts on the history of networks within VC, the current state of venture capital, and ideal investing models at the seed stage.

3) Fortnite x Star Wars + VR UIs


There will be a lot of hot takes surrounding the Fortnite experience, as always. But while many people were *mindblown* by the in-game experience, if you’ve spent time in VR it likely felt quite familiar. Watching experiences unfold in a 3D, free-roaming world on a flat screen is sub-par, but it has been a well-known UI/UX choice for VR developers and is in some ways considered a killer app today. Unlike Fortnite, however, in VR you are immersed via head tracking, hand tracking, and a first-person view. Ultimately, it felt a bit odd to me watching my character jump around in front of a flat digital screen in this large world.

Where Fortnite did innovate on the UI was the ability to “focus” with a right click, which forced the user into watching/following what Fortnite deemed most important. This is something VR should adopt more often, especially in storytelling-centric experiences.

Related to that, the Fortnite x Star Wars experience was cool because of the IP, the live nature, and the mechanics surrounding user voting, but it also was the first meaningful in-game experience that didn’t progress the Fortnite story. While the metaverse story (read more about this in my Narratives post above) seems to be the goal, and a profitable one at that, I hope that after a season reset in which multiple game mechanics were vaulted, Fortnite continues to innovate on gameplay, and not just on becoming one of the world’s largest native advertising platforms.

4) On the “f**king mint” meme


Disclaimer: I’m a millennial sniping at Gen Z trends here, so take it with a grain of salt.

I’ve been pretty fascinated by the “f**king mint” meme (shout out to trying to avoid spam filters) that has grown on TikTok. Basically, the point of the meme is to go around saying self-deprecating/embarrassing/not-so-great things about yourself, your life, and/or all of your friends, and to be somewhat OK with it.

I think this meme speaks to something a little more specific going on within Gen Z: self-comfort, open expression, and reassurance (notice I didn’t say confidence) that millennials perhaps adopted about a decade later in life than this cohort of teenagers has. It’s a small, potentially overfit signal I’ve noticed as we’ve continued to look at other related thesis areas, but one that I think the meme perfectly embodies and that could bleed through to other behaviors and purchasing decisions.

5) Mobile compute ML research is underserved. But does it matter?

Countless research papers are continually published pushing the limits of what machine learning can do across a myriad of use cases. The underlying issue with much of this research, however, is the compute required to make something possible, let alone reproducible.

One could argue that the job of most research labs is to figure out if something is possible, and that compute will eventually catch up to make things production-ready at the commercial layer. But I’m not sure we’ve seen this happen as quickly as we’d like as an industry, largely because the clout comes not from the efficiency of algorithms but from their raw power.

One area I’d love to see increased publishing and experimentation in is ML at the edge, specifically within mobile phones. If we believe that these mobile supercomputers will eventually turn from bundled interface + compute into possibly compute-centric devices (think post-AR hardware), then we should also be pushing the boundaries on what types of algorithms we can efficiently run.
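To make "efficiently run" concrete: a minimal sketch (my own illustration, not drawn from any specific paper mentioned here) of symmetric int8 post-training quantization, one of the standard techniques for shrinking models enough to run on phones. Weights stored as float32 are mapped to int8, cutting storage 4x and enabling faster integer math on mobile hardware, at the cost of a small, bounded rounding error.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization of a float32 weight
    tensor to int8: map the largest magnitude to 127 and round."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from int8 + scale."""
    return q.astype(np.float32) * scale

# Toy example: quantize a random weight matrix and check the error.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(f"storage: {w.nbytes} bytes -> {q.nbytes} bytes")  # 4x smaller
print(f"max reconstruction error: {np.abs(w - w_hat).max():.6f}")
```

The rounding error per weight is bounded by half the scale, which is why networks usually tolerate this with little accuracy loss; real mobile deployments layer further tricks (per-channel scales, quantization-aware training) on top of this basic idea.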

Examples of compelling mobile-centric ML research I’ve seen recently include this paper, which tackles real-time monitoring of drivers via mobile phones, as well as one on mobile action recognition. I hope we see more in the future.

6) AI generated art novelty decay happened even faster than I thought

Katsuwaka of the Dawn Lagoon (2019), created by Obvious Art. Courtesy of Sotheby's.

Two months ago I wrote about novelty decay in AI-generated art and more. Specifically, I said:

The novelty of “this was made by AI” or “this is digital” will continue to exist, but at an increasingly decaying premium as time goes on. The novelty premium of our favorite artists’ lives may only compound as we see them grow, change, and we build deeper personal connections to them, their tribes, and their ups and downs.

It looks like that decay could have happened significantly faster than even I anticipated, given the recent mediocre performance of AI-generated art. I’d wager that this decay in value comes partially from tiring of the group (Obvious) as well as from the way in which this art is generated. Once we see a meaningfully different technical approach to art generation, perhaps we will see another pop in prices. With that said, I don’t believe we’ll see lasting value unless a larger artist is able to manage both the story surrounding the process and their own personal journey, alongside their pieces.

Research / Links

  • A Mobile Manipulation System for One-Shot Teaching of Complex Tasks in Homes - This is a really interesting paper (and video) that walks through a team using VR to train a mobile, in-home robot on specific tasks. The success rate ends up being around 85% across the board, which is still not good enough (despite reaching a 99.6% success rate thanks to the ability to correct errors). The other problem is time to completion, which is 20x slower than a human on average (and some tasks were 100x slower!). I imagine the eventual future of an in-home robot is going to draw on some form of imitation learning that maps humans more closely to the robot (or vice versa) so that learned tasks can be scaled by more human teaching. Not sure when that future will come though.

  • When equity factors drop their shorts (article here): On the lack of value short trading positions create.

  • Generating High-Resolution Fashion Model Images Wearing Custom Outfits - Generating fashion images from scratch in high resolution. I've spoken to a few different people about this task of generating stock fashion images with GANs. Over the past 6 months we've seen incredible pace as full-sized human body synthesis has improved drastically, and we can now combine models of pose understanding and GANs to tie together new types of synthesis. What has been funny is people doubting the pace at which this specific field will move. While in some industries I'm a hardcore skeptic of tech people automating or innovating from the outside, when it comes to fashion and stock imagery (and other non-scalable, image-related practices within e-commerce), after spending a little time with fashion industry founders, I'm beginning to believe more strongly that this innovation may come from outside the industry vs. from an operator within.

  • Neural Voice Puppetry: Audio-driven Facial Reenactment - Take audio, push it to deepfake. Pretty cool.

  • Generating Animations from Screenplays - Multiple researchers continue to try to crack this utopian vision of inputting text and outputting realistic 3D scenes. Very few have been able to do it with high degrees of freedom, high quality of art, and emotional expression. The argument to be made is that we can speed up the initial work and go from creation -> tuning, or have a more granular storyboarding pipeline if it’s automatic, but many artists just get annoyed by poor implementations of animation that they are then forced to re-do vs. create.

If you have any thoughts, questions, or feedback, feel free to DM me on Twitter. All of my other writing can be found here.
