
On My Mind - #2

5 things on my mind from the week of 12/2


Michael Dempsey (@mhdempsey): I just cleared out my inbox and so figured I'd start the end of the "work week" with 5 things that are on my mind, largely inspired by my now-archived inbox.

This email spawns from this thread. These will likely evolve but the process will remain the same: Clear out email, filter down to 5 thoughts, send an email. As you'll see, some are random, most are unfiltered or poorly edited. Either way, let me know what you like, don't like, or want to dig into more.

5 Things

1) I want my AI self as a personal advisor

There have been countless stories about people taking the data exhaust we put out on a daily basis in text messages, emails, chat logs, etc. and creating an AI version of a given person. I'm long AI friends and think this comes at scale in some usable form in the next few years, but when the technology improves even further, I'm really interested in the idea of my AI self as an emotionally normalized advisor. I use the term normalized because if my AI self is created from my real self's data exhaust, it will implicitly have some emotional bias. In times when emotions are running high, it feels valuable to hear how my less emotionally charged self would act in a given situation. And while we often have our closest friends/family/confidants to rely on in these times, as we've seen across so many industries, interacting with a bot could remove the emotional friction that leads to under-sharing the full situation. I'd also wager that after some period of time, it will be a lot harder to tell your AI self, which theoretically has all of the non-voice primary data about the situation, that it doesn't understand, versus a friend who only has pieces of the information.
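As a toy illustration of a very early version of this idea (far short of a true AI self), here is a minimal, hypothetical sketch that indexes your own archived messages and surfaces how your past, calmer self handled similar situations. The corpus, the scikit-learn retrieval approach, and every name in it are my own assumptions for the sketch, not anything from the newsletter:

```python
# Minimal sketch: retrieve "how did past-me handle something like this?"
# from a personal archive of messages/emails. Purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical personal "data exhaust": in practice this would be exported
# emails, chat logs, text messages, etc.
archive = [
    "Took a walk before replying; the deal terms weren't as bad as they first looked.",
    "Told the founder no quickly and kindly; dragging it out helps nobody.",
    "Slept on the angry email draft and rewrote it in the morning.",
]

vectorizer = TfidfVectorizer(stop_words="english")
archive_vectors = vectorizer.fit_transform(archive)

def advise(situation: str, top_k: int = 2):
    """Return the most similar past situations and how you handled them."""
    query_vector = vectorizer.transform([situation])
    scores = cosine_similarity(query_vector, archive_vectors)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [(archive[i], float(scores[i])) for i in ranked]

if __name__ == "__main__":
    for memory, score in advise("I'm angry about an email and want to fire back"):
        print(f"{score:.2f}  {memory}")
```

A real version would obviously need generation rather than retrieval, plus some way to strip the emotional charge out of the source data rather than replicate it.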

2) We're in a window where algorithmically-created products are interesting because of the process of creation.

At the 18:30 mark of this conversation Vijay and I recorded, we talk about how something can be interesting simply because it was created by a machine and has edges/is slightly off. We recently saw the first large-scale implementation of this with a GAN painting that sold at auction for $432k. Outside of this, I think there are multiple opportunities to experiment in this space, with the restricting element being that these generative goods must make their way into the real world, and not just be digital assets. A few examples:
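For readers who haven't looked at the mechanics behind a GAN-produced artwork like the auctioned piece, here is a minimal, hypothetical generator/discriminator pair in PyTorch. The framework, architecture, and sizes are entirely my own assumptions; this is a sketch of the general technique, not the setup behind that painting:

```python
# Toy GAN sketch: a generator learns to turn random noise into small images,
# a discriminator learns to tell them from real ones. Illustrative only.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # hypothetical sizes

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),      # outputs a flattened "image"
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),          # probability the input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_batch = torch.rand(32, IMG_DIM) * 2 - 1  # stand-in for real artwork data

for step in range(100):
    # Discriminator step: push real images toward 1, generated images toward 0.
    noise = torch.randn(32, LATENT_DIM)
    fake_batch = generator(noise).detach()
    d_loss = bce(discriminator(real_batch), torch.ones(32, 1)) + \
             bce(discriminator(fake_batch), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    noise = torch.randn(32, LATENT_DIM)
    g_loss = bce(discriminator(generator(noise)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The "edges"/slightly-off quality comes from sampling new noise vectors and
# decoding them; no human draws the output directly.
samples = generator(torch.randn(4, LATENT_DIM))
```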

3) Question Masters and Deep Divers in VC

I've been (over)analyzing VC profiles and strategies across a bunch of vectors recently. One of the recent thoughts that came from a fairly complex diligence process was the difference between the Question Master VC vs. the Deep Diver VC.

  • The Question Master (QM) excels at using questions as ammunition to push the founder to take all the moving parts of their business from complex → simple terms.

  • The Deep Diver (I know this name is awful but we’re going with it, DD) excels at understanding complex topics about the business in a short period of time based on prior work/experience.

    In which area does each shine?

    • The QM will likely have a dominant filter on a founder's ability to sell (whether to customers, the current fund's partnership, or later-stage VCs) and theoretically will be able to have a wider aperture on the scope of their investments. The QM also runs much less risk of bias from entrenched thinking or bad past information. It's quite likely that the QM is an optimist who leans heavily on founder profile at the early stage + pattern matching of more macro trends in the business/category.

    • The DD may be able to invest in companies whose opportunity QMs can't fully appreciate, and may be able to build credibility with the founder more quickly during a competitive process. The DD runs the risk of overestimating their existing knowledge as an edge in the diligence process, or of drawing incorrect parallels without surfacing them to the founder. The DD also may misjudge a founder's ability to raise follow-on capital.

      I view myself as the latter and often put the onus on myself to get to a point of complex understanding of the founder and their business/technology. I believe this is best for me (at seed) because it pushes me to understand and relate to a founder and their business on a deeper level than other investors can, in a shorter period of time, and with less of the burden of simplification placed on the founder than the Question Master approach requires.

      I make the distinction here of seed specifically because at this stage funds are competing on axes related to personal fit, ability to help, and often pace. My gut is that as price pushes into the equation more and pace less at Series B+ stages, the QM may be the dominant profile.

    To be clear, I'm acting as if these two VCs can't possibly be one, which is untrue, but discussing gray area outliers isn't helpful for this thought process. In addition, many will argue it’s on the founder to be able to distill their information best. Again, gray area.

4) Are podcasts creating groupthink?

I've noticed a quite acute convergence of thoughts across various social circles recently, and I think podcasts are to blame. While we suffer a form of groupthink within our social circles due to the written/visual media we consume, the internet has provided enough content diversity that we rarely spend our time reading the exact same things as our peers. Podcasts don't feel like they have reached that point yet.

As podcasts have risen to prominence, they have filled fairly similar time slots for large groups of people (commuting, working out, etc.) while also becoming a main delivery point of information for non-professional knowledge. Because of the lack of programmatic discovery and diversity within the podcast ecosystem (especially within Apple's main podcast app), I've noticed a convergence of thought on various niche topics across pop culture, finance, tech, sports, and more. People's opinions are always informed by the information they gather, but when a large % of that information is coming from the same 5-10 podcasts, it's remarkable how conversations become noticeable regurgitations of what you heard a few days ago on the subway.
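One way to make the "same 5-10 podcasts" claim measurable is to compare how much two people's listening actually overlaps. A minimal, hypothetical sketch (the listeners and subscription lists are made up):

```python
# Toy measure of podcast "groupthink": Jaccard overlap of what two people
# actually listen to. High overlap = largely the same inputs.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical subscription lists for three people in the same social circle.
listeners = {
    "alice": {"pod_a", "pod_b", "pod_c", "pod_d", "pod_e"},
    "bob":   {"pod_a", "pod_b", "pod_c", "pod_d", "pod_f"},
    "carol": {"pod_a", "pod_b", "pod_c", "pod_g", "pod_h"},
}

names = sorted(listeners)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        print(f"{x} vs {y}: {jaccard(listeners[x], listeners[y]):.2f}")
# With everyone drawing on the same handful of shows, pairwise overlap stays
# high, which is the convergence described above.
```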

5) Digital fashion is coming soon and may be world-positive. (see paper commentary below)

A few papers this week

  • AR Costumes: Automatically Augmenting Watertight Costumes from a single RGB Image - This paper from Disney Research covers using a single RGB image to automatically apply a "watertight" costume to a person in AR. This is pretty interesting from the viewpoint of both what our future digital avatars could look like and what could be done with future AR filters. We're just starting to really figure out face tracking in a high-fidelity way without using depth sensors (Pinscreen's work here), but full-body tracking is still on the horizon as people get tired of being restricted to shoulders-up modification. The closest comp for an existing consumer app today would be Octi. (A rough single-image body-tracking sketch follows after this list.)

    Despite this lack of progress on the automation side, a digital celebrity named Perl just posted an intriguing video that speaks to a more manual version of this technology in order to allow Instagram users to waste less on fast fashion and instead digitally modify their pictures with single-use outfits. Incredible timing, and something I expect to see productized in the near future.

  • Photo Wake-Up: 3D Character Animation from a Single Photo - This paper from U Washington and Facebook researchers allows 2D images to auto-animate and come to life, which has implications for AR as shown in this supplementary video.

  • Truly Autonomous Machines are Ethical - This paper brought me back to my ethics class in college. It's a compelling (and a bit long) read on various ethical implications and decisions to be made around the treatment, liability, and programming of autonomous robots. I particularly loved this quote: “So, yes, there is risk in attempting to build an autonomous machine, just as there is risk in raising children to become autonomous adults. In either case, some will turn out to be clever scoundrels.”

  • Towards High Resolution Video Generation with Progressive Growing of Sliced Wasserstein GANs

  • Combatting Adversarial Attacks through Denoising and Dimensionality Reduction: A Cascaded Autoencoder Approach - I'm a strong believer that defending ML models against adversarial attacks is a core component of the future of machine learning. (A toy sketch of the general denoise-then-compress idea also follows below.)
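On the body-tracking point from the first bullet above: single-image pose estimation is the usual starting point before a "watertight" costume can be anchored to someone. A minimal, hypothetical sketch using MediaPipe (my own choice of library, not the paper's method; the image path is made up):

```python
# Rough sketch: estimate full-body landmarks from a single RGB photo,
# the kind of signal a "watertight costume" overlay would need to anchor to.
# MediaPipe is my choice here, not the paper's method; the image path is made up.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

image_bgr = cv2.imread("person.jpg")          # hypothetical input photo
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)

with mp_pose.Pose(static_image_mode=True) as pose:
    results = pose.process(image_rgb)

if results.pose_landmarks:
    # 33 normalized landmarks (shoulders, hips, ankles, ...) that a costume
    # mesh could be fitted/warped to.
    for i, lm in enumerate(results.pose_landmarks.landmark):
        print(i, round(lm.x, 3), round(lm.y, 3), round(lm.visibility, 2))
    annotated = image_bgr.copy()
    mp.solutions.drawing_utils.draw_landmarks(
        annotated, results.pose_landmarks, mp_pose.POSE_CONNECTIONS)
    cv2.imwrite("person_pose.jpg", annotated)
```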
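And on the last bullet: the general idea of defending a classifier by denoising and compressing inputs before classification can be sketched with a small cascaded autoencoder. This is my own toy PyTorch version of the general technique, not the paper's architecture; the sizes and training data are placeholders:

```python
# Toy "denoise then reduce dimensionality" defense: inputs pass through a
# denoising autoencoder, then a bottlenecked one, before a (placeholder) classifier.
import torch
import torch.nn as nn

D = 28 * 28  # flattened image size (placeholder)

def autoencoder(in_dim, hidden_dim):
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim), nn.ReLU(),
        nn.Linear(hidden_dim, in_dim), nn.Sigmoid(),
    )

denoiser = autoencoder(D, 256)    # stage 1: remove small perturbations
compressor = autoencoder(D, 32)   # stage 2: squeeze through a narrow bottleneck
classifier = nn.Sequential(nn.Linear(D, 128), nn.ReLU(), nn.Linear(128, 10))

# Train both stages to reconstruct clean data from noisy inputs (placeholder data).
clean = torch.rand(256, D)
noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)
opt = torch.optim.Adam(
    list(denoiser.parameters()) + list(compressor.parameters()), lr=1e-3)
mse = nn.MSELoss()
for _ in range(200):
    stage1 = denoiser(noisy)
    stage2 = compressor(stage1)
    loss = mse(stage1, clean) + mse(stage2, clean)
    opt.zero_grad(); loss.backward(); opt.step()

def defended_logits(x):
    """At inference, scrub the (possibly adversarial) input before classifying."""
    with torch.no_grad():
        x = compressor(denoiser(x))
    return classifier(x)

print(defended_logits(noisy[:4]).shape)  # torch.Size([4, 10])
```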

If you have any thoughts or feedback feel free to DM me on twitter.

On My Mind #1

5 things on my mind from the week of 11/26


Michael Dempsey (@mhdempsey): I just cleared out my inbox and so figured I'd start the end of the "work week" with 5 things that are on my mind, largely inspired by my now-archived inbox.

This email spawns from this thread. These will likely evolve but the process will remain the same: Clear out email, filter down to 5 thoughts, send an email. As you'll see, some are random, most are unfiltered or poorly edited. Either way, let me know what you like, don't like, or want to dig into more.

Disclaimer: The world doesn’t need another newsletter, so feel free to unsubscribe guilt-free if you’re mistakenly getting this email.

5 Things

1) Unedited thoughts on passing due to founder-VC fit

By being thesis-driven you can really easily fall into a trap where you over-index on ideas you love because you've been looking for a solution for a long time. This is especially true in the weird areas I'm passionate about that are just getting to a point of commercial viability.

So how does this happen? You search for a long time, tweet some thoughts, maybe write a blog post, then you find someone online and email them, or someone says "hey I think you'd like this". Screaming into the abyss of the internet has worked. Then you meet the founder and you just don't connect with them or believe in them. And you have this weird rationalization of "well...maybe they'll figure it out" or "I don't like a lot of people." But eventually you have that "shit I just can't do this" moment and have to write a pass email, and you want to be respectful, but you are also literally passing on the company because despite believing in the market, the technology, etc. you don't believe in the founder(s). So you write this email that tries to point out the flaws you see in the pitch/business outside of the founding team (to be helpful), but then you feel like a fraud because in reality you'd 100% invest if someone else you connected with was pitching this exact company with the same flaws. And then you hit send and dread the time when you'll have to do this again in the next few days/weeks/months.


2) Truth Algorithms aren't helpful for Fake News

Since 2016 I've been shouting about the manipulation of digital assets and how to go about detecting them. I wrote this, then this, and have this tweet thread. A lot of research and conversations have pushed me to a single statement that I haven't heard an answer for: "a lie can travel halfway around the world while the truth is putting on its shoes." As VCs, we're drawn to the idea that technology can solve all problems, but we're being too idealistic if we think that using some form of ML to label "true vs. false," or to attach confidence levels to given stories, images, etc., is going to have a material impact.

What's even more clear is that at the pace at which information proliferates now, there is real economic cost attached to the speed at which lies make their way around the world. This has already been shown in the algorithmically controlled world of high-frequency trading, where a misinterpretation of a headline has flash-crashed markets and individual stocks.

Let's use the above quote and say we have a "Lie Wildfire," which in practice could be coordinated networks of humans, botnets, or whatever else designed to create and spread lies. Then let's say we have a "Truth Algorithm," which in practice is likely a set of algorithms trained to detect manipulation, generated assets, or outlier pieces of content on the internet. As we shift to an increasingly algorithmically controlled world, I'm sure there are many "flash crashes" outside of financial markets that will happen as the Truth Algorithm tries to chase down the Lie Wildfire, which is aimed at burning down economic, political, or commercial truths.
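A crude way to see how lopsided the chase is: simulate a lie compounding its reach during a detection delay before a slower-spreading correction even starts. Every number here (growth rates, delay, population) is made up purely for illustration, not a model of any real platform:

```python
# Toy simulation: a lie spreads immediately; the "Truth Algorithm" only starts
# pushing a correction after a detection delay, and the correction spreads
# more slowly. All numbers are made up for illustration.
POPULATION = 1_000_000
LIE_GROWTH, TRUTH_GROWTH = 1.8, 1.5   # per-hour multiplication of reach
DETECTION_DELAY_HOURS = 6

lie_reached, truth_reached = 10.0, 0.0
for hour in range(1, 49):
    lie_reached = min(POPULATION, lie_reached * LIE_GROWTH)
    if hour >= DETECTION_DELAY_HOURS:
        truth_reached = min(POPULATION, max(truth_reached, 10.0) * TRUTH_GROWTH)
    if truth_reached >= lie_reached:
        print(f"Correction only catches up after ~{hour} hours, "
              f"long after the lie has saturated its audience")
        break
else:
    print("Correction never catches up within 48 hours; "
          f"lie reached {lie_reached:,.0f} people, correction {truth_reached:,.0f}")
```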

3) Creative AI is improving and we need more datasets

Lately I've been trying out some of the open-source models that researchers have been releasing around various creative AI use cases. Two main things have stood out. First, there has been a steep acceleration in the quantity and quality of papers in the space, across use cases ranging from character and face generation to real-world → digital recreation and more. For example, looking at the coloring of line art, we saw things like DeepColor, then PaintsChainer, and now I'd imagine the state of the art is something closer to Style2Paints.

The second noticeable point is that, when trying to train your own model, it's clear the majority of the publicly available datasets are based on anime/manga, which has a very distinct style from both a color and shading perspective. This makes moving out of that style tough (for an amateur like me). I'd love to see someone open-source a dataset around other styles of animation.

Side note: A tangential dataset to check out would be SketchyScene, which can be used for experimenting with colorization.
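One low-effort way to bootstrap a non-anime colorization dataset is to synthesize line art from existing color images and train on the resulting (line art, color) pairs. A minimal, hypothetical sketch with OpenCV; the folder paths are made up:

```python
# Build (line art, color) training pairs from ordinary color images, so a
# colorization model isn't tied to anime/manga-style data. Paths are made up.
import glob
import os
import cv2

SRC_DIR, OUT_DIR = "color_frames", "line_art"   # hypothetical folders
os.makedirs(OUT_DIR, exist_ok=True)

def to_line_art(color_bgr):
    """Approximate line art via the classic invert-blur-divide sketch trick."""
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(255 - gray, (21, 21), 0)
    # Dividing by the blurred inverse keeps edges dark and flat regions white.
    return cv2.divide(gray, 255 - blurred, scale=256)

for path in glob.glob(os.path.join(SRC_DIR, "*.png")):
    color = cv2.imread(path)
    if color is None:
        continue
    sketch = to_line_art(color)
    cv2.imwrite(os.path.join(OUT_DIR, os.path.basename(path)), sketch)
    # The original color image is the target; the synthesized sketch is the input.
```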

4) People lie about Space.

I feel like because humans have been to space, it's now treated as this sector where so few people understand what's included in the overall "stack" or how difficult seemingly simple things are. Because of this, people can claim whatever they want (within reason) and get good coverage with little pushback. First asteroid mining, now robot-controlled space stations.

5) Hair is really important for digital characters (see paper commentary below)

A few papers this week

If you have any thoughts or feedback feel free to DM me on twitter.
