Lost Short Essays From Twitter - #1
Prompting as a Meta + On AI Transparency and Alignment + On Worldcoin, mega-funds, and value destruction
Every now and then I do this stupid thing where I start writing a tweet and turn it into a longer-form tweet/short essay of sorts. These often get lost, so with the end of the year approaching, I’m sending a few of them to those who opted into this newsletter.
The three below are on topics across AI Prompt Engineering, AI Alignment, and Mega-funds in Crypto:
Prompting as a Meta
Originally published here
There's been a belief that in the near future prompting will go away as models begin to take natural language and turn that into prompts to generate outputs.
It makes sense in theory: AI should use AI to generate better prompts for AI. In the same way we've abstracted away a lot of things for users who don't know how to code, we will do the same for users who don't know how to prompt.
It's likely that for some *majority* of users, prompting will slowly dissipate. However, I'm increasingly unsure this is the correct take at the professional level (and naturally "professional" quality is a window that shifts over time).
Perhaps instead prompting is a meta like any other: one that may constantly evolve, but will always have people who master the meta of a given model or product for maximum efficacy and maximum "quality".
On AI Transparency and Alignment
Originally published here
There's a lot of talk about what disclosures the frontier model developers should make about their approaches, with many now focusing on self-improvement (previously people focused on parameter size and Mixture of Experts approaches).
It's pretty clear why, in a capitalistic sense, it is dominant not to be transparent: transparency would erode competitive edges as well as the amount of time a given lab holds an edge (an edge likely collapsing to single-digit months now, with the labs hoping that a few of them break away in '24 and '25).
That said, if as a company you claim to care about alignment and to be working on it deeply, it seems obvious to me that alignment is not a top priority if you aren't willing to disclose high-level things about the model over time.
I'm a pretty strong capitalist, so I'm not sure I have strong views on whether this is "moral" or not, but having bounced in and out of diving deep into the AI Alignment world over the years, it does feel like we're going to start seeing labs needing strong, defined views on transparency gradients.
Appendix:
- You can currently bet on Manifold and a few other prediction markets on whether GPT-5 will be self-improving (you have also long been able to speculate on GPT-4's parameter size; the funny part is the bet only resolves if there's common knowledge or OpenAI discloses...idk how we define common knowledge, nor how we define parameter size if it's a Mixture of Experts of 250B-parameter models).
- This all mattered less when research was open and humans were transient, because you had stronger data leakage. As we know, AI research is closing down, and interestingly I think talent is flowing less between the large AI labs. That's not to say there isn't some flow, but compared to 2016-2020 it seems to have slowed quite a bit. We'll likely see another wave of movement as VC-backed labs consolidate and fail in the next few years, and as politics OR progress creates a clearer tiering of labs, even amongst OpenAI, Anthropic, MSR, DeepGoog, and Apple.
On Worldcoin, Mega-Funds, and Crypto Value Destruction
I'm actually not a Worldcoin hater; I think the pitch for "proof of person" is interesting and something I’ve written about for years.
That said, it was kind of an absurd move to try to push Worldcoin's high-FDV, low-float token $WLD out at a ~$20B valuation.
But if as an investor you assume the token nukes 50%+ on that move no matter what before any unlock, at least then you end up at a $10B FDV (where we are today, probably going lower) instead of a lower figure. It is financially dominant, regardless of how any of us feel about it.
We've seen this playbook run already on things like $APE tbh, and we're going to keep seeing it as massive crypto funds have to justify allocating $50M-$100M+ into projects, which means valuations on private rounds get to $1-$5B (and then you need a 2-3x on token launch to make a return, factoring in the slow bleed into unlock).
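To make the math behind that concrete, here's a minimal sketch of the launch FDV a fund implicitly needs when it enters a large private round, targets a 2-3x, and expects a steep post-launch bleed. All figures are illustrative assumptions, not actual deal terms:

```python
# Illustrative sketch: given an entry FDV from a private round, a target
# multiple, and an expected post-launch drawdown before the first unlock,
# what FDV does the token need to launch at for the position to still
# mark at the target multiple on paper? (All inputs are assumptions.)

def required_launch_fdv(entry_fdv: float, target_multiple: float, expected_drawdown: float) -> float:
    """FDV the token must launch at so that, after the expected drawdown,
    the position still marks at target_multiple * entry_fdv."""
    return entry_fdv * target_multiple / (1 - expected_drawdown)

# e.g. a hypothetical $4B private round, targeting 2.5x, assuming a 50% bleed
fdv = required_launch_fdv(entry_fdv=4e9, target_multiple=2.5, expected_drawdown=0.5)
print(f"required launch FDV: ${fdv / 1e9:.0f}B")  # -> required launch FDV: $20B
```

The point of the sketch is that the launch valuation is forced upward by both the round size and the anticipated bleed, independent of what the project is actually worth.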
It's 2023, interest rates are 5%+, crypto is in a two-year bear market, and we're somehow basically still seeing the SPAC and ZIRP playbook run all over again...Except unlike equities, it's far harder to short alts in size, and the crypto markets are far less efficient, so squeezes can be manipulated far more easily and the bleed is "slow" as emissions leak out over a three-year lockup period (this assumes the lockups are honored and not hedged out or sold OTC).
I'm incredibly biased, and this isn't meant to aggressively flame some peers, but it's pretty hard at this point to argue that oversized crypto venture funds aren't damaging to the entire space, and that they aren't structurally set up to dump, in size, on retail while nuking any token their portfolio launches, which in turn destroys the long-term vision of the project and creates further overhang for the entire space.
The market simply cannot, at scale, support the valuations these rounds necessitate over the next 2-3 years, and if the argument is that it will in a few years (as we believe), then these venture firms should simply lock their tokens for 5+ years and earn rewards along the way for participating in the network.