Our stance on Generative AI

Generative AI is a big deal.

Published on Aug 26, 2024

Back in 2020 I wondered when GPT would get attorney-client privilege. At that time, GPT was a niche technology that could complete sentences. In the intervening four years, we’ve begun to wonder whether language is thought. Today, AI can beat humans at many narrow tasks, from playing chess to writing code to folding proteins to drafting letters of recommendation.

Perhaps most importantly, it’s the first software tool with zero learning curve: It will literally teach anyone who can speak or write how to use it. Even if the technology doesn’t advance at all from where it is today, it’s a staggering advance in human cognition.

We worry that AI will flood all our channels with fakery, leading to the Dead Internet. But AI news has already flooded our timelines and conferences. So here’s what we’re thinking about how AI in general—and generative AI in particular—relates to FWD50.

A new kind of Chatham House Rule

In 1919, as the world laid down arms from World War One, British diplomat Lionel Curtis launched the British Institute of International Affairs, an institution devoted to better understanding between nations.

Eight years later, the organization created a rule for meetings that was intended to encourage participants to speak freely, without fear of attribution. Under this rule, “participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.”

What’s now known as the Royal Institute of International Affairs is located in Chatham House, and this rule became known as the Chatham House Rule.

The diplomats of a century ago could hardly have anticipated the cheap, widespread recording, streaming, and training technologies of 2024, or the constant distraction of digital devices. So we’re updating this rule in some of our gatherings, such as the Executive Cohort, with what we’re calling the FWD50 House Rule:

No digital recording, no attribution without permission, no livestreaming, and no use or training of LLMs on what’s discussed. Participants are expected to be present and will surrender their electronic devices except at predefined Screen Time breaks.

Content guidance

We received hundreds of amazing talk ideas in this year’s Call for Proposals. A majority of them focused on generative AI. While AI is definitely a game-changer for every part of society, we want to avoid three kinds of content:

  • Platitudes: Vague, clever aphorisms nobody can actually act on.

  • Hucksterism: Pitching digital snake-oil, from simple “wrappers” that just put a front-end on Large Language Models, to unsubstantiated claims and demos. Everyone’s eager to cloak themselves in the AI mantle, often creating features in search of a customer rather than the other way around.

  • Pitches: AI is real and valuable, but the market is also breathless. Billions of dollars will be lost in the coming years: many products break the law (copyright, anyone?) or lack a business model, while others will simply lose out in the inevitable acquisition and consolidation of the market. All of these companies are clamoring to stand out in the noisiest market in history.

So what do we want? Outside the tech industry itself, the conversation is increasingly practical. We’ve accepted that AI is here to stay, which forces us to answer myriad questions:

  • Who’s liable for AI errors, and is insurance a thing? FWD50 alum Ramy Nassar has a great post on this over on LinkedIn.

  • How do we watermark and sign blended human/AI content? I offered a suggestion in an op-ed in Wired last year; if you bump into the paywall you can check out a more detailed explanation on Substack. (A minimal sketch of what signing might look like appears just after this list.)

  • If an employee uses AI to automate part of their job, should they be fired or promoted? Now that algorithmic output is indistinguishable from some knowledge work, and with agentic AI on the horizon, what should we do about “rogue IT” at the individual level?

  • How does AI change existing processes? AI can automate many mundane tasks, and there are promising examples of chatbots and of tools that turn everyone into an analyst. At the same time, existing processes may come “under attack” by generative AI when they rely on something as a proxy for value (for example, a grant application gets much easier to write, so granting processes may be overwhelmed by generated proposals).

  • What happens when software development is basically free? Is AI-assisted data analysis “good enough” to supplant an analyst? Will non-technologists start writing code? Even though it’s been decades since I coded, I used Claude and Google Apps Script to develop an activation for Startupfest. The cost of developing software is about to plummet, ushering in an era of transient personal micro-apps tailored to very narrow use cases that were previously ignored.

  • How does someone get recourse? One of the big differences between a machine and a human is that with humans, you have recourse. When someone feels marginalized by an algorithm, what can they do about it? This isn’t a new problem (I wrote about redlining, big data, and civil rights 12 years ago for O’Reilly), but AI exacerbates it dramatically. In theory, AI handles the majority of use cases so humans can manage the exceptions, but if we aren’t careful, automation becomes an accountability sink behind which bad laws and lazy lawmakers can hide.
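
On the watermarking question above, here’s a rough sketch of what “signing” blended content could look like. To be clear, this is an illustration of the general idea, not the scheme from the Wired op-ed; the manifest format, field names, and key handling are all hypothetical. The author publishes a manifest declaring which sections are human-written and which are AI-assisted, then signs a hash of it, so any later tampering with either the text or the provenance claims is detectable. (Python, using the cryptography package.)

```python
# A minimal sketch of signed provenance for blended human/AI content.
# Hypothetical manifest format and key handling; not the Wired proposal.
# Requires the `cryptography` package: pip install cryptography
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sha256(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


# The author declares, section by section, what was human-written and
# what was AI-assisted. Hashing each section binds the claim to the words.
manifest = {
    "title": "Our stance on Generative AI",
    "sections": [
        {"sha256": sha256("Generative AI is a big deal..."), "provenance": "human"},
        {"sha256": sha256("Summary of this year's proposals..."), "provenance": "ai-assisted"},
    ],
}

# Canonicalize the manifest and sign it. In practice the key would be a
# persistent identity key (personal, organizational, or platform-issued),
# not one generated on the fly.
payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
author_key = Ed25519PrivateKey.generate()
signature = author_key.sign(payload)

# Anyone holding the author's public key can verify. Editing either the
# text or the provenance labels raises InvalidSignature.
author_key.public_key().verify(signature, payload)
print("provenance manifest verified")
```

The hard parts aren’t in the code: who issues and vouches for identity keys, and how readers verify signatures without thinking about them, are exactly the sorts of questions we want on stage.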

Our use of AI as organizers

We think organizations should be transparent about how they plan to leverage this remarkable technology. We plan on having an AI policy alongside our Privacy Policy and Code of Conduct, but we’re still learning and figuring out the details.

Here are some of our internal guidelines:

  • How we use AI: We’ve used AI to help us with some tasks, including data analysis, writing code, and proofreading. At the moment we don’t like its output for marketing copy or communications; it “smells fake” somehow. That may change, and if it does, we’ll update our policies.

  • How we’re transparent: When we do use AI, we’ll be transparent about it (as in the case of the activation I mentioned earlier).

  • What we expect from our community: We don’t accept AI-generated proposals for talks (although we definitely received some). Our logic here is that if you’re relying on an algorithm to describe your idea, that idea probably isn’t well developed enough to take the stage. If an AI can generate a proposal for a talk, we’d prefer that it give the talk as well (seriously, if you want to have an AI give a talk, we’re interested).

  • AI can help with accessibility: From captioning to classification to live translation, AI promises to level the playing field in many ways for those at a disadvantage. We’ll continue to explore and experiment with those technologies, and share what we learn, using tech to make society better for all.
