SoC #54: ChatGPT And Why AI Makes Me Both Excited and Concerned

5 minute read

I'm Lisa 👋 Welcome to this week's edition of Stream of Consciousness - the newsletter for product leaders who want to build products and their careers more consciously, in ways that are inclusive, holistic, ethical, accessible, and sustainable.





Last week GPT-4 was released with much fanfare, with demos showing exciting new features like the ability to generate responses with 8x more text, turn a rough hand-drawn sketch of a web page into a functional website, and help you sort through complicated tax questions.

It can also do things like pass the bar exam in the 90th percentile (GPT-3.5 placed around the bottom 10%) and generate text that analyzes the content of images (although the latter isn't publicly available yet because of concerns about how it might be misused).

You can check out the full developer demo video here and also the release notes.

GPT-4's release also came with significant warnings from OpenAI's CEO Sam Altman, as seen in this ABC News interview with Rebecca Jarvis, and from other employees in one of their launch videos.

While I'm grateful that OpenAI is addressing these concerns and potential downsides of the technology openly (like disclosing GPT-4's known limits and writing usage policies to promote using the tool for good) rather than not addressing them at all, I believe more can be done to find a happy medium: gathering the real-world context needed to develop GPT into a relevant tool while making sure it helps humans far more than it harms us.

Specifically:

1) Make training data more transparent

The training data used for ChatGPT is protected by OpenAI and largely remains a black box.

That's understandable from a competitive standpoint, yet not being able to see the dataset and understand its intricacies means we are being exposed to forms of bias we aren't aware of.

For example:

Additionally, many users don't think about or aren't aware of the knowledge cutoff dates of the data each model was trained on:

GPT-3.5 (the most recent free, publicly available version of GPT at the time of publishing) was trained on data with a cutoff of September 2021, while GPT-4's training concluded in August 2022.

This means the model you are using may not have access to data or awareness of events that occurred after that date.

Here's an example using GPT-3.5:

While GPT will spit out this cutoff date given certain inputs, it doesn't always inform the user.

For example, I asked: "What are the biggest problems in the world right now?" and its knowledge cutoff date was not mentioned in its response.

So I asked...

I initially found out about this by accident after asking a very specific question about the current product management job market.

This could be made more explicit to the user up-front (e.g. by including a disclaimer on the main chat screen you land on after logging in), as well as within chat responses where the cutoff date could impact the accuracy of the answer (e.g. when words like "current" and "today" appear in the question).
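To illustrate that second idea, here's a minimal sketch in Python of how a chat interface could flag time-sensitive wording and prepend a cutoff disclaimer to the response. All names here (KNOWLEDGE_CUTOFF, TIME_SENSITIVE_TERMS, answer_with_disclaimer) are hypothetical and for illustration only - this is not how OpenAI's product actually works under the hood.

  # Sketch: flag time-sensitive questions and surface the model's knowledge
  # cutoff alongside the answer. Hypothetical names, not OpenAI's implementation.

  KNOWLEDGE_CUTOFF = "September 2021"  # e.g. GPT-3.5's training-data cutoff
  TIME_SENSITIVE_TERMS = ("current", "currently", "today", "right now", "latest", "this year")

  def is_time_sensitive(question: str) -> bool:
      """Return True if the question likely depends on up-to-date information."""
      q = question.lower()
      return any(term in q for term in TIME_SENSITIVE_TERMS)

  def answer_with_disclaimer(question: str, model_answer: str) -> str:
      """Prepend a knowledge-cutoff disclaimer when the question is time-sensitive."""
      if is_time_sensitive(question):
          return (
              f"Note: my training data ends in {KNOWLEDGE_CUTOFF}, so this answer "
              f"may not reflect more recent events.\n\n{model_answer}"
          )
      return model_answer

  # Example: the disclaimer would appear for "What are the biggest problems in the
  # world right now?" but not for "Explain photosynthesis."

Even a simple heuristic like this would surface the limitation exactly where it matters, instead of relying on the user to think to ask about it.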

2) Use specific examples up-front in the user flow to explicitly highlight potential risks and known gaps within the version of GPT that the user is on

OpenAI has been clear that they do internal testing (pre-training and fine-tuning) and that keeping things constrained to a petri dish would mean never fully understanding the breadth of real-world implications or the ways to mitigate them.

I agree with this, but I also think more can be done to pull out specific examples in plain English to help users better understand what they are in for before asking GPT questions.

For example, on March 15th, OpenAI released the GPT-4 "System Card" - a 60-page document that outlines safety challenges, OpenAI's safety processes, and known gaps.

Here's a summary of the safety challenges uncovered:

It also includes examples of what they fixed - specific updates made to protect users (see pg. 8 of the System Card).

The conclusion reached through this testing?

"We demonstrate that while our mitigations and processes alter GPT-4’s behavior and prevent certain kinds of misuses, they are limited and remain brittle in some cases. This points to the need for anticipatory planning and governance."

In one launch video, employees talk about implementing "internal guardrails around adversarial usage, unwanted content, and privacy concerns," reiterating that it's a model that they need to train.

But the specifics of what terms like "adversarial usage" actually mean in practice are buried, worded in complicated language, and not easy for users to find.

If links to specific examples of known limitations were shown on the main chat page when you log in to use GPT, users would be better informed and able to understand these limitations in plain English. This would help build more trust in both the product and the company.

3) Involve a broader decision-making team

A lot more could potentially have been done to address known safety issues prior to release if OpenAI hadn't shifted from an open-source not-for-profit to a closed-source, profit-generating entity.

But we are where we are.

As ABC News' Rebecca Jarvis points out, "Its (ChatGPT's) behaviour is very contingent on humans to choose what they want its behaviour to be. Therefore, the choices that humans are making and feeding into the technology will dictate what it does."

Right now a single entity is responsible for making these choices, which have the potential to impact humanity significantly.

While OpenAI has made efforts to gather public input, they are still the gatekeepers.

We don't make choices this way in any industry outside of tech, and to me, this is terrifying.

Entrepreneur Justin Welsh wrote recently, "You have to master natural intelligence before artificial intelligence becomes useful."

If OpenAI's mission is to "ensure that artificial general intelligence benefits all of humanity," bringing representatives from all walks of human life and natural intelligences into the decision-making process, rather than relying on a single team of gatekeepers, would be advantageous.

While AI tools like ChatGPT hold massive potential to transform our lives in positive ways, we should be slapping a big "HANDLE WITH CARE" sticker on them, both in their current state and as we continue building them, to minimize and prevent direct and indirect harms.

I couldn't have summarized this better than Kevin Roose in his recent New York Times article on GPT-4:

"The worst A.I. risks are the ones we can’t anticipate. And the more time I spend with A.I. systems like GPT-4, the less I’m convinced that we know half of what’s coming."

What do you think of GPT-4 and AI being released into the wild? Let me know!

Conscious Bytes 📰

BRAILLE 2.0: Check out The Monarch, a new device with a tactile display that makes images touchable for braille users.

DISCORD PRIVACY: Discord recently released new AI features that supercharge its existing chatbot, Clyde, with a boost from OpenAI. The only problem? Discord quietly updated its privacy policy along with the changes. Users noticed...and were not happy, and Discord has since updated the policy again amidst the backlash.

MAKING APP ACCESS A CONSCIOUS CHOICE: How many times have you gotten lost in an Instagram wormhole? Unpluq is a company that's on a mission to change that, with a physical tag used to unlock specific apps so you can focus on what matters most. Check out a recent interview with founder Caroline Cadwell here.

Soulwork 💜

  • Soul is one of my favourite Pixar movies + it's got a great message. If you're struggling with the rat race and hustle culture and feeling like you need more meaning in your life, I highly suggest you take a break to watch this (on your own or with your kiddos).

Thanks for Reading!

If you're looking to improve as a conscious product leader and achieve outcomes in your career and the products you are building more intentionally, there are 3 ways I can help you:

  1. The Product Manager's Career Guide: If you are an early-stage product manager with 0-5 years of experience and want to design your career more consciously, this is for you. Get clarity on what to focus on and why, define the trajectory you are moving towards, and use my practical tools and downloadable templates to help you iterate on the systems you are using to reach your goals.

  2. Stitch: The Best Resources for Product Managers: The most comprehensive resource for anyone in product looking to save hundreds of hours of Googling for anything from "How to do customer interviews" and "How to get promoted" to "Roadmap templates", "How to build accessible products", "Stakeholder alignment", "Product pricing", and more. Over 2,000 practical resources organized by product area in a single .pdf.

  3. 1:1 Coaching and Feedback: Schedule a 30-minute or 60-minute video session where we tackle the most challenging problem you are facing right now, or ask me for product-specific feedback on something you're building.

Have a great week!

-Lisa ✨

Headshot of Lisa Zane against yellow background. She is wearing a black button up shirt and has long brown hair, brown eyes, and olive skin.

How Was This Edition?


If this was helpful, you can support me by forwarding it to a friend who you think might also like it or by supporting my work through a small donation.

Interested in sponsoring Stream of Consciousness and promoting yourself to over 1K Product Managers, Senior Product Managers, VPs and Directors of Product?