SoC #60: The Top Ethical Issues Tech is Facing
I'm Lisa 👋 Welcome to this week's edition of Stream of Consciousness!
Subscribe to join 1503 conscious product leaders who want to put their time and energy into building more conscious products (and careers).
Throughout history, every time new technology was introduced, the world and society at large had to adapt.
I believe right now we’re at another critical inflection point - one that will impact the way in which we live, think, relate, and work both rapidly and exponentially.
Lately, AI has been in practically every headline, every day.
I recently watched CBS' full interview with "the Godfather of AI," Geoffrey Hinton, in which he discusses how AI evolved and his concerns about where it's going. The whole time, I kept thinking, "this is just scratching the surface."
The interview inspired me to focus this edition on three major buckets of ethical issues tech is facing - I'd love to hear your feedback on these and any others you think should be included in this list.
1) Persuasion
The Center for Humane Technology exists today to "align technology with humanity's best interests" and has focused extensively on making "persuasive technology" more humane - technology that influences or changes people's attitudes, behaviours, beliefs, and/or actions.
We've mostly seen how this plays out with social media and search engines (covered in depth in the movie The Social Dilemma). The known externalities include direct negative impacts on our mental health and attention spans, misinformation, coded biases that directly reflect our own individual and societal biases, the erosion of democratic functioning (i.e. the Cambridge Analytica scandal), and not-insignificant neurological consequences like physiological addiction, influence on language development, and changes in how we process emotional signals.
We’re in a state right now where individual corporations are deciding what is “good” vs. “bad” for all users of their products (us!) and playing moral roulette every single day.
This is likely going to shift into high gear incredibly quickly with how rapidly AI is both evolving and being adopted in a largely unregulated way.
In this part of the interview, Hinton discusses how AI has the ability to collect, absorb, and digest exponentially more information than a human brain can - but one thing it doesn't have is a consistent world view (or any world view at all).
We're becoming more and more reliant on this tech, treating it as a "second brain," yet it operates in a way that is fundamentally different from how our brains interpret complex information from multiple senses to decide what we believe, what we do, and how we do it. I worry that we will be pulled away from listening to our own instincts and become over-reliant on an incomplete technology - like trying to reach a destination in a boat, only to realize it's full of holes and its "trusted" navigation system is actually very broken.
AI systems are able to manipulate our emotions, beliefs, and behaviour without our knowledge or consent, and they can exploit us, especially when we are vulnerable.
AI deepfakes are on the rise. Our inability to discern between images that were AI generated vs. taken in real life has been made blatantly obvious as of late, with the Selena Gomez Met Gala photo fooling millions and an AI-generated photo that was selected as the winner of the Sony World Photography Awards.
Biases are embedded in the data used to train the models - knowledge cutoff dates mean current information is often missing from the datasets, and the data the models were trained on, regardless of its cutoff date, is already biased because it mimics our own societal biases.
For example, if you type in "greatest leaders of all time," you get a list that is inherently gender-biased.
AI has already influenced our autonomy, privacy, expression, and decision-making, and all of this is moving at a pace that is too fast for policymakers to keep up with, leaving us incredibly vulnerable, often without our consent.
I’m extremely concerned about what our future will look like with the combination of AI, social media, automation, and hardware and robotic integration in terms of how we are persuaded on an individual and societal level.
In Hinton’s words, “There’s always going to be things you didn’t think of.”
2) “Tech Emissions”
The tech revolution has made parts of our lives easier, reducing friction, improving productivity and more.
But it’s also come at a cost to the environment.
The energy required for technology to function is significant: powering data centres and tech infrastructure contributes to greenhouse gas emissions, and many tech companies still rely on carbon-intensive manufacturing processes and supply chains.
The byproducts of technology also pollute our water, air, and soil. Electronic waste like computers, smartphones, and tablets contains hazardous materials that end up in the environment when devices aren't disposed of properly, and many manufacturing processes release toxic chemicals.
We literally now have plastic circulating in our bloodstreams.
In Avatar-esque fashion, we are depleting natural resources: lithium and cobalt to power batteries; metals and rare earth elements to manufacture smartphones, computers, lasers, and other emerging technologies; tungsten and copper to produce electronic components; fossil fuels to power manufacturing processes; and more.
While many tech companies are striving for net-zero and sustainable solutions, tech carbon emissions are still in the range of millions of metric tonnes per year and are comparable to the aviation industry according to a recent study.
The UN has now deemed carbon removal "essential," alongside radical emissions cuts, to keep global temperatures from rising more than 2°C - a rise that would mean more frequent extreme weather events, heatwaves, droughts, air that is more difficult to breathe, crop failures, food shortages, rising sea levels that threaten where we can actually live, heat-related illnesses, major changes in biodiversity, economic collapse, and more, as David Wallace-Wells describes in vivid detail in his book The Uninhabitable Earth.
I do not want to see these things play out in real life.
3) Designing Tech for our Actual Population and Physical Makeup
Much of the tech available to us today wasn't actually designed based on datasets that represent our true population - people of different genders and ethnicities, with varying levels of physical ability and disability.
While technology has exponentially increased how productive we can be and expanded what falls within the realm of possibility for the human race, it has also come at some significant costs.
Most products we use in our day-to-day lives are designed based on human factors and ergonomics to "fit the measurements of the human body".
However, many of the datasets used are historically male-centric and often don’t include many ethnicities or people with disabilities.
I have a particular interest in this area because I live with a disability (Ehlers-Danlos Syndrome) and, as a result, have non-standard body proportions that affect how literally everything fits me and the choices I have when purchasing products.
You might not think this is a big issue, but it can have very significant consequences. For example, crash test dummies were traditionally built based only on the "standard reference male", making women 47% more likely to be injured in a car accident.
While feminist advocate Caroline Criado Perez discusses many of these issues and more in her book on data bias called Invisible Women, I would argue there is significant room for improvement in the datasets we use in design to incorporate even more diversity beyond binary gender - specifically, spectrums of gender, ethnicity, and ability/disability.
There are companies that are making accessing this type of population data easier for designers and product teams now (i.e. Fable).
Additionally, the technology we use today often doesn’t match our physiological needs.
We were built to move, yet so many of our tech products have been designed to keep us stationary (i.e. computers).
Physical inactivity is now a leading cause of disease and disability according to the World Health Organization.
Not only are our bodies not moving as much as they truly need to; our eyes are also bearing the brunt of our screen use, and our visual systems are being damaged in the process.
Beyond the more widely known problems - like generalized eye strain, and the fact that looking at a screen means we blink less, leaving our eyes less lubricated than they should be - there are less commonly discussed impacts that computers and screen use have on our visual systems.
Our eyes were built to work together binocularly to focus on focal points at different distances.
This depends partly on how flexible the lens within each eyeball is. Like a muscle, your lens changes shape to bring things into focus and needs to be worked throughout its range to maintain its function.
If we are spending high percentages of time focusing on objects at the same distance (i.e. at a computer screen or at our phone or VR or AR device), it can cause problems in how quickly our lenses can change shape when attempting to focus on objects that are either closer or farther away.
Researchers have also found that the number of hours spent in front of a screen correlates directly with the likelihood of developing convergence insufficiency and eye-teaming issues, which can leave your eyes unable to work together in a coordinated fashion when looking at objects. These issues can cause headaches, blurred or double vision, poor hand-eye coordination, motion sickness and dizziness, and a plethora of other symptoms that aren't fun and often require intensive and expensive functional vision therapy to correct.
While I can’t predict the future, I believe that if we don’t continue to take proactive steps towards improving these three areas, the impacts of technology will not be net-positive in the grand scheme of our lives.
I agree with what entrepreneur Ayah Bdeir recently said in a piece by Wired:
“We need to convince people to invest more in responsible tech.”
What do you think?
Conscious Bytes 📰
THE SUSTAINABILITY CONUNDRUM: Many smaller companies can't afford to prioritize sustainable product development over simply staying afloat. Plan A recently published this article showing businesses how they can balance the two with limited resources by mapping out a sustainability budget.
THE FUTURE OF SMARTPHONES: How do you think smartphones will evolve? Here are some hot takes from tech leaders.
PROBLEMATIC HEROISM IN TECH: “The Godfather of AI” is all over the headlines right now, and I’ve mentioned him several times in this newsletter - but what about the people who speak up during their employment to try to change the trajectory of what they are building and its impacts to make more conscious changes in the moment? Radhika Dutt recently published this intriguing take on accidental heroes and villains in tech.
Soulwork 💜
✨ Self-doubt and imposter syndrome are covered a lot, but this is the first time I’ve seen this take on how to deal with them - creating different characters for yourself during your day.
Thanks for Reading!
If you're looking to improve as a conscious product leader and achieve outcomes more intentionally - in your career and in the products you're building - I'd love to help you through:
Have a great week!
-Lisa ✨
How Was This Edition?
If this was helpful, you can support me by forwarding it to a friend who you think might also like it or by supporting my work through a small donation.
Interested in sponsoring Stream of Consciousness and promoting yourself to 1.5K Product Managers, Senior Product Managers, VPs and Directors of Product?