
⚙️ Current harms and the real-world impacts of algorithmic decision-making

Good morning. British Prime Minister Keir Starmer vowed at COP29 to reduce the U.K.’s greenhouse gas emissions by 81% by 2035.

But, as we’ve talked about in the past, Starmer is also trying to build out plenty of AI infrastructure across the country, which poses all sorts of risks to the climate: energy consumption, grid destabilization, water consumption and emissions.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • 🤖 AI for Good: Robotic surgery

  • 🚘 Waymo expands operation in LA

  • 🏥 ICU nurse on AI and the misplaced war on efficiency

  • 🚨 Current harms and the real-world impacts of algorithmic decision-making

AI for Good: Robotic surgery

Source: Johns Hopkins University

Researchers at Johns Hopkins University recently trained a robot to perform several surgical tasks. They said that the robot performs these tasks just as skillfully as human surgeons. 

The details: The researchers trained a machine-learning model on hundreds of videos of surgeries performed by human surgeons. The videos were captured and collected by da Vinci surgical robots, which are currently operated by human surgeons. 

  • Using the model, they trained a da Vinci robot to perform three key tasks: manipulating a needle, lifting body tissue and suturing. 

  • Importantly, the researchers said the robot performed well at tasks it wasn’t specifically trained to do, which marks a significant departure (and improvement) from previous attempts to design autonomous surgical robots. 

"All we need is image input and then this AI system finds the right action," lead author Ji Woong Kim, a postdoctoral researcher at Johns Hopkins, said. "We find that even with a few hundred demos, the model is able to learn the procedure and generalize new environments it hasn't encountered."

The next step for the team is to train the system to perform entire surgeries, rather than just specific surgical tasks. You can watch the robot in action here.

But: The impact of hallucination — which is a key flaw of machine learning — was not addressed by the researchers; it is unclear how that would be mitigated. There is also a risk here of skill atrophy due to long-term over-reliance. 

This game-changing device measures your metabolism

Lumen, the world's first hand-held metabolic coach, quickly and easily measures your metabolism with your breath.

Based on this daily breath measurement, Lumen’s app lets you know if you’re burning fat or carbs, then gives you tailored guidance to improve your nutrition, workouts, sleep and even stress management.

Lumen helps you build a personalized, scientifically sound health plan based on your data.

Waymo expands operation in LA

Source: Waymo

Self-driving unit Waymo on Tuesday opened its Los Angeles operation to anyone in the city.

The details: Waymo started operating in L.A. earlier this year, and said Tuesday that around 300,000 people had signed up for the waitlist. Waymo has been gradually granting access to people on that list, and said that, thus far, 98% of recent riders were satisfied with the experience.

  • Riders can now take an autonomous taxi ride across 80 square miles of the city (L.A. spans roughly 500 square miles in total). 

  • This comes shortly after Waymo announced that its total fleet is now delivering 150,000 paid trips each week. 

But: While Waymo has stayed relatively safe so far, it has been scaling slowly and its fleet size is quite small (only 700 vehicles in total, as of September). Hallucinations, meanwhile, have not gone away, and remain a constant safety risk with self-driving cars. 

I spoke recently with Missy Cummings, a professor of autonomous systems at George Mason University, who expressed concern that this early safety data will not scale up as Waymo’s operations broaden (particularly with its sights set on future highway operation). 

Automate competitive and market research with Klue AI

Klue AI automates competitive/market intelligence to provide real-time insights and recommendations you can action today:

  • Noise Reduction: Filters out 90% of noise to surface actual intel

  • Summarize Alerts: Summarize any article in alerts to get to “why it matters” faster

  • Review Insights: Summarizes competitor reviews for positive or negative sentiment

Headlines:

  • How to protect yourself from government surveillance (Wired).

  • OpenAI employees flock to former CTO’s new startup (The Information).

  • Phony X accounts are meddling in Ghana’s election (Rest of World).

  • This climate startup is challenging Tesla in the race to electrify big rigs (CNBC).

  • Donald Trump’s EPA pick wants to ‘make America the AI capital of the world’ (The Verge).

If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

Tools:

  • FormsFlow: Maximize efficiency with AI-powered form automation and analytics.

  • Blue Prism: Streamline workflows with AI-enhanced, scalable automation.

ICU nurse on AI and the misplaced war on efficiency

Source: Created with AI by The Deep View

There has been a rush, recently, to integrate generative AI into hospitals and medical institutions. The idea behind these automated note-takers and pattern-recognition systems is broadly to improve patient outcomes while reducing clinician burnout, according to Microsoft, which recently released a suite of medical AI products.

In the rush for integration, the flaws inherent to these systems — privacy, discrimination and hallucination — are often overlooked, as are the impacts of these systems. 

What happened: Michael Kennedy, a neuro-intensive care nurse in San Diego, told Coda that the integration is already underway, and that it's weakening the autonomy and intuition of medical professionals.

  • “The reasoning for bringing in AI tools to monitor patients is always that it will make life easier for us, but in my experience, technology in healthcare rarely makes things better. It usually just speeds up the factory floor, squeezing more out of us, so they can ultimately hire fewer of us,” Kennedy said.

  • “‘Efficiency’ is a buzzword in Silicon Valley, but get it out of your mind when it comes to healthcare,” he said. “When you’re optimizing for efficiency, you’re getting rid of redundancies. But when patients’ lives are at stake, you actually want redundancy. You want extra slack in the system. You want multiple sets of eyes on a patient in a hospital.”

Kennedy said that his hospital recently introduced an AI alert system that pings the nurses to make sure they’ve completed certain tasks. While the intention might be to broaden the safety net, the reality behind this is that “critical thinking is being shifted elsewhere — to the machine. I believe the tech leaders envision a world where they can crack the code of human illness and automate everything based on algorithms. They just see us as machines that can be figured out.”

This could result in a cadre of younger nurses who are seriously ill-equipped. 

And then what happens when the power goes out? 

Current harms and the real-world impacts of algorithmic decision-making

Source: Created with AI by The Deep View

When we talk about the threat of artificial intelligence, there is a contingent of researchers who like to split it into two categories: current harms and future risks. To many of these researchers, the ‘current harms’ category is far more important than the ‘future risks’ one; future risks, beyond being difficult to quantify, are based at least in part on dramatic hypothetical scenarios, which serve to draw attention away from active harms. 

One of the most prominent examples of these active harms has to do with algorithmic discrimination, which has been going on for more than a decade.

One of the greatest impacts of algorithmic discrimination — or simply of failed and flawed algorithms — is in the arena of medical decision-making. 

What happened: In 2018, the U.K. nationalized its liver transplant system, introducing an algorithm — the Transplant Benefit Score (TBS) — to prioritize the list of recipients. 

  • The algorithm is designed to predict how long each patient might live with (and without) a given liver; the difference between those two numbers makes up their TBS. Patients with higher TBS scores are given a higher priority than those with lower scores. 

  • A 2023 investigation by the Financial Times found that the algorithm is age-discriminatory in its predictions — Paul Trivedi, a hepatologist at the University of Birmingham, told the FT at the time: “If you’re below 45 years, no matter how ill, it is impossible for you to score high enough to be given priority scores on the list.”

A May 2024 study published in The Lancet confirmed the FT’s investigation, finding that “the U.K. liver allocation algorithm prioritizes older patients for transplantation by predicting that advancing age increases the benefit from liver transplantation.”

An important element of this, revealed in the FT’s investigation, is that clinicians have no opportunity to override the algorithm, and there is no process for appealing its decisions. 

Princeton University computer science professor Arvind Narayanan, in collaboration with Angelina Wang, Sayash Kapoor and Solon Barocas, recently found that the reason behind this discriminatory flaw in the algorithm is simple: it predicts the likelihood of a patient surviving five years with and without a liver; this five-year cap completely blunts any potential prioritization for younger patients with life-threatening diseases. A rough sketch of the effect follows the list below. 

  • This, Narayanan said, was pointed out by a patient group in 2015, three years before the algorithm went into effect. 

  • Narayanan said this age bias could be mitigated in a few different ways, the main one of course being a larger, more inclusive dataset. The system, he said, could also stop using age as a factor entirely. 
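To make the cap concrete, here is a minimal, hypothetical sketch. It is not the actual TBS implementation (the real model weighs many clinical variables), and the survival numbers are invented, but it shows how truncating predicted survival at five years can rank an older patient above a younger, critically ill one:

```python
# Hypothetical illustration of the five-year cap described above.
# Numbers are invented for demonstration; this is not the real TBS model.

CAP_YEARS = 5  # the algorithm only considers survival within five years

def tbs(years_with_transplant: float, years_without: float) -> float:
    """Transplant Benefit Score: predicted survival gained from a transplant,
    with each survival estimate truncated at the five-year horizon."""
    return min(years_with_transplant, CAP_YEARS) - min(years_without, CAP_YEARS)

# A 30-year-old with an aggressive, life-threatening disease: a transplant
# could plausibly give them decades, but without one they may live ~1 year.
young = tbs(years_with_transplant=40.0, years_without=1.0)

# A 65-year-old with a slowly progressing disease: a transplant adds a few
# years within the window; they would survive ~6 months without one.
older = tbs(years_with_transplant=4.8, years_without=0.5)

print(f"young patient TBS: {young}")  # 5.0 - 1.0 = 4.0
print(f"older patient TBS: {older}")  # 4.8 - 0.5 = 4.3 -> ranked higher
```

The young patient stands to gain decades of life, but the cap means the model can never see more than five of those years.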

Going deeper: Though there might be a few other alternatives to predictive algorithms in this context — simpler statistical formulas, a larger pool of donor organs, etc. — the reality is that prioritizing organ transplant lists has always been an ethical challenge. And, as Narayanan said, “human judgment doesn’t scale.” 

The health system needs to make efficient and ethical use of a very limited and valuable resource, and must find some principled way of allocating it to many deserving people, all of whom have reasonable claims for why they should be entitled to it. There are thousands of potential recipients, and decisions must be made quickly when an organ becomes available.

Arvind Narayanan

The U.K.’s Liver Advisory Group has the power to enact some change here; the group has yet to release information from its latest meeting, held in May 2024, so it’s unclear if it is planning to address the issue. The group did not return a request for comment. 

Humans make mistakes. But humans — who are often using their emotional and moral awareness to try their best — are held accountable for their failings. 

Machines are not. 

For this reason alone, decision-making ought to stay human, or closely human-supervised. Machines scale, but machines are cold and flawed in ways that people don’t tend to think about. 

One of the most frustrating things about AI is that, in many cases, it’s an unsupervised global experiment — with tons of ethical and moral complexities — where normal civilians must act as unwilling or unaware test subjects. This algorithm has had a legitimate impact. And it is still in operation. 

And it’s not just with predictive algorithms, either; young girls, for instance, had to experience digital abuse and harassment — enabled by generative AI — before developers realized they should add guardrails to their systems. And those guardrails by no means solved the problem.

We aren’t given a choice. 

Perhaps predictive algorithms can become the best way to prioritize organ transplants. But they’re just not good enough, not explainable enough and not transparent enough to serve that function today.

“The rapid adoption of AI for medical decision-making requires a whole-of-society ethical debate,” Narayanan said. “This isn’t about specific algorithms but about the bundle of unexamined assumptions behind their claim to efficacy and thus to legitimacy.”

Which image is real?


🤔 Your thought process:

Selected Image 2 (Left):

  • “Reviewing the photo for proper concert etiquette. There is a woman in the front row in Image 1 that looks like her face is cut off.”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on the Vatican’s new AI project:

36% of you said the project is a huge deal, since many people are unable to visit the Basilica in person.

Huge deal:

  • “To me, this is nothing but good regardless of your stance on the Catholic Church. As an atheist, I can still appreciate the artistry and architectural significance of the space and this seems a wonderful way to preserve it not only for current viewing by those who aren’t able to visit, but in the event of a catastrophic incident like a fire.”

Would you let a robo-surgeon operate on you without human oversight?
