⚙️ MIT professor on achieving fair AI
Good morning. The second episode of our new podcast — The Deep View: Conversations — is out. For this one, we took it on the road to catch up with Dr. Eric Xing, the president of the Abu Dhabi-based AI university MBZUAI.
Give it a watch or a listen, or whatever you do with podcasts.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
- AI for Good: IBM & NASA release open-source weather model
- Paper: The problems of scale
- Report: AI and the complexities of cybersecurity
- Exclusive Interview: MIT professor on achieving fair AI
AI for Good: IBM & NASA release open-source weather model
Source: IBM
IBM recently collaborated with NASA to build a general-purpose climate AI model, designed to be easily customized for a variety of practical weather and climate applications.
Last week, they open-sourced the completed model on Hugging Face.
The details: The model was trained on 40 years of historical weather data provided by NASA, a process that took several weeks and dozens of GPUs to complete, according to IBM.
Now that it is complete, the model can be quickly tuned to meet different use cases, including targeted local weather forecasts, extreme weather predictions and global climate simulations.
In a new paper, the researchers described three forecasting applications they achieved by fine-tuning the system.
One involves something called downscaling, in which the model sharpens coarse, low-resolution data into finer detail, enabling localized early-warning systems for extreme weather. Another involves hurricane forecasting, and the third focuses on improving predictions of gravity waves, which influence cloud formation and global weather patterns.
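For a sense of what that downscaling setup can look like in practice, here is a minimal, hypothetical sketch: freeze a pretrained backbone and train a small upsampling head that maps coarse weather fields onto a finer grid. The stand-in backbone, tensor shapes and variable counts are illustrative assumptions, not the released model’s actual architecture or loading code.

```python
# Hypothetical sketch of downscaling-style fine-tuning: keep a pretrained encoder
# frozen and train only a lightweight head that upsamples coarse fields 4x.
import torch
import torch.nn as nn

class StandInBackbone(nn.Module):
    """Placeholder for the pretrained foundation model's encoder (not the real one)."""
    def __init__(self, channels: int = 8):
        super().__init__()
        self.encoder = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return self.encoder(x)

class DownscalingHead(nn.Module):
    """Learned upsampler: coarse feature maps -> higher-resolution fields."""
    def __init__(self, channels: int = 8, upscale: int = 4):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels * upscale**2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(upscale)  # rearranges channels into a finer spatial grid

    def forward(self, features):
        return self.shuffle(self.proj(features))

backbone = StandInBackbone()
for p in backbone.parameters():  # freeze the pretrained weights
    p.requires_grad = False

head = DownscalingHead()
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One illustrative training step on random tensors standing in for paired coarse/fine data.
coarse = torch.randn(2, 8, 32, 32)   # e.g. 8 atmospheric variables on a coarse grid
fine = torch.randn(2, 8, 128, 128)   # the same variables on a 4x finer grid
loss = loss_fn(head(backbone(coarse)), fine)
loss.backward()
optimizer.step()
```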
Make Onboarding Easier with Guidde’s Simple Software Guides
The toughest part about onboarding new employees is teaching them how to use the software they’ll need.
Guidde makes it easy.
How it works: Guidde’s genAI-powered platform enables you to quickly create high-quality how-to guides for any software or process. And it doesn’t require any prior design or video editing experience.
With Guidde, teams can quickly and easily create personalized internal (or external) training content at scale, efficiently sharing knowledge across organizations while saving tons of time for everyone involved.
Paper: The problems of scale
Source: Created with AI by The Deep View
There has been a prevailing idea across the AI sector that the path to more powerful models is paved by the ever-increasing scale of datasets and compute.
Increasingly, some have begun to push back against that idea; Princeton computer scientists Dr. Arvind Narayanan and Sayash Kapoor wrote in June that not only are scaling laws misunderstood, but that no seemingly exponential trend can continue indefinitely.
“If LLMs can't do much beyond what's seen in training, at some point, having more data no longer helps because all the tasks that are ever going to be represented in it are already represented,” they said.
Cognitive scientist Gary Marcus has likewise argued for years that scale is not helping; he told me recently that scaling deep learning is yielding diminishing returns, meaning that ever-bigger models do not necessarily produce better outcomes.
A new paper from Meredith Whittaker, Dr. Sasha Luccioni and Gaël Varoquaux argues that not only is scale not the solution, but that the pursuit of scale carries a number of severe consequences of its own.
The details: The most important finding here is that small, narrow models often perform as well as or better than massive ones, suggesting that, for specific tasks, a small model is often all that is needed.
This is coupled with the costs of scale: bigger models require more compute, and therefore more electricity, to train and run, making them unsustainable both on company balance sheets and for the environment.
A result of this enormous cost is a loss of competition across the field; small startups and academic labs cannot afford to build models of this size, meaning power gets concentrated among those few companies that can.
This ties in with Luccioni’s regular refrain to be more thoughtful about when and how we use AI, rather than reaching for the most energy-intensive models for problems that could be solved far more efficiently.
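To make the “small is often enough” point concrete, here is a toy sketch of a narrow model handling a specific task with a few thousand parameters; the ticket snippets and labels are invented for illustration and are not from the paper.

```python
# Toy example: a small, narrow model (TF-IDF + logistic regression) routing support
# tickets by topic. Nothing about this task calls for a billion-parameter model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tickets = [
    "My card was charged twice for the same order",
    "I can't log in after resetting my password",
    "Please cancel my subscription and refund this month",
    "The app crashes every time I open the settings page",
    "I was billed for a plan I never signed up for",
    "Two-factor codes never arrive by email",
]
labels = ["billing", "account", "billing", "bug", "billing", "account"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(tickets, labels)

# Likely "billing" given the toy training data above.
print(clf.predict(["Why was I charged again this week?"]))
```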
AI Startups get up to $350,000 in credits with Google Cloud
For startups, especially those in the deep tech and AI space, having a dependable cloud provider is absolutely vital to success.
Fortunately, Google Cloud exists. And it offers an experience — the Google for Startups Cloud Program — specifically designed to make sure startups succeed.
The program offers eligible startups up to $200,000 in Google Cloud Credits over two years; for AI startups, that number is $350,000.
Beyond the additional cloud credits, eligible AI startups also get access to Google Cloud’s in-house AI experts, training and resources.
This includes webinars and live Q&As with Google Cloud AI product managers, engineers and developer advocates, in addition to insight into Google Cloud’s latest advances in AI.
Program applications are reviewed and approved based on the eligibility requirements here.
Peak is approaching, but is your customer service team ready?
DigitalGenius is the AI Concierge used by Reebok, Olipop, Honeylove, and AllSaints to keep customers happy by fully resolving queries in seconds, at any time and in any channel. Speak to us today to be fully ready for peak.*
Boost your software development skills with generative AI. Learn to write code faster, improve quality, and join 77% of learners who have reported career benefits including new skills, increased pay, and new job opportunities. Perfect for developers at all levels. Enroll now.*
Fidelity has cut its estimate of X’s value by 79% since Musk’s purchase (TechCrunch).
HPV vaccine study finds zero cases of cervical cancer among women vaccinated before age 14 (Stat).
Why it’s time to take warnings about using public Wi-Fi, in places like airports, seriously (CNBC).
Exclusive: ByteDance plans new AI model trained with Huawei chips, sources say (Reuters).
Udemy is opting teachers in to genAI training (404 Media).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Hiring the best Machine Learning Engineer just got 70% cheaper.
Today we are highlighting AI talent for you, courtesy of our partner, Athyna. If you are looking for the best bespoke tech talent, these stars are ready to work with you today! Reach out here if you’d like an introduction, and get a $1,000 discount as a reader of The Deep View!
Report: AI and the complexities of cybersecurity
Source: Created with AI by The Deep View
We’ve talked often here about the cybersecurity element of artificial intelligence. Like all things in this sector, it’s multi-faceted; AI poses myriad cybersecurity risks and challenges while simultaneously serving as a powerful tool for cybercriminals and cybersecurity teams alike.
Team8’s 2024 CISO survey found somewhat mixed results when it comes to AI and cybersecurity.
The details: 70% of CISO respondents consider AI to be a major security threat; 85% view the tech as a “key enabler” for security.
75% of respondents are most concerned about AI-enhanced phishing campaigns, a type of attack that, according to the report, has increased by about 1,000% in recent quarters.
There is additional concern about deepfake fraud and network infiltration.
57% of respondents said that the biggest challenge organizations face in defending AI systems involves a lack of expertise; 56% said it involves that tricky balance between security and usability.
Exclusive Interview: MIT professor on achieving fair AI
Source: Swati Gupta
A key pillar of the ethics of artificial intelligence involves algorithmic fairness, an area of AI research that has recently gained steam as concerns over discriminatory algorithms continue to mount.
And while ChatGPT might have directed our collective attention to viral instances of algorithmic bias, the issue has been ongoing for years; before we had generative AI chatbots, we had machine learning algorithms running under the hood of plenty of major processes, plagued by the same issues of bias and discrimination that we see today.
In 2016, for example, ProPublica found that COMPAS, an algorithm used by U.S. courts to predict the likelihood of a defendant reoffending, was racially biased: it disproportionately flagged Black defendants as likely to reoffend compared to white defendants.
Between 1999 and 2015, hundreds of British sub-postmasters were wrongly prosecuted for theft based on faulty data from the Post Office’s Horizon accounting system.
These are just a couple of prominent examples. As pointed out in the White House’s Blueprint for an AI Bill of Rights, there is no shortage of highly impactful instances of algorithmic discrimination, especially in the context of high-stakes decision-making.
The basic problem, according to Dr. Swati Gupta, an associate professor of operations research and statistics at MIT Sloan, is that large language models (LLMs) are trained on massive swaths of data. And that data contains human biases (both implicit and explicit).
Gupta’s focus is on optimization and machine learning, specifically algorithmic fairness.
Skeptical of fully autonomous algorithmic decision-making, Gupta told me that the goal of her work in algorithmic fairness is to enable the humans who might be collaborating with an algorithm to better understand whether they can trust the output of that system.
Quantifying errors: “My hope is that we are able to understand ways of querying these systems, of asking the right questions,” she said. “That can help us estimate in kind of a collaborative sense, what are the things that we don't know about the system, where the system is very inaccurate, where the system is potentially biased, and guide our decisions based on that.”
The idea is to inject AI-assisted decision-making with a level of actionable transparency and interpretability; people need to know how the model came to its conclusion, why it did so and what the error rate of that conclusion is.
Such an approach, similar to confidence scores, doesn’t erase ongoing problems of bias and hallucination; instead, it equips users to recognize when to trust a model and when not to (an output accompanied by a high risk score can be safely ignored by a potential user or decision-maker).
With risk predictions and error scores, “you would know when to take authority in that decision, or responsibility in that decision.”
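A minimal sketch of that idea, assuming the model attaches a confidence score to each output: act automatically only when the estimated risk is low and route everything else to a human. The scoring rule and threshold are illustrative choices, not anything prescribed by Gupta’s research.

```python
# Hypothetical risk-based routing: low-risk outputs are acted on automatically,
# high-risk outputs are deferred to a human decision-maker.
from dataclasses import dataclass

@dataclass
class ScoredPrediction:
    label: str
    confidence: float  # e.g. a calibrated probability attached to the model's output

def route(prediction: ScoredPrediction, max_risk: float = 0.2) -> str:
    """Act on the prediction only when its estimated risk is low; otherwise defer."""
    risk = 1.0 - prediction.confidence  # crude risk score; real systems would calibrate this
    return "act" if risk <= max_risk else "defer_to_human"

print(route(ScoredPrediction("approve", 0.95)))  # low risk  -> 'act'
print(route(ScoredPrediction("approve", 0.55)))  # high risk -> 'defer_to_human'
```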
How to know what we don’t know: The key obstacle to developing truly trustworthy systems is figuring out a way to know what we don’t know; if, for example, a genAI HR system presents me with its top three candidates, how can I know for sure that it didn’t discount a candidate who would have been a good fit?
This — understanding what input is missing from the output — is one of the key reasons I am highly skeptical about the idea of using genAI myself.
“But what if my input space was larger, and there was some nefarious thing that bent it,” Gupta said. “Can I actually use noise around my input to understand what I might be missing?”
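One hypothetical way to read that question in code: perturb the input several times, re-query the system and treat disagreement across runs as a flag that a near-miss candidate may be getting dropped. The ranking “model,” the noise level and the candidate scores below are all invented for illustration.

```python
# Hypothetical probe: jitter the inputs, re-rank, and see which candidates only
# sometimes make the shortlist; those are the cases a human might want to review.
import random
from collections import Counter

def toy_ranker(candidate_scores: dict, top_k: int = 3) -> list:
    """Stand-in for an opaque ranking model: return the top-k candidates by score."""
    return sorted(candidate_scores, key=candidate_scores.get, reverse=True)[:top_k]

def perturbed_shortlists(candidate_scores: dict, n_runs: int = 50, noise: float = 0.1) -> Counter:
    """Re-rank under small random score perturbations and count shortlist membership."""
    counts = Counter()
    for _ in range(n_runs):
        noisy = {c: s + random.gauss(0, noise) for c, s in candidate_scores.items()}
        counts.update(toy_ranker(noisy))
    return counts

scores = {"ana": 0.82, "ben": 0.80, "cruz": 0.79, "dee": 0.78, "eli": 0.40}
print(perturbed_shortlists(scores))
# Candidates that appear in only some runs (ben, cruz and dee trade places here)
# mark exactly the near-miss cases worth a second, human look.
```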
It’s a domain of research that is still in its early stages, but Gupta wants to see much more focus on it, as it has further implications for the role of genAI as a curator of news and information, something that could simply reinforce each individual’s personal “bubble.”
“It's this question, I think, how do I know what I do not know,” she said. “I think that's the question I would like us to address.”
Which image is real?
🤔 Your thought process:
Selected Image 2 (Left):
“Lettering on the windfoil, intricate reflections of the water, vehicle and random people on the beach, all signify ‘real’ and not ‘AI.’”
Selected Image 1 (Right):
“Typical horizon fog.”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on HICCAP:
A third of you said you might use something like this; 20% said you definitely would and 20% said you definitely wouldn’t. A little more than 20% said you could see the potential value of the system in schools.
In schools:
“But with a significant amount of required PD to accompany it.”
Do you think transparency & trust are the missing elements in genAI?