⚙️ ChatGPT in classrooms, a dangerous road
Good morning. When the closing bell rang Monday, it marked the third straight day of an Nvidia sell-off, with the chipmaker losing close to $500 billion in value.
But on Tuesday, it made up some ground, climbing around 6%. The Nvidia story is not over yet.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
AI for Good: A shield against solar storms
Source: NASA
Last month, the Earth was racked by a series of solar flares and coronal mass ejections (CMEs), which combined to create the strongest solar storm to hit the planet in 20 years, according to NASA.
Down here on the ground, the result was a widespread display of auroras across the globe; NASA said this particular storm may rival some of the lowest-latitude aurora sightings on record over the past 500 years.
As wondrous as auroras are, the solar storms that cause them can disrupt and damage power grids and satellites, which poses a pretty significant threat to our satellite-dependent, electricity-dependent society.
Source: NASA
But NASA has been employing AI as an early-warning system for solar storms, giving people on the ground enough time to prepare.
NASA last year announced a new computer model that combines AI with satellite data to analyze measurements of solar wind.
The model is able to predict where on Earth a solar storm will strike, with 30 minutes of advance warning.
How it’s helped: With this most recent incident, NOAA’s Space Weather Prediction Center sent notifications to operators of satellites and power grids, allowing them time to mitigate the impacts.
NASA said that many of its missions were able to enter “safe mode” to avoid damage during the storm.
5M Job Seekers Use This Time-Saving Resume Tool
A great resume can change your life. Kickresume uses AI to help you write one.
All you have to do is enter your job title, click “Use AI Writer” and the AI will generate a number of bullet points for the work experience section of your resume.
If you don’t like these bullet points, you can either edit them or delete them and click the button again.
If you like the bullet points but feel like that section is still too short, simply click the button again and the AI will add more phrases to it.
Amazon is cooking up a new ChatGPT competitor
Source: Amazon
Massive online retailer and Big Tech giant Amazon has decided that there are not enough generative AI chatbots on the market. The company, according to Business Insider (BI), has been secretly working on a genAI chatbot designed to compete with OpenAI’s ChatGPT.
Fun fact: Amazon started as an online bookseller. I guess in an alternate timeline, Barnes & Noble rules over a tech empire in its stead.
The details: BI, citing internal documents and anonymous sources, reported that the model — codenamed “Metis” — would conversationally provide text and image answers, while also providing source links, suggesting follow-ups and generating images.
Amazon also wants the model to function autonomously as an agent, for example, “turning on your lights and booking a flight for you.”
Amazon is targeting a tentative September launch, around the same time it plans to introduce a line-up of new LLM features for its Alexa voice assistant.
Amazon declined to provide comment on the report, saying it doesn’t comment on “rumor.”
It is worth noting that as more companies strive to play LLM catch-up, the data centers that power them are consuming a steadily growing amount of energy.
"Technically it will work, I guess, but the question is if it's already too late," a source told BI.
New York City, USA | Friday, July 12th, 2024
Use code IMPACTAITODAY for $200 off VIP Passes before the end of today*
Want to adopt GenAI for software development but don’t know where to start? This buyer's guide helps engineering leaders cut through the noise*
MTV News website goes dark; a year after shutting down, the site’s archives have been shuttered (Variety).
404 Media paid a freelancer to create a news site with AI, loaded up with plagiarized content in an experiment to prove how easy it is (404 Media).
College dropouts raise $120 million to take on Nvidia’s AI chips (CNBC).
Microsoft hit with an EU antitrust charge over Teams app (Reuters).
Report: Central banks need to ‘embrace’ AI
Source: Bank for International Settlements
The Bank for International Settlements (BIS) — the central bank of central banks — said in a Tuesday report that central banks should “embrace” AI, both in anticipating its impact on the economy and in harnessing genAI tools for their own operations.
The details: BIS said that, since AI tools are exceptionally good at identifying patterns and deriving insights from massive amounts of data, central banks can leverage them to enhance real-time predictions of inflation and other economic variables.
The bank added that AI tools can be used to sift through data for financial system vulnerabilities, which could allow “authorities to better manage risks.”
Though the report said that AI should not be regarded as a “substitute” for the human workers of central banks, it made very little mention of the myriad harms and threats being actively posed by the technology.
The report did mention issues of hallucination, cybersecurity and bias/discrimination, saying that some attacks could lead “to heightened operational risks among financial institutions.”
Heralding genAI as a productivity booster, BIS added that, on the macroeconomic level, AI could displace large swaths of workers, reducing demand for labor and, in turn, household consumption.
“AI could have implications for economic inequality,” the report reads. “Displacement might eliminate jobs faster than the economy can create new ones, potentially exacerbating income inequality. The ‘digital divide’ could widen, with individuals lacking access to technology or with low digital literacy being further marginalized.”
The conversation is the code
Creating data analytics that can transform your company’s strategy shouldn’t require coding expertise.
With DataChat, it doesn’t. That’s why we at The Deep View, along with other SMBs and Fortune 100s, use DataChat. With DataChat, you don’t have to be an expert Data Scientist or know anything about Python, SQL, or coding to get insights from your data. DataChat uses built-in data science and GenAI to answer your questions.
Business users: Just ask questions in plain English and instantly receive actionable analytics and insights on your data.
Analysts: Work in a familiar spreadsheet UI that includes no-code machine learning and predictive analytics.
DataChat never shares your data with LLMs. It also prioritizes transparency, documenting each step it takes, so your team can validate results and understand its reasoning.
Unlock the value of your data with DataChat. Get started today.
ChatGPT in classrooms, a dangerous road
Source: Created with AI by The Deep View
Not long after its release — and despite the many ethical concerns, opacities and lack of regulatory policies concerning its development and deployment — ChatGPT and other genAI chatbots made their way into the classroom.
It was an unsurprising development; the tech has been billed as a technological paradigm that might prove as groundbreaking as the internet (plus, it’s great for turning in essays students have no interest in writing).
A study, published this month by Impact Research, found that 59% of teachers, 70% of K-12 students, 75% of undergraduate students and 68% of K-12 parents have a “favorable” view of AI chatbots.
Majorities of almost all of those groups said AI has had a “positive impact” in the classroom; among teachers, the figure was just under 50%.
Teachers, according to the report, are using genAI for lesson planning and test prep, while students are using it to write essays and study for exams.
Only 32% of teachers polled said their school has a policy around AI use; only 25% of them have received any training on AI.
The subreddit for Character AI, meanwhile, contains pretty rampant mentions of the word “addiction,” with one user recently declaring that they are quitting the app because “I’ve used it in school instead of doing work and for that now I’m failing.”
Amid this relatively rapid adoption, the National Education Policy Center (NEPC) issued a lengthy policy brief in March calling for a pause until sufficient oversight is in place.
The authors identified a number of areas that are causes for concern:
The proliferation of misinformation at both the teacher and student levels;
An amplification of student performance bias;
The locking in of schools into expensive tech stacks (& the subsequent reallocation of budgets into technology, away from teacher/student resources);
A decrease in student privacy amid the likelihood of enhanced surveillance/data leakages.
We’re going to focus on the first two. Let’s start with performance bias. Already, there exist tools to identify AI-generated writing. The problem is they don’t work well. And a growing number of students have been falsely accused of turning in AI-written work, something that is difficult to disprove.
The brief found that “such accusations are disproportionately biased against non-native English speakers, who tend to write in simpler sentences that AI flags as suspicious.”
And on misinformation, the shift from a textbook to a chatbot throws the misinformation floodgates wide, considering the critical fact that these models produce text “that seems convincing even though it might contain false information.”
The reasons behind this false information are several. In part, it has to do with the training data, and we’ve seen numerous examples of poor data leading to hallucinations (AI Overviews, anyone?). And in part it has to do with the architecture of LLMs: they are probabilistic generators that have no understanding of the words they string together.
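To make the “probabilistic generator” point concrete, here is a minimal, deliberately toy sketch (the two-word contexts and all token probabilities below are invented for illustration, not drawn from any real model). The generator only ever asks “which word tends to follow these words?” — nowhere does it check whether the resulting sentence is true:

```python
import random

# Toy next-token table: probabilities stand in for patterns learned from
# training text. Nothing here encodes whether a continuation is TRUE --
# a fictional completion can be just as probable as a factual one.
next_token_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.5, "Atlantis": 0.5},
}

def generate(prompt, steps, rng=random.Random(0)):
    """Repeatedly sample the next word from the probability table."""
    tokens = list(prompt)
    for _ in range(steps):
        context = tuple(tokens[-2:])
        dist = next_token_probs.get(context)
        if dist is None:  # unseen context: nothing to sample
            break
        words, weights = zip(*dist.items())
        # Sample according to probability -- no fact-checking step exists.
        tokens.append(rng.choices(words, weights=weights)[0])
    return tokens

print(" ".join(generate(["the", "capital"], 2)))
```

Half the time this toy happily asserts a capital for Atlantis, and that is the structural point: sampling from “what usually follows” is not the same operation as verifying a claim.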
Misinformation is harmful as a general rule. But unchecked misinformation proliferating through school systems is perhaps worse.
A recent UNESCO report found that there have been numerous documented instances of chatbots either generating misinformation or amplifying existing misinformation about the Holocaust.
Audrey Azoulay, UNESCO’s director-general, said in a statement: “If we allow the horrific facts of the Holocaust to be diluted, distorted or falsified through the irresponsible use of AI, we risk the explosive spread of antisemitism and the gradual diminution of our understanding about the causes and consequences of these atrocities.”
She said it is vital that students “grow up with facts, not fabrications.”
To me, the core of this issue should focus on the ‘why’ more than the ‘how.’
Why does school exist? And why do these AI tools exist?
What is the true purpose of reading a book, writing an essay, learning physics or designing a curriculum?
And what is the true purpose of AI systems that promise to erase those processes? (… generate revenue to justify the VC hype, perhaps).
In answer to that first set of questions, though, the purpose of learning in school is to learn how to learn. Reading novels and learning physics isn’t meant to train up little novelists and physicists, but to teach critical thinking and to broaden horizons.
The point of school — frustrating as it may often be — is the process.
There is little education happening if that process is gone.
The idea of automation is to eliminate the process.
That unnerves me.
Schools ought to tread carefully here, thinking about what specific functions an AI system might be automating, and whether it is a service to the student to automate those functions.
A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
What do you think about the role of AI in education? Let us know if you’re a teacher, student, parent (or someone else), and how you feel about it.