
⚙️ Cops are starting to use AI to write their reports

Good morning. Yesterday, WSJ and Bloomberg reported that both Apple and Nvidia are in talks to join OpenAI’s latest funding round, which would value the company above $100 billion.

So there’s that.

We’ll see what ends up happening, but such a move would really cement the Apple/OpenAI partnership.

In other news, hope you all have a happy Labor Day weekend!

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • AI for Good: Advanced earthquake prediction
  • OpenAI, Anthropic sign safety deal with US government
  • Take 2: Google relaunching tool for image generation of people
  • Cops are starting to use AI to write their reports

AI for Good: Advanced earthquake prediction

Source: Unsplash

Earthquake prediction is notoriously difficult. As one recent research paper put it: “Even though scientists do have a good understanding regarding how they occur, determining which early warning signs translate into when, where, and with what magnitude an earthquake will hit is not a trivial task.”

In an effort to get a better handle on earthquake prediction, scientists have been turning to machine learning. 

The details: Last year, an AI algorithm developed by researchers at the University of Texas at Austin correctly predicted 70% of earthquakes before they happened during a seven-month trial in China. 

  • The model was trained on five years of seismic recordings, then set up to detect statistical anomalies in real-time seismic data (a rough sketch of the idea follows this list). 

  • The model successfully predicted 14 earthquakes, each within 200 miles of its estimated location, and accurately predicted their magnitudes. It missed one earthquake and issued eight false predictions. 
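The coverage doesn’t detail the UT Austin model’s internals, but the basic recipe described above (learn what normal background seismicity looks like, then flag statistical outliers in the live feed) can be illustrated with a toy detector. Everything below, from the function name to the window size and threshold, is an illustrative assumption rather than the researchers’ actual system:

```python
import numpy as np

def detect_anomalies(live_trace, history, window=500, z_threshold=6.0):
    """Toy detector: flag windows of a live seismic trace whose energy
    deviates sharply from a baseline learned on historical recordings.
    Purely illustrative, not the UT Austin model."""
    assert len(history) >= window, "need at least one full window of history"

    # Learn what "normal" windowed energy looks like from historical data.
    usable = len(history) // window * window
    hist_energy = np.log1p((history[:usable].reshape(-1, window) ** 2).sum(axis=1))
    mu = hist_energy.mean()
    sigma = max(hist_energy.std(), 1e-9)  # guard against a zero spread

    # Scan the live trace and flag windows that are statistical outliers.
    flags = []
    for start in range(0, len(live_trace) - window + 1, window):
        chunk = live_trace[start:start + window]
        z = (np.log1p((chunk ** 2).sum()) - mu) / sigma
        if z > z_threshold:
            flags.append((start, float(z)))  # sample offset and severity
    return flags
```

A production system would of course ingest many stations and learned waveform features, and estimate location and magnitude, rather than thresholding a single energy statistic.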

Why it matters: “Predicting earthquakes is the holy grail,” Sergey Fomel, one of the researchers, said in a statement. “We’re not yet close to making predictions for anywhere in the world, but what we achieved tells us that what we thought was an impossible problem is solvable in principle.”

The toughest part about onboarding new employees is teaching them how to use the software they’ll need. 

Guidde makes it easy. 

How it works: Guidde’s genAI-powered platform enables you to quickly create high-quality how-to guides for any software or process. And it doesn’t require any prior design or video editing experience. 

  • With Guidde, teams can quickly and easily create personalized internal (or external) training content at scale, efficiently sharing knowledge across organizations while saving tons of time for everyone involved. 

OpenAI, Anthropic sign safety deal with US government

Source: NIST

OpenAI and Anthropic on Thursday signed deals on AI safety with the U.S. Artificial Intelligence Safety Institute at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST). 

The details: As part of the agreements, NIST will receive access to new OpenAI and Anthropic models both before and after their public release. NIST will collaborate with both companies on evaluating model capabilities and on identifying and mitigating safety risks. 

  • NIST will additionally provide both companies with feedback regarding potential safety improvements to their models. 

  • “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI,” Elizabeth Kelly, director of the U.S. AI Safety Institute, said in a statement. 

It’s not clear if OpenAI and Anthropic, as part of the agreements, will be required to incorporate safety suggestions made by the Institute. A NIST spokesperson did not clarify this point to me, saying instead that the agreement represents an important “first step.”

Some context: The timing of this is conspicuous; OpenAI has been lobbying against a safety bill (California’s SB 1047) that is close to being signed into law. Anthropic now supports the bill, but that support came only after it was watered down, in part, by incorporating Anthropic’s suggestions. 

Put AI to Work

AI has become every organization’s top challenge — and its top opportunity. Beyond the hype, it’s critical to understand exactly how to leverage AI technologies to orchestrate your business processes more efficiently.

Download Camunda’s guide to learn how to turn AI promises into tangible automation outcomes using generative, predictive and assistive AI.

  • AI-powered video editing startup OpusClip raised $30 million in seed funding.

  • AI sales company RepAI Technologies has raised $8.2 million in seed funding.

  • What the conversation around the recent CrowdStrike outage misses: the human emotions tied to deployments, specifically the fear of breaking production. Read more here.*

  • Chatsimple: Turn your anonymous website visitors into qualified leads on autopilot. Boost lead gen by 3X. Build an agent for free. *

  • Dell beats estimates as server sales soar 80%, riding AI wave (CNBC).

  • Brazil judge blocks Starlink accounts as X suspension deadline looms (Reuters).

  • Apple in talks to invest in OpenAI (WSJ).

  • Inside Ford’s private off-road track where it tests its wildest electric machines (The Verge).

If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

Take 2: Google relaunching tool for image generation of people

Source: Google

Google this week said that Imagen 3, its latest image generation model, is rolling out to Gemini apps. As part of the announcement, Google said that users will soon be able to generate images of people. 

This was the big, controversial feature Google pulled in February. 

The details: Earlier this year, Google’s image generators were ridiculed on social media for ahistorical depictions — for instance, of racially diverse Nazi soldiers. Acknowledging the “inaccuracies,” Google blocked the tool from producing images of people as a safeguard. 

  • “We’ve worked to make technical improvements to the product, as well as improved evaluation sets, red-teaming exercises and clear product principles,” Dave Citron, a senior director of product for Gemini, said in a blog post on Imagen 3. 

  • He added that Imagen 3 would not support the generation of photo-realistic, identifiable people, depictions of minors or gory, violent and sexual scenes. 

Citron said the rollout will be gradual. 

“Of course, as with any generative AI tool, not every image Gemini creates will be perfect, but we’ll continue to listen to feedback from early users as we keep improving,” he said. 

The field: This second attempt at generating images of people — complete with careful safeguards — comes shortly after xAI’s Grok 2 was upgraded with image generation capabilities. The big difference is that Grok purposefully and noticeably has few if any guardrails. 

Cops are starting to use AI to write their reports

Source: Axon

Police officers spend hours every week writing up reports. And they’d rather not. 

In April, Axon — the billion-dollar developer of tasers and bodycams — unveiled its answer to the problem of report writing: Draft One, a tool that leverages a combination of generative AI and bodycam recordings to automatically generate police reports. 

By June, the police department in Frederick, CO said it had become the first department in the world to go live with Draft One. Other departments — including in Oklahoma City and Fort Collins, CO — have quickly followed suit. 

The details: Axon pitched Draft One as an “immediate force multiplier” for law enforcement agencies. The company said that U.S. departments are understaffed and their officers — who spend around 15 hours a week on paperwork — are overworked. 

  • Axon said it conducted a double-blind study of Draft One, with results suggesting that Draft One produces reports of a higher quality than those written by officers. 

  • Axon says that officers are required to review and sign off on reports; draft reports also include certain placeholders that officers are required to fill in manually. 

The company says that the system is only meant to be used for minor incidents, but there is nothing that would prevent an officer from using it for something more serious. Indeed, according to KUNC, departments in Lafayette, IN and Fort Collins, CO have been given the green light to use the system for any type of incident. 

The many, many problems: To start with, large language models (like the OpenAI GPT-4 Turbo model that serves as Draft One’s backbone) are predictive text generators. Incapable of reasoning — and known to have issues including bias and hallucination — these systems are designed to predict the next token following a text input. 
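That next-token loop is easy to see in miniature. The sketch below uses a small open model (GPT-2) as a stand-in, since Draft One’s actual GPT-4 Turbo backbone isn’t publicly inspectable, and the prompt is invented for illustration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 stands in for Draft One's GPT-4 Turbo backbone, which isn't public.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The officer arrived on scene and"  # invented prompt, for illustration
for _ in range(20):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits         # a score for every possible next token
    next_id = int(torch.argmax(logits[0, -1]))  # greedily take the most probable one
    text += tokenizer.decode(next_id)

print(text)  # fluent text, generated with no grounding in what actually happened
```

The loop never consults anything outside the text itself; it just continues in the statistically likeliest direction, which is exactly the property the critiques below turn on.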

  • As law expert Andrew Ferguson wrote in an article analyzing the impacts of Draft One: “Police reports memorialize justifications for police authority. Police reports not only record what a suspect does, but what the police officer did. This latter point is important.”

  • “The police report should not be a predictive guess of what a police officer might have done (because an LLM has been trained on other similar events), but an explanation of why this particular human police officer used legal authority on this particular human suspect.”

Ferguson argued that reports generated by AI undermine the “moral weight of justification.” The LLM swore no oath and was granted no law enforcement authority. Even with officers in the loop, Ferguson said that the narrative will be “heavily influenced by pre-programmed prompts.” 

Ferguson added that such systems would likely complicate trials, since a report’s reliability would automatically be in question. For a judge to find probable cause, he said, they must be able to assert confidently that none of the problems known to plague AI generation have occurred. 

Matthew Guariglia, a senior policy analyst at the Electronic Frontier Foundation, wrote that the integration of AI into “narratives of police encounters might make an already complicated system even more ripe for abuse.”

  • “Police reports … reveal not necessarily what happened during a specific incident, but what police imagined to have happened, in good faith or not,” Guariglia said. “Policing … is too powerful an institution to outsource its memory-making to technologies in a way that makes officers immune to critique, transparency or accountability.”

Cops, meanwhile, have found the system to be both easy to use and a massive time-saver. 

Ohhh boy. I don’t like this at all. Aside from all the technical and ethical concerns outlined by Ferguson, this feels like a very slippery slope. It’s tremendously concerning even with humans in the loop. If departments eventually decide to take humans further out of the loop, that would be a major cause for alarm. 

  • Similar to the issue of AI in schools, this poses the big question: why do we do things? Police reports, as Guariglia said, don’t necessarily reveal what happened, but what police “imagined to have happened.” The distinction is critical. 

This isn’t an HR presentation or a cluttered email inbox. Some things are too important to outsource to a predictive text generator. 

Anyway, this report was written by a human. 

Which image is real?


🤔 Your thought process:

Selected Image 1 (Left):

  • “This one's pretty obvious. #2 only has half a handlebar, the bike is twisted slightly so the wheels don't align, and the rider doesn't appear to have hands.”

  • “Fake one: the guys’ feet are both behind the pedal assembly. If one is behind, the other should be in front since pedals oppose one another. Also no visible sprocket on the bike.”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on Nvidia:

Around 26% of you said the market’s response to Nvidia was a knee-jerk reaction (the stock closed down another 6% on Thursday). 15% said the AI bubble is bursting; 18% said it can’t go up forever; and 19% said it’s time to buy.

It can’t go up forever:

  • “Expecting the competition to influence the supply and demand going forward.”

What do you think about Draft One?
