Residents in Frederick, Longmont and Boulder have received false incident notifications from emergency alert services that use AI to summarize dispatch audio.
Colorado Hometown Weekly reported that on Jan. 30 some Frederick-area residents received a notification claiming firefighters were battling a “commercial blaze” downtown.
The Frederick-Firestone Fire Protection District later said an emergency notification app had misinterpreted radio traffic from a training exercise simulating a structure fire downtown.
“This incident is a good reminder of the importance of verifying information through multiple reliable sources before sharing or acting on it,” the district wrote in a post.
Summer Campos, a spokesperson for the district, said she was unsure how the app had access to the channel firefighters were using.
Campos said the district plans to use a tactical channel that does not air publicly.
CrimeRadar is an app that uses AI to summarize publicly available dispatch audio, according to its website.
The service delivers alerts through its website and through a mobile app that sends push notifications.
In Longmont, CrimeRadar sent an alert Wednesday reporting an apartment fire; the linked audio did not include a location, and the city said it had not received reports of any apartment fires.
Rogelio Mares, a spokesperson for the city, said Longmont police radio transmissions are encrypted, while Longmont Fire radio traffic is aired publicly.
In Boulder, the app reported that a firefighter was taken to the hospital after a medical emergency and stated the firefighter’s condition was not disclosed.
Jamie Barker, a Boulder Fire-Rescue spokesperson, said no Boulder firefighters were injured that day.
“It took information that it heard incorrectly, and then it summarized it incorrectly, and then also made an assumption,” Barker said.
Barker said dispatch communications can be incomplete and unverified when crews are not yet on scene.
“The scanner can be an exceptional tool and resource, but the scanner also only ever tells half of the story,” Barker said.
Casey Fiesler, a professor of information science at the University of Colorado Boulder who researches AI ethics, said false alerts can be harmful when a system is presented as accurate.
“If someone gets an alert saying that there’s a fire, that’s going to be very upsetting,” she said.
“People often think that machines are less biased or more accurate than humans, so for this reason, I just think it’s really, really important that systems like this have very strong disclaimers about how information might not be accurate,” Fiesler added.
CrimeRadar said in a statement it is “constantly improving” its system to “ensure higher precision.”
The company posts a disclaimer above its AI-generated summary when users click into an alert, stating: “Not official report. AI-generated from public dispatch audio. Verify with official sources.”
“Our goal is to make communities safer by making emergency information accessible, which is why our disclaimer has been a core feature since day one,” the CrimeRadar team wrote.
“It serves as a constant reminder to users that dispatch calls are unconfirmed and to always rely on official sources for final confirmation.”
Nextdoor uses the AI-powered Samdesk to generate alerts and has also sent incorrect information to Boulder County communities, according to the report.
Dionne Waugh, a Boulder police spokesperson, said a Nextdoor alert last fall reported an active shooter at a federal facility in Boulder, stirring community concern and prompting multiple calls to Boulder dispatch.
Waugh said the alert was false and the AI had scraped information from an old online post.
“We understand how distressing a false alert can be for residents, and we regret the concern these incidents caused,” a Nextdoor spokesperson wrote in a statement.
Nextdoor said alerts are processed through a review system and “augmented by human oversight as needed.”
The company said it added verification layers for alerts involving incidents like mass shootings and wildfires and gave local public agencies access to an alerts map so local experts can flag or correct inaccuracies within their jurisdictions.
Nextdoor and CrimeRadar said they remove posts when inaccuracies are identified.
Waugh said removing a post does not tell residents whether the incident actually occurred.
Fiesler said corrections can be hard to spread once misinformation circulates.
Police and fire departments have urged residents using services like CrimeRadar and Nextdoor to verify information with public safety agencies and traditional media outlets.