Are you interested in receiving a shorter, easy-to-scan email of post excerpts? Check out our new
Techdirt Daily Newsbrief
Stories from Monday, December 30th, 2024
Embodied Is Actually Trying To Release ‘Moxie’ Robots To The Open Source Community
from the actually-doing-it dept
by Dark Helmet - December 30th @ 7:39pm
A couple of weeks back, we discussed the implosion of startup company Embodied and the resulting bricking of its $800 “emotional support” robots designed for children. Like many other stories about IoT-type products, the post focused on how these robots would cease functioning as designed once the backend support infrastructure for the shuttered business was shut down. As often happens with stories like this, there were several comments pointing out that the company could publish its source code and allow an open source community to pick up the slack here, so that at least these robots wouldn’t become $800 paperweights.
But what doesn’t typically happen in these stories is a company actually making the effort to do exactly that. Yet that seems to be what Embodied is planning, with the company announcing an update and a plan to allow the open source community to build its own backend software for the devices.
Embodied CEO Paolo Pirjanian shared a document via a LinkedIn blog post today saying that people who used to be part of Embodied’s technical team are developing a “potential” and open source way to keep Moxies running. The document reads:
“This initiative involves developing a local server application (‘OpenMoxie’) that you can run on your own computer. Once available, this community-driven option will enable you (or technically inclined individuals) to maintain Moxie’s basic functionality, develop new features, and modify her capabilities to better suit your needs—without reliance on Embodied’s cloud servers.”
The notice says that after releasing OpenMoxie, Embodied plans to release “all necessary code and documentation” for developers and users.
The company is also pushing a final update to the devices that will allow them to support the OpenMoxie setup.
Now, all of this certainly isn’t a perfect solution. If people miss getting the update, their robots will still end up as bricks. There is no commitment from anyone that the open-sourced code and OpenMoxie will be dutifully maintained. And who knows how the quality of OpenMoxie will compare with what the company itself had been providing.
And it still isn’t an ideal solution for parents who invested in an emotional support toy for their kid and may not have the know-how or time to keep it alive after Embodied closes. While Embodied is doing better than other firms that have bricked or otherwise changed smart device capabilities after release, it remains a disappointing and possibly illegal trend among tech companies to push products only to alter their functionality or stop supporting their software after taking people’s money.
But at least Embodied is trying to do something about all of this. As the quote above notes, that’s a far cry from what has happened in plenty of other cases, where customers simply get cut off from the functionality of the thing they thought they bought, without any real concern from the companies doing the cutting.
As I said in the previous post, the better solution in the long run would be some sort of consumer protection laws. While we wait for that to probably never come to be, however, this is at least a good step in the right direction by the folks at Embodied.
Read More | 5 Comments
2024: AI Panic Flooded The Zone, Leading To A Backlash
from the the-doomerism-went-too-far dept
by nirit.weiss-blatt - December 30th @ 3:32pm
Last December, we published a recap, “2023: The Year of AI Panic.”
Now, it’s time to ask: What happened to the AI panic in 2024?
TL;DR – It was a rollercoaster ride: AI panic reached a peak and then came crashing down.
Two cautionary tales: The EU AI Act and California’s SB-1047.
Please note: 1. The focus here is on the AI panic angle of the news, not other events such as product launches. The aim is to shed light on the effects of this extreme AI discourse.
2. The 2023 recap provides context for what happened a year later. Seeing how AI doomers took it too far in 2023 gives a better understanding of why it backfired in 2024.
2023’s AI panic
At the end of 2022, ChatGPT took the world by storm. It sparked the “Generative AI” arms race. Shortly thereafter, we were bombarded with doomsday scenarios of an AI takeover, an AI apocalypse, and “The END of Humanity.” The “AI Existential Risk” (x-risk) movement has gradually, then suddenly, moved from the fringe to the mainstream. Apart from becoming media stars, its members also influenced Congress and the EU. They didn’t shift the Overton window; they shattered it.
“2023: The Year of AI Panic” summarized the key moments: The two “Existential Risk” open letters (first by the Future of Life Institute and second by the Center for AI Safety), the AI Dilemma and Tristan Harris’ x-risk advocacy (now known to be funded, in part, by the Future of Life Institute), the flood of doomsaying in traditional media, followed by numerous AI policy proposals that focus on existential threats and seek to surveil and criminalize AI development. Oh, and TIME magazine had a full-blown love affair with AI doomers (it still does).

– AI Panic Agents
Throughout the years, Eliezer Yudkowsky from Berkeley’s MIRI (Machine Intelligence Research Institute) and his “End of the World” beliefs heavily influenced a sub-culture of “rationalists” and AI doomers. In 2023, they embarked on a policy and media tour.
In a TED talk, “Will Superintelligent AI End the World?” Eliezer Yudkowsky said, “I expect an actually smarter and uncaring entity will figure out strategies and technologies that can kill us quickly and reliably and then kill us […] It could kill us because it doesn’t want us making other superintelligences to compete with it. It could kill us because it’s using up all the chemical energy on earth, and we contain some chemical potential energy.” In TIME magazine, he advocated to “Shut it All Down“: “Shut down all the large GPU clusters. Shut down all the large training runs. Be willing to destroy a rogue datacenter by airstrike.”
Max Tegmark from the Future of Life Institute said: “There won’t be any humans on the planet in the not-too-distant future. This is the kind of cancer that kills all of humanity.”
Next thing you know, he was addressing the U.S. Congress at the “AI Insight Forum.”
And successfully pushing the EU to include “General-Purpose AI systems” in the “AI Act” (discussed further in the 2024 recap).
Connor Leahy from Conjecture said: “I do not expect us to make it out of this century alive. I’m not even sure we’ll get out of this decade!”
Next thing you know, he appeared on CNN and later tweeted: “I had a great time addressing the House of Lords about extinction risk from AGI.” He suggested “a cap on computing power” at 10^24 FLOPs (Floating Point Operations) and a global AI “kill switch.”
Dan Hendrycks from the Center for AI Safety expressed an 80% probability of doom and claimed, “Evolutionary pressure will likely ingrain AIs with behaviors that promote self-preservation.”[1] He warned that we are on “a pathway toward being supplanted as the Earth’s dominant species.” Hendrycks also suggested a “CERN for AI,” imagining “a big multinational lab that would soak up the bulk of the world’s graphics processing units [GPUs]. That would sideline the big for-profit labs by making it difficult for them to hoard computing resources.” He later speculated that AI regulation in the U.S. “might pave the way for some shared international standards that might make China willing to also abide by some of these standards” (because, of course, China will slow down as well… That’s how geopolitics works!).
Next thing you know, he collaborated with Senator Scott Wiener of California to pass an AI Safety bill, SB-1047 (more on this bill in the 2024 recap).

A “follow the money” investigation revealed that this is not a grassroots, bottom-up movement, but a top-down one heavily funded by a few Effective Altruism (EA) billionaires, mainly Dustin Moskovitz, Jaan Tallinn, and Sam Bankman-Fried.
The 2023 recap ended with this paragraph: “In 2023, EA-backed ‘AI x-risk’ took over the AI industry, AI media coverage, and AI regulation. Nowadays, more and more information is coming out about the ‘influence operation’ and its impact on AI policy. See, for example, the reporting on Rishi Sunak’s AI agenda and Joe Biden’s AI order. In 2024, this tech billionaires-backed influence campaign may backfire. Hopefully, a more significant reckoning will follow.”
2024: Act 1. The AI panic further flooded the zone
With 1.6 billion dollars from the Effective Altruism movement,[2] the “AI Existential Risk” ecosystem has grown to hundreds of organizations.[3] In 2024, their policy advocacy became more authoritarian.
Note that these “AI x-risk” groups sought to ban currently existing AI models.
Llama 2 was trained with > 10^23 FLOPs and thus would have been banned.
All those proposed prohibitions claimed that past thresholds would bring DOOM.
It was ridiculous back then; it looks more ridiculous now.
“It’s always just a bit higher than where we are today,” venture capitalist Rohit Krishnan commented. “Imagine if we had done this!!”
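For context on why a 10^23 FLOP threshold would have caught Llama 2: a widely used rule of thumb puts dense-transformer training compute at roughly 6 × parameters × tokens. Using the publicly reported figures for the largest Llama 2 model (70 billion parameters, about 2 trillion training tokens), a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the claim above, using the common
# ~6 * parameters * tokens approximation for dense-transformer training
# compute. Llama 2 70B's parameter and token counts are as publicly reported.
def training_flops(params, tokens):
    """Rough total training compute for a dense transformer, in FLOPs."""
    return 6 * params * tokens

llama2_70b = training_flops(70e9, 2e12)  # ~8.4e23 FLOPs
threshold = 1e23                         # the proposed prohibition line

print(f"Llama 2 70B: ~{llama2_70b:.1e} FLOPs")
print("Over the 10^23 threshold:", llama2_70b > threshold)
```

So an already released, widely deployed open model sits almost an order of magnitude above the line those proposals would have drawn.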
In a report entitled “What mistakes has the AI safety movement made?”, it was argued that “AI safety is too structurally power-seeking: trying to raise lots of money, trying to gain influence in corporations and governments, trying to control the way AI values are shaped, favoring people who are concerned about AI risk for jobs and grants, maintaining the secrecy of information, and recruiting high school students to the cause.”
YouTube is flooded with prophecies of AI doom, some of which target children. Among the channels tailored for kids are Kurzgesagt and Rational Animations, both funded by Open Philanthropy.[5] These videos serve a specific purpose, Rational Animations admitted: “In my most recent communications with Open Phil, we discussed the fact that a YouTube video aimed at educating on a particular topic would be more effective if viewers had an easy way to fall into an ‘intellectual rabbit hole’ to learn more.”
“AI Doomerism is becoming a big problem, and it’s well funded,” observed Tobi Lutke, Shopify CEO. “Like all cults, it’s recruiting.”

Also, like in other doomsday cults, the stress of believing an apocalypse is imminent wears down the ability to cope with anything else. Some are getting radicalized to a dangerous level, playing with the idea of killing AI developers (if that’s what it takes to “save humanity” from extinction).

Both PauseAI and StopAI stated that they are non-violent movements that do not permit “even joking about violence.” That’s a necessary clarification for their various followers. There is, however, a need for stronger condemnation. The murder of the UHC CEO showed us that it only takes one brainwashed individual to cross the line.
2024: Act 2. The AI panic began to backfire
In 2024, AI panic reached the point of practicality and began to backfire.
– The EU AI Act as a cautionary tale
In December 2023, European Union (EU) negotiators struck a deal on the most comprehensive AI rules, the “AI Act.” “Deal!” tweeted European Commissioner Thierry Breton, celebrating how “The EU becomes the very first continent to set clear rules for the use of AI.”
Eight months later, a Bloomberg article discussed how the new AI rules “risk entrenching the transatlantic tech divide rather than narrowing it.”
Gabriele Mazzini, architect and lead author of the EU AI Act, expressed regret and admitted that its reach ended up too broad: “The regulatory bar maybe has been set too high. There may be companies in Europe that could just say there isn’t enough legal certainty in the AI Act to proceed.”
How it started – How it’s going

In September, the EU released “The Future of European Competitiveness” report. In it, Mario Draghi, former President of the European Central Bank and former Prime Minister of Italy, expressed a similar observation: “Regulatory barriers to scaling up are particularly onerous in the tech sector, especially for young companies.”
In December, there were additional indications of a growing problem.
1. When OpenAI released Sora, its video generator, Sam Altman commented on its unavailability in Europe: “We want to offer our products in Europe … We also have to comply with regulation.”[6]

2. “A Visualization of Europe’s Non-Bubbly Economy” by Andrew McAfee from MIT Sloan School of Management exploded online as hammering the EU became a daily habit.

These examples are relevant to the U.S., as California introduced its own attempt to mimic the EU, with Sacramento emerging as America’s Brussels.
– California’s bill SB-1047 as another cautionary tale
Senator Scott Wiener’s SB-1047 was supported by EA-backed AI safety groups. The bill included strict developer liability provisions, and AI experts from academia and entrepreneurs from startups (“little tech”) were caught off guard. They built a coalition against the bill. The headline collage below illustrates the criticism that the bill would strangle innovation, AI R&D (research and development), and the open-source community in California and around the world.

The bill was eventually blocked by Gavin Newsom’s veto. The governor explained that there’s a need for evidence-based, workable regulation.

You’ve probably spotted the pattern by now. 1. Doomers scare the hell out of people. 2. The fear supports their call for a strict regulatory regime. 3. Those who listen to the fearmongering regret it.
Why? Because 1. Doomsday ideology is extreme. 2. The bills are vaguely written. 3. They don’t consider tradeoffs.
2025
– The vibe shift in Washington
The new administration seems less inclined to listen to AI doomsaying.
Donald Trump’s top picks for relevant positions prioritize American dynamism.
The Bipartisan House Task Force on Artificial Intelligence has just released an AI policy report stating, “Small businesses face excessive challenges in meeting AI regulatory compliance,” “There is currently limited evidence that open models should be restricted,” and “Congress should not seek to impose undue burdens on developers in the absence of clear, demonstrable risk.”
There will probably be a fight at the state level, and if SB-1047 is any indication, it will be intense.
– Will the backlash against the AI panic continue?
This panic cycle is not yet at the point of reckoning. But eventually, society will need to confront how the extreme ideology of “AI will kill us all” became so influential in the first place.

——————————-
Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and author of “The TECHLASH and Tech Crisis Communication” book and the “AI Panic” newsletter.
——————————-
Endnotes
Read More | 14 Comments
‘Free Speech Absolutist’ Elon Musk Suspends Critics On ExTwitter, Asks People To Be Nicer
from the free-speech-relativist dept
by Mike Masnick - December 30th @ 1:07pm
The inevitable has happened and Elon has started banning and suppressing the speech of folks who were “on his team,” leading to many suddenly realizing that maybe he wasn’t such a free speech supporter after all.
Look, we’ve spent the better part of the last three years pointing out that Elon Musk does not understand free speech and has often worked directly against basic principles of free speech. He has filed numerous lawsuits that seek to suppress speech. And even if you want to claim he somehow took a more “free speech” approach to running ExTwitter than his predecessors, you’d still be wrong.
He has regularly banned journalists who anger him or shut down reporting that challenges his political allies. He has repeatedly throttled links to sites he views as competitive and recently admitted to suppressing posts with links to news sources.
And, of course, when it matters most for free speech, in pushing back against government attempts at suppression, Musk has shown that he’s a pushover for authoritarian demands, so long as he is supportive of the government in question. While he has occasionally stood up to governments he ideologically disagrees with, these seem to be the exceptions that prove the rule.
Even Elon’s own ExTwitter transparency report admits that under his watch, account suspensions have tripled compared to what they were pre-Musk.
There is no measure under which you can say that Elon is a bigger supporter of free speech than the previous management of Twitter, except in the very, very narrow category of “allowing bigoted Elon Musk fans to be loudly disruptive on the platform.”
And now, even that is coming back to bite him a bit.
In the last week, a bunch of MAGA folks called out Elon for his support for H1B visas and other attempts to bring in high-skilled tech workers to the US. Given that many of the MAGA supporters have spent much of the last two years falsely claiming that Elon was “bringing free speech back,” it was almost amusing to watch them slowly realize that he’s willing to suspend them or to take away their premium features on the site when he gets angry with them.
The most prominent account was Laura Loomer, whose biggest claim to fame seems to be her ability to get banned from platforms.

Musk then used his favorite trick to justify account suppression not being an attack on free speech: redefining spam to mean something… totally unrelated to spam.

Musk’s explanation raises more questions than it answers. This is Elon retconning a justification for the suppression of certain accounts. First, he claims that the algorithm is set to “maximize unregretted user-seconds,” a made-up, impossible-to-calculate stat that he’s talked about for a while now. He then claims that the way the algorithm does this is by rating certain accounts based on how frequently other paying accounts mute or block them. But then he adds a caveat: if he discovers a brigading campaign by accounts to mute/block other accounts in an attempt to suppress their reach, ExTwitter can magically parse out the real mutes/blocks from the fake brigaded ones, and declare some accounts to be “spam.”
This is all a lot of nonsense for Elon to be able to suppress any speech he wants and try to justify it as spam (just like he’s done in the past by redefining “doxxing.”) Of course, as with Elon’s ever-changing definition of doxxing to justify his own actions, I imagine that his legion of fans will continue to buy into his nonsense definition of spam.
Well, except for those MAGA faithful who are now furious that their faces are being eaten by the Leopards Eating Faces Party they supported.
In other words, Musk reserves the right to unilaterally decide which blocks and mutes are “legitimate” and which are not, based on criteria known only to him. This arbitrary and opaque process is a far cry from a principled commitment to free speech.
(Also, I won’t even get into how his tweet misunderstands the whole “live by the sword/die by the sword” line, but will leave that as an exercise for readers).
The end result of all this, though, was Musk pleading with people to stop being such assholes on the very site he took over specifically to unban people for being assholes.

I mean, it’s not like we didn’t warn Elon exactly how this would go. And, it’s not like we haven’t written about how content moderation teams aren’t about ideology. They just wish everyone would stop being jerks, which is the key to any site that allows user-generated content.
I know that I’m banging the drum over this over and over again, but it’s because there are still a ton of people insisting, falsely, that Elon Musk has some sort of principled take on free speech, when it’s been made clear over and over and over and over again that his take is based entirely on his own whims of what he wants, and not any actual understandable conception of free speech.
No matter how many times Musk is caught red-handed suppressing speech he doesn’t like, a vocal contingent will likely continue to buy into the myth of him as a “free speech absolutist.” But for anyone willing to look objectively at his actions rather than his words, the reality is undeniable. Elon Musk’s “free speech” posture is nothing more than a flimsy rhetorical cover for his own desire to control the discourse.
Yes, he has every right to do this on his own platform, but so too did the operators of Twitter before him. Musk may draw the lines of content moderation slightly differently than the previous team, but he certainly seems to draw them much more arbitrarily according to his personal whims.
Read More | 113 Comments
41 Percent of Americans Live Under Age Verification Laws Targeting Porn
from the not-a-free-society dept
by Michael McGrady Jr - December 30th @ 11:15am
Age verification laws saw an unfathomable renaissance in 2024. It’s quite frightening to see a political class of predominantly far-right Christian nationalists implement the anti-porn vision of Project 2025 before President-elect Donald Trump has even entered the White House.
These laws coming out of state legislatures are scripted much as Russell Vought, a controversial architect of Project 2025 and one of Trump’s closest Christian nationalist allies, described them in a viral undercover video, which revealed how age verification laws serve as a “back door” ban on porn.
As of this writing, nearly 139 million U.S. residents live in states with age verification laws on the books that specifically target adult entertainment platforms like Pornhub.com or xHamster.com.
That is slightly over 41 percent of the country’s total population. They reside in 19 predominantly Republican-held states, all but one of which President-elect Trump won during the 2024 presidential election. Virginia, which went blue this past election, is the exception, despite having its own age-gating law.
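The headline figure checks out against a rough U.S. population of about 335 million (my assumption; the Census Bureau’s recent estimates are in that range):

```python
# Quick sanity check of the "slightly over 41 percent" claim.
# The 335 million population figure is an assumption, not from the article.
covered = 139e6          # residents in states with age verification laws
us_population = 335e6    # approximate total U.S. population

share = covered / us_population
print(f"{share:.1%}")    # roughly 41-42% with these assumptions
```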
Several of these states will also have age verification laws in effect on Jan. 1, 2025. States with laws entering force include Florida (HB 3), Tennessee (SB 1792), and South Carolina (H. 3424). Georgia’s age verification law (SB 351) will enter force on July 1, 2025.
The parent companies of platforms like Pornhub have geo-blocked or will geo-block these states.
The U.S. Supreme Court is scheduled to hear oral arguments on Jan. 15 in a case challenging the state of Texas and its age verification law, House Bill 1181. That lawsuit was brought by the Free Speech Coalition and a plaintiff class of the operators of the world’s largest adult websites.
The Free Speech Coalition additionally filed new federal lawsuits in Tennessee and Florida.
In the Tennessee lawsuit in particular, the Free Speech Coalition and its fellow plaintiffs – online sex education providers, pleasure product retailers, and fan platforms – highlight not only the clusterfuck of censoring protected speech but also the fact that violators could face a felony.
How can Republican elected officials justify these laws when they say they support “freedom” and the First Amendment rights of their constituents?
The truth is that they can’t justify these laws. And most of them know that.
Considering all of this, the reason far-right folks are successful in presenting anti-porn laws as so-called “public health” or “public safety” measures is that they excel at fearmongering and manipulating their base into believing in bigoted and outlandish falsehoods about sexuality.
What can be done? Resisting age verification laws and other content restrictions presented by the far-right as “protections” for minors or family values is paramount to the activism agenda in 2025. Lawsuits and lobbying can only go so far. Age verification laws are deeply unpopular; what we need to see now is grassroots-level organizing that transcends the political spectrum.
A perfect example of this can be seen among the coalition of organizations urging the Supreme Court to kill Texas HB 1181 through amicus briefs and in the representation of the Free Speech Coalition and the porn companies. Counsel for the American Civil Liberties Union took up the case along with the Free Speech Coalition’s private attorneys due to the civil liberties overlap.
The Cato Institute, Institute for Justice, and Foundation for Individual Rights and Expression joined the Electronic Frontier Foundation, the Woodhull Freedom Foundation, and many other civil society groups in urging the court to rule against Texas and protect freedom of expression.
Here’s to 2025 and fighting Trump-emboldened far-right Christian nationalism.
Cheers, folks.
Michael McGrady covers the tech and legal sides of the online porn business.
Read More | 54 Comments
Daily Deal: Microsoft Visio Professional 2021 for Windows
from the good-deals-on-cool-stuff dept
by Daily Deal - December 30th @ 11:10am
Visio is Microsoft’s ultimate tool for diagramming. Large, complex data can be too technical and overwhelming. That’s where Visio comes in. With dozens of premade templates, starter diagrams, and stencils, presenting is way easier. Flowcharts, org charts, floor plans, diagrams, and more! Create easy-to-understand visuals with confidence. It’s on sale for $19.97 for a limited time.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Read More | 1 Comment
Mindlessly ‘Deregulating’ U.S. Telecom Contributed to The Worst Hack In U.S. History
from the when-the-check-comes-due dept
by Karl Bode - December 30th @ 9:35am
For the better part of thirty years, telecom giants (and the consultants, think tanks, and lobbyists paid to defend them) have fought against every effort at coherent federal oversight. It didn’t matter whether it was modest privacy standards or basic pricing transparency; the argument was that if you stripped away coherent state and federal government oversight of telecom, free market magic would happen.
Not only is U.S. broadband uncompetitive, patchy, and expensive, with bad customer service as a result, but lax oversight and privacy/security standards have also resulted in a steady parade of hacks and leaks, culminating recently in the worst hacking intrusions the U.S. has ever seen. Chinese hackers deeply infiltrated nine major U.S. ISPs to spy on high profile targets, and the government and U.S. telecoms are still trying to assess the damage months later. (Why, it’s almost as if corruption is a national security risk.)
Because the “Salt Typhoon” hackers were very careful about wiping logs, it’s been difficult to assess the full scale of the intrusion or whether intruders are still in sensitive systems. Officials believe intruders could still be rooting around the networks of the nine compromised ISPs. They also say the hack succeeded because telecoms “failed to implement rudimentary cybersecurity measures across their IT infrastructure.”
The U.S. reporting on the hack has been…interesting.
The story has seen a fraction of the press attention reserved for the TikTok moral panic. And very few news outlets are willing to draw a direct line between the telecom industry’s relentless “deregulatory” lobbying (read: corruption) and the intrusion, despite U.S. officials making it very clear in statements:
“When I talked with our U.K. colleagues and I asked, ‘do you believe your regulations would have prevented the Salt Typhoon attack?’, their comment to me was, ‘we would have found it faster. We would have contained it faster, [and] it wouldn’t have spread as widely and had the impact and been as undiscovered for as long,’ had those regulations been in place,” [White House Cybersecurity chief] Anne Neuberger said. “That’s a powerful message.”
The FCC is poised to hold meetings next month to address whether it should shore up its cybersecurity oversight of telecoms. But at the helm of those conversations will be new Trump FCC boss Brendan Carr, who has never stood up to major telecoms on any issue of importance, ever. And the looming Trump-court-backed defeat of net neutrality also curtails the FCC’s authority on cybersecurity.
Again, the U.S. Congress has repeatedly proven too corrupt to pass meaningful telecom reform. Regulators are routinely stocked with revolving door careerists too worried about their next career move to stand up to telecoms. And the corrupt U.S. Supreme Court just neutered what’s left of regulatory independence, ceding most reforms to a Congress too corrupt to act.
The Salt Typhoon hack comes after years and years of officials freaking out about the security risks of Chinese-made Huawei telecom hardware. Though when the worst hack in U.S. history finally arrived it was courtesy of lax domestic oversight, domestic deregulation, domestic corruption, domestic laziness, and outdated administrative passwords.
Read More | 8 Comments
FTC Orders ‘Gun Detection’ Tech Maker Evolv To Stop Overstating Effectiveness Of Its Glorified Metal Detectors
from the same-old-stuff-only-much-more-expensive dept
by Tim Cushing - December 30th @ 5:22am
Updated: This post has been updated, as the original potentially overclaimed both what the FTC settlement said regarding what Evolv could market as well as Evolv’s response to it (suggesting it would try to limit the settlement it agreed to). We regret the misleading descriptions and have updated the article accordingly.
Evolv might be new to the game but it’s already made a name for itself. And not a good one.
It was an integral part of New York City Mayor Eric Adams’ ongoing run of public failures. The mayor announced Evolv would be placing its “gun detection” tech in the city’s subways, despite the public admission of Evolv CEO Peter George (during a call with investors) that the tech wouldn’t work all that well in subways.
“Subways, in particular, are not a place that we think is a good use case for us,” George said, due to the “interference with the railways.”
He probably meant interference from the railways, but the end result of Evolv’s trial run could probably be described as “interference with the railways” just as accurately.
A pilot program testing AI-powered weapons scanners inside some New York City subway stations this summer did not detect any passengers with firearms — but falsely alerted more than 100 times, according to newly released police data.
Through nearly 3,000 searches, the scanners turned up more than 118 false positives as well as 12 knives, police said, though they declined to say whether the positive hits referred to illegal blades or tools, such as pocket knives, that are allowed in the transit system.
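Taking the pilot numbers quoted above at face value, the implied precision of the scanners’ alerts is strikingly low. A quick calculation (counts from the reporting, with “screenings” standing in for the article’s “searches”):

```python
# The NYPD pilot numbers quoted above: ~3,000 screenings, 118 false
# alerts, 12 knives found, and zero firearms.
screenings = 3000
false_alerts = 118
true_hits = 12  # knives; no guns were detected

# Of all alerts raised, what fraction actually found a weapon of any kind?
alert_precision = true_hits / (true_hits + false_alerts)
# How often did a screening produce a false alert?
false_alert_rate = false_alerts / screenings

print(f"Share of alerts that found any weapon: {alert_precision:.1%}")
print(f"False alerts per screening: {false_alert_rate:.1%}")
```

In other words, fewer than one in ten alerts found anything at all, and none found the guns the system is marketed to detect.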
On one hand, CEO Peter George definitely didn’t oversell the tech’s effectiveness when he expressed his reluctance to deploy it in city subways. On the other hand, it would appear Evolv’s sales force has overstated the tech’s effectiveness so often, the Federal Trade Commission has been forced to step in. Here’s more from Matthew Guariglia and Cooper Quintin of the EFF:
The Federal Trade Commission has entered a settlement with self-styled “weapon detection” company Evolv, to resolve the FTC’s claim that the company “knowingly” and “repeatedly” engaged in “unlawful” acts of misleading claims about their technology. Essentially, Evolv’s technology, which is in schools, subways, and stadiums, does far less than they’ve been claiming.
The FTC alleged in their complaint that despite the lofty claims made by Evolv, the technology is fundamentally no different from a metal detector: “The company has insisted publicly and repeatedly that Express is a ‘weapons detection’ system and not a ‘metal detector.’ This representation is solely a marketing distinction, in that the only things that Express scanners detect are metallic and its alarms can be set off by metallic objects that are not weapons.”
Evolv is selling metal detectors with some unproven AI stapled to them. Because there’s AI involved, the company has no qualms about selling its metal detectors for up to five times the going rate of regular, non-AI-tainted metal detectors. If customers balk at the markup, that’s where the salespeople step in to, apparently, overstate the accuracy of Evolv’s tech and its presumed effectiveness in reducing violent crime by detecting weapons.
Here’s what the settlement [PDF] prevents Evolv from making misrepresentations about in its marketing materials, advertising, or anything connected with pitching its products to potential customers:
A. the ability to detect weapons;
B. the ability to ignore harmless personal items;
C. the ability to detect weapons while ignoring harmless personal items;
D. the ability to ignore harmless personal items without requiring visitors to remove any such items from pockets or bags;
E. weapons detection accuracy, including in comparison to the use of metal detectors;
F. false alarm rates, including comparisons to the use of metal detectors;
G. the speed at which visitors can be screened, as compared to the use of metal detectors;
H. labor costs, including comparisons to the use of metal detectors;
I. testing, or the results of any testing; or
J. any material aspect of its performance, efficacy, nature, or central characteristics, including, but not limited to, the use of algorithms, artificial intelligence, or other automated systems or tools
It also instructs the company to inform all of its educational facility customers that they can cancel their contracts immediately and pay only what’s owed through the point the contract is cancelled.
The only upside for Evolv is that this settlement only applies to its Evolv Express product and only to its marketing to customers in the educational field. It’s still open season elsewhere, but this settlement contains admissions by the company that it misled these particular customers, which should make other potential customers in other areas (hospitals, subways, etc.) far more wary of trusting Evolv’s effectiveness assertions.
Read More | 17 Comments
You're Subscribed to: Techdirt Daily Newsletter using the address: snoqualmie2@bukanimers.com
newsletters@techdirt.com
Floor64, Inc.
370 Convention Way
Redwood City, CA 94063