Signals & Safeguards – Issue 7 • Wednesday, April 29, 2026

A concise weekly scan of surveillance, privacy, cybersecurity, and the safeguards public officials should keep in view.


At a glance

  • Section 702 reauthorization heads to the House floor under a closed rule, one day ahead of the April 30 deadline, after warrant-related reform amendments were blocked in the Rules Committee.
  • Florida investigators documented thousands of license-plate-reader searches tracking protesters, while Bend just turned on its own automated camera system; Oregon’s new ALPR law gives the public new tools to ask the procurement and retention questions that matter.
  • The most consequential surveillance architecture this week is the kind operated by private vendors with light access controls — an AI cybersecurity model leak, a $130 million IRS data-linkage platform, and global telecom-tracking infrastructure running since at least 2022.

Section 702 reauthorization moves toward the House floor without warrant votes

As Issue 7 went to publication, the House Rules Committee had reported a closed rule for S. 1318, teeing up leadership’s Section 702 reauthorization for possible House floor action today. The House Majority Leader’s schedule lists S. 1318, the Foreign Intelligence Accountability Act, as legislation considered pursuant to a rule. The rule matters because it does not simply set debate time; it determines what amendments the House will actually be allowed to vote on.

The answer, for now, is narrow. The Rules report provides for consideration of S. 1318 under a closed rule, with the text of Rules Committee Print 119-27 considered adopted as modified only by the amendment printed in Part C. That Part C amendment, offered by Rep. Rick Crawford, is an oversight and penalties clarification: it directs the intelligence community inspector general to determine whether referred queries violate law, rules, or regulations or constitute abuse of authority, and clarifies that criminal penalties apply to query-procedure violations relating to U.S.-person queries.

What did not make it through Rules is the more consequential reform fight. A motion to make in order a Biggs amendment creating a warrant requirement for covered U.S.-person queries of Section 702-acquired information was defeated 6-6. A motion to make in order a Massie amendment prohibiting reverse targeting under Section 702 was defeated 4-7. Another Massie amendment narrowing the expanded definition of electronic communication service provider was also defeated. In practical terms, the warrant fight reached the Rules Committee, but not the House floor.

That procedural outcome changes the framing from yesterday’s version. The story is no longer just that Section 702 was stalled while leaders scrambled. The story is that leadership’s three-year extension is moving forward through a rule that blocks separate floor votes on warrant-related amendments. The core substantive question remains the same: whether the government should be able to search Americans’ communications collected under a foreign-intelligence authority without first getting judicial approval.

One important detail still tempers the cliff framing: most surveillance authorities under Section 702 can continue through March 2027 under existing certifications even if Congress fails to act before April 30. That does not make the deadline meaningless, but it does make the procedural drama somewhat narrower than the rhetoric suggests. The practical urgency is real; the deeper signal is the pattern. When reform is offered, the fight often happens at the procedural gate before the public ever sees a clean floor vote.

Why it matters for Bend: the warrant fight has been deferred for six issues of this newsletter. Whether S. 1318 passes today, stalls again, or becomes part of another procedural bargain, the local lesson does not change. Federal guardrails remain contested and fragile. That makes local procurement rules, retention limits, access logs, and public oversight more important, not less, because local governments cannot assume that federal law will provide the missing friction later.

Florida documented thousands of plate-reader searches against protesters. Bend just turned on its own cameras.

Treasure Coast Newspapers published the strongest piece of investigative reporting on automated license plate readers in years. Reporter Jack Lemnus reviewed more than five million Flock Safety searches across Florida and found that dozens of police departments, sheriff’s offices, and campus police had used the system to track drivers tied to protests including No Kings, 50501, Hands Off, and immigration-enforcement actions.

Three Treasure Coast agencies that publicly say they do not actively participate in immigration enforcement ran at least 25 immigration-related searches in 2025; statewide, immigration-related Flock queries rose 82% from 2024 to 2025. Sebastian Police, Vero Beach Police, and Port St. Lucie Police were among the named departments using Flock cameras; Stuart Police uses a similar Vigilant Solutions system. Some agencies declined to provide camera counts.

The TCPalm investigation lands inside a larger national pattern. The Electronic Frontier Foundation’s analysis of approximately 12 million Flock search logs, first reported by 404 Media, found that more than 50 federal, state, and local agencies ran protest-related searches across a ten-month window covering the 50501 movement, Hands Off! protests, and the June and October No Kings demonstrations. In one widely discussed case from last year, a Texas sheriff’s deputy in Johnson County searched 83,000 Flock cameras nationwide to track a woman who had sought an abortion in Illinois, with the search reason logged as “had an abortion, search for female.” The newsletter has flagged similar use-case drift since Issue 1.

Oregon now provides one of the strongest state-level frameworks in the country to ask the questions Florida’s records expose. Senate Bill 1516, signed into law by Governor Tina Kotek on March 31 with an emergency clause, took effect immediately. The law restricts how law enforcement uses ALPR systems, limits sharing, requires audit logging, and — most significantly — creates a private right of action allowing Oregonians to sue private companies that sell or otherwise improperly use ALPR data.

The Oregon Law Center documented that in June 2025, agencies outside Oregon searched the networks of Oregon’s local law enforcement agencies hundreds of times on behalf of ICE. A University of Washington report from October 2025 found that Border Patrol had access to at least ten Washington police departments’ camera databases without explicit authorization. Both findings sit directly under SB 1516’s new framework.

Surveillance from above is part of the same story. The Intercept documented that LAPD’s Drone as First Responder program flew 32 drone flights over the March 28 No Kings protest in downtown Los Angeles, including nine that began before any dispersal order. Drones lingered over the Metropolitan Detention Center and the Little Tokyo intersection for hours. Skydio’s own marketing materials say its X10 drones can read license plates from 800 feet and identify individuals from more than 2,500 feet.

The Bend hook lands directly inside this national arc. Bend Police activated four new red-light and speed cameras on Wednesday, April 15, operated under contract by Verra Mobility. In approximately five days — through 7:54 a.m. on Monday, April 20 — the cameras logged 352 events, a rate of roughly 70 per day, with red-light events outnumbering speed events roughly two-to-one. Tickets are not yet being issued during the 30-day warning period; ticketing begins May 15. Camera locations are SE Reed Market Road and SE 3rd Street, NE 27th Street and NE Neff Road, and SE Powers Road and US Highway 97.

Why it matters for Bend: Bend’s deployment is happening under SB 1516’s new legal framework, which gives the council and the public tools they did not have a month ago. The questions that matter are not whether the cameras catch red-light runners. The questions are: what data does Verra retain, for how long, and where? Who has access? Are the cameras capturing license-plate data on every passing vehicle, including non-violators? Are there logs of who queries the system and why, and is anyone reviewing those logs? Florida shows how “missing people and stolen cars” can quietly become “people who attended a protest.” Oregon’s law gives Bend clearer footing to answer those questions before the warning period ends.

The most consequential surveillance architecture this week is operated by private vendors

Three separate stories this week describe the same structural pattern: surveillance and security infrastructure increasingly operated by private contractors with weak access controls, broad scope, and limited public accountability, even as it is integrated more deeply into government and financial systems.

Anthropic confirmed an unauthorized-access incident at Claude Mythos Preview through one of its third-party vendor environments. Mythos was launched April 7 as part of a curated rollout to enterprise and government partners and publicly described as too dangerous for general release because of its software-vulnerability-finding capabilities. The breach occurred on launch day: a small group reportedly obtained access through a third-party Anthropic contractor whose credentials were apparently shared, then used educated guesses about Anthropic’s URL naming patterns to reach the model. Whatever the merit of the model’s marketing, the substantive question is the same one this newsletter asks of every vendor-mediated system: vendor controls are the actual perimeter.

Anthropic also responded to Senator Ron Wyden’s letter on AI surveillance access by saying its policy bars unauthorized surveillance and analysis of bulk-domestic-collection data, while acknowledging an exception for a small number of national-security customers using models for foreign-intelligence analysis in accordance with law — including foreign intelligence that includes incidentally collected U.S.-person information. That phrase tracks the same Section 702 architecture now in front of Congress.

The Intercept reported on Palantir’s Lead and Case Analytics platform, used by IRS Criminal Investigation since 2018. The IRS has paid Palantir more than $130 million for a platform that links tax records, Affordable Care Act data, bank statements, FinCEN data, and cryptocurrency wallet data. Social-relationship mapping is core to the design: the system analyzes networks of people, including calls, texts, emails, and IP-address relationships, and helps investigators establish new relationships among actors.

Citizen Lab published Bad Connection, documenting two global telecommunications-surveillance campaigns. One combines SS7 and Diameter signaling to track mobile-subscriber locations; the other uses SIMjacker attacks and has logged more than 15,700 location-tracking attempts since October 2022. The structural finding is the most important: these vulnerabilities are inherent to global telecommunications design and business practices, not simply software bugs.

Why it matters: capabilities the public might assume are tightly held by accountable government agencies are often operated through private vendors with substantial access, weak access controls, and customer lists the public cannot easily see. Mythos leaked through a contractor on day one. Palantir’s LCA platform has expanded across administrations without sustained public deliberation. The telecom-tracking infrastructure documented by Citizen Lab has operated for years despite repeated public reporting. Together, they describe the actual perimeter of modern surveillance, and where its real controls and failures live.


Warning Signals

These items point toward where surveillance systems and governance fights may be heading next. The strongest signals this week describe the gap between marketing and actual data flow — workplace tools quietly becoming AI training data, child-safety frameworks becoming identity-verification infrastructure, and pushback against federal practice producing visible but limited change.


Workplace data is becoming AI training data through three different routes

Three pieces of reporting this month describe the same shift through different mechanisms. Atlassian announced that starting August 17, customer metadata and in-app data from Jira, Confluence, and other cloud products will be used to train its AI tools by default — with the opt-out tiered by paywall. Free and Standard customers cannot opt out of metadata collection at all; Premium turns in-app collection off by default but keeps metadata mandatory; only Enterprise customers can opt out of both. The change affects roughly 300,000 customers, with retention periods up to seven years.

Reuters separately reported that Meta’s Superintelligence Labs has begun installing keystroke and mouse-movement tracking on employee computers to generate training data, based on internal memos and on-the-record vendor confirmation. Forbes reporter Anna Tong documented an emerging market in which defunct startups sell their Slack archives, email threads, and code libraries to AI developers through brokers like SimpleClosure’s Asset Hub, which has processed nearly 100 deals in the past year. In none of the three cases do the workers whose communications become training data typically have notice or consent.

Why it matters for Bend: government and HIPAA-regulated organizations are exempt from Atlassian’s new policy, but smaller Oregon contractors, school districts, and nonprofits running Free or Standard tiers are not. The procurement and IT-policy questions are immediate: what tools are being used, what data is being retained, and whether vendor AI defaults have changed underneath ordinary work.

Identity-verification mandates keep arriving disguised as child-safety laws

A Boston Globe op-ed by Evan Greer of Fight for the Future and Nathalie Marechal of Northeastern’s Institute for Information, the Internet and Democracy makes the substantive case against pending Massachusetts proposals to ban under-14s from social media and require age verification — an argument that applies equally to similar bills in other states. The unstated price of these laws is mandatory identity infrastructure for all users, not just minors. Earlier this year, hackers stole 70,000 Discord users’ data from the company’s age-assurance vendor — a concrete harm, not a hypothetical. California’s A.B. 1709, currently being fast-tracked through the Assembly, would extend the ban to age 16 and create a new state e-Safety Advisory Commission to enforce it.

The op-ed authors note that the Department of Homeland Security has already used administrative subpoenas this year to demand information about anonymous social-media accounts that monitor and criticize ICE — exactly the kind of accountability journalism that requires the anonymity these laws would weaken. The risk is not that child safety is unimportant. The risk is that a child-safety frame can normalize account-to-identity linkage for everyone.

Why it matters for Bend: Issue 4 covered the OpenAI-funded Parents & Kids Safe AI Coalition shaping legislation that mirrored the company’s own product positioning. The pattern continues: child-safety language is being used to build identity infrastructure that will affect every user, while biometric-data centralization, breach risk concentration, and age-verification vendors are underweighted in the debate.

Federal practice gets pushed back from multiple directions, with mixed but real results

This was an unusually concentrated week of pushback against federal enforcement and surveillance practices. The American Civil Liberties Union, Common Cause, and other plaintiffs filed a lawsuit on April 21 challenging the Justice Department’s demand that all 50 states and the District of Columbia turn over voter-registration records, after 12 states complied and DOJ sued 30 states; five states — Michigan, Oregon, California, Massachusetts, and Rhode Island — have had cases dismissed. The Electronic Frontier Foundation filed suit on April 22 against DHS and ICE over administrative subpoenas issued without judicial approval to tech companies including Amazon, Apple, Google, Meta, Reddit, and X — subpoenas DHS withdrew when challenged.

NBC News reported on April 24, based on two senior DHS officials and two immigration attorneys, that ICE has verbally instructed field offices to stop entering homes without judicial warrants and has drastically curtailed arrests inside immigration courthouses, reversing the policy memorialized in a May 2025 memo Issue 3 covered when its legal authority first came into question. New York Times reporter Elizabeth Williamson disclosed that the FBI had investigated her after her February 28 article on FBI Director Kash Patel’s security arrangements for his girlfriend; FBI agents queried federal databases for her information before the Justice Department ended the investigation, citing no legal basis. Separately, the Ninth Circuit Court of Appeals issued a 3-0 decision on April 22 prohibiting California from enforcing part of its federal-officer identification law on Supremacy Clause grounds, a ruling with direct implications for Oregon’s HB 4138 mask-ban law passed in March.

Why it matters for Bend: the pushback is real and produces visible change, but it is also uneven and reversible. ICE’s rollback was verbal, not memorialized. The DOJ stopped the FBI investigation only after agents had already queried databases. The Ninth Circuit ruling makes Oregon’s mask-ban law substantially more vulnerable than it was a month ago. State and local guardrails still matter, but a verbal change can be verbally reversed.

Florida AG opens criminal probe of OpenAI over FSU shooting

Florida Attorney General James Uthmeier announced a rare criminal investigation of OpenAI on April 21 over the April 17, 2025 FSU shooting that killed two people and wounded six. Per Uthmeier’s announcement and court filings reportedly including more than 200 AI messages entered into evidence, the suspect used ChatGPT for guidance on weapons selection, ammunition, and timing. Uthmeier said that if a person had given the same guidance, the office would be charging that person with murder. OpenAI responded that the model provided factual responses to questions with information available across public sources online.

The case follows a separate lawsuit over a February 2026 mass shooting in British Columbia, in which the Wall Street Journal reported that OpenAI’s internal safety systems had flagged the shooter’s account and company leaders considered alerting law enforcement before deciding not to. The policy problem is not only content moderation. It is how companies, prosecutors, and legislatures define responsibility when high-risk interaction logs already exist, internal systems flag danger, and public reporting later reveals the company had enough context to consider intervention.

Why it matters: Issue 5 covered OpenAI’s support for an Illinois bill that would limit the company’s liability for catastrophic model harms. The Florida case is exactly the kind of harm such a bill would shield the company from liability for.

A comprehensive surveillance-reform bill enters Congress

Rep. Thomas Massie introduced H.R. 8470, the Surveillance Accountability Act, on April 23. The bill would require warrants for almost all federal searches including digital searches; close the third-party-data loophole by requiring warrants for data held by ISPs, banks, cloud providers, and data brokers regardless of vendor consent; explicitly cover biometric data including facial scans and gait analysis, ALPR data, and vehicle-movement patterns; and create a Bivens-style federal cause of action with attorney’s fees against federal employees who violate Fourth Amendment rights.

It is, in effect, a legislative compendium of the policy concerns this newsletter has documented across six issues. Massie has libertarian-Republican credentials but bills introduced under his name often do not advance. The proposal’s main significance is what it tells us about where the bipartisan civil-liberties center could converge if either party builds toward it. Co-sponsors include Rep. Lauren Boebert and Rep. Warren Davidson; the Davidson connection is notable given his support this week for Speaker Johnson’s much narrower Section 702 reauthorization.

Why it matters: the same week a comprehensive warrant-based bill enters the House, a far narrower 702 reauthorization with no warrant requirement is the only legislation in serious negotiation. The contrast is the story.

Driver populations get GPS and impairment-detection mandates

Maryland passed bipartisan legislation requiring repeat-offender drivers facing license suspension to install Intelligent Speed Assistance technology — GPS-aware systems that prevent vehicles from exceeding the local speed limit. If signed by Gov. Wes Moore, the law takes effect October 1. The federal Department of Transportation, meanwhile, faces a September 2027 statutory deadline to mandate advanced drunk and impaired-driving prevention technology in all new passenger vehicles under Section 24220 of the 2021 Infrastructure Investment and Jobs Act.

NHTSA missed its November 2024 rulemaking deadline and now describes, in a February 2026 Report to Congress, accuracy problems serious enough that even 99.9% detection accuracy would generate millions of false-positive vehicle restrictions per year. Both mandates are sold as discrete safety interventions for narrow populations. Both establish GPS, sensor, and biometric infrastructure that becomes standard in passenger vehicles regardless of driver classification.
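NHTSA’s false-positive arithmetic is easy to sanity-check with rough numbers. The sketch below is illustrative only: the daily trip count is an assumption chosen for scale, not a figure from the report.

```python
# Back-of-envelope estimate of false-positive vehicle restrictions
# from an in-vehicle impairment detector. All inputs are illustrative
# assumptions, not figures from NHTSA's report.

def annual_false_positives(daily_trips: float, false_positive_rate: float) -> float:
    """Expected unimpaired trips flagged per year, given a per-trip FP rate."""
    return daily_trips * false_positive_rate * 365

# Rough assumption: on the order of a billion U.S. vehicle trips per day,
# nearly all of them by unimpaired drivers.
trips_per_day = 1_000_000_000

# "99.9% accuracy" read as a 0.1% false-positive rate per trip.
fp_rate = 0.001

print(f"{annual_false_positives(trips_per_day, fp_rate):,.0f}")
```

Even under these generous assumptions, a 0.1% error rate flags hundreds of millions of trips a year, which is why the report treats accuracy as the central obstacle.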

Why it matters for Bend: Issue 6 flagged that device defaults are becoming the new front line for identity and behavior infrastructure. These are the vehicle equivalents.

Direction of travel: identity infrastructure is being built underneath consumer software, vehicles, and child-safety legislation while courts and watchdogs catch up after the architecture is already in place.

“Ultimately, arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.”

Edward Snowden

Practical habits that lower risk

These are the practical protections and patching habits that stood out most clearly this week. The strongest safeguards are still disciplined ones: faster patching, narrower data, and fewer assumptions about what default settings actually do.


CISA’s April KEV deadline passed Monday — but the patch picture is widening, not narrowing

CISA added eight more vulnerabilities to its Known Exploited Vulnerabilities catalog on April 13, with a federal remediation deadline of April 27. The additions included flaws in Cisco SD-WAN Manager and Synacor Zimbra, alongside an actively exploited Microsoft Exchange vulnerability tied to Storm-1175 Medusa ransomware operations. Microsoft’s April Patch Tuesday separately addressed 165 vulnerabilities, the second-largest monthly batch on record.

Practical safeguard: treat KEV deadlines as a minimum floor, not the whole patch strategy. Systems that face the public internet, email, identity, remote access, and vendor management deserve faster review than ordinary monthly patch cycles.
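For teams that want to operationalize that floor, a small triage script can surface overdue KEV entries. This is a minimal sketch with hypothetical records; the field names (cveID, dueDate) mirror CISA’s published KEV JSON schema, but the data and script are illustrative, not an official tool.

```python
# Minimal KEV triage sketch: flag catalog entries whose federal
# remediation due date has passed. The records below are hypothetical;
# in practice they would come from CISA's KEV JSON feed.
from datetime import date

sample_kev = [
    {"cveID": "CVE-2026-0001", "product": "Example SD-WAN", "dueDate": "2026-04-27"},
    {"cveID": "CVE-2026-0002", "product": "Example Mail", "dueDate": "2026-05-18"},
]

def overdue(entries: list[dict], today: date) -> list[dict]:
    """Return entries whose dueDate is on or before `today`."""
    return [e for e in entries if date.fromisoformat(e["dueDate"]) <= today]

for e in overdue(sample_kev, date(2026, 4, 29)):
    print(e["cveID"], "overdue since", e["dueDate"])
```

Running something like this against the live feed on a schedule turns the KEV deadline from a news item into a standing check.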

NIST narrows the scope of CVE analysis at the National Vulnerability Database

NIST announced that it will prioritize enrichment only for CVEs in CISA’s KEV catalog, federal-government software, or critical software under Executive Order 14028. CVEs outside those criteria will still be listed, but many will no longer receive automatic enrichment with the metadata organizations use to judge severity.

Why it matters: organizations that rely on NVD enrichment should add other sources to their patch workflow, especially for software outside federal use. The default authoritative source is still valuable, but it is no longer comprehensive enough to carry the whole risk picture.

A privacy-tool vulnerability defeats Tor’s “New Identity” reset

Researchers Dai Nguyen and Martin Bajanik disclosed CVE-2026-6770, a Firefox/Tor vulnerability involving the IndexedDB.databases() API. Any website could derive a stable identifier by creating named databases and observing returned ordering. The identifier persisted across websites and across privacy resets users would expect to clear it, including Tor Browser’s New Identity reset.

Practical safeguard: keep privacy tools updated, and do not treat a privacy promise as the same thing as a guarantee. The lesson is not that Tor is unsafe; it is that privacy depends on implementation details that can fail quietly.
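The underlying mechanism generalizes: any shared namespace that preserves creation order and survives a "reset" can smuggle a stable identifier past it. The simulation below is conceptual, not the actual browser code, and the encoding scheme is invented for illustration.

```python
# Conceptual simulation of an ordering-based identifier leak.
# A shared, creation-ordered namespace survives a "privacy reset"
# that only clears per-site data, so any site can re-read the ID.

class SharedNamespace:
    def __init__(self):
        self._names = []              # creation order is preserved

    def create(self, name: str):
        if name not in self._names:
            self._names.append(name)

    def list_names(self):
        return list(self._names)      # the ordering leaks to every caller

def write_identifier(ns: SharedNamespace, ident: str):
    # Encode the identifier as a sequence of database-like names.
    for i, ch in enumerate(ident):
        ns.create(f"db-{i}-{ch}")

def read_identifier(ns: SharedNamespace) -> str:
    # Any site can recover the identifier from the shared ordering.
    return "".join(name.split("-")[2] for name in ns.list_names())

ns = SharedNamespace()
write_identifier(ns, "a7f3")    # site A tags the browser
# ...user triggers a privacy reset; per-site storage is cleared,
# but the shared ordering is not...
print(read_identifier(ns))      # site B reads the identifier back
```

The fix in the real case was to stop exposing the shared state, which is exactly the kind of implementation detail the safeguard below is about.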

macOS updates are also social-engineering safeguards now

Apple released macOS Tahoe 26.4.1 on April 9, building on a late-March security update addressing WebKit issues, Mail privacy problems, iCloud sensitive-data exposure, and Crash Reporter behavior. Apple also added Terminal protection against potentially harmful pasted commands. Within days, Jamf Threat Labs documented a ClickFix-style attack using the applescript:// URL scheme to open Script Editor instead of Terminal and bypass the new protection.

Practical takeaway: keep macOS current; leave Background Security Improvements on automatic install; and treat prompts asking you to paste commands or install software outside a normal update flow as suspicious by default. A recruiter, meeting invite, verification step, or update prompt can be part of the attack path.

Bottom line: the strongest safeguards this week are still the unglamorous ones — faster patching when exploited vulnerabilities land, less default trust in vendor relationships, shorter retention periods, narrower access, and concrete audit-log questions asked before infrastructure becomes routine. The law is in effect now. The cameras already are too.

“Experience should teach us to be most on our guard to protect liberty when the government’s purposes are beneficent. Men born to freedom are naturally alert to repel invasion of their liberty by evil-minded rulers. The greatest dangers to liberty lurk in insidious encroachment by men of zeal, well-meaning but without understanding.”

Olmstead v. U.S., 277 U.S. 438 (1928) (dissenting) — Louis D. Brandeis

Disciplined safeguards are deliberately boring. That is why they last.
