Author: Jonathan

  • Surveillance • Privacy • Local Oversight

    Bend’s Flock Data Shows Why Surveillance Oversight Must Be Built Into the System

    Recent reporting on Bend’s Flock Safety cameras and on Oregon’s sanctuary-law lawsuit points to the same lesson: privacy rules only work when the databases enforce them.

    The most important surveillance story in Bend this week is not hypothetical. According to The Source Weekly, federal immigration officials made 279 queries into Bend’s Flock Safety data in the first three weeks after the cameras went live. Bend Police reportedly did not authorize those searches, and it remained unclear what data, if any, may have been retrieved.

    That should concern anyone who cares about local control, public accountability, or civil liberties.

    The issue is not simply whether a particular search was improper. The larger problem is governance. When a surveillance system is installed for one stated purpose, it can quickly become useful to other agencies, vendors, or outside users who were not central to the original public debate.

    If city officials cannot later answer basic questions — who searched the system, why they searched it, what they saw, whether the search was tied to a case number or documented purpose, and whether any result was shared — then the public is being asked to trust a system that cannot be independently verified.

    That is not meaningful oversight. That is a blind spot.

    Audit logs are not a technical detail. They are the oversight system.

    Surveillance oversight often gets discussed as if it is mainly a policy question: Should the city approve the tool? What is the stated purpose? What does the vendor promise? What does the agency say it intends to do?

    Those questions matter, but they are not enough.

    Once a system is live, the real questions become operational:

    • Who can access the system?
    • Can outside agencies search it?
    • Can federal agencies search it?
    • Can vendors access camera feeds, search tools, dashboards, support logs, or stored data?
    • Does every search leave a usable record?
    • Can elected officials or independent reviewers verify how the system was used?
    • Are searches tied to a case number, warrant, emergency, or documented purpose?
    • Are improper searches detectable?

    Without those controls, the public has no practical way to know whether the system is being used as promised.
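    For officials who want a concrete picture of what a “usable record” means, here is a minimal sketch in Python. The field names and validation rule are illustrative assumptions, not Flock’s or any city’s actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative only: these field names are assumptions for discussion,
# not any vendor's or agency's real audit-log schema.
@dataclass
class SearchAuditRecord:
    user_id: str                      # who searched
    agency: str                       # which agency the user belongs to
    query: str                        # what was searched (e.g., a plate number)
    purpose: str                      # documented reason for the search
    case_number: str                  # case, warrant, or emergency identifier
    results_shared_with: list = field(default_factory=list)  # onward sharing
    timestamp: str = ""               # filled in automatically below

    def __post_init__(self):
        self.timestamp = self.timestamp or datetime.now(timezone.utc).isoformat()
        # Refuse to record (and therefore to run) an undocumented search.
        if not self.purpose or not self.case_number:
            raise ValueError("search must cite a documented purpose and case number")
```

    The point of the sketch is the last two lines: when the log entry is a precondition of the search rather than an afterthought, an unlogged query simply cannot happen.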

    This does not require assuming bad faith by local officials. In fact, it shows why good-faith intentions are not enough. A system can be approved by people with narrow intentions and later become available to people with very different priorities. That is why access controls, audit logs, outside-agency limits, vendor-access limits, and public reporting must exist before a system goes live.

    “You must first enable the government to control the governed; and in the next place oblige it to control itself.”
    — James Madison, The Federalist No. 51 (1788)

    Oregon’s sanctuary-law lawsuit is also about database access

    The Bend Flock story now sits inside a larger Oregon data-sharing dispute.

    OPB reported that a lawsuit filed in Multnomah County Circuit Court alleges Oregon State Police allowed federal immigration authorities to access Oregonians’ data through shared law-enforcement databases for years, despite Oregon’s sanctuary laws. Oregon State Police denies wrongdoing.

    The Source Weekly also reported that the complaint alleges federal immigration authorities queried state-run data about Oregonians 1.4 million times between February 2025 and February 2026, an average of 3,835 queries each day.

    The practical lesson is straightforward: a sanctuary policy is only as strong as the database permissions behind it.

    If an agency is legally barred from using data for immigration enforcement, the system should not rely on informal restraint. It should include technical controls that prevent improper access, audit logs that expose improper use, and consequences when policy and practice diverge.

    A privacy rule that lives only on paper is fragile. A privacy rule built into permissions, logging, retention limits, and reporting is much harder to ignore.
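    What “built into permissions, logging, retention limits, and reporting” can mean in practice: a retention limit enforced by code rather than by a policy memo. This is an illustrative sketch; the 30-day window and field name are assumptions, not any agency’s actual rules:

```python
from datetime import datetime, timedelta, timezone

# Assumed retention window for illustration; real limits are set by policy.
RETENTION = timedelta(days=30)

def purge_expired(records, now=None):
    """Keep only records still inside the retention window.

    Each record is assumed to carry a timezone-aware 'collected_at' datetime.
    """
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] <= RETENTION]
```

    Run on a schedule, a purge like this makes over-retention a bug to be caught, not a habit to be audited years later.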

    Why this matters for Bend

    Bend residents’ information may pass through city, county, state, vendor, and law-enforcement systems. That information can include license plate scans, police records, driver information, public records, utility records, permits, emergency-service data, and other civic information collected for ordinary public purposes.

    The danger is that data collected for one purpose can become useful for another.

    A license plate reader approved for local public-safety purposes can become searchable by outside agencies. A state database can become an immigration-enforcement tool. A vendor platform can create access points that city officials did not fully understand when the contract was approved. A policy promise can fail if the technology underneath it does not actually enforce the rule.

    That is why surveillance oversight cannot stop at approval. Local officials need to know who can search local systems, what outside agencies can access, what vendors can see, how long data is retained, and whether every search leaves a record that can be reviewed.

    The safeguard standard should be simple

    Before Bend approves, renews, or expands any surveillance or sensitive data system, the city should be able to answer:

    • Who can access the data?
    • Who can search the system?
    • Can outside agencies access it?
    • Can federal agencies access it?
    • Can vendors access it?
    • Is every search logged?
    • Are searches tied to a documented purpose?
    • How long is data retained?
    • Can improper access be detected?
    • Can elected officials and the public receive meaningful reports about system use?

    If those questions cannot be answered clearly, the system is not ready.

    This is not about being anti-technology. It is about making sure public technology remains accountable to the public. Tools that collect sensitive information should not depend on trust alone. They should be designed so that misuse is difficult, detectable, and provable.

    Bottom line

    The Bend Flock story and the Oregon sanctuary-law lawsuit point to the same conclusion: access is the story.

    Who can search the data? Who can receive it? Who can share it? Who can buy it? Who can combine it with other systems? Who can turn an ordinary resident’s movement, record, or routine interaction with government into an investigative lead?

    Privacy safeguards fail when access is undefined, unlogged, or routed through systems the public cannot see.

    The best safeguards are practical and boring by design: collect less data, connect fewer systems, limit outside access, require documented purposes, log every search, review the logs, shorten retention, make vendor access visible, and make misuse provable.

    Surveillance oversight works best when restraint is built into the system before the data becomes too useful to give up.


    Related reading:

  • Signals & Safeguards – Issue 9

    Issue 9 • Wednesday, May 13, 2026


    A concise weekly scan of surveillance, privacy, cybersecurity, and the safeguards public officials should keep in view.

    At a glance

    • Bend’s Flock experience shows why outside access and audit logs are not technical details. They are the oversight system.
    • Oregon’s sanctuary-law lawsuit shows that privacy rules must be enforceable at the database level, not just written into policy.
    • Location data, DMV records, platform data, and biometrics are becoming immigration-enforcement inputs.
    • Courts and lawmakers are beginning to test whether digital searches need stronger limits.

    Bend’s Flock data shows why audit logs are not optional

    The strongest local surveillance story this week is not hypothetical. According to The Source Weekly, federal immigration officials made 279 queries into Bend’s Flock Safety data in the first three weeks after the cameras went live. Bend Police did not authorize those searches, and the reporting says it was unclear what data, if any, may have been retrieved.

    That is the core governance problem. A surveillance system can be installed for one stated purpose, then become useful to agencies, vendors, or outside users who were not central to the original public debate. If officials cannot later answer who searched the system, why they searched it, what they saw, and whether the results were shared, then the public is being asked to trust a system that cannot be independently verified.

    This does not require assuming bad faith by local officials. It shows why good-faith intentions are not enough. Access controls, audit logs, outside-agency limits, vendor-access limits, and public reporting have to exist before a system goes live.

    Why it matters for Bend: Bend’s own experience shows that surveillance oversight cannot stop at approval. Councilors and staff need to know who can search local systems, what outside agencies can access, what vendors can see, and whether every search leaves a usable record.

    Oregon’s sanctuary-law fight is also a database-access fight

    The Bend Flock story now sits inside a larger Oregon data-sharing dispute. OPB reports that a lawsuit filed in Multnomah County Circuit Court alleges Oregon State Police allowed federal immigration authorities to access Oregonians’ data through shared law-enforcement databases for years, despite Oregon’s sanctuary laws. OSP denies wrongdoing.

    The Source Weekly reports that the complaint alleges federal immigration authorities queried state-run data about Oregonians 1.4 million times between February 2025 and February 2026, an average of 3,835 queries each day.

    The important lesson is practical: a sanctuary policy is only as strong as the database permissions behind it. If an agency is legally barred from using data for immigration enforcement, the system should not rely on informal restraint. It should have technical controls that prevent improper access, audit logs that expose improper use, and consequences when policy and practice diverge.

    Oregon lawmakers have already started moving in that direction. SB 1587, effective June 5, prohibits public bodies from disclosing personally identifiable information to a data broker unless the broker attests that the information will not be sold or transferred for federal immigration-law enforcement, with exceptions.

    Why it matters for Bend: Bend residents’ information may pass through city, county, state, vendor, and law-enforcement systems. A sanctuary policy or privacy rule only protects people if the underlying databases actually enforce it through permissions, purpose limits, and audit logs.

    “You must first enable the government to control the governed; and in the next place oblige it to control itself.”
    — James Madison, The Federalist No. 51 (1788)

    Location data is sensitive even when someone buys it

    Immigration enforcement is not limited to government-owned databases. Reporting on DHS, PenLink, and commercial location tools fits a broader pattern: agencies can increasingly rely on vendors, data brokers, and analytics platforms to locate, identify, or investigate people without collecting the raw data themselves.

    That is why the FTC’s proposed order against Kochava matters. The FTC says it will prohibit Kochava and a subsidiary from selling, sharing, or disclosing sensitive location data without consumers’ affirmative express consent, after alleging that the company sold location data from hundreds of millions of mobile devices.

    California’s DMV story shows the same concern from the government-records side. CalMatters reports that California is preparing to share detailed driver’s-license information with a national motor-vehicle database, including records covering more than 1 million unauthorized immigrants who hold California licenses.

    The shared pattern is simple: data collected for one purpose can become useful for another. Driver records, location trails, license plates, account data, and public-benefits records may all begin as administrative information. Once connected, queried, sold, or shared, they can become enforcement infrastructure.

    Why it matters for Bend: Local governments increasingly depend on vendors and data systems they do not fully control. If commercially available location or identity data can be bought, searched, or combined with public records, then procurement and contract language become privacy safeguards.

    Geofence warrants show why proximity is not suspicion

    Courts are beginning to draw clearer lines around location searches. The Minnesota Supreme Court ruled that Google geofence data used to identify a murder suspect was unconstitutional, reversed a second-degree murder conviction, and said law enforcement needs a warrant to obtain private cellphone information from Google.

    A geofence search works backward. Instead of starting with a suspect and seeking evidence about that person, investigators ask for information about devices near a place during a certain time window, then narrow the list. That can be useful in investigations, but it also risks treating ordinary presence near a location as a reason to be pulled into a police search.
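    The backward-looking mechanics can be shown with a toy sketch: every device ping inside the box and the time window comes back, whether or not its owner is suspected of anything. The data and field names below are invented for illustration; this is not how Google’s actual systems work:

```python
# Toy illustration of a geofence query: return every device seen inside
# a bounding box during a time window. Note that nothing here asks about
# suspicion -- proximity alone selects people into the result set.
def geofence_hits(pings, lat_range, lon_range, t_range):
    lo_lat, hi_lat = lat_range
    lo_lon, hi_lon = lon_range
    t0, t1 = t_range
    return [p["device"] for p in pings
            if lo_lat <= p["lat"] <= hi_lat
            and lo_lon <= p["lon"] <= hi_lon
            and t0 <= p["t"] <= t1]
```

    Widen the box or the window even slightly and the result set grows with bystanders, which is exactly the concern courts are now weighing.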

    The same concern is now before the U.S. Supreme Court in Chatrie v. United States, which asks how the Fourth Amendment applies when police use geofence warrants to identify people by location rather than by individualized suspicion.

    Why it matters for Bend: The same principle applies locally: people should not become investigative leads merely because a phone, plate, or device was near a place at a particular time. Local policy can help prevent broad searches from becoming routine before courts settle every boundary.

    Shared pattern

    The strongest stories this week point in one direction: access is the story. Who can search Bend’s Flock data? Who can query Oregon driver or criminal records? Who can buy or analyze location data? Who can receive DMV information? Who can obtain platform data? Who can turn proximity, protest activity, or routine civic records into an investigative lead?

    Privacy safeguards fail when access is undefined, unlogged, or routed through systems the public cannot see.

    “The price of lawful public dissent must not be a dread of subjection to an unchecked surveillance power.”
    — Justice Lewis F. Powell Jr., United States v. U.S. District Court (Keith), 407 U.S. 297 (1972)


    Warning Signals

    These items point toward where surveillance systems, vendor platforms, identity infrastructure, and data governance may be heading next.


    Vendor access is part of surveillance oversight

    404 Media reported on a Flock-related incident in which camera feeds at a children’s gymnastics center were reportedly accessed as part of a sales pitch. The larger lesson is that surveillance oversight cannot focus only on police use. Vendor access can be just as important.

    If a vendor can access camera feeds, system data, search tools, customer dashboards, support logs, demos, or training environments, contracts should state exactly what employees can access, for what purpose, how access is logged, whether data can be used for sales or product development, and whether customers or the public are notified.

    Police-tech vendors are becoming platform companies

    Axon reported Q1 2026 revenue of $807 million, up 34% year over year, and highlighted growth across software, connected devices, AI products, counter-drone tools, real-time operations, body cameras, and TASER products.

    That is not just an earnings story. It is a market signal. Public-safety technology is moving from individual devices toward integrated platforms: cameras, cloud storage, AI features, real-time operations, evidence systems, subscriptions, and future upgrades bundled into vendor ecosystems.

    For public officials, procurement is no longer just “buying a tool.” It can mean entering a long-term platform relationship where today’s contract shapes tomorrow’s data access, integrations, analytics, and switching costs.

    ALPRs are moving into ordinary consumer spaces

    License plate readers are not only appearing on police vehicles or city streets. Retailers such as Home Depot and Lowe’s are also using ALPR systems in parking lots for theft prevention and public-safety purposes.

    That shift matters because private ALPR networks can still generate sensitive location records. They may operate under weaker public oversight than city-owned systems, may be governed mainly by contract, and may still become useful to law enforcement. The safeguard question is not only who owns the camera, but who can search the plate data, how long it is kept, and where else it can go.

    School platforms are civic infrastructure

    The Canvas cyberattack shows that education platforms are no longer just classroom convenience tools. WIRED reported that thousands of schools were disrupted after Instructure shut down Canvas access following a breach by ShinyHunters. Reuters later reported that the incident affected nearly 9,000 schools globally, with stolen data including student names, email addresses, and private messages among students, teachers, and staff.

    When one education platform goes down, students can lose access to assignments, grades, messages, and course materials during critical periods. Afterward, exposed student and staff data can become phishing material. Schools, cities, libraries, utilities, and public agencies increasingly depend on cloud platforms that should be treated as civic infrastructure.

    AI is compressing the vulnerability window

    Google says it disrupted hackers who used AI to exploit a previously unknown weakness in a company’s digital defenses. AP reports that Google found evidence of AI helping attackers exploit the vulnerability, and other reporting says the exploit could bypass 2FA on a web-based administration tool.

    The lesson is practical: defenders may have less time. AI can help attackers find logic flaws, test exploit paths, and move faster from discovery to attempted compromise. That makes routine safeguards more important: reduce exposed admin panels, patch quickly, require strong MFA, monitor logs, limit privileges, and ask vendors for clear incident and patch timelines.

    Identity checks are becoming the default answer

    Age verification, VPN restrictions, phone-number identity proposals, digital IDs, and platform verification systems all point toward the same trend: more ordinary activities may require identity proof.

    Child safety, fraud prevention, and robocall reduction are real policy goals. But identity checks are not neutral. They can create new databases, weaken anonymity, expose lawful activity, and make private vendors gatekeepers for speech, communications, and access.

    Direction of travel

    This week’s Signals point toward the same pattern: public and private systems are becoming more searchable, more connected, and more dependent on vendor infrastructure. ALPR networks, police-tech platforms, education systems, cloud records, AI security tools, identity checks, and data brokers all raise the same question: when a system becomes useful, who gets access next?


    Safeguards

    A safeguards page works best when it is practical: less data, cleaner boundaries, stronger access controls, and fewer shortcuts.


    Require audit logs before launch or renewal

    Do not approve surveillance or data systems that cannot answer basic questions: who searched, what they searched, why they searched, what they accessed, whether the result was shared, whether the search was tied to a case number, warrant, emergency, or documented purpose, and whether an outside reviewer can verify the answer.

    This applies to ALPR systems, state databases, vendor dashboards, cloud evidence systems, education platforms, AI tools, and data-broker products. A system without usable audit logs does not merely have a technical gap. It has an accountability gap.

    Limit outside-agency, federal, immigration-enforcement, and vendor access by default

    Access should be narrow by default and expanded only with a clear reason. Local governments should not rely on broad sharing settings, informal assurances, or vendor defaults.

    • no outside-agency access unless explicitly approved;
    • no immigration-enforcement access unless legally required;
    • no vendor access except for documented support needs;
    • no sales, demo, training, or product-development use without written permission;
    • no data sharing without logs, purpose fields, retention limits, and periodic public reporting.

    If a system can be searched by people outside the agency that collected the data, that fact should be visible to elected officials before approval and to the public before renewal.
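    In code terms, “narrow by default” is a default-deny rule: every request is refused unless the agency is explicitly approved, and barred purposes are refused even for approved agencies. A hypothetical sketch (agency and purpose names are invented, not real policy):

```python
# Default-deny access policy: nothing is searchable unless explicitly granted.
# Agency identifiers and purpose strings below are hypothetical examples.
APPROVED_AGENCIES = {"bend_pd"}                   # explicit approvals only
BLOCKED_PURPOSES = {"immigration_enforcement"}    # barred uses, for everyone

def may_search(agency: str, purpose: str) -> bool:
    """Allow a search only for an explicitly approved agency
    with a purpose that is not policy-barred."""
    if agency not in APPROVED_AGENCIES:
        return False      # outside agencies are denied by default
    if purpose in BLOCKED_PURPOSES:
        return False      # barred purposes are denied even for approved agencies
    return True
```

    The design choice worth noticing is the order of the defaults: the allow list is the exception and denial is the rule, which is the opposite of how broad vendor sharing settings often ship.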

    Treat public data as held in trust

    Public agencies collect sensitive information because residents need licenses, utilities, schools, benefits, permits, emergency services, and basic civic infrastructure. That does not mean the data should become a general-purpose resource for brokers, vendors, or enforcement pipelines.

    A public-data-trust approach would treat resident data as something government holds on behalf of the public, with duties of loyalty, transparency, narrow use, and enforceable limits. The key question would shift from “can this data be accessed?” to “does this use serve the public purpose for which the data was collected?”

    Use state-level digital civil-rights models

    Montana offers one model worth watching. Voters amended the state constitution to explicitly protect electronic data and communications from unreasonable search and seizure. Montana later restricted state law enforcement from purchasing or otherwise acquiring sensitive personal data from brokers without a judicial warrant.

    Oregon’s SB 1587 points in the same direction by limiting public-body disclosures of personally identifiable information to data brokers when that information may be used for federal immigration-law enforcement. Legislatures can write clearer rules before sensitive data flows into vendor systems, data brokers, or enforcement pipelines.

    Ask vendors when they patched

    The cPanel, Ivanti, Palo Alto, Linux kernel, and water-system cybersecurity stories all point to the same practical safeguard: exposed infrastructure needs a fast patch process. Public officials should expect clear answers: are we exposed, when was it patched, were logs reviewed, were any accounts created or changed, were affected customers notified, what systems depend on this vendor or platform, and what is the backup plan if access is shut down?

    Do not invite a permanent record into every meeting by default

    AI notetakers and meeting bots can be useful. They can also turn informal discussion into searchable records stored by a vendor. Organizations should decide which meetings may be recorded, who can consent, where transcripts are stored, how long they are retained, whether vendors can use the data, and when legal, personnel, strategy, or sensitive constituent discussions require the bot to be removed.

    “There is no cloud – only somebody else’s computer.”

    Cloud tools are still computers, databases, employees, contracts, retention policies, subpoenas, breach risks, and access logs. Convenience should not override recordkeeping, consent, privilege, or privacy decisions.

    Bottom line

    The best safeguards this week are practical and boring by design: collect less data, connect fewer systems, limit outside access, require case numbers or documented purposes, log every search, review the logs, shorten retention, make vendor access visible, treat public data as held in trust, patch exposed systems quickly, and make misuse provable.

    Surveillance oversight works best when restraint is built into the system before the data becomes too useful to give up.

  • Signals & Safeguards – Issue 8

    Wednesday, May 6, 2026

    A concise weekly scan of surveillance, privacy, cybersecurity, and the safeguards public officials should keep in view.


    At a glance

    • Congress extended Section 702 for 45 days, but the fight over warrant requirements, U.S. person searches, and public release of a secret FISC opinion is still ahead.
    • Immigration enforcement is showing what happens when separate surveillance tools become one searchable enforcement stack.
    • ALPR transparency fights are spreading, raising a basic question: how can the public oversee surveillance systems it is not allowed to see?
    • Age-verification laws are moving fast, and the next debate may be whether child-safety rules quietly become identity-check infrastructure for everyone.

    Main Stories

    Congress extends Section 702 — and leaves the warrant fight unresolved

    Congress passed a short-term extension of Section 702, avoiding an immediate lapse but pushing the real surveillance reform fight into June. The key development is not only the extension. It is the transparency window that now follows.

    Senator Ron Wyden says he secured a commitment to release a classified Foreign Intelligence Surveillance Court opinion before the next debate. That matters because Section 702 is often described as foreign-intelligence surveillance, but the core civil-liberties fight is about what happens when Americans’ communications are searched after they are collected.

    The next debate should not be treated as a routine renewal. It is a chance to ask whether warrantless searches, compliance violations, and secret court interpretations are being corrected or simply carried forward.

    Why it matters for Bend: federal surveillance law shapes the broader privacy environment local governments operate in. When national systems normalize broad collection, after-the-fact oversight, and weak search limits, local officials should be careful not to import the same logic into city technology, public-safety tools, vendor contracts, or data-sharing agreements.

    Immigration enforcement shows what a surveillance stack can do

    The clearest surveillance warning this week is not one tool by itself. It is the stack.

    Recent reporting and scholarship point to immigration enforcement systems drawing from a broad mix of tools and data sources: license plate readers, facial recognition, smartphone location data, social media monitoring, commercial records, biometric systems, benefits records, and federal databases. The civil-liberties concern is not only whether any one tool is lawful in isolation. It is what happens when identity, location, movement, association, and online activity can be combined inside one enforcement pipeline.

    That is why local data decisions are never purely local. A city camera, an ALPR hit, a vendor database, a jail record, or a public record may later become useful to a federal agency for a purpose the original collector did not emphasize.

    Why it matters for Bend: local governments should think about surveillance infrastructure as part of a larger ecosystem. Strong policy should address not only what a city collects, but also retention, secondary use, federal access, vendor access, immigration-enforcement sharing, audit logs, and whether residents can understand how their data may travel.

    Shared pattern

    The strongest stories this week point in the same direction: surveillance power grows when separate systems become searchable together. Section 702, immigration enforcement, ALPR networks, data brokers, voter records, and biometric tools may look like separate policy debates. In practice, they all raise the same governance question: who can connect sensitive data, under what authority, with what transparency, and what meaningful safeguards?

    ALPR secrecy is becoming a public-oversight problem

    Automated license plate readers are no longer just a police-camera issue. They are becoming a public-records issue, a vendor-accountability issue, and a democratic-oversight issue.

    EFF warns that open-records laws have helped the public understand how ALPR systems are deployed and shared, but some states are now moving to block public access to broad categories of ALPR information. The concern is not that raw plate scans should be exposed. It is that secrecy laws can also hide basic oversight information: camera locations, sharing reports, scan counts, hit rates, false matches, vendor access, and the scope of deployment.

    Privacy and transparency should not be treated as opposites. Communities do not need public exposure of every driver’s movements to evaluate whether an ALPR system is safe, lawful, or worth renewing. But they do need enough information to know who can search the data, how long records are retained, what outside agencies can access, whether vendor employees can use the system, and whether misuse is actually punished.

    Why it matters for Bend: public oversight works best before surveillance infrastructure becomes normal. If camera locations, sharing rules, audit logs, vendor access, and query practices are hidden from the public, then elected officials are forced to rely on assurances instead of evidence.

    Sensitive civic data is becoming a federal access target

    A federal judge dismissed the Justice Department’s lawsuit seeking detailed Arizona voter information, but the case is still a warning about sensitive civic data.

    Voter rolls can include names, addresses, birth dates, driver’s license numbers, and partial Social Security numbers. Even when the stated purpose is election integrity or eligibility review, the privacy question remains: who receives the data, what other databases it is checked against, how errors are corrected, and whether people are flagged without meaningful due process.

    This fits a larger pattern. Voter records, bank records, immigration records, vehicle records, health data, and commercial data all become more powerful when they are linkable. The risk is not only a breach. The risk is that ordinary administrative records become inputs for identity screening, eligibility checks, enforcement targeting, or automated suspicion.

    Why it matters for Bend: local and state records are often collected for narrow civic reasons. Public officials should ask what happens when those records are requested for broader state or federal purposes, especially when the data is sensitive, error-prone, or difficult for ordinary people to correct.

    “The greatest dangers to liberty lurk in insidious encroachment by men of zeal, well-meaning but without understanding.”
    — Justice Louis Brandeis, dissenting in Olmstead v. United States, 277 U.S. 438 (1928)

    Signals

    Early indicators worth tracking

    These items are included because they point toward where surveillance systems, business incentives, identity infrastructure, and data governance may be heading next.


    Age verification is becoming identity infrastructure

    Child safety online is a legitimate policy goal. But the design of age-verification laws matters.

    Utah’s VPN-focused age-verification law, Michigan’s minors’ social-media bills, Apple’s digital ID rollout, the GUARD Act debate, Colorado’s age-attestation proposal, and UK under-16 restrictions all point toward the same emerging conflict: whether protecting children online will require adults to prove identity, surrender anonymity, or rely on private verification vendors.

    The Oregon angle is important. Oregon’s 2025 age-verification bill is dead, but similar proposals are likely to return. That gives lawmakers time to separate child-safety goals from identity-check infrastructure before the next bill is drafted.

    A strong proposal should minimize data collection, avoid government-ID retention, protect anonymous access to lawful speech, limit vendor reuse, and require independent audits.

    Facial-recognition oversight is still lagging behind deployment

    Facial recognition is moving into policing, border enforcement, retail security, event security, schools, and public spaces faster than oversight systems are maturing.

    The problem is not only accuracy, though false matches remain a serious risk. The deeper issue is governance: who can run a face search, what database is used, whether the person is notified, whether a human verifies the match, and whether the result can trigger detention, questioning, or denial of services.

    A password can be changed. A face cannot. That makes biometric data different from ordinary data. Facial recognition should be treated as a high-risk surveillance capability, not as a routine software feature.

    Consumer surveillance is becoming an everyday price and mobility issue

    Maryland’s move against grocery-store surveillance pricing, the FTC’s GM/OnStar geolocation case, and reporting on LinkedIn browser-extension scanning all point to the same consumer-privacy shift.

    Surveillance is no longer limited to obvious government systems or social-media advertising. It is appearing in cars, grocery stores, professional platforms, insurance pipelines, loyalty programs, and device fingerprints.

    That matters because privacy harms become harder to see when they are embedded in ordinary life. Vehicle data can influence insurance treatment, shopping data can affect prices, and browser characteristics can become part of a professional identity profile.

    Consumer privacy, data minimization, and anti-discrimination safeguards are becoming cost-of-living issues as well as civil-liberties issues.

    AI agents are moving from chat tools into systems that act

    AI agents are different from ordinary chatbots because they can connect to tools, databases, workflows, and accounts. Cybersecurity agencies from the U.S., U.K., Canada, Australia, and New Zealand now warn that agentic AI is already being used in critical infrastructure and defense contexts, often with more access than organizations can safely monitor or control.

    The warning signal is simple: when AI can take actions, permissions become policy. A poorly governed AI agent could alter files, change access controls, send messages, trigger workflows, or erase logs.

    That makes identity management, auditability, least privilege, and human approval essential.

    Direction of travel

    This week’s Signals point toward one pattern: identity is becoming the control layer. Age checks, biometric systems, ALPR networks, AI agents, consumer profiles, and vendor databases all depend on deciding who someone is, where they have been, what they are allowed to access, and what risk category they belong in.

    The safeguard challenge is to protect people without making every ordinary activity require a persistent identity trail.


    Safeguards

    Practical habits that lower risk

    A safeguards page works best when it is practical. These are the protections, governance habits, and design choices that stood out while sourcing this issue.


    Secure AI agents before giving them real permissions

    AI agents should not receive broad access just because they are useful. Treat them like powerful service accounts with unpredictable behavior.

    Useful safeguards include least-privilege access, verified agent identities, short-lived credentials, separate logs for agent actions, human approval for high-impact tasks, prompt-injection testing, and rollback plans before agents are connected to real systems.

    The key principle is reversibility. If an AI agent can alter files, permissions, records, workflows, or decisions, the organization needs a way to understand what happened and undo damage quickly.
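The gating pattern described above can be sketched in a few lines. This is a minimal illustration, not a production design: the action tiers, agent names, and log structure are all hypothetical, and a real deployment would map them onto actual IAM roles and append-only storage.

```python
import datetime
import json

# Hypothetical permission tiers; a real system would map these to IAM roles.
LOW_RISK = {"read_file", "search_index"}
HIGH_RISK = {"delete_record", "change_permissions", "send_message"}

audit_log = []  # in production: append-only, reviewed outside the agent's control

def run_agent_action(agent_id, action, target, approved_by=None):
    """Gate an agent action: least privilege, human approval, audit trail."""
    if action in HIGH_RISK and approved_by is None:
        decision, allowed = "blocked: human approval required", False
    elif action not in LOW_RISK | HIGH_RISK:
        decision, allowed = "blocked: action not in allow-list", False
    else:
        decision, allowed = "allowed", True
    # Log every attempt, allowed or not, with enough context to reconstruct it.
    audit_log.append({
        "ts": datetime.datetime.utcnow().isoformat(),
        "agent": agent_id,
        "action": action,
        "target": target,
        "approved_by": approved_by,
        "decision": decision,
    })
    return allowed

run_agent_action("agent-7", "read_file", "/reports/q1.txt")         # allowed
run_agent_action("agent-7", "delete_record", "row:42")              # blocked
run_agent_action("agent-7", "delete_record", "row:42", "j.smith")   # allowed
print(json.dumps(audit_log[-1], indent=2))
```

The point of the sketch is that denial and approval both leave a record, which is what makes the system reversible and reviewable after the fact.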

    Make privacy rights usable, not just theoretical

    California’s DROP system is worth watching because it makes data-broker deletion and opt-out rights easier to exercise. The platform lets California residents send one request to more than 500 registered data brokers.

    That is the right design lesson for privacy policy: rights are stronger when ordinary people can actually use them. A privacy law that requires residents to find hundreds of data brokers, submit hundreds of requests, and track hundreds of responses is formally protective but practically weak.

    For policymakers, the safeguard question is simple: does the law create a right, or does it create a usable process?

    Treat ALPR transparency as a privacy safeguard

    ALPR oversight should not require exposing everyone’s raw location history. But it should require enough public information to evaluate the system.

    Useful safeguards include public camera-location policies with narrow exceptions, short retention limits, case-number or purpose requirements for searches, audit logs reviewed outside the chain of command, public sharing reports, limits on federal and out-of-state access, vendor-access logs, and clear penalties for misuse.

    The key distinction is simple: protect individual plate data, but do not hide the governance system.
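A purpose-and-case-number requirement is easy to enforce in software. The sketch below is illustrative only: the purpose list, field names, and officer IDs are invented, and a real system would write to append-only storage reviewed outside the chain of command.

```python
import datetime
import hashlib

# Hypothetical policy: every ALPR query needs an approved purpose and a case number.
ALLOWED_PURPOSES = {"stolen_vehicle", "missing_person", "court_order"}

search_log = []

def alpr_search(officer_id, plate, purpose, case_number):
    """Refuse any plate query that lacks a documented purpose and case number."""
    if purpose not in ALLOWED_PURPOSES:
        raise PermissionError(f"purpose '{purpose}' is not an approved search reason")
    if not case_number:
        raise PermissionError("a case number is required for every search")
    search_log.append({
        "ts": datetime.datetime.utcnow().isoformat(),
        "officer": officer_id,
        # Auditors see a hash, not the raw plate, unless escalation is approved.
        "plate_sha256": hashlib.sha256(plate.encode()).hexdigest()[:16],
        "purpose": purpose,
        "case": case_number,
    })
    return f"query submitted for case {case_number}"

alpr_search("ofc-12", "ABC1234", "stolen_vehicle", "24-00187")
print(len(search_log))  # 1
```

A free-text "reason" field, by contrast, is how a search reason like the one logged in the Texas case gets through: the constraint has to be structural, not honorary.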

    Patch exposed systems before small flaws become public-sector incidents

    CISA says the “Copy Fail” Linux vulnerability is now being exploited in the wild. Other recent warnings involving cPanel and compromised software packages point to the same lesson: cybersecurity often fails through routine infrastructure.

    Admin panels, hosting systems, Linux servers, cloud workloads, software dependencies, and vendor-managed tools may not be glamorous, but they can become high-impact attack paths.

    Practical habits still matter: inventory exposed admin systems, patch critical vulnerabilities quickly, require MFA for administrative access, review software dependencies, disable unused services, and make vendors disclose incident and patch timelines.

    For public agencies, cybersecurity is not only an IT concern. It is a public-trust concern.

    A warrant requirement only works if the warrant process is real

    A warrant requirement is one of the most important privacy safeguards, but the word “warrant” should not be treated as magic. Courts still need reliable facts, clear limits, and meaningful review.

    If weak or unverified information becomes enough to search, seize, track, or detain someone, then the safeguard becomes procedural decoration.

    That principle connects back to Section 702, ALPR searches, device searches, location data, and biometric identification. Oversight should ask not only whether a permission slip exists, but whether the standard behind it is strong enough to protect innocent people.

    “The right to be let alone is indeed the beginning of all freedom.”

    Justice William O. Douglas

    Bottom line

    The best safeguards this week are not complicated: collect less data, connect fewer systems, require stronger reasons for access, log every search, limit retention, keep vendors out of secondary use, and make privacy rights easy to exercise.

    Surveillance oversight works best when restraint is built into the system before the data becomes too useful to give up.

  • Signals & Safeguards – Issue 7 • Wednesday, April 29, 2026

    A concise weekly scan of surveillance, privacy, cybersecurity, and the safeguards public officials should keep in view.


    At a glance

    • Section 702 reauthorization is headed for House floor action under a closed rule, after warrant-related reform amendments were blocked in the Rules Committee, with the April 30 deadline one day away.
    • Florida investigators documented thousands of license-plate-reader searches tracking protesters, while Bend just turned on its own automated camera system; Oregon’s new ALPR law gives the public new tools to ask the procurement and retention questions that matter.
    • The most consequential surveillance architecture this week is the kind operated by private vendors with weak access controls — an AI cybersecurity model leak, a $130 million IRS data-linkage platform, and global telecom-tracking infrastructure running since at least 2022.

    Section 702 reauthorization moves toward the House floor without warrant votes

    As Issue 7 went to publication, the House Rules Committee had reported a closed rule for S. 1318, teeing up leadership’s Section 702 reauthorization for possible House floor action today. The House Majority Leader’s schedule lists S. 1318, the Foreign Intelligence Accountability Act, as legislation considered pursuant to a rule. The rule matters because it does not simply set debate time; it determines what amendments the House will actually be allowed to vote on.

    The answer, for now, is narrow. The Rules report provides for consideration of S. 1318 under a closed rule, with the text of Rules Committee Print 119-27 considered adopted as modified only by the amendment printed in Part C. That Part C amendment, offered by Rep. Rick Crawford, is an oversight and penalties clarification: it directs the intelligence community inspector general to determine whether referred queries violate law, rules, or regulations or constitute abuse of authority, and clarifies that criminal penalties apply to query-procedure violations relating to U.S.-person queries.

    What did not make it through Rules is the more consequential reform fight. A motion to make in order a Biggs amendment creating a warrant requirement for covered U.S.-person queries of Section 702-acquired information failed on a 6-6 tie. A motion to make in order a Massie amendment prohibiting reverse targeting under Section 702 was defeated 4-7. Another Massie amendment, narrowing the expanded definition of electronic communication service provider, was also defeated. In practical terms, the warrant fight reached the Rules Committee but not the House floor.

    That procedural outcome changes the framing from yesterday’s version. The story is no longer just that Section 702 was stalled while leaders scrambled. The story is that leadership’s three-year extension is moving forward through a rule that blocks separate floor votes on warrant-related amendments. The core substantive question remains the same: whether the government should be able to search Americans’ communications collected under a foreign-intelligence authority without first getting judicial approval.

    One important detail still tempers the cliff framing: most surveillance authorities under Section 702 can continue through March 2027 under existing certifications even if Congress fails to act before April 30. That does not make the deadline meaningless, but it does make the procedural drama somewhat narrower than the rhetoric suggests. The practical urgency is real; the deeper signal is the pattern. When reform is offered, the fight often happens at the procedural gate before the public ever sees a clean floor vote.

    Why it matters for Bend: the warrant fight has been deferred for six issues of this newsletter. Whether S. 1318 passes today, stalls again, or becomes part of another procedural bargain, the local lesson does not change. Federal guardrails remain contested and fragile. That makes local procurement rules, retention limits, access logs, and public oversight more important, not less, because local governments cannot assume that federal law will provide the missing friction later.

    Florida documented thousands of plate-reader searches against protesters. Bend just turned on its own cameras.

    Treasure Coast Newspapers published the strongest piece of investigative reporting on automated license plate readers in years. Reporter Jack Lemnus reviewed more than five million Flock Safety searches across Florida and found that dozens of police departments, sheriff’s offices, and campus police had used the system to track drivers tied to protests including No Kings, 50501, Hands Off, and immigration-enforcement actions.

    Three Treasure Coast agencies that publicly say they do not actively participate in immigration enforcement ran at least 25 immigration-related searches in 2025; statewide, immigration-related Flock queries rose 82% from 2024 to 2025. Sebastian Police, Vero Beach Police, and Port St. Lucie Police were among the named departments using Flock cameras; Stuart Police uses a similar Vigilant Solutions system. Some agencies declined to provide camera counts.

    The TCPalm investigation lands inside a larger national pattern. The Electronic Frontier Foundation’s analysis of approximately 12 million Flock search logs, first reported by 404 Media, found that more than 50 federal, state, and local agencies ran protest-related searches across a ten-month window covering the 50501 movement, Hands Off! protests, and the June and October No Kings demonstrations. In one widely discussed case from last year, a Texas sheriff’s deputy in Johnson County searched 83,000 Flock cameras nationwide to track a woman who had sought an abortion in Illinois; the search reason was logged as “had an abortion, search for female.” This newsletter has flagged similar use-case drift since Issue 1.

    Oregon now provides one of the strongest state-level frameworks in the country to ask the questions Florida’s records expose. Senate Bill 1516, signed into law by Governor Tina Kotek on March 31 with an emergency clause, took effect immediately. The law restricts how law enforcement uses ALPR systems, limits sharing, requires audit logging, and — most significantly — creates a private right of action allowing Oregonians to sue private companies that sell or otherwise improperly use ALPR data.

    The Oregon Law Center documented that in June 2025, agencies outside Oregon searched the networks of Oregon’s local law enforcement agencies hundreds of times on behalf of ICE. A University of Washington report from October 2025 found that Border Patrol had access to at least ten Washington police departments’ camera databases without explicit authorization. Both findings sit directly under SB 1516’s new framework.

    Surveillance from above is part of the same story. The Intercept documented that LAPD’s Drone as First Responder program conducted 32 drone flights over the March 28 No Kings protest in downtown Los Angeles, including nine that began before any dispersal order. Drones lingered over the Metropolitan Detention Center and the Little Tokyo intersection for hours. Skydio’s own marketing materials say its X10 drones can read license plates from 800 feet and identify individuals from more than 2,500 feet.

    The Bend hook lands directly inside this national arc. Bend Police activated four new red-light and speed cameras on Wednesday, April 15, operated under contract by Verra Mobility. In approximately five days — through 7:54 a.m. on Monday, April 20 — the cameras logged 352 events, a rate of roughly 70 per day, with red-light events outnumbering speed events roughly two-to-one. Tickets are not yet being issued during the 30-day warning period; ticketing begins May 15. Camera locations are SE Reed Market Road and SE 3rd Street, NE 27th Street and NE Neff Road, and SE Powers Road and US Highway 97.

    Why it matters for Bend: Bend’s deployment is happening under SB 1516’s new legal framework, which gives the council and the public tools they did not have a month ago. The questions that matter are not whether the cameras catch red-light runners. The questions are: what data does Verra retain, for how long, and where? Who has access? Are the cameras capturing license-plate data on every passing vehicle, including non-violators? Are there logs of who queries the system and why, and is anyone reviewing those logs? Florida shows how “missing people and stolen cars” can quietly become “people who attended a protest.” Oregon’s law gives Bend clearer footing to answer those questions before the warning period ends.

    The most consequential surveillance architecture this week is operated by private vendors

    Three separate stories this week describe the same structural pattern: surveillance and security infrastructure increasingly operated by private contractors with weak access controls, broad scope, and limited public accountability, even as it is integrated more deeply into government and financial systems.

    Anthropic confirmed an unauthorized-access incident at Claude Mythos Preview through one of its third-party vendor environments. Mythos was launched April 7 as part of a curated rollout to enterprise and government partners and publicly described as too dangerous for general release because of its software-vulnerability-finding capabilities. The breach occurred on launch day: a small group reportedly obtained access through a third-party Anthropic contractor whose credentials were apparently shared, then used educated guesses about Anthropic’s URL naming patterns to reach the model. Whatever the merit of the model’s marketing, the substantive question is the same one this newsletter asks of every vendor-mediated system: vendor controls are the actual perimeter.

    Anthropic also responded to Senator Ron Wyden’s letter on AI surveillance access by saying its policy bars unauthorized surveillance and analysis of bulk-domestic-collection data, while acknowledging an exception for a small number of national-security customers using models for foreign-intelligence analysis in accordance with law — including foreign intelligence that includes incidentally collected U.S.-person information. That phrase tracks the same Section 702 architecture now in front of Congress.

    The Intercept reported on Palantir’s Lead and Case Analytics platform, used by IRS Criminal Investigation since 2018. The IRS has paid Palantir more than $130 million for a platform that links tax records, Affordable Care Act data, bank statements, FinCEN data, and cryptocurrency wallet data. Social-relationship mapping is core to the design: the system analyzes networks of people, including calls, texts, emails, and IP-address relationships, and helps investigators establish new relationships among actors.

    Citizen Lab published Bad Connection, documenting two global telecommunications-surveillance campaigns. One combines SS7 and Diameter signaling to track mobile-subscriber locations; the other uses SIMjacker attacks and has logged more than 15,700 location-tracking attempts since October 2022. The structural finding is the most important: these vulnerabilities are inherent to global telecommunications design and business practices, not simply software bugs.

    Why it matters: capabilities the public might assume are tightly held by accountable government agencies are often operated through private vendors with substantial access, weak access controls, and customer lists the public cannot easily see. Mythos leaked through a contractor on day one. Palantir’s LCA platform has expanded across administrations without sustained public deliberation. The telecom-tracking infrastructure documented by Citizen Lab has operated for years despite repeated public reporting. Together, they describe the actual perimeter of modern surveillance, and where its real controls and failures live.


    Warning Signals

    These items point toward where surveillance systems and governance fights may be heading next. The strongest signals this week describe the gap between marketing and actual data flow — workplace tools quietly becoming AI training data, child-safety frameworks becoming identity-verification infrastructure, and pushback against federal practice producing visible but limited change.


    Workplace data is becoming AI training data through three different routes

    Three pieces of reporting this month describe the same shift through different mechanisms. Atlassian announced that starting August 17, customer metadata and in-app data from Jira, Confluence, and other cloud products will be used to train its AI tools by default, with opt-out rights tiered by pricing plan. Free and Standard customers cannot opt out of metadata collection at all; Premium turns in-app collection off by default but keeps metadata collection mandatory; only Enterprise customers can opt out of both. The change affects roughly 300,000 customers, with retention periods of up to seven years.

    Reuters separately reported that Meta’s Superintelligence Labs has begun installing keystroke and mouse-movement tracking on employee computers to generate training data, based on internal memos and on-the-record vendor confirmation. Forbes reporter Anna Tong documented an emerging market in which defunct startups sell their Slack archives, email threads, and code libraries to AI developers through brokers like SimpleClosure’s Asset Hub, which has processed nearly 100 deals in the past year. In none of the three cases do the workers whose communications become training data typically have notice or consent.

    Why it matters for Bend: government and HIPAA-regulated organizations are exempt from Atlassian’s new policy, but smaller Oregon contractors, school districts, and nonprofits running Free or Standard tiers are not. The procurement and IT-policy questions are immediate: what tools are being used, what data is being retained, and whether vendor AI defaults have changed underneath ordinary work.

    Identity-verification mandates keep arriving disguised as child-safety laws

    A Boston Globe op-ed by Evan Greer of Fight for the Future and Nathalie Marechal of Northeastern’s Institute for Information, the Internet and Democracy makes the substantive case against pending Massachusetts proposals to ban under-14s from social media and require age verification — an argument that applies equally to similar bills in other states. The unstated price of these laws is mandatory identity infrastructure for all users, not just minors. Earlier this year, hackers stole 70,000 Discord users’ data from the company’s age-assurance vendor — a concrete harm, not a hypothetical. California’s A.B. 1709, currently being fast-tracked through the Assembly, would extend the ban to age 16 and create a new state e-Safety Advisory Commission to enforce it.

    The op-ed authors note that the Department of Homeland Security has already used administrative subpoenas this year to demand information about anonymous social-media accounts that monitor and criticize ICE — exactly the kind of accountability journalism that requires the anonymity these laws would weaken. The risk is not that child safety is unimportant. The risk is that a child-safety frame can normalize account-to-identity linkage for everyone.

    Why it matters for Bend: Issue 4 covered the OpenAI-funded Parents & Kids Safe AI Coalition shaping legislation that mirrored the company’s own product positioning. The pattern continues: child-safety language is being used to build identity infrastructure that will affect every user, while biometric-data centralization, breach risk concentration, and age-verification vendors are underweighted in the debate.

    Federal practice gets pushed back from multiple directions, with mixed but real results

    This was an unusually concentrated week of pushback against federal enforcement and surveillance practices. The American Civil Liberties Union, Common Cause, and other plaintiffs filed a lawsuit on April 21 challenging the Justice Department’s demand that all 50 states and the District of Columbia turn over voter-registration records, after 12 states complied and DOJ sued 30 states; five states — Michigan, Oregon, California, Massachusetts, and Rhode Island — have had cases dismissed. The Electronic Frontier Foundation filed suit on April 22 against DHS and ICE over administrative subpoenas issued without judicial approval to tech companies including Amazon, Apple, Google, Meta, Reddit, and X — subpoenas DHS withdrew when challenged.

    NBC News reported on April 24, based on two senior DHS officials and two immigration attorneys, that ICE has verbally instructed field offices to stop entering homes without judicial warrants and has drastically curtailed arrests inside immigration courthouses, reversing the policy memorialized in a May 2025 memo Issue 3 covered when its legal authority first came into question. New York Times reporter Elizabeth Williamson disclosed that the FBI had investigated her after her February 28 article on FBI Director Kash Patel’s security arrangements for his girlfriend; FBI agents queried federal databases for her information before the Justice Department ended the investigation, citing no legal basis. Separately, the Ninth Circuit Court of Appeals issued a 3-0 decision on April 22 prohibiting California from enforcing part of its federal-officer identification law on Supremacy Clause grounds, a ruling with direct implications for Oregon’s HB 4138 mask-ban law passed in March.

    Why it matters for Bend: the pushback is real and produces visible change, but it is also uneven and reversible. ICE’s rollback was verbal, not memorialized. The DOJ stopped the FBI investigation only after agents had already queried databases. The Ninth Circuit ruling makes Oregon’s mask-ban law substantially more vulnerable than it was a month ago. State and local guardrails still matter, but a verbal change can be verbally reversed.

    Florida AG opens criminal probe of OpenAI over FSU shooting

    Florida Attorney General James Uthmeier announced a rare criminal investigation of OpenAI on April 21 over the April 17, 2025 FSU shooting that killed two people and wounded six. According to Uthmeier’s announcement and court filings, which reportedly include more than 200 AI messages entered into evidence, the suspect used ChatGPT for guidance on weapons selection, ammunition, and timing. Uthmeier said that if a person had given the same guidance, his office would be charging that person with murder. OpenAI responded that the model provided factual responses to questions, drawing on information available across public sources online.

    The case follows a separate lawsuit over a February 2026 mass shooting in British Columbia, in which the Wall Street Journal reported that OpenAI’s internal safety systems had flagged the shooter’s account and company leaders considered alerting law enforcement before deciding not to. The policy problem is not only content moderation. It is how companies, prosecutors, and legislatures define responsibility when high-risk interaction logs already exist, internal systems flag danger, and public reporting later reveals the company had enough context to consider intervention.

    Why it matters: Issue 5 covered OpenAI’s support for an Illinois bill that would limit the company’s liability for catastrophic model harms. The Florida case is exactly the kind of harm such a bill would shield the company from.

    A comprehensive surveillance-reform bill enters Congress

    Rep. Thomas Massie introduced H.R. 8470, the Surveillance Accountability Act, on April 23. The bill would require warrants for almost all federal searches including digital searches; close the third-party-data loophole by requiring warrants for data held by ISPs, banks, cloud providers, and data brokers regardless of vendor consent; explicitly cover biometric data including facial scans and gait analysis, ALPR data, and vehicle-movement patterns; and create a Bivens-style federal cause of action with attorney’s fees against federal employees who violate Fourth Amendment rights.

    It is, in effect, a legislative compendium of the policy concerns this newsletter has documented across six issues. Massie has libertarian-Republican credentials but bills introduced under his name often do not advance. The proposal’s main significance is what it tells us about where the bipartisan civil-liberties center could converge if either party builds toward it. Co-sponsors include Rep. Lauren Boebert and Rep. Warren Davidson; the Davidson connection is notable given his support this week for Speaker Johnson’s much narrower Section 702 reauthorization.

    Why it matters: the same week a comprehensive warrant-based bill enters the House, a far narrower 702 reauthorization with no warrant requirement is the only legislation in serious negotiation. The contrast is the story.

    Driver populations get GPS and impairment-detection mandates

    Maryland passed bipartisan legislation requiring repeat-offender drivers facing license suspension to install Intelligent Speed Assistance technology — GPS-aware systems that prevent vehicles from exceeding the local speed limit. If signed by Gov. Wes Moore, the law takes effect October 1. The federal Department of Transportation, meanwhile, faces a September 2027 statutory deadline to mandate advanced drunk and impaired-driving prevention technology in all new passenger vehicles under Section 24220 of the 2021 Infrastructure Investment and Jobs Act.

    NHTSA missed its November 2024 rulemaking deadline and now describes, in a February 2026 Report to Congress, accuracy problems serious enough that even 99.9% detection accuracy would generate millions of false-positive vehicle restrictions per year. Both mandates are sold as discrete safety interventions for narrow populations. Both establish GPS, sensor, and biometric infrastructure that becomes standard in passenger vehicles regardless of driver classification.

    Why it matters for Bend: Issue 6 flagged that device defaults are becoming the new front line for identity and behavior infrastructure. These are the vehicle equivalents.

    Direction of travel: identity infrastructure is being built underneath consumer software, vehicles, and child-safety legislation while courts and watchdogs catch up after the architecture is already in place.

    “Ultimately, arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.”

    Edward Snowden

    Practical habits that lower risk

    These are the practical protections and patching habits that stood out most clearly this week. The strongest safeguards are still disciplined ones: faster patching, narrower data, and fewer assumptions about what default settings actually do.


    CISA’s April KEV deadline passed Monday — but the patch picture is widening, not narrowing

    CISA added eight more vulnerabilities to its Known Exploited Vulnerabilities catalog on April 13, with a federal remediation deadline of April 27. The additions included flaws in Cisco SD-WAN Manager and Syncro Zimbra, alongside an actively exploited Microsoft Exchange vulnerability tied to Storm-1175 Medusa ransomware operations. Microsoft’s April Patch Tuesday separately addressed 165 vulnerabilities, the second-largest monthly batch on record.

    Practical safeguard: treat KEV deadlines as a minimum floor, not the whole patch strategy. Systems that face the public internet, email, identity, remote access, and vendor management deserve faster review than ordinary monthly patch cycles.
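For teams that want KEV deadlines in their patch workflow rather than in a browser tab, the catalog is published as a JSON feed. The field names below (`vulnerabilities`, `cveID`, `dueDate`) follow CISA's published KEV schema; the sample entries are invented for illustration.

```python
import datetime

def past_due(kev_feed, today):
    """Return CVE IDs from a KEV-style feed whose remediation deadline has passed."""
    return [
        v["cveID"]
        for v in kev_feed["vulnerabilities"]
        if datetime.date.fromisoformat(v["dueDate"]) < today
    ]

# Invented sample entries shaped like the real feed.
sample = {"vulnerabilities": [
    {"cveID": "CVE-2026-0001", "dueDate": "2026-04-27"},
    {"cveID": "CVE-2026-0002", "dueDate": "2026-05-15"},
]}
print(past_due(sample, datetime.date(2026, 4, 29)))  # ['CVE-2026-0001']
```

A check like this runs in seconds from a cron job, which is the point: the deadline floor should be automated so scarce attention goes to the systems that need faster-than-monthly review.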

    NIST narrows the scope of CVE analysis at the National Vulnerability Database

    NIST announced that it will prioritize enrichment only for CVEs in CISA’s KEV catalog, federal-government software, or critical software under Executive Order 14028. CVEs outside those criteria will still be listed, but many will no longer receive automatic enrichment with the metadata organizations use to judge severity.

    Why it matters: organizations that rely on NVD enrichment should add other sources to their patch workflow, especially for software outside federal use. The default authoritative source is still valuable, but it is no longer comprehensive enough to carry the whole risk picture.

    A privacy-tool vulnerability defeats Tor’s “New Identity” reset

    Researchers Dai Nguyen and Martin Bajanik disclosed CVE-2026-6770, a Firefox/Tor vulnerability involving the IndexedDB.databases() API. Any website could derive a stable identifier by creating named databases and observing returned ordering. The identifier persisted across websites and across privacy resets users would expect to clear it, including Tor Browser’s New Identity reset.

    Practical safeguard: keep privacy tools updated, and do not treat a privacy promise as the same thing as a guarantee. The lesson is not that Tor is unsafe; it is that privacy depends on implementation details that can fail quietly.
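    A toy model can show why creation order alone is enough to rebuild an identifier. This is not the browser's actual code: `FakeDBRegistry` and its methods are invented stand-ins for the real IndexedDB API, simulating a registry that a privacy reset fails to clear.

```python
import secrets

class FakeDBRegistry:
    """Toy stand-in for a browser's IndexedDB name registry.

    Databases are listed in creation order -- the detail the real bug
    abused -- so any reset that fails to wipe the registry leaves the
    ordering intact."""
    def __init__(self):
        self._names = []               # creation order is preserved

    def create(self, name):
        if name not in self._names:
            self._names.append(name)

    def databases(self):               # analogous to IndexedDB.databases()
        return list(self._names)

def fingerprint(registry):
    """Plant random database names once, then re-derive the same ID later."""
    if not registry.databases():
        for _ in range(4):
            registry.create(secrets.token_hex(8))
    # Names plus ordering together act as a stable identifier.
    return "|".join(registry.databases())

registry = FakeDBRegistry()
first = fingerprint(registry)
# Simulate a "new identity" reset that clears cookies but not this registry:
after_reset = fingerprint(registry)
assert first == after_reset            # identifier survives the reset
```

    The sketch illustrates the general failure mode: the identifier lives in a storage layer the reset never touches, so the privacy promise breaks quietly even though every visible trace is cleared.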

    macOS updates are also social-engineering safeguards now

    Apple released macOS Tahoe 26.4.1 on April 9, building on a late-March security update addressing WebKit issues, Mail privacy problems, iCloud sensitive-data exposure, and Crash Reporter behavior. Apple also added Terminal protection against potentially harmful pasted commands. Within days, Jamf Threat Labs documented a ClickFix-style attack using the applescript:// URL scheme to open Script Editor instead of Terminal and bypass the new protection.

    Practical takeaway: keep macOS current; leave Background Security Improvements on automatic install; and treat prompts asking you to paste commands or install software outside a normal update flow as suspicious by default. A recruiter, meeting invite, verification step, or update prompt can be part of the attack path.

    Bottom line: the strongest safeguards this week are still the unglamorous ones — faster patching when exploited vulnerabilities land, less default trust in vendor relationships, shorter retention periods, narrower access, and concrete audit-log questions asked before infrastructure becomes routine. The law is in effect now. So are the cameras.

    “Experience should teach us to be most on our guard to protect liberty when the government’s purposes are beneficent. Men born to freedom are naturally alert to repel invasion of their liberty by evil-minded rulers. The greatest dangers to liberty lurk in insidious encroachment by men of zeal, well-meaning but without understanding.”

    Olmstead v. U.S., 277 U.S. 438 (1928) (dissenting) — Louis D. Brandeis

    Disciplined safeguards are deliberately boring. That is why they last.

  • FISA Section 702: A Hide & Speak Livestream

    A featured Hide & Speak livestream exploring privacy, surveillance, civil liberties, and the public accountability questions that increasingly shape modern civic life.

    This featured Hide & Speak livestream brings together a timely conversation on privacy, surveillance, civil liberties, and public accountability. As these issues continue to move from abstract policy debates into everyday life, discussions like this one help make the stakes clearer and more accessible.

    Originally streamed last Saturday, this conversation is now archived here for anyone who was unable to join live or who wants to revisit it.

    Watch the full video below:

    Watch on YouTube: Hide & Speak: A Conversation on Privacy, Surveillance, and Public Accountability

  • Signals & Safeguards — Issue 6

    Wednesday, April 22, 2026

    A concise weekly scan of surveillance, privacy, cybersecurity, and the safeguards public officials should keep in view.

    Signals & Safeguards newsletter masthead

    At a glance

    • The fight over federal surveillance powers remains unresolved.
    • Commercial data, plate readers, facial recognition, and AI-assisted tracking are converging.
    • The clearest safeguards this week are still practical ones: tighter account security, faster patching, and less collection by default.

    Section 702 survives for now, but the real fight is not over

    Congress approved only a short-term extension of Section 702 through April 30 after lawmakers failed to agree on a longer renewal. The procedural fight in the House shows the core dispute is still whether broad surveillance powers continue while warrant reforms are deferred again.

    Why it matters for Bend: weak federal guardrails do not stay neatly in Washington. They shape the broader privacy environment local governments inherit.

    San Jose’s camera network is becoming a constitutional test case

    Three San Jose residents have filed a federal class action challenging the city’s use of nearly 500 license plate reader cameras, arguing that the system amounts to unconstitutional mass surveillance. The suit turns a familiar policy argument into a live legal test about retention, bulk monitoring, and whether routine driving should quietly generate searchable location history — with 10th Circuit precedent on location data providing important legal backdrop.

    Why it matters for Bend: cities should study this kind of challenge before expanding surveillance systems. If a program is hard to explain, limit, and audit, that is a warning sign.

    Shared pattern: the strongest stories on this page all point in the same direction. Surveillance power is expanding not only through dramatic new tools, but through easier data access, broader search capacity, and better ways to connect scattered pieces of personal information into a fuller picture of a person’s life.

    ICE’s SAFE HAVEN contract points toward AI-assisted pattern-of-life mapping

    ICE is set to spend $12.2 million on Project SAFE HAVEN, an AI geotracking system described as using persistent passive data collection to map immigrants’ routines and locations. The next layer of surveillance capacity is not just finding a person once, but modeling how they live and move over time, as contract documentation reviewed by The Lever makes clear.

    Why it matters for Bend: policymakers should ask not only what a system collects, but what it can infer or reconstruct later when data streams are combined.

    Citizen Lab shows how ordinary app data can become a surveillance tool

    Citizen Lab’s reporting on Webloc shows how data drawn from consumer apps and digital advertising can be repurposed into a geolocation surveillance system used at enormous scale, with coverage of roughly 500 million devices. The larger lesson is that the data pipeline itself is often the story.

    Why it matters for Bend: local privacy risk does not begin only when a city buys a camera or launches a platform. It can also grow through outside data markets and vendor partnerships.

    Mobile Fortify brings field identification closer to real time

    Reporting on ICE’s Mobile Fortify system indicates officers can identify people in the field using face photos and contactless fingerprints, rather than waiting for a later database review. Paired with SAFE HAVEN, this points to a broader shift: immigration enforcement is building tools not only for after-the-fact searches, but for rapid field identification, a capability already deployed near protests.

    Why it matters for Bend: systems built for quick identification deserve extra scrutiny because speed leaves less room to question accuracy, challenge misuse, or limit retention.

    Opting out may not actually stop the tracking

    An independent audit reported by 404 Media found that many tested sites still placed advertising cookies after users opted out. If people say no and tracking continues anyway, the problem is enforcement, not just design.

    Why it matters for Bend: consent language means little if the underlying system does not honor it in practice. Officials should ask not just what a privacy policy promises, but how the system behaves when someone tries to refuse or limit it.

    Google promised notice. ICE got the data anyway.

    EFF documented how Google provided a user’s data to ICE without the advance notice Google had long said it would give except in narrow situations. Independent analysis of the incident finds the deeper warning is broader than one case: protections that exist mainly in company policy language can become fragile when government requests arrive and users have little ability to contest them in time.

    Why it matters for Bend: cities and counties should be cautious about trusting vendor promises that are not backed by enforceable limits or meaningful user rights.

    The most dangerous surveillance is the kind no one voted on and no one remembers authorizing.

    Signals worth tracking

    These items point toward where surveillance systems and governance fights may be heading next. Wearable surveillance is moving closer to ordinary consumer use. States and local governments are testing rules that may prove more concrete than federal policy.

    Signals section header

    Meta’s smart-glasses fight is really a fight about ambient facial recognition

    The ACLU and dozens of partner groups warn that facial recognition in smart glasses could normalize wearable, casual identification in everyday life.

    Virginia signs a law banning the sale of precise location data

    Virginia’s new location-privacy law is one of the strongest policy signals because it is not just a proposal. It is a signed law. That makes it worth watching as a concrete example of a state treating precise geolocation as too sensitive to be traded like ordinary commercial data.

    Maryland moves against surveillance pricing

    Maryland has passed legislation aimed at stopping large retailers and delivery services from using personal data to set individualized prices.

    Monroe County requires disclosure of sheriff surveillance-tech purchases

    Monroe County’s new disclosure rule is a useful local-governance signal because it focuses on something simple and replicable: if a department is buying surveillance technology, the public should at least know what it is, what it is for, who sold it, and how it is being funded.

    Europe’s age-verification app is testing the promise of privacy-preserving ID

    The EU says its age-verification app can prove age without broadly revealing identity, though security researchers have already found vulnerabilities in the system. Broad coverage of the rollout notes the real question is whether such systems stay narrow or widen into a broader identity layer — a concern backed by strong public support for age verification that creates political pressure to expand scope.

    The Parents Decide Act would push age verification down to the operating-system level

    H.R. 8250 shifts the age-verification question closer to the device itself and raises the question of whether the operating system becomes the gatekeeper for identity and age status. The full bill text and legislative history are available from Congress.

    Republicans are preparing another national privacy-law push

    A new House GOP privacy proposal is reportedly in development, with preemption and limits on private lawsuits likely to be central fault lines again.

    Border surveillance systems keep getting bigger and more integrated

    Rest of World’s reporting on Seguritech and Torre Centinela is a useful reminder that surveillance expansion often happens through infrastructure, not just headlines: more cameras, more drones, more plate readers, and more system integration across agencies and regions. Data-sharing arrangements between Texas authorities and Mexico have already sparked alarm on both sides of the border.

    DHS is building smart glasses for real-time biometric identification on American streets

    Budget documents reveal the Department of Homeland Security is developing “ICE Glasses” — specialized smart glasses that will query vast federal biometric databases, including facial recognition and walking-gait analysis, to identify people in real time. The project targets a 2027 delivery date and builds directly on military tracking systems developed during the global war on terror. A DHS attorney quoted in the reporting notes that the same architecture applies equally to protesters and anyone else within a field agent’s line of sight.

    Why it matters for Bend: a system described as targeting one population is built on technology that sees everyone. The infrastructure being built for immigration enforcement is the same infrastructure that would surveil anyone in range.

    Government AI is combining with the data broker loophole to bypass warrant requirements

    EPIC’s Surveillance Oversight Director documents how the government is pairing bulk data purchases from commercial brokers — location histories, browsing data, and more — with advanced AI analysis, bypassing constitutional warrant protections that would normally apply. The concern is sharpened by the concurrent push to deploy Anthropic’s Mythos AI across federal agencies and the unresolved Section 702 debate.

    Why it matters for Bend: each loophole on its own is concerning. Combined with AI analysis at scale, they form a surveillance architecture that is qualitatively different from anything that existed even five years ago.

    Google’s AI now scans your entire photo library

    Google’s latest Personal Intelligence update means Gemini now scans users’ full photo libraries — described as using “actual images of you and your loved ones” — to generate personalized AI content. The feature is opt-in, but the pattern it represents is not: AI systems are beginning to process the full personal archive of a person’s life, not just what they consciously choose to share.

    Why it matters for Bend: when the default assumption shifts from “my data stays mine” to “my data is available unless I actively refuse,” the privacy burden has transferred entirely to the user.

    Direction of travel

    Taken together, these signals show surveillance power moving in three directions at once: closer to the body through wearable devices and biometric identification in the field; deeper into the data supply chain through commercial ad data and personal archives repurposed by AI; and lower in the technology stack, where operating systems and device defaults are becoming the new front line for identity and age verification. Each of those shifts makes individual opt-out harder and collective accountability more necessary.

    States are beginning to test concrete responses — Virginia on location data, Maryland on surveillance pricing, Monroe County on disclosure. None of those laws will stop the broader trend. But they point toward what meaningful restraint actually looks like when it moves past a proposal stage. The gap between those local signals and the federal picture remains wide.

    Practical habits that lower risk

    These are the practical protections and governance habits that stood out most clearly this week. Good safeguards usually start with less data and clearer boundaries. Better defaults often matter more than dramatic new tools. Public trust depends on protections that are visible and enforceable.


    Protect messaging accounts like infrastructure

    The FBI warns that Russian intelligence-linked actors are targeting commercial messaging accounts through phishing and account compromise, not by breaking encryption itself. Treat verification codes, login prompts, QR requests, and urgent support messages with suspicion until they are verified out of band.

    A PDF is not always “just a document”

    Adobe says a critical Acrobat and Reader flaw is being exploited in the wild and could lead to arbitrary code execution. Security researchers have documented active exploitation of this zero-day. Opening a document should not be treated as risk-free by default, especially on software that offices use every day.

    Partial data leaks can still power convincing scams

    Booking.com confirmed that hackers may have accessed customer data tied to reservations, including names, email addresses, phone numbers, and booking details. Even without payment-card numbers, that kind of information can make phishing attempts sound legitimate. Supply chain analysis of the breach suggests the attack vector extended beyond Booking.com directly.

    Small organizations still need boring cybersecurity basics

    NIST’s latest small-business cybersecurity draft is a useful closing reminder because it reinforces a consistent truth: good security often comes from fundamentals, not drama. Inventories, updates, limited permissions, and clear responsibility lines may not sound exciting, but they are often what keep ordinary mistakes from becoming serious incidents.

    Treat push notification settings as part of your privacy hygiene

    EFF’s latest Deeplinks guide documents how push notification content reaches government investigators more easily than most users expect. Apple and Google now require a judge’s order to share notification data, but forensic extraction tools can still recover deleted notification text directly from devices — including from secure messaging apps. The practical fix: disable message previews for sensitive apps and treat anything visible on your lock screen as potentially accessible to anyone who holds your phone.

    April’s patch window is tight — and one critical flaw was left open

    Microsoft’s April Patch Tuesday addressed 165 vulnerabilities, including an actively exploited SharePoint spoofing flaw — but left BlueHammer, a publicly disclosed elevation-of-privilege zero-day in Windows Defender, without a patch until May. CISA simultaneously added eight more vulnerabilities to its known-exploited catalog, including flaws in Cisco SD-WAN Manager and Synacor Zimbra, with a federal remediation deadline of April 27. The patching backlog is growing faster than most organizations move.

    Bottom line: the strongest safeguards this week are not flashy. They are disciplined ones: tighter account hygiene, faster patching, and less trust in default claims.

  • Signals & Safeguards Issue 5 • Wednesday, April 15, 2026

    A concise weekly scan of surveillance, privacy, cybersecurity, and the safeguards public officials should keep in view.


    At a glance

    • Sensitive government data systems are expanding in troubling directions at the same time oversight fights are intensifying.
    • Commercial and government surveillance tools continue to blur lines that once required warrants, audits, or clearer public approval.
    • Federal officials are still pushing major surveillance authorities even as new compliance concerns come into view.

    Main stories

    Federal personnel agency seeks broad access to workers’ medical records

    The Office of Personnel Management is seeking personally identifiable medical and pharmacy claims data for millions of federal workers, retirees, and family members covered by federal health plans. Reporting says the proposal could give OPM access to highly sensitive information about prescriptions, diagnoses, and treatment patterns, and experts have raised serious legal and privacy questions about whether such disclosures are justified or even permissible. For a broader overview of the same issue, see CBS News’ reporting.

    Why it matters: when government asks for deeper access to health information, the question is not only whether the data might be useful, but whether the access is necessary, proportionate, and secure. That concern is sharper here because OPM itself was at the center of the massive 2015 breach that exposed records tied to roughly 22 million people.

    Why it matters for Bend: local officials should treat sensitive health or benefits data as high-risk by default. The safest rule is simple: if a public agency cannot clearly explain why it needs detailed personal data, it should not collect it.

    ICE confirms use of Graphite spyware

    ICE has acknowledged using Paragon’s Graphite spyware, a tool reportedly capable of accessing communications on targeted devices, including encrypted apps. House Democrats are now demanding answers about the legal basis, safeguards, and procurement history behind its use.

    Why it matters: this is not just another surveillance-software headline. It is a reminder that strong encryption can still be bypassed when authorities gain access to the device itself. The policy question is no longer hypothetical. It is whether government agencies are using highly intrusive tools under rules the public can actually see and contest.

    Why it matters for Bend: surveillance debates should not focus only on local cameras and sensors. Device exploitation tools can be just as consequential, and officials should ask what oversight exists before trusting assurances that a tool will be used narrowly.

    California opens a fusion center audit

    California lawmakers approved an audit of several state fusion centers after privacy and civil-liberties concerns, including allegations that information sharing may have reached beyond appropriate bounds. The audit is important because fusion centers are often described in abstract terms even though they can shape how intelligence, law enforcement, and immigration-related data move across institutions.

    Why it matters: oversight is most useful where systems are complicated, multi-agency, and hard for the public to see. Fusion centers sit exactly in that category. If data-sharing systems are lawful and well-governed, audits should help show that. If they are not, audits may be one of the few ways the public finds out.

    Why it matters for Bend: information-sharing arrangements should never be treated as purely technical back-office systems. They are governance systems, and they deserve policy scrutiny before they become routine.

    Section 702 faces renewed pressure despite major compliance concerns

    The debate over Section 702 has intensified again as Congress faces a looming deadline and critics warn that serious compliance problems remain unresolved. Senator Ron Wyden says the Foreign Intelligence Surveillance Court found major compliance issues tied to Americans’ constitutional rights, while civil-liberties advocates are warning against a clean extension with no meaningful reforms.

    Why it matters: Section 702 is often defended as a foreign intelligence authority, but the recurring fight is about what happens when Americans’ communications are swept in and later queried. The core safeguards question is whether a powerful surveillance authority should continue when the public still lacks a full picture of how serious the compliance failures are.

    Why it matters for Bend: federal surveillance rules shape the broader privacy environment local governments operate inside. When higher-level safeguards weaken, local restraint matters more, not less.

    Shared pattern: this week’s strongest stories point in the same direction, toward more access to sensitive information, more powerful surveillance tools, and more pressure to normalize those powers before the public fully understands the costs.


    Signals


    These items matter because they suggest where surveillance systems, legal theories, and accountability fights may be heading next. Some of the most important changes do not arrive as one dramatic announcement. They arrive through contracts, court fights, pilot programs, and legal carveouts that make the next expansion easier.

    Nevada quietly signs up for Fog location tracking

    AP reports that Nevada signed a contract with Fog Data Science allowing police to query location data derived from smartphone apps, with more than 250 queries a month permitted under the arrangement. The tool can reportedly reveal “patterns of life,” including where people sleep, work, travel, and associate.

    Why it matters: this is the commercial-data loophole in practical form. The issue is not just whether police can track location. It is whether they can buy access to location data in ways that sidestep the warrant rules people assume still apply.

    Webloc shows how ad-tech data becomes mass surveillance

    Citizen Lab says a system called Webloc uses advertising-derived location data to monitor hundreds of millions of people and that customers include U.S. government and law-enforcement entities. The report suggests that app and ad ecosystems are continuing to feed a surveillance market far more powerful than many users realize.

    Why it matters: the deeper lesson is that surveillance power does not require a visible camera on every corner if commercial data markets already map movement at scale. That makes procurement, data brokers, and app ecosystems part of the same policy conversation.

    Data can be copied faster than rights can be restored.

    Deleted Signal messages were still recoverable through iPhone notifications

    404 Media reported, and The Verge summarized, that the FBI was able to recover incoming Signal message content from an iPhone’s notification database even after the app had been deleted. The important lesson is not that Signal encryption failed. It is that privacy can break at the edges when operating systems store previews and logs users do not expect.

    Why it matters: secure tools are only as private as the surrounding defaults. Notification previews, backups, and system-level logs can quietly create a second path to sensitive information.

    Richmond’s Flock records show officials thinking about liability and narrative at the same time

    Records reported by the Richmond Times-Dispatch indicate city officials were aware of racial-bias and liability concerns around Flock camera deployment while also discussing how to “saturate the public narrative” with success stories. That is a useful signal because it shows surveillance debates are often about public messaging and political management as much as technical capability.

    Why it matters: when officials are already thinking about liability, equity concerns, and narrative control, the public should ask whether real safeguards are keeping pace or whether the communications strategy is outrunning the governance strategy.

    OpenAI backs bill that would limit liability for catastrophic model harms

    Wired reports that OpenAI supported an Illinois bill that would limit when frontier AI firms can be sued for large-scale harms caused by their systems, so long as the companies did not act intentionally or recklessly and produced specified safety and transparency reports.

    Why it matters: this is an early warning sign about how the AI industry may try to shape the accountability rules around its own products before courts and lawmakers settle them. Liability is one of the few tools that forces organizations to internalize risk, so proposals to narrow it deserve close attention.

    Oakland County’s Flock drone pilot points toward the next surveillance layer

    A new Flock-linked drone pilot in Oakland County has already sparked privacy concerns. It fits a broader trend in which police technology is moving from fixed cameras and plate readers toward integrated aerial response, real-time feeds, and larger sensor networks.

    Why it matters: pilot programs often become normalized infrastructure faster than communities expect. That makes the pilot stage one of the most important moments for public scrutiny.

    Direction of travel: taken together, these signals suggest a common pattern of more location data, more integrated surveillance layers, and more pressure to soften accountability before the public has time to understand the system being built.

    A principle worth keeping in view
    If a human right is in the way of your innovative technology, the expected solution should be to modify your technology to respect that right, not to reduce the protections for that right.
    Technology and innovation must be in service of humanity, not the other way around.
    Safeguards are not obstacles to innovation. They are what make innovation fit for public life.


    Safeguards


    Practical habits that lower risk

    A safeguards section works best when it stays practical. This week’s strongest takeaways are about reducing exposure, treating infrastructure seriously, and refusing to let “pilot” become a shortcut around policy. Good safeguards are often less about dramatic technology than about narrowing access, shrinking visibility, and asking harder questions earlier.

    Keep industrial control systems off the open internet

    CISA warns that Iranian-affiliated actors are targeting internet-connected programmable logic controllers across U.S. critical infrastructure, including sectors like water, energy, and manufacturing. The advisory stresses that weak passwords, direct internet exposure, and loose remote access are doing much of the attacker’s work for them.

    Safeguard lesson: critical systems should not be exposed like ordinary web services. Public agencies and utilities should tighten remote access, eliminate default credentials, and treat operational technology as security infrastructure, not set-and-forget equipment.

    Treat home and small-office routers like real security devices

    The FBI’s IC3 says Russian GRU actors have been exploiting vulnerable routers worldwide, changing DNS settings and intercepting sensitive traffic tied to military, government, and infrastructure targets. The warning is a reminder that old or neglected routers can quietly become part of somebody else’s espionage chain.

    Safeguard lesson: update router firmware, replace unsupported devices, disable unnecessary remote administration, and stop treating network gear as invisible furniture. For many people, the router is part of the security perimeter whether they think of it that way or not.

    Treat surveillance pilots as real deployments

    EFF argues that “free” or subsidized surveillance technology often bypasses local scrutiny because it arrives as a trial, grant, or donated program rather than a fully debated procurement. But the privacy risks, data-sharing consequences, and normalization effects begin immediately, not after the pilot becomes permanent.

    Safeguard lesson: pilots should face real rules on retention, access logs, public notice, audits, sharing, and exit conditions before they begin. A temporary deployment can still create permanent habits.
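    One way to make “access logs” concrete is a gate that rejects any query missing its audit fields before the search runs. This is an illustrative sketch only; the field names below are assumptions, not any vendor’s actual schema.

```python
from datetime import datetime, timezone

# Hypothetical minimum fields an auditable surveillance query might carry.
REQUIRED = ("requester", "agency", "purpose", "case_number", "timestamp")

def validate_query_log(entry: dict) -> list:
    """Return the missing audit fields; an empty list means the query is auditable."""
    return [f for f in REQUIRED if not entry.get(f)]

entry = {
    "requester": "officer_1234",                      # invented example values
    "agency": "Bend PD",
    "purpose": "stolen vehicle report",
    "case_number": None,                              # missing: should block the search
    "timestamp": datetime.now(timezone.utc).isoformat(),
}
print(validate_query_log(entry))   # ['case_number']
```

    The design choice worth noticing is that the check runs before the query executes, so an unauditable search simply never happens, rather than being discovered (or not) in a records request months later.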

    Hide notification content for sensitive messaging apps

    The Signal/iPhone reporting this week offers a simple citizen-facing reminder: if message previews are visible in notifications, they may be stored in places users do not expect. The convenience is real, but so is the privacy cost.

    Safeguard lesson: for sensitive messaging, reduce notification content or disable previews entirely. Strong encryption helps, but system defaults still matter.

    Bottom line: the best safeguards this week are not flashy. Keep critical systems off the open internet, treat routers as infrastructure, force real scrutiny at the pilot stage, and remember that privacy often fails first through defaults that feel convenient.

  • Hide & Speak: Flock Safety: Tracking Crime or Tracking Citizens?

    A featured Hide & Speak discussion examining Flock Safety, automated license plate readers, public safety claims, and the civil liberties questions raised by expanding surveillance systems.

    This Hide & Speak video looks at Flock Safety and the growing use of automated license plate reader systems. The discussion explores the tension between public safety claims and the risks of routine, large-scale surveillance, including how these systems can affect privacy, civil liberties, accountability, and public trust.

    Watch the full video on YouTube: Hide & Speak: Flock Safety: Tracking Crime or Tracking Citizens?

  • Is Your Phone Broadcasting More Than You Realize?

    A recent article from De Jure Media caught my attention because it speaks to a broader concern many privacy advocates already share: our phones and other connected devices may be participating in systems of tracking and data collection that most people neither understand nor meaningfully consent to.

    The article, Your Phone Is Watching You Right Now — Here’s How to Prove It, makes serious claims about smartphone-based surveillance, hidden Bluetooth activity, and the possibility that ordinary people can observe part of this phenomenon for themselves using a BLE scanner app. Whether every conclusion in the piece ultimately holds up or not, it raises important questions about transparency, consent, and how much invisible infrastructure now surrounds daily life.

    What the article argues

    De Jure Media presents the story as an investigation that began with a whistleblower and expanded into a broader examination of unusual Bluetooth Low Energy activity. The article argues that readers may be able to detect suspicious nearby devices by scanning their surroundings and looking for long alphanumeric names, unknown manufacturers, and persistent signals that are difficult to identify physically.

    The piece also connects those observations to larger concerns about pandemic-era exposure notification systems, location tracking, overlapping corporate and government surveillance capabilities, and the long-term risk of normalizing infrastructure that can monitor movement and association at scale.
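
    The detection heuristic the article describes (long alphanumeric names, unknown manufacturers, persistent signals) can be sketched as a simple filter over scan results. This is purely illustrative: the field names, thresholds, and sample devices below are my own assumptions, not output from any real scanner.

    ```python
    # Hypothetical sketch of the article's BLE heuristics. Field names and
    # thresholds are illustrative assumptions, not real scanner output.

    def looks_suspicious(device, min_sightings=5):
        name = device.get("name") or ""
        # Heuristic 1: a long alphanumeric name that reads like an opaque
        # identifier rather than a product name (e.g. "A7F3C9E1B2D4").
        long_opaque_name = (
            len(name) >= 12 and name.isalnum() and any(c.isdigit() for c in name)
        )
        # Heuristic 2: the manufacturer could not be resolved from the
        # Bluetooth SIG company-identifier list.
        unknown_vendor = device.get("manufacturer") is None
        # Heuristic 3: the same device keeps appearing across repeated scans.
        persistent = device.get("sightings", 0) >= min_sightings
        return long_opaque_name and unknown_vendor and persistent

    scans = [
        {"name": "JBL Flip 5", "manufacturer": "Harman", "sightings": 2},
        {"name": "A7F3C9E1B2D4", "manufacturer": None, "sightings": 9},
    ]
    flagged = [d["name"] for d in scans if looks_suspicious(d)]
    print(flagged)  # ['A7F3C9E1B2D4']
    ```

    Worth noting: a heuristic like this will produce false positives, and many legitimate devices rotate their addresses between scans, which is one more reason not to draw conclusions from a single scan or an unfamiliar name.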

    Why this matters even beyond one article

    Even setting aside the article’s most dramatic conclusions, the underlying privacy concern is real. Modern phones constantly interact with wireless systems, identifiers, sensors, apps, platforms, and data ecosystems that are largely invisible to the public. That creates opportunities for passive tracking, behavioral profiling, and surveillance far beyond what most people realize.

    For me, the value of this article lies less in its strongest claims than in the way it encourages readers to look more closely at the everyday technology around them and ask better questions. What is being broadcast? What is being collected? Who has access to that information? How long is it retained? And what meaningful consent, if any, did users ever provide?

    A note of caution

    Articles like this are worth reading carefully, but also critically. Extraordinary claims deserve verification. It is possible to take the broader privacy issues seriously without treating every inference as proven fact. That is often the right balance in surveillance reporting: remain open to evidence, but do not let uncertainty become an excuse to ignore genuine risks.

    What readers can do

    • Review your phone’s Bluetooth, location, and app permission settings.
    • Learn what Bluetooth Low Energy scanning tools actually show and what their limits are.
    • Be cautious about drawing conclusions from a single scan or unfamiliar device name.
    • Use reporting like this as a prompt to ask for stronger transparency, consent, and privacy safeguards.
    • Read the original article and evaluate the claims for yourself.

    Read the original

    You can read the original De Jure Media article here:

    Your Phone Is Watching You Right Now — Here’s How to Prove It

    Whether you agree with all of its conclusions or not, it is the kind of piece that pushes an important public conversation forward: how much invisible surveillance infrastructure has already been built into the technology we carry every day, and what it would take to bring that infrastructure into the light.

  • Signals & Safeguards Issue 4

    Issue 4 • Wednesday, April 8, 2026


    Privacy, surveillance, and cybersecurity developments that public officials should keep in view.


    At a glance

    • Section 702 is being defended by oversight institutions whose own credibility is under strain.
    • Voter-registration data may be moving into a broader federal citizenship-check pipeline.
    • Commercial data systems and multi-agency targeting centers show how state power can grow through private infrastructure and broad ideological categories.

    Main stories

    Section 702’s defenders are asking for trust while oversight gets weaker

    A new PCLOB staff report backs Section 702 just as the board’s own independence is under question. The report was issued after PCLOB had effectively been reduced to a single member, while critics argue even the reassuring FBI query numbers are incomplete. Lawmakers are being asked to trust oversight claims at the same moment the oversight system itself looks weaker.

    DOJ wants voter data, and DHS would help run the checks

    Justice Department lawyers told a court they plan to hand voter-registration data obtained from states over to DHS for citizenship checks. Once civic records begin flowing into federal verification systems, the issue is not only who can vote. It is who gets flagged, by whom, and with what chance to correct mistakes. The bigger warning sign is that an election-administration dispute can quickly become a broader federal data-sharing pipeline.

    ICE’s data power does not stop with government databases

    404 Media shows how Thomson Reuters’ CLEAR system has helped supply identity and records data used by ICE and may now feed Palantir systems used for targeting and analysis. Thomson Reuters markets CLEAR as an investigative platform built on a wide mix of public and proprietary records, including regulated driver and motor-vehicle data. The larger lesson is that enforcement power can grow through commercial data infrastructure long before the public sees a new law or a new government database. Sensitive information does not become less sensitive just because it reached government through a private intermediary first.

    Domestic-terror strategy grows broader, and more ideological

    The White House’s NSPM-7 uses categories like “anti-Americanism,” “anti-capitalism,” and “anti-Christianity,” and the FBI’s FY 2027 budget request says a new joint mission center spanning 10 agencies will help “proactively identify networks.” Ken Klippenstein’s article is sharper than the official documents, but the civil-liberties concern is real: broad ideological categories combined with proactive targeting create a serious risk of viewpoint slippage and guilt by association.


    Early indicators worth tracking


    These items point toward where surveillance systems, data practices, and governance fights may be heading next.

    LinkedIn is scanning browsers far more aggressively than most users would expect

    LinkedIn says it detects extensions to spot automation and scraping tools, but recent reporting says the site checks for more than 6,000 Chromium extensions and gathers additional device characteristics as well. Security justifications can be real and still expand platform visibility into the software running on a person’s device.

    “Incognito” privacy claims keep running ahead of reality

    A lawsuit against Perplexity alleges that user prompts and identifiers were shared with Google and Meta even when users chose “Incognito Mode.” Whether every allegation is proven or not, the broader lesson is already familiar: privacy labels can create expectations far stronger than the product actually delivers. Features that sound private may narrow some tracking while leaving platform logging or outside sharing intact.

    Facial-recognition errors are still ruining lives

    NBC highlights a recent case in which a woman was wrongly jailed and extradited after a face-matching system misidentified her. IEEE Spectrum helps explain why these harms persist: as databases grow larger and the stakes get higher, false positives do not disappear. They scale.

    Dating-app photos ended up in a facial-recognition pipeline

    Match Group settled FTC claims that OkCupid shared millions of user photos and other personal data with Clarifai without adequately informing users. The lesson is simple: images tied to identity and location can become recognition inputs far beyond the platform where people originally shared them.

    Child-safety rules can become company-shaped identity rules

    A San Francisco Standard investigation found that the Parents & Kids Safe AI Coalition was funded entirely by OpenAI even as it presented itself as a broader child-safety effort. When firms help shape age-check rules, policymakers should ask whether a child-safety framework is quietly becoming a company-shaped identity system. Apple’s rollout, Malaysia’s proposal, and Turkey’s plan show how quickly that logic can widen.

    Government “modernization” can also mean stronger hidden triage systems

    WIRED reports that the IRS paid Palantir to improve a pilot system meant to identify “highest-value” audit, collections, and investigative cases across a maze of legacy systems. When agencies merge fragmented data into a stronger targeting layer, the key questions are fairness, explainability, and who gets flagged first. As the Brennan Center argues in the military context, vendors are increasingly helping shape the rules and procurement logic around the systems they want government to adopt.


    Practical habits that lower risk


    The most useful safeguards this week share a common principle: reduce what systems can expose before someone else decides to search them.

    • Carry less sensitive data and assume travel devices can become exposure zones.
    • Use friction on purpose when it reduces the harm from seizure, compromise, or misuse.

    Build identity and access systems to ask for less data

    As more services move toward age checks, identity verification, and device-based trust decisions, policymakers should keep one question in view: what is the minimum information this system really needs? A system that asks for a government ID, a face scan, or permanent account linkage for routine access may solve one problem while creating another. The safest data is still the data a system never demanded in the first place.
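
    A toy sketch makes the principle concrete. The design below is entirely my own illustration, not any real verification system: an age gate records a yes-or-no attestation and discards the document it was derived from.

    ```python
    from datetime import date

    # Toy data-minimization sketch (assumed design, not a real verification
    # system): derive the one fact the service needs, keep nothing else.

    def over_18_attestation(birthdate: date, today: date) -> dict:
        years = today.year - birthdate.year - (
            (today.month, today.day) < (birthdate.month, birthdate.day)
        )
        # Store only the claim and when it was checked; the birthdate, ID
        # scan, and face image never enter the stored record.
        return {"over_18": years >= 18, "checked_on": today.isoformat()}

    record = over_18_attestation(date(2005, 6, 1), date(2026, 4, 8))
    print(record)  # {'over_18': True, 'checked_on': '2026-04-08'}
    ```

    The point is architectural: a system that stores only the derived claim has far less to leak, share, or subpoena later.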

    Ask vendors where AI is making decisions for them

    If AI systems are being woven into public-facing services, procurement tools, investigations, triage systems, or customer support, officials should not assume the risk stays inside the vendor. Ask where AI is being used in ways that affect judgment, ranking, eligibility, routing, or error correction, and what human review exists before those outputs shape decisions.

    Ask what a privacy feature actually protects against

    Many privacy features are real, but narrower than their names suggest. A tool that blocks some third-party tracking may still leave platform logs intact. An email-masking feature may protect against marketers while still leaving account records available to the platform and to government requests or investigations. Before trusting a feature, ask a simple question: does it protect against advertisers, the platform itself, outside data sharing, or government data demands?

    Treat dependencies like infrastructure, not convenience

    The Axios package compromise matters because Axios is one of the most widely used JavaScript libraries in modern web development, which means a single maintainer-targeted compromise can ripple across thousands of applications and organizations. Reuters and Microsoft’s follow-up reporting underscore the point: supply-chain attacks often begin with social engineering, not brilliant code exploits. For policymakers and institutions, the practical lesson is simple: slow down critical updates, verify unusual maintainer messages, rotate secrets after suspicious package incidents, and treat package ecosystems like infrastructure rather than background convenience.