Issue 9 • Wednesday, May 13, 2026
A concise weekly scan of surveillance, privacy, cybersecurity, and the safeguards public officials should keep in view.
At a glance
- Bend’s Flock experience shows why outside access and audit logs are not technical details. They are the oversight system.
- Oregon’s sanctuary-law lawsuit shows that privacy rules must be enforceable at the database level, not just written into policy.
- Location data, DMV records, platform data, and biometrics are becoming immigration-enforcement inputs.
- Courts and lawmakers are beginning to test whether digital searches need stronger limits.
The strongest local surveillance story this week is not hypothetical. According to The Source Weekly, federal immigration officials ran 279 queries against Bend’s Flock Safety data in the first three weeks after the cameras went live. Bend Police did not authorize those searches, and the reporting says it was unclear what data, if any, may have been retrieved.
That is the core governance problem. A surveillance system can be installed for one stated purpose, then become useful to agencies, vendors, or outside users who were not central to the original public debate. If officials cannot later answer who searched the system, why they searched it, what they saw, and whether the results were shared, then the public is being asked to trust a system that cannot be independently verified.
This does not require assuming bad faith by local officials. It shows why good-faith intentions are not enough. Access controls, audit logs, outside-agency limits, vendor-access limits, and public reporting have to exist before a system goes live.
Why it matters for Bend: Bend’s own experience shows that surveillance oversight cannot stop at approval. Councilors and staff need to know who can search local systems, what outside agencies can access, what vendors can see, and whether every search leaves a usable record.
The Bend Flock story now sits inside a larger Oregon data-sharing dispute. OPB reports that a lawsuit filed in Multnomah County Circuit Court alleges Oregon State Police allowed federal immigration authorities to access Oregonians’ data through shared law-enforcement databases for years, despite Oregon’s sanctuary laws. OSP denies wrongdoing.
The Source Weekly reports that the complaint alleges federal immigration authorities queried state-run data about Oregonians 1.4 million times between February 2025 and February 2026, an average of 3,835 queries each day.
The important lesson is practical: a sanctuary policy is only as strong as the database permissions behind it. If an agency is legally barred from using data for immigration enforcement, the system should not rely on informal restraint. It should have technical controls that prevent improper access, audit logs that expose improper use, and consequences when policy and practice diverge.
Oregon lawmakers have already started moving in that direction. SB 1587, effective June 5, prohibits public bodies from disclosing personally identifiable information to a data broker unless the broker attests that the information will not be sold or transferred for federal immigration-law enforcement, with exceptions.
Why it matters for Bend: Bend residents’ information may pass through city, county, state, vendor, and law-enforcement systems. A sanctuary policy or privacy rule only protects people if the underlying databases actually enforce it through permissions, purpose limits, and audit logs.
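To make “permissions, purpose limits, and audit logs” concrete, here is a minimal sketch of a purpose-limited query layer. It assumes a hypothetical gatekeeper in front of a shared database; the dataset names, purposes, and agencies are illustrative, not drawn from any real Oregon, Flock, or vendor system.

```python
# Minimal sketch, assuming a hypothetical query layer in front of a shared
# database. Dataset names, purposes, and agencies are illustrative only.
from datetime import datetime, timezone

# Purpose limits live next to the data, not only in a policy document.
ALLOWED_PURPOSES = {
    "driver_records": {"traffic_safety", "court_order"},
    "plate_reads": {"active_criminal_case", "court_order"},
}

AUDIT_LOG = []  # in a real system this would be an append-only store


def authorize_query(user: str, agency: str, dataset: str, purpose: str) -> bool:
    """Deny by default; record every decision, allowed or not."""
    allowed = purpose in ALLOWED_PURPOSES.get(dataset, set())
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "agency": agency,
        "dataset": dataset,
        "purpose": purpose,
        "allowed": allowed,
    })
    return allowed


# An immigration-enforcement purpose is never on the allow-list, so the
# denial is automatic and the attempt still leaves a reviewable record.
print(authorize_query("a.smith", "outside_agency", "driver_records",
                      "immigration_enforcement"))  # False
print(len(AUDIT_LOG))  # 1
```

The design choice that matters is that the denial and the log entry happen in the same place, so restraint does not depend on the goodwill of the person making the request.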
“You must first enable the government to control the governed; and in the next place oblige it to control itself.”
— James Madison, The Federalist No. 51 (1788)
Immigration enforcement is not limited to government-owned databases. Reporting on DHS, PenLink, and commercial location tools fits a broader pattern: agencies can increasingly rely on vendors, data brokers, and analytics platforms to locate, identify, or investigate people without collecting the raw data themselves.
That is why the FTC’s proposed order against Kochava matters. The FTC says it will prohibit Kochava and a subsidiary from selling, sharing, or disclosing sensitive location data without consumers’ affirmative express consent, after alleging that the company sold location data from hundreds of millions of mobile devices.
California’s DMV story shows the same concern from the government-records side. CalMatters reports that California is preparing to share detailed driver’s-license information with a national motor-vehicle database, including information affecting more than 1 million unauthorized immigrants with California licenses.
The shared pattern is simple: data collected for one purpose can become useful for another. Driver records, location trails, license plates, account data, and public-benefits records may all begin as administrative information. Once connected, queried, sold, or shared, they can become enforcement infrastructure.
Why it matters for Bend: Local governments increasingly depend on vendors and data systems they do not fully control. If commercially available location or identity data can be bought, searched, or combined with public records, then procurement and contract language become privacy safeguards.
Courts are beginning to draw clearer lines around location searches. The Minnesota Supreme Court ruled that the geofence search of Google location data used to identify a murder suspect was unconstitutional, reversed a second-degree murder conviction, and said law enforcement needs a warrant to obtain private cellphone information from Google.
A geofence search works backward. Instead of starting with a suspect and seeking evidence about that person, investigators ask for information about devices near a place during a certain time window, then narrow the list. That can be useful in investigations, but it also risks treating ordinary presence near a location as a reason to be pulled into a police search.
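A rough sketch of that backward logic, using made-up coordinates and device IDs rather than any real provider’s data or legal process, shows why the technique sweeps in bystanders:

```python
# Simplified illustration of a geofence query. The pings, distance math,
# and workflow are hypothetical, not any provider's actual process.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt


@dataclass
class Ping:
    device_id: str
    lat: float
    lon: float
    minute: int  # minutes since an arbitrary start time, for simplicity


def haversine_m(lat1, lon1, lat2, lon2):
    """Approximate distance in meters between two coordinates."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))


def geofence(pings, center_lat, center_lon, radius_m, t_start, t_end):
    """Return every device seen near a place during a window,
    regardless of any individualized suspicion."""
    return sorted({
        p.device_id for p in pings
        if t_start <= p.minute <= t_end
        and haversine_m(p.lat, p.lon, center_lat, center_lon) <= radius_m
    })


pings = [
    Ping("resident-walking-dog", 44.0582, -121.3153, 10),
    Ping("suspect-device",       44.0581, -121.3152, 12),
    Ping("commuter-far-away",    44.0900, -121.3000, 11),
]
# Everyone inside the circle and the window comes back, suspect or not.
print(geofence(pings, 44.0582, -121.3153, 150, 0, 30))
```

Every device inside the circle and the time window comes back, which is exactly the concern courts are now weighing.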
The same concern is now before the U.S. Supreme Court in Chatrie v. United States, which asks how the Fourth Amendment applies when police use geofence warrants to identify people by location rather than by individualized suspicion.
Why it matters for Bend: The same principle applies locally: people should not become investigative leads merely because a phone, plate, or device was near a place at a particular time. Local policy can help prevent broad searches from becoming routine before courts settle every boundary.
Shared pattern
The strongest stories this week point in one direction: access is the story. Who can search Bend’s Flock data? Who can query Oregon driver or criminal records? Who can buy or analyze location data? Who can receive DMV information? Who can obtain platform data? Who can turn proximity, protest activity, or routine civic records into an investigative lead?
Privacy safeguards fail when access is undefined, unlogged, or routed through systems the public cannot see.
“The price of lawful public dissent must not be a dread of subjection to an unchecked surveillance power.”
— Justice Lewis F. Powell Jr., United States v. U.S. District Court (Keith), 407 U.S. 297 (1972)
Warning Signals
These items point toward where surveillance systems, vendor platforms, identity infrastructure, and data governance may be heading next.
404 Media reported on a Flock-related story in which camera access at a children’s gymnastics center was used as part of a sales pitch. The larger lesson is that surveillance oversight cannot focus only on police use. Vendor access can be just as important.
If a vendor can access camera feeds, system data, search tools, customer dashboards, support logs, demos, or training environments, contracts should state exactly what employees can access, for what purpose, how access is logged, whether data can be used for sales or product development, and whether customers or the public are notified.
Axon reported Q1 2026 revenue of $807 million, up 34% year over year, and highlighted growth across software, connected devices, AI products, counter-drone tools, real-time operations, body cameras, and TASER products.
That is not just an earnings story. It is a market signal. Public-safety technology is moving from individual devices toward integrated platforms: cameras, cloud storage, AI features, real-time operations, evidence systems, subscriptions, and future upgrades bundled into vendor ecosystems.
For public officials, procurement is no longer just “buying a tool.” It can mean entering a long-term platform relationship where today’s contract shapes tomorrow’s data access, integrations, analytics, and switching costs.
License plate readers are not only appearing on police vehicles or city streets. Retailers such as Home Depot and Lowe’s are also using ALPR systems in parking lots for theft prevention and public-safety purposes.
That shift matters because private ALPR networks can still generate sensitive location records. They may operate under weaker public oversight than city-owned systems, may be governed mainly by contract, and may still become useful to law enforcement. The safeguard question is not only who owns the camera, but who can search the plate data, how long it is kept, and where else it can go.
The Canvas cyberattack shows that education platforms are no longer just classroom convenience tools. WIRED reported that thousands of schools were disrupted after Instructure shut down Canvas access following a breach by ShinyHunters. Reuters later reported that the incident affected nearly 9,000 schools globally, with stolen data including student names, email addresses, and private messages among students, teachers, and staff.
When one education platform goes down, students can lose access to assignments, grades, messages, and course materials during critical periods. Afterward, exposed student and staff data can become phishing material. Schools, cities, libraries, utilities, and public agencies increasingly depend on cloud platforms that should be treated as civic infrastructure.
Google says it disrupted hackers who used AI to exploit an unknown weakness in a company’s digital defenses. AP reports that Google found evidence of AI helping the attackers exploit a previously unknown vulnerability, and other reporting says the exploit could bypass 2FA on a web-based administration tool.
The lesson is practical: defenders may have less time. AI can help attackers find logic flaws, test exploit paths, and move faster from discovery to attempted compromise. That makes routine safeguards more important: reduce exposed admin panels, patch quickly, require strong MFA, monitor logs, limit privileges, and ask vendors for clear incident and patch timelines.
Age verification, VPN restrictions, phone-number identity proposals, digital IDs, and platform verification systems all point toward the same trend: more ordinary activities may require identity proof.
Child safety, fraud prevention, and robocall reduction are real policy goals. But identity checks are not neutral. They can create new databases, weaken anonymity, expose lawful activity, and make private vendors gatekeepers for speech, communications, and access.
Direction of travel
This week’s Signals point toward the same pattern: public and private systems are becoming more searchable, more connected, and more dependent on vendor infrastructure. ALPR networks, police-tech platforms, education systems, cloud records, AI security tools, identity checks, and data brokers all raise the same question: when a system becomes useful, who gets access next?
Safeguards
A safeguards page works best when it is practical: less data, cleaner boundaries, stronger access controls, and fewer shortcuts.
Require audit logs before launch or renewal
Do not approve surveillance or data systems that cannot answer basic questions: who searched, what they searched, why they searched, what they accessed, whether the result was shared, whether the search was tied to a case number, warrant, emergency, or documented purpose, and whether an outside reviewer can verify the answer.
This applies to ALPR systems, state databases, vendor dashboards, cloud evidence systems, education platforms, AI tools, and data-broker products. A system without usable audit logs does not merely have a technical gap. It has an accountability gap.
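As a sketch of what “usable audit logs” means in practice, the record below carries one field per question above, plus a first-pass reviewer check. The field names are illustrative, not a vendor schema.

```python
# Illustrative audit-log fields; names are hypothetical, not a real schema.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SearchAudit:
    searched_by: str             # who searched
    query: str                   # what they searched
    stated_purpose: str          # why they searched
    records_accessed: int        # what they accessed
    shared_with: Optional[str]   # whether the result was shared, and with whom
    case_number: Optional[str]   # case, warrant, or emergency reference


def flag_for_review(entries):
    """An outside reviewer's first pass: any search without a documented
    purpose or case reference gets flagged."""
    return [e for e in entries
            if not e.stated_purpose or e.case_number is None]


log = [
    SearchAudit("officer.a", "plate ABC123", "stolen vehicle case", 3, None, "24-00187"),
    SearchAudit("analyst.b", "plate XYZ789", "", 12, "outside agency", None),
]
print(len(flag_for_review(log)))  # 1 - the undocumented search stands out
```

The point is that misuse becomes provable: a search without a documented purpose or case reference stands out without anyone having to rely on memory or goodwill.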
Limit outside-agency, federal, immigration-enforcement, and vendor access by default
Access should be narrow by default and expanded only with a clear reason. Local governments should not rely on broad sharing settings, informal assurances, or vendor defaults.
- no outside-agency access unless explicitly approved;
- no immigration-enforcement access unless legally required;
- no vendor access except for documented support needs;
- no sales, demo, training, or product-development use without written permission;
- no data sharing without logs, purpose fields, retention limits, and periodic public reporting.
If a system can be searched by people outside the agency that collected the data, that fact should be visible to elected officials before approval and to the public before renewal.
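One way to express “narrow by default” is as an explicit rule set rather than scattered sharing settings. The sketch below uses hypothetical requester categories and conditions; a real deployment would encode its own in contract language and system configuration.

```python
# Sketch of default-deny access rules; categories and conditions are
# hypothetical, not drawn from any specific contract or product.
RULES = {
    "outside_agency": "explicit governing-body approval",
    "immigration_enforcement": "a specific legal requirement",
    "vendor_support": "a documented support need",
    "vendor_sales_demo_training": "written permission",
}


def access_decision(category: str, condition_documented: bool) -> str:
    """Anything not covered by a rule, or not documented, is denied."""
    if category not in RULES or not condition_documented:
        return "deny"
    return f"allow only with {RULES[category]}, logged and publicly reported"


print(access_decision("outside_agency", condition_documented=False))  # deny
print(access_decision("vendor_support", condition_documented=True))
```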
Treat public data as held in trust
Public agencies collect sensitive information because residents need licenses, utilities, schools, benefits, permits, emergency services, and basic civic infrastructure. That does not mean the data should become a general-purpose resource for brokers, vendors, or enforcement pipelines.
A public-data-trust approach would treat resident data as something government holds on behalf of the public, with duties of loyalty, transparency, narrow use, and enforceable limits. The key question would shift from “can this data be accessed?” to “does this use serve the public purpose for which the data was collected?”
Montana offers one model worth watching. Voters amended the state constitution to explicitly protect electronic data and communications from unreasonable search and seizure. Montana later restricted state law enforcement from purchasing or otherwise acquiring sensitive personal data from brokers without a judicial warrant.
Oregon’s SB 1587 points in the same direction by limiting public-body disclosures of personally identifiable information to data brokers when that information may be used for federal immigration-law enforcement. Legislatures can write clearer rules before sensitive data flows into vendor systems, data brokers, or enforcement pipelines.
Patch exposed infrastructure quickly
The cPanel, Ivanti, Palo Alto, Linux kernel, and water-system cybersecurity stories all point to the same practical safeguard: exposed infrastructure needs a fast patch process. Public officials should expect clear answers: are we exposed, when was it patched, were logs reviewed, were any accounts created or changed, were affected customers notified, what systems depend on this vendor or platform, and what is the backup plan if access is shut down?
Do not invite a permanent record into every meeting by default
AI notetakers and meeting bots can be useful. They can also turn informal discussion into searchable records stored by a vendor. Organizations should decide which meetings may be recorded, who can consent, where transcripts are stored, how long they are retained, whether vendors can use the data, and when legal, personnel, strategy, or sensitive constituent discussions require the bot to be removed.
“There is no cloud – only somebody else’s computer.”
Cloud tools are still computers, databases, employees, contracts, retention policies, subpoenas, breach risks, and access logs. Convenience should not override recordkeeping, consent, privilege, or privacy decisions.
Bottom line
The best safeguards this week are practical and boring by design: collect less data, connect fewer systems, limit outside access, require case numbers or documented purposes, log every search, review the logs, shorten retention, make vendor access visible, treat public data as held in trust, patch exposed systems quickly, and make misuse provable.
Surveillance oversight works best when restraint is built into the system before the data becomes too useful to give up.