Crypto-Gram
February 15, 2023
by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com https://www.schneier.com
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram's web page.
Read this issue on the web
These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.
** *** ***** ******* *********** *************
In this issue:
If these links don't work in your email client, try reading this issue of Crypto-Gram on the web.
Hacked Cellebrite and MSAB Software Released
The FBI Identified a Tor User
AI and Political Lobbying
Security Analysis of Threema
Real-World Steganography
Publishers Weekly Review of A Hacker's Mind
No-Fly List Exposed
Bulk Surveillance of Money Transfers
US Cyber Command Operations During the 2022 Midterm Elections
On Alec Baldwin's Shooting
A Guide to Phishing Attacks
Kevin Mitnick Hacked California Law in 1983
NIST Is Updating Its Cybersecurity Framework
Ransomware Payments Are Down
Passwords Are Terrible (Surprising No One)
AIs as Computer Hackers
Manipulating Weights in Face-Recognition AI Systems
A Hacker's Mind News
Attacking Machine Learning Systems
Malware Delivered through Google Search
SolarWinds and Market Incentives
Mary Queen of Scots Letters Decrypted
Hacking the Tax Code
A Hacker's Mind Is Now Published
On Pig Butchering Scams
What Will It Take?
Upcoming Speaking Engagements
** *** ***** ******* *********** *************
Hacked Cellebrite and MSAB Software Released
[2023.01.16] Cellebrite is a cyberweapons arms manufacturer that sells smartphone forensic software to governments around the world. MSAB is a Swedish company that does the same thing. Someone has released software and documentation from both companies.
** *** ***** ******* *********** *************
The FBI Identified a Tor User
[2023.01.17] No details, though:
According to the complaint against him, Al-Azhari allegedly visited a dark web site that hosts "unofficial propaganda and photographs related to ISIS" multiple times on May 14, 2019. By virtue of being a dark web site -- that is, one hosted on the Tor anonymity network -- it should have been difficult for the site owner or a third party to determine the real IP address of any of the site's visitors.
Yet, that's exactly what the FBI did. It found Al-Azhari allegedly visited the site from an IP address associated with Al-Azhari's grandmother's house in Riverside, California. The FBI also found what specific pages Al-Azhari visited, including a section on donating Bitcoin; another focused on military operations conducted by ISIS fighters in Iraq, Syria, and Nigeria; and another page that provided links to material from ISIS's media arm. Without the FBI deploying some form of surveillance technique, or Al-Azhari using another method to visit the site which exposed their IP address, this should not have been possible.
There are lots of ways to de-anonymize Tor users. Someone at the NSA gave a presentation on this ten years ago. (I wrote about it for the Guardian in 2013, an essay that reads so dated in light of what we've learned since then.) It's unlikely that the FBI uses the same sorts of broad surveillance techniques that the NSA does, but it's certainly possible that the NSA did the surveillance and passed the information to the FBI.
** *** ***** ******* *********** *************
AI and Political Lobbying
[2023.01.18] Launched just weeks ago, ChatGPT is already threatening to upend how we draft everyday communications like emails, college essays and myriad other forms of writing.
Created by the company OpenAI, ChatGPT is a chatbot that can automatically respond to written prompts in a manner that is sometimes eerily close to human.
But for all the consternation over the potential for humans to be replaced by machines in formats like poetry and sitcom scripts, a far greater threat looms: artificial intelligence replacing humans in the democratic processes -- not through voting, but through lobbying.
ChatGPT could automatically compose comments submitted in regulatory processes. It could write letters to the editor for publication in local newspapers. It could comment on news articles, blog entries and social media posts millions of times every day. It could mimic the work that the Russian Internet Research Agency did in its attempt to influence our 2016 elections, but without the agency's reported multimillion-dollar budget and hundreds of employees.
Automatically generated comments aren't a new problem. For some time, we have struggled with bots, machines that automatically post content. Five years ago, at least a million automatically drafted comments were believed to have been submitted to the Federal Communications Commission regarding proposed regulations on net neutrality. In 2019, a Harvard undergraduate, as a test, used a text-generation program to submit 1,001 comments in response to a government request for public input on a Medicaid issue. Back then, submitting comments was just a game of overwhelming numbers.
Platforms have gotten better at removing "coordinated inauthentic behavior." Facebook, for example, has been removing over a billion fake accounts a year. But such messages are just the beginning. Rather than flooding legislators' inboxes with supportive emails, or dominating the Capitol switchboard with synthetic voice calls, an AI system with the sophistication of ChatGPT but trained on relevant data could selectively target key legislators and influencers to identify the weakest points in the policymaking system and ruthlessly exploit them through direct communication, public relations campaigns, horse trading or other points of leverage.
When we humans do these things, we call it lobbying. Successful agents in this sphere pair precision message writing with smart targeting strategies. Right now, the only thing stopping a ChatGPT-equipped lobbyist from executing something resembling a rhetorical drone warfare campaign is a lack of precision targeting. AI could provide techniques for that as well.
A system that can understand political networks, if paired with the textual-generation capabilities of ChatGPT, could identify the member of Congress with the most leverage over a particular policy area -- say, corporate taxation or military spending. Like human lobbyists, such a system could target undecided representatives sitting on committees controlling the policy of interest and then focus resources on members of the majority party when a bill moves toward a floor vote.
Once individuals and strategies are identified, an AI chatbot like ChatGPT could craft written messages to be used in letters, comments -- anywhere text is useful. Human lobbyists could also target those individuals directly. It's the combination that's important: Editorial and social media comments only get you so far, and knowing which legislators to target isn't itself enough.
This ability to understand and target actors within a network would create a tool for AI hacking, exploiting vulnerabilities in social, economic and political systems with incredible speed and scope. Legislative systems would be a particular target, because the motive for attacking policymaking systems is so strong, because the data for training such systems is so widely available and because the use of AI may be so hard to detect -- particularly if it is being used strategically to guide human actors.
The data necessary to train such strategic targeting systems will only grow with time. Open societies generally make their democratic processes a matter of public record, and most legislators are eager -- at least, performatively so -- to accept and respond to messages that appear to be from their constituents.
Maybe an AI system could uncover which members of Congress have significant sway over leadership but still have low enough public profiles that there is only modest competition for their attention. It could then pinpoint the SuperPAC or public interest group with the greatest impact on that legislator's public positions. Perhaps it could even calibrate the size of donation needed to influence that organization or direct targeted online advertisements carrying a strategic message to its members. For each policy end, the right audience; and for each audience, the right message at the right time.
What makes the threat of AI-powered lobbyists greater than the threat already posed by the high-priced lobbying firms on K Street is their potential for acceleration. Human lobbyists rely on decades of experience to find strategic solutions to achieve a policy outcome. That expertise is limited, and therefore expensive.
AI could, theoretically, do the same thing much more quickly and cheaply. Speed out of the gate is a huge advantage in an ecosystem in which public opinion and media narratives can become entrenched quickly, as is being nimble enough to shift rapidly in response to chaotic world events.
Moreover, the flexibility of AI could help achieve influence across many policies and jurisdictions simultaneously. Imagine an AI-assisted lobbying firm that can attempt to place legislation in every single bill moving in the US Congress, or even across all state legislatures. Lobbying firms tend to work within one state only, because there are such complex variations in law, procedure and political structure. With AI assistance in navigating these variations, it may become easier to exert power across political boundaries.
Just as teachers will have to change how they give students exams and essay assignments in light of ChatGPT, governments will have to change how they relate to lobbyists.
To be sure, there may also be benefits to this technology in the democracy space; the biggest one is accessibility. Not everyone can afford an experienced lobbyist, but a software interface to an AI system could be made available to anyone. If we're lucky, maybe this kind of strategy-generating AI could revitalize the democratization of democracy by giving this kind of lobbying power to the powerless.
However, the biggest and most powerful institutions will likely use any AI lobbying techniques most successfully. After all, executing the best lobbying strategy still requires insiders -- people who can walk the halls of the legislature -- and money. Lobbying isn't just about giving the right message to the right person at the right time; it's also about giving money to the right person at the right time. And while an AI chatbot can identify who should be on the receiving end of those campaign contributions, humans will, for the foreseeable future, need to supply the cash. So while it's impossible to predict what a future filled with AI lobbyists will look like, it will probably make the already influential and powerful even more so.
This essay was written with Nathan Sanders, and previously appeared in the New York Times.
Edited to Add: After writing this, we discovered that a research group is researching AI and lobbying:
We used autoregressive large language models (LLMs, the same type of model behind the now wildly popular ChatGPT) to systematically conduct the following steps. (The full code is available at this GitHub link:
https://github.com/JohnNay/llm-lobbyist.)
1. Summarize official U.S. Congressional bill summaries that are too long to fit into the context window of the LLM so the LLM can conduct steps 2 and 3.
2. Using either the original official bill summary (if it was not too long), or the summarized version:
   a. Assess whether the bill may be relevant to a company based on a company's description in its SEC 10K filing.
   b. Provide an explanation for why the bill is relevant or not.
   c. Provide a confidence level to the overall answer.
3. If the bill is deemed relevant to the company by the LLM, draft a letter to the sponsor of the bill arguing for changes to the proposed legislation.
Here is the paper.
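The pipeline is easy to sketch. Here is a minimal Python outline of the three steps; the complete() helper is a stand-in for whatever LLM API you use, and the prompts are my paraphrases, not the authors' actual ones (their real code is at the GitHub link above):

    # Minimal sketch of the paper's three-step loop. complete() stands in
    # for an LLM API call; the prompts are paraphrased, not the authors'.
    def complete(prompt: str) -> str:
        """Placeholder for an LLM call (e.g., a chat-completion request)."""
        raise NotImplementedError

    def lobby(bill_summary: str, company_10k_description: str) -> str | None:
        # Step 1: shorten the official summary if it won't fit in the
        # model's context window (string length as a crude token proxy).
        if len(bill_summary) > 8000:
            bill_summary = complete("Summarize this bill summary:\n" + bill_summary)

        # Step 2: relevance, explanation, and confidence in one query.
        verdict = complete(
            "Company (from its SEC 10-K): " + company_10k_description + "\n"
            "Bill summary: " + bill_summary + "\n"
            "Is this bill relevant to the company? Answer YES or NO, "
            "explain why, and give a confidence level."
        )

        # Step 3: if relevant, draft a letter to the bill's sponsor.
        if verdict.strip().upper().startswith("YES"):
            return complete(
                "Draft a letter to this bill's sponsor arguing for changes "
                "that would benefit the company.\nBill: " + bill_summary +
                "\nCompany: " + company_10k_description
            )
        return None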
** *** ***** ******* *********** *************
Security Analysis of Threema
[2023.01.19] A group of Swiss researchers have published an impressive security analysis of Threema.
We provide an extensive cryptographic analysis of Threema, a Swiss-based encrypted messaging application with more than 10 million users and 7000 corporate customers. We present seven different attacks against the protocol in three different threat models. As one example, we present a cross-protocol attack which breaks authentication in Threema and which exploits the lack of proper key separation between different sub-protocols. As another, we demonstrate a compression-based side-channel attack that recovers users' long-term private keys through observation of the size of Threema encrypted back-ups. We discuss remediations for our attacks and draw three wider lessons for developers of secure protocols.
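The compression side channel is the most interesting one to me. The Threema-specific details are in the paper, but the underlying principle is easy to demonstrate: when attacker-influenced data is compressed together with a secret before encryption, the ciphertext length leaks whether the attacker's data repeats part of the secret. A generic toy, not Threema's actual protocol:

    # Toy compression length-oracle in the spirit of CRIME/BREACH-style
    # attacks -- a generic illustration, not Threema's protocol. Encryption
    # hides content but not length, and compression produces shorter output
    # when attacker-supplied data repeats part of the secret.
    import zlib

    SECRET = b"privatekey=7f3a9c"  # stand-in for a key inside a backup

    def observed_size(attacker_data: bytes) -> int:
        # The attacker sees only the size of the compressed-then-encrypted
        # blob.
        return len(zlib.compress(SECRET + attacker_data))

    for guess in [b"privatekey=7f3a9c", b"privatekey=000000", b"unrelated!"]:
        print(guess, observed_size(guess))
    # The guess matching the secret compresses several bytes shorter; a
    # real attack refines this into byte-by-byte recovery of the key.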
From a news article:
Threema has more than 10 million users, which include the Swiss government, the Swiss army, German Chancellor Olaf Scholz, and other politicians in that country. Threema developers advertise it as a more secure alternative to Meta's WhatsApp messenger. It's among the top Android apps for a fee-based category in Switzerland, Germany, Austria, Canada, and Australia. The app uses a custom-designed encryption protocol in contravention of established cryptographic norms.
The company is performing the usual denials and deflections:
In a web post, Threema officials said the vulnerabilities applied to an old protocol that's no longer in use. It also said the researchers were overselling their findings.
"While some of the findings presented in the paper may be interesting from a theoretical standpoint, none of them ever had any considerable real-world impact," the post stated. "Most assume extensive and unrealistic prerequisites that would have far greater consequences than the respective finding itself."
Left out of the statement is that the protocol the researchers analyzed is old because they disclosed the vulnerabilities to Threema, and Threema updated it.
** *** ***** ******* *********** *************
Real-World Steganography
[2023.01.20] From an article about Zheng Xiaoqing, an American convicted of spying for China:
According to a Department of Justice (DOJ) indictment, the US citizen hid confidential files stolen from his employers in the binary code of a digital photograph of a sunset, which Mr Zheng then mailed to himself.
EDITED TO ADD (2/14): The 2018 criminal complaint has a "Steganography Egress Summary" that spends about two pages describing Zheng's steps (pp. 6-7). That document has some really good detail.
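The classic way to hide a file in image data is least-significant-bit embedding: flip the lowest bit of each pixel byte to carry one payload bit, which changes the picture imperceptibly. A minimal sketch, using a bytearray as a stand-in for raw pixel data (real tools work inside an actual image format, and the complaint doesn't say exactly which technique Zheng used):

    # Minimal least-significant-bit steganography sketch. "pixels" is just
    # a bytearray standing in for raw image data; each byte donates its
    # lowest bit, an imperceptible change to the picture.
    def embed(pixels: bytearray, payload: bytes) -> bytearray:
        bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
        assert len(bits) <= len(pixels), "cover image too small"
        out = bytearray(pixels)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & 0xFE) | bit  # overwrite the low bit only
        return out

    def extract(pixels: bytearray, nbytes: int) -> bytes:
        bits = [p & 1 for p in pixels[:nbytes * 8]]
        return bytes(sum(bits[i * 8 + j] << j for j in range(8))
                     for i in range(nbytes))

    cover = bytearray(range(256)) * 10   # stand-in for "sunset photo" data
    stego = embed(cover, b"confidential file")
    assert extract(stego, 17) == b"confidential file"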
** *** ***** ******* *********** *************
Publishers Weekly Review of A Hacker's Mind
[2023.01.21] Publishers Weekly reviewed A Hacker's Mind -- and it's a starred review!
"Hacking is something that the rich and powerful do, something that reinforces existing power structures," contends security technologist Schneier (Click Here to Kill Everybody) in this excellent survey of exploitation. Taking a broad understanding of hacking as an "activity allowed by the system that subverts the... system," Schneier draws on his background analyzing weaknesses in cybersecurity to examine how those with power take advantage of financial, legal, political, and cognitive systems. He decries how venture capitalists "hack" market dynamics by subverting the pressures of supply and demand, noting that venture capital has kept Uber afloat despite the company having not yet turned a profit. Legal loopholes constitute another form of hacking, Schneier suggests, discussing how the inability of tribal courts to try non-Native individuals means that many sexual assaults of Native American women go unprosecuted because they were committed by non-Native American men. Schneier outlines strategies used by corporations to capitalize on neural processes and "hack... our attention circuits," pointing out how Facebook's algorithms boost content that outrages users because doing so increases engagement. Elegantly probing the mechanics of exploitation, Schneier makes a persuasive case that "we need society's rules and laws to be as patchable as your computer." With lessons that extend far beyond the tech world, this has much to offer.
The book will be published on February 7. Here's the book's webpage. You can order a signed copy from me here.
** *** ***** ******* *********** *************
No-Fly List Exposed
[2023.01.23] I can't remember the last time I thought about the US no-fly list: the list of people so dangerous they should never be allowed to fly on an airplane, yet so innocent that we can't arrest them. Back when I thought about it a lot, I realized that the TSA's practice of giving it to every airline meant that it was not well protected, and it certainly ended up in the hands of every major government that wanted it.
The list is back in the news today, having been left exposed on an insecure airline computer. (The airline is CommuteAir, a company so obscure that I'd never heard of it before.)
This is, of course, the problem with having to give a copy of your secret list to lots of people.
EDITED TO ADD (2/14): The 23-year-old researcher who found NOFLY.csv wrote a blog post about it. This is not the first time the list has become public.
** *** ***** ******* *********** *************
Bulk Surveillance of Money Transfers
[2023.01.24] Just another obscure warrantless surveillance program.
US law enforcement can access details of money transfers without a warrant through an obscure surveillance program the Arizona attorney general's office created in 2014. A database stored at a nonprofit, the Transaction Record Analysis Center (TRAC), provides full names and amounts for larger transfers (above $500) sent between the US, Mexico and 22 other regions through services like Western Union, MoneyGram and Viamericas. The program covers data for numerous Caribbean and Latin American countries in addition to Canada, China, France, Malaysia, Spain, Thailand, Ukraine and the US Virgin Islands. Some domestic transfers also enter the data set.
[...]
You need to be a member of law enforcement with an active government email account to use the database, which is available through a publicly visible web portal. Leber told The Journal that there haven't been any known breaches or instances of law enforcement misuse. However, Wyden noted that the surveillance program included more states and countries than previously mentioned in briefings. There have also been subpoenas for bulk money transfer data from Homeland Security Investigations (which withdrew its request after Wyden's inquiry), the DEA and the FBI.
How is it that Arizona can be in charge of this?
Wall Street Journal podcast -- with transcript -- on the program. I think the original reporting was from last March, but I missed it back then.
** *** ***** ******* *********** *************
US Cyber Command Operations During the 2022 Midterm Elections
[2023.01.25] The head of both US Cyber Command and the NSA, Gen. Paul Nakasone, broadly discussed that first organization's offensive cyber operations during the runup to the 2022 midterm elections. He didn't name names, of course:
"We did conduct operations persistently to make sure that our foreign adversaries couldn't utilize infrastructure to impact us," said Nakasone.
"We understood how foreign adversaries utilize infrastructure throughout the world. We had that mapped pretty well. And we wanted to make sure that we took it down at key times."
Nakasone noted that Cybercom's national mission force, aided by NSA, followed a "campaign plan" to deprive the hackers of their tools and networks.
"Rest assured," he said. "We were doing operations well before the midterms began, and we were doing operations likely on the day of the midterms." And they continued until the elections were certified, he said.
We know Cybercom did similar things in 2018 and 2020, and presumably will again in two years.
** *** ***** ******* *********** *************
On Alec Baldwin's Shooting
[2023.01.26] We recently learned that Alec Baldwin is being charged with involuntary manslaughter for his accidental shooting on a movie set. I don't know the details of the case, nor the intricacies of the law, but I have a question about movie props.
Why was an actual gun used on the set? And why were actual bullets used on the set? Why wasn't it a fake gun: plastic, or metal without a working barrel? Why does it have to fire blanks? Why can't everyone just pretend, and let someone add the bang and the muzzle flash in post-production?
Movies are filled with fakery. The light sabers in Star Wars weren't real; the lighting effects and "wooj-wooj" noises were added afterwards. The phasers in Star Trek weren't real either. Jar Jar Binks was 100% computer generated. So were a gazillion "props" from the Harry Potter movies. Even regular, non-SF, non-magical movies have special effects. They're easy.
Why are guns different?
EDITED TO ADD (2/14): Hollywood has procedures for handling firearms on movie sets. And this CGI recreation provides details on how this gun handling failed to meet industry standards.
** *** ***** ******* *********** *************
A Guide to Phishing Attacks
[2023.01.27] This is a good list of modern phishing techniques.
** *** ***** ******* *********** *************
Kevin Mitnick Hacked California Law in 1983
[2023.01.27] Early in his career, Kevin Mitnick successfully hacked California law. He told me the story when he heard about my new book; he partially recounts it in his 2012 book, Ghost in the Wires.
The setup is that he has just discovered that there's a warrant for his arrest by the California Youth Authority, and he's trying to figure out if there's any way out of it.
As soon as I was settled, I looked in the Yellow Pages for the nearest law school, and spent the next few days and evenings there poring over the Welfare and Institutions Code, but without much hope.
Still, hey, "Where there's a will..." I found a provision that said that for a nonviolent crime, the jurisdiction of the Juvenile Court expired either when the defendant turned twenty-one or two years after the commitment date, whichever occurred later. For me, that would mean two years from February 1983, when I had been sentenced to the three years and eight months.
Scratch, scratch. A little arithmetic told me that this would occur in about four months. I thought, What if I just disappear until their jurisdiction ends?
This was the Southwestern Law School in Los Angeles. This was a lot of manual research -- no search engines in those days. He researched the relevant statutes, and case law that interpreted those statutes. He made copies of everything to hand to his attorney.
I called my attorney to try out the idea on him. His response sounded testy: "You're absolutely wrong. It's a fundamental principle of law that if a defendant disappears when there's a warrant out for him, the time limit is tolled until he's found, even if it's years later."
And he added, "You have to stop playing lawyer. I'm the lawyer. Let me do my job."
I pleaded with him to look into it, which annoyed him, but he finally agreed. When I called back two days later, he had talked to my Parole Officer, Melvin Boyer, the compassionate guy who had gotten me transferred out of the dangerous jungle at LA County Jail. Boyer had told him, "Kevin is right. If he disappears until February 1985, there'll be nothing we can do. At that point the warrant will expire, and he'll be off the hook."
So he moved to Northern California and lived under an assumed name for four months.
What's interesting to me is how he approaches legal code in the same way a hacker approaches computer code: poring over the details, looking for a bug -- a mistake -- leading to an exploitable vulnerability. And this was in the days before you could do any research online. He's spending days in the law school library.
This is exactly the sort of thing I am writing about in A Hacker's Mind. Legal code isn't the same as computer code, but it's a series of rules with inputs and outputs. And just like computer code, legal code has bugs. And some of those bugs are also vulnerabilities. And some of those vulnerabilities can be exploited -- just as Mitnick learned.
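His find reduces to a couple of lines of rules. A toy version of the computation (birthdate as publicly reported; exact dates approximate):

    # The provision Mitnick found, expressed as rules with inputs and
    # outputs: juvenile jurisdiction ends at the LATER of the defendant's
    # 21st birthday or two years after the commitment date.
    from datetime import date

    def jurisdiction_ends(birthday: date, commitment: date) -> date:
        twenty_first = date(birthday.year + 21, birthday.month, birthday.day)
        two_years_on = date(commitment.year + 2, commitment.month, commitment.day)
        return max(twenty_first, two_years_on)

    # Sentenced February 1983, already past his 19th birthday, so the
    # two-year clause controlled:
    print(jurisdiction_ends(date(1963, 8, 6), date(1983, 2, 1)))  # 1985-02-01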
Mitnick was a hacker. His attorney was not.
** *** ***** ******* *********** *************
NIST Is Updating Its Cybersecurity Framework
[2023.01.30] NIST is planning a significant update of its Cybersecurity Framework. At this point, it's asking for feedback and comments on its concept paper.
Do the proposed changes reflect the current cybersecurity landscape (standards, risks, and technologies)?
Are the proposed changes sufficient and appropriate? Are there other elements that should be considered under each area?
Do the proposed changes support different use cases in various sectors, types, and sizes of organizations (and with varied capabilities, resources, and technologies)?
Are there additional changes not covered here that should be considered?
For those using CSF 1.1, would the proposed changes affect continued adoption of the Framework, and how so?
For those not using the Framework, would the proposed changes affect the potential use of the Framework?
The NIST Cybersecurity Framework has turned out to be an excellent resource. If you use it at all, please help with version 2.0.
EDITED TO ADD (2/14): Details on progress and how to engage.
** *** ***** ******* *********** *************
Ransomware Payments Are Down
[2023.01.31] Chainalysis reports that worldwide ransomware payments were down in 2022.
Ransomware attackers extorted at least $456.8 million from victims in 2022, down from $765.6 million the year before.
As always, we have to caveat these findings by noting that the true totals are much higher, as there are cryptocurrency addresses controlled by ransomware attackers that have yet to be identified on the blockchain and incorporated into our data. When we published last year's version of this report, for example, we had only identified $602 million in ransomware payments in 2021. Still, the trend is clear: Ransomware payments are significantly down.
However, that doesn't mean attacks are down, or at least not as much as the drastic drop-off in payments would suggest. Instead, we believe that much of the decline is due to victim organizations increasingly refusing to pay ransomware attackers.
** *** ***** ******* *********** *************
Passwords Are Terrible (Surprising No One)
[2023.02.01] This is the result of a security audit:
More than a fifth of the passwords protecting network accounts at the US Department of the Interior -- including Password1234, Password1234!, and ChangeItN0w! -- were weak enough to be cracked using standard methods, a recently published security audit of the agency found.
[...]
The results weren't encouraging. In all, the auditors cracked 18,174 -- or 21 percent -- of the 85,944 cryptographic hashes they tested; 288 of the affected accounts had elevated privileges, and 362 of them belonged to senior government employees. In the first 90 minutes of testing, auditors cracked the hashes for 16 percent of the department's user accounts.
The audit uncovered another security weakness -- the failure to consistently implement multi-factor authentication (MFA). The failure extended to 25 -- or 89 percent -- of 28 high-value assets (HVAs), which, when breached, have the potential to severely impact agency operations.
Original story:
To make their point, the watchdog spent less than $15,000 on building a password-cracking rig -- a setup of a high-performance computer or several chained together -- with the computing power designed to take on complex mathematical tasks, like recovering hashed passwords. Within the first 90 minutes, the watchdog was able to recover nearly 14,000 employee passwords, or about 16% of all department accounts, including passwords like 'Polar_bear65' and 'Nationalparks2014!'.
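The "standard methods" here are dictionary and rule-based attacks: hash candidate passwords and compare against the stolen hashes. A toy sketch (MD5 only for brevity; a real rig targets whatever hash format the directory stores, and iterates over millions of words and mangling rules instead of this tiny list):

    # Toy dictionary attack: hash candidates, compare to stolen hashes.
    import hashlib

    def h(pw: str) -> str:
        return hashlib.md5(pw.encode()).hexdigest()

    stolen_hashes = {h("Password1234"), h("Password1234!"),
                     h("Nationalparks2014!"), h("kX#9v$Lq2@wZ")}

    words = ["password", "nationalparks", "letmein"]
    rules = [lambda w: w.capitalize() + "1234",    # word + digits
             lambda w: w.capitalize() + "1234!",   # word + digits + symbol
             lambda w: w.capitalize() + "2014!"]   # word + year + symbol

    for word in words:
        for rule in rules:
            candidate = rule(word)
            if h(candidate) in stolen_hashes:
                print("cracked:", candidate)
    # Cracks the three patterned passwords; the random one survives.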
** *** ***** ******* *********** *************
AIs as Computer Hackers
[2023.02.02] Hacker "Capture the Flag" has been a mainstay at hacker gatherings since the mid-1990s. It's like the outdoor game, but played on computer networks. Teams of hackers defend their own computers while attacking other teams'. It's a controlled setting for what computer hackers do in real life: finding and fixing vulnerabilities in their own systems and exploiting them in others'. It's the software vulnerability lifecycle.
These days, dozens of teams from around the world compete in weekend-long marathon events held all over the world. People train for months. Winning is a big deal. If you're into this sort of thing, it's pretty much the most fun you can possibly have on the Internet without committing multiple felonies.
In 2016, DARPA ran a similarly styled event for artificial intelligence (AI). One hundred teams entered their systems into the Cyber Grand Challenge. After completing qualifying rounds, seven finalists competed at the DEFCON hacker convention in Las Vegas. The competition occurred in a specially designed test environment filled with custom software that had never been analyzed or tested. The AIs were given 10 hours to find vulnerabilities to exploit against the other AIs in the competition and to patch themselves against exploitation. A system called Mayhem, created by a team of Carnegie Mellon computer security researchers, won. The researchers have since commercialized the technology, which is now busily defending networks for customers like the U.S. Department of Defense.
There was a traditional human-team capture-the-flag event at DEFCON that same year. Mayhem was invited to participate. It came in last overall, but it didn't come in last in every category all of the time.
I figured it was only a matter of time. It would be the same story we've seen in so many other areas of AI: the games of chess and go, X-ray and disease diagnostics, writing fake news. AIs would improve every year because all of the core technologies are continually improving. Humans would largely stay the same because we remain humans even as our tools improve. Eventually, the AIs would routinely beat the humans. I guessed that it would take about a decade.
But now, five years later, I have no idea if that prediction is still on track. Inexplicably, DARPA never repeated the event. Research on the individual components of the software vulnerability lifecycle does continue. There's an enormous amount of work being done on automatic vulnerability finding. Going through software code line by line is exactly the sort of tedious problem at which machine learning systems excel, if they can only be taught how to recognize a vulnerability. There is also work on automatic vulnerability exploitation and lots on automatic update and patching. Still, there is something uniquely powerful about a competition that puts all of the components together and tests them against others.
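Automatic vulnerability finding predates ML by decades; the simplest form is pattern scanning for known-dangerous constructs, and one way to think about the ML work is that it learns the patterns rather than being handed them. A deliberately trivial sketch of that non-ML baseline:

    # Deliberately trivial "vulnerability finder": grep-style scanning of C
    # source for known-dangerous calls. Real systems -- fuzzers, symbolic
    # execution, and the learned models discussed above -- go far beyond
    # this, but the tedious line-by-line nature of the job is the same.
    import re

    DANGEROUS = {
        r"\bgets\s*\(": "gets(): no bounds check, classic overflow",
        r"\bstrcpy\s*\(": "strcpy(): unbounded copy into a buffer",
        r"\bsystem\s*\(": "system(): possible command injection",
    }

    def scan(source: str) -> list[tuple[int, str]]:
        findings = []
        for lineno, line in enumerate(source.splitlines(), start=1):
            for pattern, why in DANGEROUS.items():
                if re.search(pattern, line):
                    findings.append((lineno, why))
        return findings

    c_code = 'char buf[16];\ngets(buf);\nstrcpy(buf, argv[1]);\n'
    for lineno, why in scan(c_code):
        print("line", lineno, "-", why)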
To see that full pipeline in action, you have to go to China. Since 2017, China has held at least seven of these competitions -- called Robot Hacking Games -- many with multiple qualifying rounds. The first included one team each from the United States, Russia, and Ukraine. The rest have been Chinese only: teams from Chinese universities, teams from companies like Baidu and Tencent, teams from the military. Rules seem to vary. Sometimes human-AI hybrid teams compete.
Details of these events are few. They're Chinese-language only, which naturally limits what the West knows about them. I didn't even know they existed until Dakota Cary, a research analyst at the Center for Security and Emerging Technology and a Chinese speaker, wrote a report about them a few months ago. And they're increasingly hosted by the People's Liberation Army, which presumably controls how much detail becomes public.
Some things we can infer. In 2016, none of the Cyber Grand Challenge teams used modern machine learning techniques. Certainly most of the Robot Hacking Games entrants are using them today. And the competitions encourage collaboration as well as competition between the teams. Presumably that accelerates advances in the field.
None of this is to say that real robot hackers are poised to attack us today, but I wish I could predict with some certainty when that day will come. In 2018, I wrote about how AI could change the attack/defense balance in cybersecurity. I said that it is impossible to know which side would benefit more but predicted that the technologies would benefit the defense more, at least in the short term. I wrote: "Defense is currently in a worse position than offense precisely because of the human components. Present-day attacks pit the relative advantages of computers and humans against the relative weaknesses of computers and humans. Computers moving into what are traditionally human areas will rebalance that equation."
Unfortunately, it's the People's Liberation Army and not DARPA that will be the first to learn if I am right or wrong and how soon it matters.
This essay originally appeared in the January/February 2022 issue of IEEE Security & Privacy.
** *** ***** ******* *********** *************
Manipulating Weights in Face-Recognition AI Systems
[2023.02.03] Interesting research: "Facial Misrecognition Systems: Simple Weight Manipulations Force DNNs to Err Only on Specific Persons":
Abstract: In this paper we describe how to plant novel types of backdoors in any facial recognition model based on the popular architecture of deep Siamese neural networks, by mathematically changing a small fraction of its weights (i.e., without using any additional training or optimization). These backdoors force the system to err only on specific persons which are preselected by the attacker. For example, we show how such a backdoored system can take any two images of a particular person and decide that they represent different persons (an anonymity attack), or take any two images of a particular pair of persons and decide that they represent the same person (a confusion attack), with almost no effect on the correctness of its decisions for other persons. Uniquely, we show that multiple backdoors can be independently installed by multiple attackers who may not be aware of each other's existence with almost no interference.
We have experimentally verified the attacks on a FaceNet-based facial recognition system, which achieves SOTA accuracy on the standard LFW dataset of 99.35%. When we tried to individually anonymize ten celebrities, the network failed to recognize two of their images as being the same person in 96.97% to 98.29% of the time. When we tried to confuse between the extremely different looking Morgan Freeman and Scarlett Johansson, for example, their images were declared to be the same person in 91.51% of the time. For each type of backdoor, we sequentially installed multiple backdoors with minimal effect on the performance of each one (for example, anonymizing all ten celebrities on the same model reduced the success rate for each celebrity by no more than 0.91%). In all of our experiments, the benign accuracy of the network on other persons was degraded by no more than 0.48% (and in most cases, it remained above 99.30%).
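The paper's construction works on deep Siamese networks, but a linear toy shows the flavor of the anonymity attack. Treat the model as a matrix that maps a face's feature vector to an embedding, and decide "same person" by embedding similarity. One algebraic edit to the matrix -- discarding the target's feature direction -- breaks matching for that one person and almost nobody else. All dimensions and thresholds below are invented:

    # Toy linear analogue of the anonymity attack: project the target's
    # feature direction out of the weight matrix, with no retraining. Only
    # that person's photos stop matching. The real attack edits a small
    # fraction of a deep Siamese network's weights.
    import numpy as np

    rng = np.random.default_rng(0)
    D, E = 64, 32                      # feature and embedding dimensions
    W = rng.normal(size=(E, D))        # stand-in for a trained model

    def same_person(W, x1, x2, thresh=0.8):
        e1, e2 = W @ x1, W @ x2
        cos = e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2))
        return bool(cos > thresh)

    def photos(identity, noise=0.02):  # two "photos" of one identity
        return [identity + noise * rng.normal(size=D) for _ in range(2)]

    target = rng.normal(size=D); target /= np.linalg.norm(target)
    other = rng.normal(size=D); other /= np.linalg.norm(other)
    t1, t2 = photos(target)
    o1, o2 = photos(other)

    print(same_person(W, t1, t2), same_person(W, o1, o2))  # True True

    W_bad = W @ (np.eye(D) - np.outer(target, target))     # the backdoor
    print(same_person(W_bad, t1, t2), same_person(W_bad, o1, o2))
    # Expected: False True -- the target no longer matches themselves.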
It's a weird attack. On the one hand, the attacker has access to the internals of the facial recognition system. On the other hand, this is a novel attack in that it manipulates internal weights to achieve a specific outcome. Given that we have no idea how those weights work, it's an important result.
** *** ***** ******* *********** *************
A Hacker's Mind News
[2023.02.03] A Hacker's Mind will be published on Tuesday.
I have done a written interview and a podcast interview about the book. It's been chosen as a "February 2023 Must-Read Book" by the Next Big Idea Club. And an "Editor's Pick" -- whatever that means -- on Amazon.
There have been three reviews so far. I am hoping for more. And maybe even a published excerpt or two.
Amazon and others will start shipping the book on Tuesday. If you ordered a signed copy from me, it is already in the mail.
If you can leave a review somewhere, I would appreciate it.
** *** ***** ******* *********** *************
Attacking Machine Learning Systems
[2023.02.06] The field of machine learning (ML) security -- and corresponding adversarial ML -- is rapidly advancing as researchers develop sophisticated techniques to perturb, disrupt, or steal the ML model or data. It's a heady time; because we know so little about the security of these systems, there are many opportunities for new researchers to publish in this field. In many ways, this circumstance reminds me of the cryptanalysis field in the 1990s. And there is a lesson in that similarity: the complex mathematical attacks make for good academic papers, but we mustn't lose sight of the fact that insecure software will be the likely attack vector for most ML systems.
We are amazed by real-world demonstrations of adversarial attacks on ML systems, such as a 3D-printed object that looks like a turtle but is recognized (from any orientation) by the ML system as a gun. Or adding a few stickers that look like smudges to a stop sign so that it is recognized by a state-of-the-art system as a 45 mi/h speed limit sign. But what if, instead, somebody hacked into the system and just switched the labels for "gun" and "turtle" or swapped "stop" and "45 mi/h"? Systems can only match images with human-provided labels, so the software would never notice the switch. That is far easier and will remain a problem even if systems are developed that are robust to those adversarial attacks.
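Compared with crafting adversarial perturbations, that attack is embarrassingly simple. If you can write to the training data, it's a few lines of poisoning (file names here are hypothetical):

    # Label swapping as data poisoning: trivial if you can write to the
    # training set. File names are hypothetical.
    labels = {"img_001.png": "turtle", "img_002.png": "gun",
              "img_003.png": "stop", "img_004.png": "45 mi/h"}
    swap = {"turtle": "gun", "gun": "turtle",
            "stop": "45 mi/h", "45 mi/h": "stop"}
    poisoned = {img: swap.get(label, label) for img, label in labels.items()}
    print(poisoned)  # every label inverted; training proceeds normally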
At their core, modern ML systems have complex mathematical models that use training data to become competent at a task. And while there are new risks inherent in the ML model, all of that complexity still runs in software. Training data are still stored in memory somewhere. And all of that is on a computer, on a network, and attached to the Internet. Like everything else, these systems will be hacked through vulnerabilities in those more conventional parts of the system.
This shouldn't come as a surprise to anyone who has been working with Internet security. Cryptography has similar vulnerabilities. There is a robust field of cryptanalysis: the mathematics of code breaking. Over the last few decades, we in the academic world have developed a variety of cryptanalytic techniques. We have broken ciphers we previously thought secure. This research has, in turn, informed the design of cryptographic algorithms. The classified world of the NSA and its foreign counterparts has been doing the same thing for far longer. But aside from some special cases and unique circumstances, that's not how encryption systems are exploited in practice. Outside of academic papers, cryptosystems are largely bypassed because everything around the cryptography is much less secure.
I wrote this in my book, Data and Goliath:
The problem is that encryption is just a bunch of math, and math has no agency. To turn that encryption math into something that can actually provide some security for you, it has to be written in computer code. And that code needs to run on a computer: one with hardware, an operating system, and other software. And that computer needs to be operated by a person and be on a network. All of those things will invariably introduce vulnerabilities that undermine the perfection of the mathematics...
This remains true even for pretty weak cryptography. It is much easier to find an exploitable software vulnerability than it is to find a cryptographic weakness. Even cryptographic algorithms that we in the academic community regard as "broken" -- meaning there are attacks that are more efficient than brute force -- are usable in the real world because the difficulty of breaking the mathematics repeatedly and at scale is much greater than the difficulty of breaking the computer system that the math is running on.
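The arithmetic behind that claim is worth making concrete. Suppose an academic attack cuts a 128-bit cipher down to 2^100 operations: a genuine, publishable break. With an optimistic, invented rig doing 10^15 operations per second:

    # Back-of-the-envelope arithmetic; the attack cost and hardware speed
    # are illustrative numbers, not a real cryptanalytic result.
    work = 2 ** 100                    # operations for the "broken" attack
    rate = 10 ** 15                    # hypothetical rig: 10^15 ops/second
    seconds_per_year = 60 * 60 * 24 * 365
    print(work / (rate * seconds_per_year))
    # ~4e7 -- tens of millions of years per key

Hacking the endpoint is faster.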
ML systems are similar. Systems that are vulnerable to model stealing through the careful construction of queries are more vulnerable to model stealing by hacking into the computers they're stored in. Systems that are vulnerable to model inversion -- this is where attackers recover the training data through carefully constructed queries -- are much more vulnerable to attacks that take advantage of unpatched vulnerabilities.
But while security is only as strong as the weakest link, this doesn't mean we can ignore either cryptography or ML security. Here, our experience with cryptography can serve as a guide. Cryptographic attacks have different characteristics than software and network attacks, something largely shared with ML attacks. Cryptographic attacks can be passive. That is, attackers who can recover the plaintext from nothing other than the ciphertext can eavesdrop on the communications channel, collect all of the encrypted traffic, and decrypt it on their own systems at their own pace, perhaps in a giant server farm in Utah. This is bulk surveillance and can easily operate on this massive scale.
On the other hand, computer hacking has to be conducted one target computer at a time. Sure, you can develop tools that can be used again and again. But you still need the time and expertise to deploy those tools against your targets, and you have to do so individually. This means that any attacker has to prioritize. So while the NSA has the expertise necessary to hack into everyone's computer, it doesn't have the budget to do so. Most of us are simply too low on its priorities list to ever get hacked. And that's the real point of strong cryptography: it forces attackers like the NSA to prioritize.
This analogy only goes so far. ML is not anywhere near as mathematically sound as cryptography. Right now, it is a sloppy misunderstood mess: hack after hack, kludge after kludge, built on top of each other with some data dependency thrown in. Directly attacking an ML system with a model inversion attack or a perturbation attack isn't as passive as eavesdropping on an encrypted communications channel, but it's using the ML system as intended, albeit for unintended purposes. It's much safer than actively hacking the network and the computer that the ML system is running on. And while it doesn't scale as well as cryptanalytic attacks can -- and there likely will be a far greater variety of ML systems than encryption algorithms -- it has the potential to scale better than one-at-a-time computer hacking does. So here again, good ML security denies attackers all of those attack vectors.
We're still in the early days of studying ML security, and we don't yet know the contours of ML security techniques. There are really smart people working on this and making impressive progress, and it'll be years before we fully understand it. Attacks come easy, and defensive techniques are regularly broken soon after they're made public. It was the same with cryptography in the 1990s, but eventually the science settled down as people better understood the interplay between attack and defense. So while Google, Amazon, Microsoft, and Tesla have all faced adversarial ML attacks on their production systems in the last three years, that's not going to be the norm going forward.
All of this also means that our security for ML systems depends largely on the same conventional computer security techniques we've been using for decades. This includes writing vulnerability-free software, designing user interfaces that help resist social engineering, and building computer networks that aren't full of holes. It's the same risk-mitigation techniques that we've been living with for decades. That we're still mediocre at it is cause for concern, with regard to both ML systems and computing in general.
I love cryptography and cryptanalysis. I love the elegance of the mathematics and the thrill of discovering a flaw -- or even of reading and understanding a flaw that someone else discovered -- in the mathematics. It feels like security in its purest form. Similarly, I am starting to love adversarial ML and ML security, and its tricks and techniques, for the same reasons.
I am not advocating that we stop developing new adversarial ML attacks. It teaches us about the systems being attacked and how they actually work. They are, in a sense, mechanisms for algorithmic understandability. Building secure ML systems is important research and something we in the security community should continue to do.
There is no such thing as a pure ML system. Every ML system is a hybrid of ML software and traditional software. And while ML systems bring new risks that we haven't previously encountered, we need to recognize that the majority of attacks against these systems aren't going to target the ML part. Security is only as strong as the weakest link. As bad as ML security is right now, it will improve as the science improves. And from then on, as in cryptography, the weakest link will be in the software surrounding the ML system.
This essay originally appeared in the May 2020 issue of IEEE Computer. I forgot to reprint it here.
** *** ***** ******* *********** *************
Malware Delivered through Google Search
[2023.02.07] Criminals using Google search ads to deliver malware isn't new, but Ars Technica declared that the problem has become much worse recently.
The surge is coming from numerous malware families, including AuroraStealer, IcedID, Meta Stealer, RedLine Stealer, Vidar, Formbook, and XLoader. In the past, these families typically relied on phishing and malicious spam that attached Microsoft Word documents with booby-trapped macros. Over the past month, Google Ads has become the go-to place for criminals to spread their malicious wares that are disguised as legitimate downloads by impersonating brands such as Adobe Reader, Gimp, Microsoft Teams, OBS, Slack, Tor, and Thunderbird.
[...]
It's clear that despite all the progress Google has made filtering malicious sites out of returned ads and search results over the past couple decades, criminals have found ways to strike back. These criminals excel at finding the latest techniques to counter the filtering. As soon as Google devises a way to block them, the criminals figure out new ways to circumvent those protections.
** *** ***** ******* *********** *************
SolarWinds and Market Incentives
[2023.02.08] In early 2021, IEEE Security and Privacy asked a number of board members for brief perspectives on the SolarWinds incident while it was still breaking news. This was my response.
The penetration of government and corporate networks worldwide is the result of inadequate cyberdefenses across the board. The lessons are many, but I want to focus on one important one weΓÇÖve learned: the software thatΓÇÖs managing our critical networks isnΓÇÖt secure, and thatΓÇÖs because the market doesnΓÇÖt reward that security.
SolarWinds is a perfect example. The company was the initial infection vector for much of the operation. Its trusted position inside so many critical networks made it a perfect target for a supply-chain attack, and its shoddy security practices made it an easy target.
Why did SolarWinds have such bad security? The answer is because it was more profitable. The company is owned by Thoma Bravo partners, a private-equity firm known for radical cost-cutting in the name of short-term profit. Under CEO Kevin Thompson, the company underspent on security even as it outsourced software development. The New York Times reports that the company's cybersecurity advisor quit after his "basic recommendations were ignored." In a very real sense, SolarWinds profited because it secretly shifted a whole bunch of risk to its customers: the US government, IT companies, and others.
This problem isn't new, and, while it's exacerbated by the private-equity funding model, it's not unique to it. In general, the market doesn't reward safety and security -- especially when the effects of ignoring those things are long term and diffuse. The market rewards short-term profits at the expense of safety and security. (Watch and see whether SolarWinds suffers any long-term effects from this hack, or whether Thoma Bravo's bet that it could profit by selling an insecure product was a good one.)
The solution here is twofold. The first is to improve government software procurement. Software is now critical to national security. Any system of procuring that software needs to evaluate the security of the software and the security practices of the company, in detail, to ensure that they are sufficient to meet the security needs of the network they're being installed in. If these evaluations are made public, along with the list of companies that meet them, all network buyers can benefit from them. It's a win for everybody.
But that isn't enough; we need a second part. The only way to force companies to provide safety and security features for customers is through regulation. This is true whether we want seat belts in our cars, basic food safety at our restaurants, pajamas that don't catch on fire, or home routers that aren't vulnerable to cyberattack. The government needs to set minimum security standards for software that's used in critical network applications, just as it sets software standards for avionics.
Without these two measures, it's just too easy for companies to act like SolarWinds: save money by skimping on safety and security and hope for the best in the long term. That's the rational thing for companies to do in an unregulated market, and the only way to change that is to change the economic incentives.
This essay originally appeared in the March/April 2021 issue of IEEE Security & Privacy. I forgot to publish it here.
** *** ***** ******* *********** *************
Mary Queen of Scots Letters Decrypted
[2023.02.09] This is a neat piece of historical research.
The team of computer scientist George Lasry, pianist Norbert Biermann and astrophysicist Satoshi Tomokiyo -- all keen cryptographers -- initially thought the batch of encoded documents related to Italy, because that was how they were filed at the Bibliothèque Nationale de France.
However, they quickly realised the letters were in French. Many verb and adjectival forms being feminine, regular mention of captivity, and recurring names -- such as Walsingham -- all put them on the trail of Mary. Sir Francis Walsingham was Queen Elizabeth's spymaster.
The code was a simple replacement system in which symbols stand either for letters, or for common words and names. But it would still have taken centuries to crunch all the possibilities, so the team used an algorithm that homed in on likely solutions.
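Here's the skeleton of that kind of hill-climbing solver: start from a random key, swap two mappings at a time, and keep a swap when the decryption scores higher under a language model. The real solver scores with French n-gram statistics and handles symbols that stand for whole words and names; this toy scores with a handful of English bigrams, so on a text this short it often stalls well short of a full solution. The scoring function is where the real work is:

    # Hill-climbing skeleton for a substitution cipher. Toy scoring (a few
    # English bigrams) and a short text mean it often stalls; real solvers
    # use rich n-gram models, simulated annealing, and many restarts.
    import random
    import string

    ENCRYPT = dict(zip(string.ascii_lowercase, "qwertyuiopasdfghjklzxcvbnm"))
    CIPHERTEXT = "".join(ENCRYPT.get(c, c)
                         for c in "the queen wrote many secret letters")

    BIGRAMS = {"th": 3, "he": 3, "qu": 3, "ue": 2, "en": 2, "re": 2,
               "et": 2, "te": 2, "er": 2, "se": 1, "ma": 1, "an": 1}

    def score(key):
        d = "".join(key.get(c, c) for c in CIPHERTEXT)
        return sum(BIGRAMS.get(d[i:i + 2], 0) for i in range(len(d) - 1))

    def hill_climb(steps=20000):
        letters = list(string.ascii_lowercase)
        random.shuffle(letters)
        key = dict(zip(string.ascii_lowercase, letters))  # cipher -> plain
        for _ in range(steps):
            a, b = random.sample(string.ascii_lowercase, 2)
            old = score(key)
            key[a], key[b] = key[b], key[a]
            if score(key) < old:
                key[a], key[b] = key[b], key[a]  # revert a bad swap
        return score(key), "".join(key.get(c, c) for c in CIPHERTEXT)

    print(max(hill_climb() for _ in range(10)))  # best of ten restarts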
Academic paper.
EDITED TO ADD (2/13): More news.
** *** ***** ******* *********** *************
Hacking the Tax Code
[2023.02.10] The tax code isn't software. It doesn't run on a computer. But it's still code. It's a series of algorithms that takes an input -- financial information for the year -- and produces an output: the amount of tax owed. It's incredibly complex code; there are a bazillion details and exceptions and special cases. It consists of government laws, rulings from the tax authorities, judicial decisions, and legal opinions.
Like computer code, the tax code has bugs. They might be mistakes in how the tax laws were written. They might be mistakes in how the tax code is interpreted, oversights in how parts of the law were conceived, or unintended omissions of some sort or another. They might arise from the exponentially huge number of ways different parts of the tax code interact.
A recent example comes from the 2017 Tax Cuts and Jobs Act. That law was drafted in both haste and secret, and quickly passed without any time for review -- or even proofreading. One of the things in it was a typo that accidentally categorized military death benefits as earned income. The practical effect of that mistake is that surviving family members were hit with surprise tax bills of US$10,000 or more.
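Whichever direction the real drafting error ran, reduced to code the shape of the bug is a one-line category mistake. Everything below is invented (toy rates, toy categories); the point is how small the mistake is relative to its cost:

    # Tax code as code, as a toy. Rates and categories are invented; the
    # point is the shape of the bug: one item missing from a category test
    # silently moves income into the wrong rate bucket.
    RATES = {"earned": 0.12, "unearned": 0.37}

    def category(item):
        # The drafters meant to list "survivor_benefit" here too:
        earned_items = ("wages", "salary")
        return "earned" if item in earned_items else "unearned"

    def tax(income):
        return sum(RATES[category(item)] * amount
                   for item, amount in income.items())

    print(tax({"survivor_benefit": 40_000.0}))
    # 14800.0, not the intended 4800.0 -- a $10,000 surprise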
That's a bug, but not a vulnerability. An example of a vulnerability is the "Double Irish with a Dutch Sandwich." It arises from the interactions of tax laws in multiple countries, and it's how companies like Google and Apple have avoided paying U.S. taxes despite being U.S. companies. Estimates are that U.S. companies avoided paying nearly US$200 billion in taxes in 2017 alone.
In the tax world, vulnerabilities are called loopholes. Exploits are called tax avoidance strategies. And there are thousands of black-hat researchers who examine every line of the tax code looking for exploitable vulnerabilities -- tax attorneys and tax accountants.
Some vulnerabilities are deliberately created. Lobbyists are constantly trying to insert this or that provision into the tax code that benefits their clients financially. That same 2017 U.S. tax law included a special tax break for oil and gas investment partnerships, a special exemption that ensures that fewer than 1 in 1,000 estates will have to pay estate tax, and language specifically expanding a pass-through loophole that industry uses to incorporate companies offshore and avoid U.S. taxes. That's not hacking the tax code. It's hacking the processes that create it: the legislative process that creates tax law.
We know the processes to use to fix vulnerabilities in computer code. Before the code is finished, we can employ some sort of secure development processes, with automatic bug-finding tools and maybe source code audits. After the code is deployed, we might rely on vulnerability finding by the security community, perhaps bug bounties -- and most of all, quick patching when vulnerabilities are discovered.
What does it mean to "patch" the tax code? Passing any tax legislation is a big deal, especially in the United States, where the issue is so partisan and contentious. (That 2017 earned income tax bug for military families hasn't yet been fixed. And that's an easy one; everyone acknowledges it was a mistake.) We don't have the ability to patch tax code with anywhere near the same agility that we have to patch software.
We can patch some vulnerabilities, though. The other way tax code is modified is by IRS and judicial rulings. The 2017 tax law capped income tax deductions for property taxes. This provision didn't come into force until 2018, so someone came up with the clever hack to prepay 2018 property taxes in 2017. Just before the end of the year, the IRS ruled about when that was legal and when it wasn't. Short answer: most of the time, it wasn't.
There's another option: that the vulnerability isn't patched and isn't explicitly approved, and slowly becomes part of the normal way of doing things. Lots of tax loopholes end up like this. Sometimes they're even given retroactive legality by the IRS or Congress after a constituency and lobbying effort gets behind them. This process is how systems evolve. A hack subverts the intent of a system. Whatever governing system has jurisdiction either blocks the hack or allows it -- or does nothing, and the hack becomes the new normal.
Here's my question: what happens when artificial intelligence and machine learning (ML) gets hold of this problem? We already have ML systems that find software vulnerabilities. What happens when you feed an ML system the entire U.S. tax code and tell it to figure out all of the ways to minimize the amount of tax owed? Or, in the case of a multinational corporation, feed it the entire planet's tax codes? What sort of vulnerabilities would it find? And how many? Dozens or millions?
In 2015, Volkswagen was caught cheating on emissions control tests. It didn't forge test results; it got the cars' computers to cheat for them. Engineers programmed the software in the car's onboard computer to detect when the car was undergoing an emissions test. The computer then activated the car's emissions-curbing systems, but only for the duration of the test. The result was that the cars had much better performance on the road at the cost of producing more pollution.
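Press accounts describe the defeat device as exactly this kind of conditional. A sketch, with all names and the detection heuristic invented; the point is how little code "behave only during the test" requires:

    # Defeat-device logic, sketched. The detection heuristic and names are
    # invented; press accounts describe recognizing dyno conditions such
    # as driven wheels turning while the steering wheel never moves.
    from dataclasses import dataclass

    @dataclass
    class Sensors:
        speed: float              # km/h
        steering_angle: float     # degrees
        driven_wheel_slip: float  # driven vs. undriven wheel speed delta

    def looks_like_dyno_test(s: Sensors) -> bool:
        return s.speed > 0 and s.steering_angle == 0 and s.driven_wheel_slip > 0

    def emissions_mode(s: Sensors) -> str:
        return "full_controls" if looks_like_dyno_test(s) else "performance"

    print(emissions_mode(Sensors(50, 0.0, 30.0)))  # "full_controls" (test)
    print(emissions_mode(Sensors(50, 4.2, 0.0)))   # "performance" (road)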
ML will result in lots of hacks like this. TheyΓÇÖll be more subtle. TheyΓÇÖll be even harder to discover. ItΓÇÖs because of the way ML systems optimize themselves, and because their specific optimizations can be impossible for us humans to understand. Their human programmers wonΓÇÖt even know whatΓÇÖs going on.
Any good ML system will naturally find and exploit hacks, because its only constraints are the rules of the system. If there are problems, inconsistencies, or loopholes in the rules, and if those properties lead to a "better" solution as defined by the program, then the system will find them. The challenge is that you have to define the system's goals completely and precisely, and that's effectively impossible.
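A toy illustration of that point, in the spirit of well-known "specification gaming" examples (the scenario is invented): the designer rewards a cleaning robot per unit of dirt collected, but nothing in the rules says the dirt has to stay collected.

    # Each strategy returns the reward the rules actually pay out.
    strategies = {
        "clean once":          lambda: 10,        # intended behavior
        "clean thoroughly":    lambda: 12,        # better intended behavior
        "dump and re-collect": lambda: 10 * 100,  # loophole: collect the
                                                  # same dirt 100 times
    }

    best = max(strategies, key=lambda name: strategies[name]())
    print(best, "->", strategies[best]())  # the hack wins: 1000

The optimizer isn't malicious; it is doing exactly what it was told. The gap between what was told and what was meant is where the hack lives.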
The tax code can be hacked. Financial market regulations can be hacked. The market economy, democracy itself, and our cognitive systems can all be hacked. Tasking an ML system to find new hacks against any of these is still science fiction, but it's not stupid science fiction. And ML will drastically change how we need to think about policy, law, and government. Now's the time to figure out how.
This essay originally appeared in the September/October 2020 issue of IEEE Security & Privacy. I wrote it when I started writing my latest book, but never published it here.
** *** ***** ******* *********** *************
A Hacker's Mind Is Now Published
[2023.02.10] Tuesday was the official publication date of A Hacker's Mind: How the Powerful Bend Society's Rules, and How to Bend Them Back. It broke into the 2000s on the Amazon best-seller list.
Reviews in the New York Times, Cory Doctorow's blog, Science, and the Associated Press.
I wrote essays related to the book for CNN and John Scalzi's blog.
Two podcast interviews: Keen On and Lawfare. And a written interview for the Ash Center at the Harvard Kennedy School.
Lots more coming, I believe. Get your copy here.
And -- last request -- if people here could leave Amazon reviews, I would appreciate it.
** *** ***** ******* *********** *************
On Pig Butchering Scams
[2023.02.13] "Pig butchering" is the colorful name given to online cons that trick the victim into giving money to the scammer, thinking it is an investment opportunity. It's a rapidly growing area of fraud, and getting more sophisticated.
** *** ***** ******* *********** *************
What Will It Take?
[2023.02.14] What will it take for policy makers to take cybersecurity seriously? Not minimal-change seriously. Not here-and-there seriously. But really seriously. What will it take for policy makers to take cybersecurity seriously enough to enact substantive legislative changes that would address the problems? It's not enough for the average person to be afraid of cyberattacks. They need to know that there are engineering fixes -- and that's something we can provide.
For decades, I have been waiting for the "big enough" incident that would finally do it. In 2015, Chinese military hackers hacked the Office of Personnel Management and made off with the highly personal information of about 22 million Americans who had security clearances. In 2016, the Mirai botnet leveraged millions of Internet-of-Things devices with default admin passwords to launch a denial-of-service attack that disabled major Internet platforms and services in both North America and Europe. In 2017, hackers -- years later we learned that it was the Chinese military -- hacked the credit bureau Equifax and stole the personal information of 147 million Americans. In recent years, ransomware attacks have knocked hospitals offline, and many articles have been written about Russia inside the U.S. power grid. And last year, the Russian SVR hacked thousands of sensitive networks inside civilian critical infrastructure worldwide in what we're now calling Sunburst (and used to call SolarWinds).
Those are all major incidents to security people, but think about them from the perspective of the average person. Even the most spectacular failures don't affect 99.9% of the country. Why should anyone care if the Chinese have his or her credit records? Or if the Russians are stealing data from some government network? Few of us have been directly affected by ransomware, and a temporary Internet outage is just temporary.
Cybersecurity has never been a campaign issue. It isn't a topic that shows up in political debates. (There was one question in a 2016 Clinton-Trump debate, but the response was predictably unsubstantive.) This just isn't an issue that most people prioritize, or even have an opinion on.
So, what will it take? Many of my colleagues believe that it will have to be something with extreme emotional intensity -- sensational, vivid, salient -- that results in large-scale loss of life or property damage. A successful attack that actually poisons a water supply, as someone tried to do in February 2021 by raising the levels of lye at a Florida water-treatment plant. (That one was caught early.) Or an attack that disables Internet-connected cars at speed, something researchers demonstrated in 2015. Or an attack on the power grid, similar to what Russia did to Ukraine in 2015 and 2016. Will it take gas tanks exploding and planes falling out of the sky for the average person to read about the casualties and think "that could have been me"?
Here's the real problem. For the average nonexpert -- and in this category I include every lawmaker -- to push for change, they not only need to believe that the present situation is intolerable, they also need to believe that an alternative is possible. Real legislative change requires a belief that the never-ending stream of hacks and attacks is not inevitable, that we can do better. And that will require creating working examples of secure, dependable, resilient systems.
Providing alternatives is how engineers help facilitate social change. We could never have eliminated sales of tungsten-filament household light bulbs if fluorescent and LED replacements hadn't become available. Reducing the use of fossil fuel for electricity generation requires working wind turbines and cost-effective solar cells.
We need to demonstrate that it's possible to build systems that can defend themselves against hackers, criminals, and national intelligence agencies; secure Internet-of-Things systems; and systems that can reestablish security after a breach. We need to prove that hacks aren't inevitable, and that our vulnerability is a choice. Only then can someone decide to choose differently. When people die in a cyberattack and everyone asks "What can be done?" we need to have something to tell them.
We don't yet have the technology to build a truly safe, secure, and resilient Internet and the computers that connect to it. Yes, we have lots of security technologies. We have older secure systems -- anyone still remember Apollo's Domain/OS and MULTICS? -- that lost out in a market that didn't reward security. We have newer research ideas and products that aren't successful because the market still doesn't reward security. We have even newer research ideas that won't be deployed, again, because the market still prefers convenience over security.
What I am proposing is something more holistic, an engineering research task on a par with the Internet itself. The Internet was designed and built to answer this question: Can we build a reliable network out of unreliable parts in an unreliable world? It turned out the answer was yes, and the Internet was the result. I am asking a similar research question: Can we build a secure network out of insecure parts in an insecure world? The answer isn't obviously yes, but it isn't obviously no, either.
While any successful demonstration will include many of the security technologies we already know about and wish would see wider use, it's much more than that. Creating a secure Internet ecosystem goes beyond old-school engineering to encompass the social sciences. It will include significant economic, institutional, and psychological considerations that just weren't present in the first few decades of Internet research.
Cybersecurity isn't going to get better until the economic incentives change, and that's not going to change until the political incentives change. The political incentives won't change until there is political liability that comes from voter demands. Those demands aren't going to be solely the results of insecurity. They will also be the result of believing that there's a better alternative. It is our task to research, design, build, test, and field that better alternative -- even though the market couldn't care less right now.
This essay originally appeared in the May/June 2021 issue of IEEE Security & Privacy. I forgot to publish it here.
** *** ***** ******* *********** *************
Upcoming Speaking Engagements
[2023.02.14] This is a current list of where and when I am scheduled to speak:
I'm speaking at Mobile World Congress 2023 in Barcelona, Spain, on March 1, 2023 at 1:00 PM CET.
I'm speaking on "How to Reclaim Power in the Digital World" at EPFL in Lausanne, Switzerland, on Thursday, March 16, 2023, at 5:30 PM CET.
I'm speaking at IT-S Now 2023 in Vienna, Austria, on June 1-2, 2023.
The list is maintained on this page.
** *** ***** ******* *********** *************
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram's web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a "security guru" by the Economist. He is the author of over one dozen books -- including his latest, A Hacker's Mind -- as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.
Copyright © 2023 by Bruce Schneier.