Software on smartphones, computers, and commercial equipment is riddled with defects. While tech companies regularly update products to fix known vulnerabilities, these flaws give attackers new ways of infiltrating emails, corporate networks, or critical infrastructure.
It’s not just malicious hackers who use vulnerabilities. Cybersecurity firms, tech companies, law enforcement agencies, defense contractors, and governments worldwide take advantage of them, too. Security flaws may give federal agents ways to infiltrate terrorists’ digital communications or track criminals’ smartphones, but they can also be deployed to spy on journalists, activists, and dissidents. And because bugs are so valuable, the hunt for them is driving a multimillion dollar industry.
In a joint project between The Christian Science Monitor’s Passcode team and Northwestern University’s Medill School of Journalism, we explore the growing arms race to discover software vulnerabilities – and what it means for national security and everyone’s digital privacy and safety.
By day, John Bambenek is a successful, if pretty ordinary, cybersecurity professional. The 39-year-old father of four spends most of his days trying to safeguard a bevy of corporate clients from malicious hackers. He analyzes digital threats for the Bethesda, Md., firm Fidelis Cybersecurity and runs his own security consultancy from his suburban Champaign, Ill., home. He’s active on LinkedIn and Twitter and once even ran for the Illinois state senate – as a Republican.
But Bambenek is also a digital vigilante. Part of an exclusive but growing group of skilled coders who moonlight for the FBI, he is quietly playing both defense and offense. If federal investigators need technical assistance to pursue targets, Bambenek infiltrates their networks, hacks websites, and stalks their digital footprints. He spends hours hunting for vulnerabilities in commonly used software – and turning them into spying tools that steal critical and compromising data.
He’s helped the FBI dismantle two crime rings by writing software tools that take advantage of so-called “zero-days,” or unknown vulnerabilities in digital products that manufacturers have not fixed. He says that investigators are currently using one of his surveillance tools to monitor the digital trails of 39 criminal operations.
In one case, in September 2015, Bambenek discovered a flaw in a website where a European criminal software syndicate sold surveillance technology to governments and nefarious hackers. Within minutes, he broke in and uncovered a virtual Rolodex of customers, payments, and products, all of which he turned over to the FBI.
Still, deploying vulnerabilities to hack digital networks, even those used by suspected criminals, raises serious legal questions. The spyware syndicate operation, he admits, is “definitely a gray area ... . Yeah, I work for a security company, but nobody’s protecting Joe Sixpack, and you really can’t protect them. All you can do is just go after the criminals and change the dynamics, hopefully.”
Much of Bambenek’s work is possible because the technology that underpins most of daily life – the websites we all use, the apps on our smartphones, the software on our laptops, and industrial systems running critical infrastructure – is flawed. It’s rife with vulnerabilities. It has bugs and shoddy code. For law enforcement, those flaws might provide a way of tapping a criminal’s smartphone. But for repressive governments, bugs could help them break into activists’ email accounts or spy on their Skype calls.
And in an age in which criminal hackers or intelligence operatives may be able to influence entire democracies by breaching and leaking sensitive emails, such as those in the Democratic National Committee’s mailbox, software flaws are increasingly valuable to both the people trying to break into digital networks and those trying to defend them.
“Much of cybersecurity can be reduced to a constant race between the software developers and security experts trying to discover and patch vulnerabilities, and the attackers seeking to uncover and exploit those vulnerabilities,” says a recent report from New America, a Washington think tank. “There is a wide range of actors seeking to discover security flaws in software, whether to fix them, exploit them, or sell them to someone else who will fix or exploit them.”
The vulnerability marketplace
That demand for vulnerabilities helps drive the lucrative marketplace for malicious software that generates as much as $100 million annually in sales, estimates Trey Herr, a coauthor of the New America report and fellow with the Belfer Center’s Cyber Security Project at the Harvard Kennedy School.
Even though some hackers are selling vulnerabilities back to tech companies in so-called “bug bounty” programs, much of the top-dollar trade in vulnerabilities takes place in the digital underground. “The most expensive, the most interesting transactions ... take place in the darkest corners and the farthest from public view,” he says.
The value of a software defect often depends on how much the seller has weaponized it, and how valuable the targeted system is to a particular mission, according to insiders. An exploit that can crack an iPhone without the victim noticing could be worth double the price of the flaw alone, says Jared DeMott, a former vulnerability exploit analyst at the National Security Agency (NSA) and government contractor Harris Corporation. He now serves as chief technology officer of Binary Defense Systems.
“If you want to maximize your profit, let’s say you have a good bug, like a really juicy bug. It’s a zero-day for the iPhone. That’s super juicy. That’d be worth a ton of money,” says Mr. DeMott.
In August, Apple launched a bug bounty program that would pay researchers as much as $200,000 for finding vulnerabilities in its products. But some third-party bug buyers, who sell software secrets to governments, might shell out more than seven times that amount. Zerodium, one of the most popular online marketplaces for vulnerabilities, is currently offering security researchers $1.5 million for a “fully functional zero-day exploit” that can crack iOS 10, Apple’s mobile operating system.
Because these vulnerabilities often provide the building blocks for hacking tools that government agencies use to carry out espionage activities, former exploit suppliers believe the NSA and other spy agencies around the world are active buyers.
“I have very little doubt that the US government, along with many other governments in the world, buys zero-days on the black market,” says Michael Borohovski, who worked in the intrusion operations division of ManTech International Corp., a government contractor, between 2009 and 2011.
The number of cyberexploits the US spy community buys in a single year can vary depending on the type of intelligence operations underway, say insiders. A single operation might require multiple zero-days.
“One would get you some level of access, the other would escalate access, and so on and so forth,” says Mr. Borohovski. For instance, he said, ManTech held a number of contracts with a range of government agencies where the order “was to perform vulnerability discovery, develop exploits and then weaponize them as needed.”
Often, by the time ManTech tried to deliver a software exploit to the customer, the security vulnerability had already been found, so the contractor would go back to the drawing board and try to find new entryways, he said. On “Patch Tuesday,” the day each month when software developers such as Adobe and Microsoft update commercial software, “sometimes they’ll simply fix the bugs that you spent the last six months weaponizing,” says Borohovski.
“I always call it a cat-and-mouse game between the manufacturers and us. Us being able to find a vulnerability, being able to utilize that to assist law enforcement, before they close the door,” says Lee Reiber, chief operating officer of Oxygen Forensics, one of the many smaller firms in the growing market of phone hacking technology. “Then we start back over again to find it,” hunting for other vulnerabilities and zero-days.
Oxygen Forensics recently helped local police officers extract data from an Android smartphone as part of a kidnapping investigation. The firm cracked the phone in minutes. Across the forensics industry, it typically takes about six hours to physically extract and decode the data on a 16GB Android device. A tool known as the “Android Jet Imager” decodes smartphone contents within seven minutes, according to Oxygen.
The large federal defense contractor MITRE licenses the Android decoder to Oxygen. While building the technology, MITRE consulted numerous federal forensics laboratories, federal law enforcement agencies, and Pentagon officials who all stressed a need for “rapid acquisition and analysis of Android devices,” Mark Guido, principal cybersecurity engineer at MITRE, said in an email.
The speed of MITRE’s method relies on, among other things, an algorithm that limits the amount of unnecessary data blocks sent from the smartphone to the receiving computer. “The applications for our approach include crime scenes, border crossings, and any other situation where performing a mobile forensic acquisition is time-sensitive,” according to the Aug. 7, 2016, issue of Journal of Digital Investigation.
Typically, the federal government doesn’t ask a cybersecurity contractor for a bug, say former suppliers. It approaches a contractor with a goal. The objective might be, hypothetically speaking, “we need to get past a certain antivirus tool” or “get into an internet cafe through some particular method,” says Borohovski, now an executive at cyberdefense firm Tinfoil Security.
As the US government’s interest in these kinds of digital operations increases, military contractors have scooped up boutique firms that concentrate on spy technology, industry insiders say. ManTech, for instance, bought exploit researcher Oceans Edge last June, and Raytheon purchased bug developer SI Government Solutions in 2008; neither company disclosed the purchase price. What’s more, defense contractors ManTech, Booz Allen Hamilton, Harris, and Raytheon have all landed formal US government contracts to infiltrate targeted software, say former employees there.
ManTech did not respond to requests for comment and the other three corporations declined to discuss offensive cyber contracts or capabilities, most of which are classified.
While controversial, it’s legal to peddle security exploits to most governments, companies and individuals – as long as the seller doesn’t use them. “Selling software itself is not generally a crime,” says Peter Swire, a professor of law and ethics at the Georgia Institute of Technology. Still, he says, “hacking into a protected computer is a crime.”
A blanket US antihacking law, the Computer Fraud and Abuse Act, prohibits breaking into someone else’s computer without consent but doesn’t forbid playing around with code that has the potential to, say, hack a firewall.
What’s more, US intelligence agencies such as the NSA are free to break into computers, phones, and networks to pursue foreign espionage operations. And if local and federal law enforcement agencies such as the Secret Service or the FBI obtain a warrant, they can also use vulnerabilities to hack a suspect’s computer or mobile device.
Taming the vulnerabilities game
But there are limits on where cybersecurity firms can sell their digital exploits. Multinational companies that do business in the US can’t sell military equipment, including offensive software, to certain sanctioned countries such as Iran, Russia, or North Korea.
“Defense companies are regulated in what they can sell abroad, because everything they do is viewed as a defense service and thus restricted under ITAR,” or the International Traffic in Arms Regulations, which require government approval for an export license, says retired Rear Adm. Bill Leigher, a Raytheon cybersecurity director with a background in Navy signals intelligence.
As with any tool designed for military and civilian uses, there are dangers of these hacking programs falling into the wrong hands. To be sure, the misuse of government-grade exploits unnerves many civil liberties groups.
“Governments shouldn’t be able to use them to crack down on free speech or dissidents,” says Andrew Crocker, staff attorney for the Electronic Frontier Foundation (EFF), which is suing the Ethiopian government on behalf of a blogger, now residing in Maryland, who alleges his Skype communications were tapped through malware made by German surveillance-tech company Gamma Group.
In an effort to better balance public safety with national security needs, Georgia Tech’s Swire suggested that decisions about whether the government should inform tech companies about software flaws not be left solely to intelligence agencies.
Swire served on President Obama’s 2013 Review Group on Intelligence and Communications Technologies, a response to the Edward Snowden leaks about the NSA’s global surveillance operations. He recommended the government weigh the costs and benefits of telling citizens about software holes through a series of multiagency reviews, which is now known as the Vulnerability Equities Process. While still a secretive discussion, the process now includes representation from Commerce and other civilian departments.
The FBI and the Office of the Director of National Intelligence, which oversees the intelligence community, declined to comment for this story.
Even when it has a firm argument for keeping a software defect quiet, the government faces the possibility of its closeted cyberespionage arsenal escaping into the wild. Just this summer there was an online leak of alleged NSA hacking tools by a mysterious group calling itself the “Shadow Brokers.” The US government also identified some of the blueprints for the tools among the immense cache of classified material that ex-Booz Allen contractor Harold Martin allegedly took home from the NSA.
“When governments acquire these things they take a gamble that no one else is going to find out about them or that they won’t be stolen or leaked,” says Mr. Crocker of the EFF. “Now I’m not saying that they should never retain vulnerabilities in the first place, but that that’s a risk that has to be understood.”
Greg Scarlatoiu stared at his computer in disbelief. It was about 4 a.m. on April 20, 2016, and Mr. Scarlatoiu — an early riser — had just brewed a cup of coffee. He logged onto his ASUS laptop and immediately noticed the computer’s media player had been opened 51 times, along with a single Microsoft Word document titled “Assad,” a reference to Syrian President Bashar al-Assad.
Sitting in a Buenos Aires hotel room that morning, the executive director of the Committee for Human Rights in North Korea realized he had been hacked. “The first time I saw it, I was not 100 percent sure that somebody had hacked into my computer,” Scarlatoiu said. “Freaky things happen, you’ve seen basically computers act up.”
But Scarlatoiu — whose committee of US-based foreign policy specialists promotes human rights in North Korea and fights to increase citizen access to information — has been a victim of hacking before. In March 2013, his committee’s website had been vandalized by North Korea as a result of a massive cyberattack meant for targets in South Korea. A banner reading “Hitman 007—Kingdom of Morocco” was placed on all sections of the website. It took 10 hours to remove. The meaning behind the digital graffiti remains a mystery.
So, in April, he knew what to do. First, he contacted his security team. They told him his computer had been remotely accessed and he had to stop using it, remove the battery, and get a new laptop. He complied. “You feel vulnerable,” he said. “You always wonder whether there’s something you could have done to stay safer. You always wonder whether you made a mistake, you should’ve been more careful.
“It’s a temporary feeling of vulnerability and insecurity that eventually has to go away very quickly because you have to take quick and prompt action, make sure you protect yourself, make sure you protect others.”
Over the past several years, governments around the world have increasingly turned to hacking tools as ways to effectively spy on activists, journalists, and other high-value targets. In particular, governments that do not have freedom of speech protections in place — such as North Korea — are homing in on rights groups that may operate in the West. Repressive regimes sometimes view those groups as threats or as assets that hold valuable information on dissidents and other political activists.
Like Scarlatoiu’s organization, many of these rights groups have neither strong digital defenses against cyberattacks nor the financial resources to keep themselves safe online, said John Scott-Railton, a senior researcher with the Citizen Lab at the University of Toronto’s Munk School of Global Affairs.
Mr. Scott-Railton said the technology needed to target activists and groups is “the bare minimum,” and more often than not, victims are targeted with phishing emails — messages containing bad links and malware that attempt to harvest confidential user data.
For civil society organizations confronting repressive regimes, being hacked can be “devastating,” Scott-Railton said. It can result in the loss of sensitive information, the disclosure of sources’ names or even a physical threat, he said.
It can also cause funding to dry up.
When Sony Pictures was attacked by North Korean state-sponsored hackers in November 2014, the Committee for Human Rights in North Korea felt the impact on its purse strings, Scarlatoiu said. The committee — which openly challenges North Korea on human rights issues — lost a few significant donors who were “afraid for their own safety, the safety of their families, the safety of people working for their organizations,” he said.
“Even when one is not directly targeted, there is collateral damage,” Scarlatoiu added.
Although it’s hard to pin down whether hacks of civil society organizations and activists have increased, Scott-Railton said Citizen Lab’s research shows hacking goes up in times of political polarization. Given the nature of the 2016 election, it is “not unreasonable” to expect that this problem will be much more visible in the United States in the next few years, he said.
Syria is a prime example. The civil war between the government, the opposition and ISIS shows no signs of slowing down. The crisis has led to intervention by a number of foreign governments, paving the way for security breaches.
According to Scarlatoiu, North Korea’s interest in Syria stems from its involvement with Assad’s government. It has been reported that North Korean troops are fighting alongside Syrian forces. There are also reports of a park dedicated to Kim Il-sung, North Korea’s founder, in downtown Damascus, the Syrian capital. Luckily, Scarlatoiu’s hacked Word document didn’t contain any sensitive information that interfered with his mission, he said.
Scarlatoiu has been working with various cybersecurity experts, not only to increase his digital defenses, but also to get a better sense of who was behind the attack.
The timing and subject matter of the document points to North Korea as the perpetrator, and North Korean diplomats have expressed “profound displeasure” with the committee’s work, he said.
Still, he said, given the challenge of attributing cyberattacks there is no way to be certain. The attackers could have been anyone from freelance hackers to North Korean officials.
But either way, there had to have been some type of government involvement in the hacks, Scarlatoiu said. “I sometimes compare this situation to the pre-World War I situation when devastating technology, devastating tools of death, were available and the world was completely unaware,” Scarlatoiu said. “Government-sponsored hackers can do tremendous damage to the United States, to US citizens.”
In August, the National Security Agency (NSA) found itself scrambling to figure out how a group dubbed the Shadow Brokers obtained the agency’s alleged hacking tools, some of which they posted online and others they offered to the highest bidder. The startling breach not only revealed that the NSA seemed to rely on previously unknown security vulnerabilities – called zero-days – in Cisco and Fortinet commercial software to carry out digital espionage campaigns, it also exposed NSA tactics to foreign adversaries.
But the breach may have been most significant — at least in the short term — to networking giant Cisco and digital security firm Fortinet and their customers. The Shadow Brokers revealed unpatched flaws in their systems that criminal hackers and foreign spies could exploit. It remains unclear whether the NSA used these tools for surveillance operations, but it appears the agency kept the flaws from the software vendors, depriving them of a chance to patch their systems.
This dispute between the US intelligence community and the tech sector has gone on for more than a decade. In April 2014, White House Cybersecurity Coordinator Michael Daniel published a blog post detailing the general guidelines by which the US government determines whether to disclose a flaw. The process is known as the Vulnerabilities Equities Process (VEP).
“Disclosing a vulnerability can mean that we forego an opportunity to collect crucial intelligence that could thwart a terrorist attack,” he wrote. But even Mr. Daniel recognized the potential problem of hoarding too many of these flaws, saying that “building up a huge stockpile of undisclosed vulnerabilities while leaving the internet vulnerable and the American people unprotected would not be in our national security interest.”
Daniel listed nine criteria that agencies – which may include representatives from the NSA, CIA, FBI, and Homeland Security – involved with the VEP take into account when deciding whether to disclose a vulnerability. The blog post says the agency that finds a vulnerability considers “how much the vulnerable system [is] used in the core internet infrastructure … in the US economy, and/or in national security systems.” The agencies also consider whether the vulnerability poses a significant risk if left unpatched.
So, how many zero-days does the NSA keep?
“Nobody has any idea,” said Bruce Schneier, a noted cybersecurity researcher and cryptographer. “Well, some people do — they won’t tell you because it’s classified. So anybody who tells you that they have an idea, doesn’t know ... I wish we did, but we don’t.”
But in 2015, NSA Director Adm. Michael Rogers said the agency discloses 91 percent of the serious flaws it finds. Yet that leaves one big question: Does it disclose 91 percent of 10 flaws, or 91 percent of 10,000 flaws? Or does it keep even more vulnerabilities? Jason Healey, a senior research scholar at Columbia University’s School of International and Public Affairs who looked into that question, says his research indicates that the government hangs onto only a few dozen zero-days, at most.
“It didn’t really seem reasonable that NSA is keeping like 5,000,” Healey said. “That means that they would be keeping so many, and we would only be discovering a tiny, tiny, tiny, tiny fraction of them.”
There’s also no indication of how long the NSA waits to disclose a vulnerability.
Ari Schwartz, a former White House cybersecurity adviser, said that most documents related to the VEP are classified for national security reasons. Mr. Schwartz, currently the managing director of cybersecurity services at the law firm Venable, said the exact groups involved in the VEP can’t be disclosed because the government doesn’t want adversaries to “game the system.” But, he said, the NSA heads up the process and reviews the zero-days that other government agencies may uncover. Still, the review isn’t restricted to the intelligence community.
“We emphasize the importance of having nonintelligence agencies as part of the process, such as the Commerce Department, the State Department and the US Trade Representative,” said Peter Swire, a professor of law and ethics at Georgia Tech who helped craft the VEP. “And the Commerce [Department] and Trade Representative are important because there are clearly commercial implications [of the VEP].”
Tech companies have been the main opponents of the government stowing away vulnerabilities. Think about it: If firms aren’t aware of a security hole, they can’t patch it. That means the American public is also affected by the government’s decisions.
“We all use the same technology,” said Chris Soghoian, formerly a principal technologist at the American Civil Liberties Union and currently a TechCongress Congressional Innovation Fellow. “We all use the same laptops, we all use the same web browsers, we all use the same word processing programs.”
Mr. Soghoian’s argument mirrors Apple’s case in its dispute with the government following the 2015 San Bernardino terrorist attacks. Lacking the technical ability to get around security features on the shooter’s iPhone, the FBI took the tech company to court for refusing to comply with a request for special assistance to unlock the device. Apple CEO Tim Cook called the request “chilling” and refused to create what he called “a master key, capable of opening hundreds of millions of locks.”
In the end, Apple didn’t have to comply — the FBI hired a third-party contractor to access the device. The FBI has disclosed neither the name of the contractor nor the tool it used to hack into the phone. It’s also unclear whether Apple has been able to patch the flaw.
Is the government sacrificing the security interests of its citizens to preserve its own offensive capabilities? Civil liberties advocates think so. “The parts of the government that are most capable of channeling the needs and interests of the American public are not even invited into the room,” said Soghoian, suggesting the Federal Trade Commission should play a part in the VEP process. “You’re really sitting a bunch of wolves around the table asking them how you want to design the hen house.”
Even Schwartz, a former Obama administration official, said the US government could try to assuage concerns by issuing a more in-depth explanation beyond Daniel’s blog post – even “just an unclassified version of the process.”
“Government policy – especially national security policy – through a blog post isn’t the greatest practice,” Schwartz said.
If you’re at all worried about being spied on, you might fixate on the idea of an omnipotent government watching your every move, or big corporations tracking your online habits to make money.
But these days, the likeliest spies may be the people you know best. Parents are spying on their kids’ locations. Suspicious spouses are tracking each other’s messages.
There’s a growing market of cheap spyware apps that make snooping much easier in homes, within families, or relationships – especially as more personal data is available online than ever before. According to the Pew Research Center, some two-thirds of Americans own a smartphone, and almost 70 percent of internet users in marriages or relationships know the passwords to one or more of their partner’s accounts. As Cheryl New, a family divorce lawyer based in Bethesda, Md., puts it: anyone can grab their loved one’s phone unnoticed and go “down and dirty scrolling through it.”
But having that kind of easy access to a phone means it’s much easier for someone to download a hacking tool – without having to pay big bucks, as many governments and even some companies do, to remotely spy on their targets. Two of the most popular spyware apps are mSpy and FlexiSpy. Both apps offer functions that can secretly forward texts, call logs, photos, emails, and apps such as Snapchat or Tinder. They can even record phone calls.
So long as the spy can log into the phone to install it, all of the data collection is designed to be hidden from the phone’s actual owner.
Both apps retail for less than $70 a month, and installation takes just 10 minutes, according to an mSpy sales associate.
“They are designed so that they’re as easy as humanly possible to do, which creates a scenario where it allows more people access to these types of capabilities,” said Lars Daniel, a digital forensics examiner at Guardian Digital Forensics. “[The apps] are very good at reporting just about all the activity that happens on a phone.”
Over 80 percent of US divorce attorneys say they’ve seen a rise in the number of cases involving social networking evidence, according to a 2010 survey by the American Academy of Matrimonial Lawyers. Screenshots of text messages and printouts of emails are frequently used as evidence by divorcing couples in court, Mr. Daniel said.
But Ms. New, the family lawyer who works at the firm New & Lowinger, says she still finds that “smart millennials” are more likely to lean toward spyware apps. Her clients, who are often over the age of 40, are not likely to go beyond the snooping that happens when a phone is left on the table during a bathroom break. “Or we go the old-fashioned route, where they hire private detectives,” she said.
New also explained that mistrustful couples start spying not out of spite, but because they feel like it’s one of their only options. “If you’re suspicious about your spouse, people become resourceful. If something doesn’t smell right – it’s just human instinct. It’s a very emotional time,” she said.
But the legality of these apps is up for debate. Apps like mSpy dance around the legal issue of surveillance: mSpy has a disclaimer on its website saying its use is legal as long as the person being monitored is notified beforehand. Users must check a box agreeing to those terms upon downloading the app. But there’s no way to ensure that all of mSpy’s users are sticking to those guidelines. What’s more, some of the spies might actually own the device itself: A parent or a spouse, for instance, may be secretly monitoring someone else’s communications on a phone they technically pay for, making the legal questions even murkier.
So, the question of whether it’s OK to monitor someone else’s communications could end up being ethical rather than purely legal. “Surveillance technology often emerged from well-intentioned efforts … but like any technology that is used, it’s subject to social and institutional power dynamics,” said Mary Madden of the research institute Data & Society.
Take parenting, for instance. The phrase “helicopter parenting” may sound familiar, evoking an image of overprotective parents hovering over their kids, but with digital surveillance capabilities, Madden sees controlling parents adopting techniques that warrant a different metaphor: “Drone parenting.”
“It’s the kind of surveillance that happens without your direct involvement,” Ms. Madden said. “The kind of surveillance happening more secretly, possibly, and more automated.”
A Pew Research Center report from earlier this year found that nearly half of parents with kids aged 13 to 17 know the password to their child’s email account, and 43 percent know their kids’ cellphone login codes. Roughly 60 percent of parents said they visit their teens’ social media accounts and 48 percent look through call records or messages.
Parental tracking seems even more appealing as children spend so much of their time on their devices, leaving a searchable digital footprint that could be a gold mine of information for worried parents. And, some say, it might not be such a bad thing. “These are all just realities of the world we live in,” Daniel said. “It can be a wise instance for parents to monitor devices for such activity and have parental controls.”
But Data & Society’s Madden warns that parents will be tempted to overuse their power – potentially in intimidating or manipulative ways. Madden also noted that the high expectations of good parenting may also push parents to cross moral boundaries.
“One of the major challenges of parenting in the digital age is: Parents are seen as responsible for children’s online behavior and safety, but also they are tasked with needing to give their children enough freedom and independence to learn how to safely navigate the online world on their own,” Madden said.
So, the question remains for parents and other loved ones tempted to monitor kids, spouses, or significant others: Does the ease of remaining undetected make cyber spying too attractive not to use?
Want to control your own digital security? There’s a wide array of options for secure messaging apps, email services, and browsers that help you do it yourself. – Anna Waters & Jack Detsch