I saw this go across my Twitter feed: "Security freakonomics talk tomorrow... what should i say? ;-)" So I thought I'd write up a quick response, since how cybersecurity people treat economics is utterly different from how economists treat economics.
The first misconception about economics is that it's about calculating where the money goes, or how much things cost. That's "business", not "economics".
The second, and more common, use of economics is the political attempt to prove that there is some sort of "market failure" that means we get to punish Microsoft for its vulnerabilities. "Market failure" is a real economics concept; the case people have in mind is specifically a negative externality. It describes the situation where I sell you fireworks, you set them off, and your neighbor's house catches fire. The "failure" is that it's neither you (the buyer) nor I (the seller) who pays the costs, but your neighbor. The cybersecurity analogy is that when buyers buy Microsoft software, which has vulnerabilities, it's third parties who suffer. For example, a hacker might exploit a vulnerability in Windows, take control of thousands of desktops, and flood a website with traffic. That website suffers, even though it might not own any Microsoft products.
While this sounds like plausible "economics", it isn't. Consider the fireworks case. One solution to the problem is to fine the seller of the fireworks, or to regulate which fireworks they can sell. Another solution is to fine the person who bought the fireworks and lit them near their neighbor's house.
Or, a third solution is to punish the neighbor for having a flammable house.
Economics isn't about fairness; it's about the efficiency of results. It's that guy with the flammable, thatched roof who imposes costs on all his neighbors. It means the neighbors can't have a cozy fire in their fireplace during winter, can't have BBQs in the summer, and can't set off fireworks for celebrations. That is why local governments usually choose the third option: they regulate how houses are built and outlaw flammable roofs, believing this is the most efficient solution.
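To make the efficiency framing concrete, here's a toy calculation, with numbers I've invented purely for illustration, of how an economist would compare the three policies by total social cost rather than by fairness:

```python
# Toy comparison of total social cost under the three policies for the
# fireworks/flammable-roof problem. All numbers are made up; the point
# is only that economics picks the policy with the lowest total cost.

policies = {
    # enforcement cost, value of activity forgone, expected fire damage
    "fine the seller":   {"enforcement": 50, "forgone_activity": 200, "fire_damage": 20},
    "fine the buyer":    {"enforcement": 150, "forgone_activity": 120, "fire_damage": 30},
    "regulate the roof": {"enforcement": 30, "forgone_activity": 10, "fire_damage": 5},
}

for name, costs in policies.items():
    print(f"{name}: total social cost = {sum(costs.values())}")

# With these (invented) numbers, regulating the roof minimizes total
# social cost, which is why governments tend to pick that option.
```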
So which is the most efficient solution to Microsoft vulnerabilities? Blame Microsoft? Blame the user? Blame the poor website victim? Or let the free market decide? I don't know the answer, but I've never seen cybersecurity people give an "economic" answer based on efficiency. Instead, I've only seen arguments based on how Microsoft is big and evil, and how it's unfair to blame innocent users.
But this is just a tiny portion of economics; there is so much more. I recommend getting an introductory college textbook, such as Greg Mankiw's Principles of Economics. Follow the link to the Amazon site, and you can read the first chapter for free, which outlines the ten basic principles of economics.
Below, I take some of those basic principles and describe them in a cybersecurity context. Think of it as a useful way to learn economics if you already know cybersecurity, or as a way of learning cybersecurity if you already know economics.
The first principle from Mankiw's textbook is that people face tradeoffs; in our terms, cybersecurity is a tradeoff. Making the network more secure means making it worse in some other fashion: slower, less reliable, less user-friendly. When you look at the dumb things cybersecurity people say, it's usually a failure to acknowledge the tradeoffs. The tradeoffs are not just between security and other things, but between two security approaches. The funniest joke in cybersecurity is the pair of Wikipedia articles on Defense in Depth and Defense in Depth (computing). The original meaning was about trading off border security for better internal security, such as moving troops from the border to deeper inside. But no cybersecurity professional can admit to such tradeoffs, so "defense in depth" has morphed into an argument that no matter how much security you have now, you need even more, both on the border and in depth.
The second Mankiw principle is opportunity cost: the cost of something is what you give up to get it. The cost of cybersecurity isn't the money you spend, but what you gave up. Hiring another cybersecurity expert on your team means not hiring a salesperson who could sell more of your company's products and services. When you go to your boss and explain why your budget for cybersecurity needs to increase, you need to explain why the budget for marketing, sales, and R&D needs to decrease.
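As a minimal sketch of that budgeting argument, with entirely made-up salaries and payoffs, the comparison looks like this:

```python
# Opportunity cost of one more security hire. All figures are invented.
# The "cost" of the hire isn't the salary; it's the best alternative
# you forgo by spending the same budget line elsewhere.

security_salary = 150_000               # hypothetical annual salary
expected_loss_avoided = 180_000         # hypothetical breach losses avoided per year

sales_salary = 150_000                  # same budget line
expected_extra_revenue = 250_000        # hypothetical revenue the sales hire adds

net_security = expected_loss_avoided - security_salary   # 30,000
net_sales = expected_extra_revenue - sales_salary        # 100,000

print(f"net value of the security hire: ${net_security:,}")
print(f"net value of the sales hire:    ${net_sales:,}")
# The opportunity cost of the security hire is the forgone sales value:
print(f"opportunity cost of hiring security: ${net_sales:,}")
```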
The third principle is thinking at the margin. Cybersecurity people talk in absolutes, as if something is either insecure or secure. They should instead talk in relative terms of "more secure" or "less secure". Moreover, they need to weigh the marginal benefits of security against the marginal costs. That fancy new expensive firewall still won't make you secure; the question instead is whether the marginal improvement in security is worth the extra price over a cheap firewall. Or take TSA screening requiring people to take off their shoes. Cybersecurity experts complain that this makes no difference. They are wrong: taking off shoes at security makes people marginally safer. The only question is whether this tiny improvement in safety is worth the enormous additional cost.
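Here's what that marginal comparison looks like for the firewall example, again with invented prices and probabilities:

```python
# Marginal analysis of a fancy firewall vs a cheap one. Numbers are
# made up. The question isn't "does it make us secure?" but "is the
# *extra* risk reduction worth the *extra* price?"

cheap_price, cheap_breach_prob = 5_000, 0.10     # annual breach probability
fancy_price, fancy_breach_prob = 50_000, 0.08
cost_of_breach = 400_000                          # hypothetical loss per breach

marginal_cost = fancy_price - cheap_price                                # 45,000
marginal_benefit = (cheap_breach_prob - fancy_breach_prob) * cost_of_breach  # 8,000

print(f"marginal cost:    ${marginal_cost:,}")
print(f"marginal benefit: ${marginal_benefit:,.0f}")
print("buy the fancy firewall" if marginal_benefit > marginal_cost
      else "the cheap firewall wins at the margin")
```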
The fourth principle is that people respond to incentives, often perversely. A straightforward example is complicated password policies: the more complicated they are, the more likely a person is to write down the password on a sticky note underneath their keyboard, thus making the system less secure, not more. A related consequence is that people have a roughly fixed risk tolerance, an idea known as risk compensation. When you make things safer, people behave more recklessly. If you install anti-virus on their desktop, they are more likely to run e-mail attachments. Measured one way, such as on an obstacle course, talking on a mobile phone impairs a person's ability to drive. Measured with economics, we find that while people are on the phone, they slow down and otherwise drive more safely to accommodate the distraction. Drivers slow down and pay attention when it rains to compensate for the additional danger, which means they speed up and drive more recklessly when the roads dry up, to compensate for the increased safety.
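A toy model of that fixed risk tolerance, assuming made-up probabilities and a "risk budget" the user won't exceed, shows why the anti-virus example works out the way it does:

```python
# Risk compensation sketch: the user holds total risk roughly constant,
# so lowering the risk per action raises how often they take the action.
# All numbers are invented for illustration.

TARGET_RISK = 0.02   # total compromise risk the user tolerates per year

def attachments_opened(per_attachment_risk, target=TARGET_RISK):
    """Roughly how many risky attachments fit in the user's risk budget
    (treating small independent risks as additive)."""
    return round(target / per_attachment_risk)

print("no anti-virus:  ", attachments_opened(0.001), "attachments/year")   # 20
print("with anti-virus:", attachments_opened(0.0002), "attachments/year")  # 100
# Safer per-attachment odds -> more reckless behavior; total risk unchanged.
```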
Another principle is that the value of security isn't infinite. One of the fun things freaky economists like to do is calculate what a person's life is worth. For example, say you put your kid in the car to drive to the store rather than paying the neighbor $10 to babysit for an hour. Car accidents are the leading cause of death for children, and those deaths overwhelmingly happen near the home. If the chance of death on that trip is one in a million, and you could've spent $10 to avoid it, this implies you value your kid's life at $10 million. (Well, no, not exactly; I'm glossing over the fine points to make a point.) The same is true of cybersecurity, where people treat security as if it were infinitely valuable. That's why they can't deal with marginal benefits vs. marginal costs: the marginal benefits of increased security are always infinite, according to cybersecurity experts.
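The arithmetic behind that implied valuation is just the dollars spent divided by the risk avoided, glossing over the fine points as admitted above:

```python
# Implied "value of a statistical life" from the babysitter example.

cost_to_avoid = 10       # dollars paid to the babysitter
risk_avoided = 1e-6      # one-in-a-million chance of death on the trip

implied_value = cost_to_avoid / risk_avoided
print(f"implied value of the kid's life: ${implied_value:,.0f}")  # $10,000,000
```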
The sixth principle on Mankiw's list is that free markets are usually the best way to organize economic activity, tempered by the seventh principle that governments can sometimes improve on market outcomes (such as when there is a market failure). A wrong application of this principle was President Bush's "National Strategy to Secure Cyberspace", which fatuously hoped the free market would solve security before the government stepped in to fix it with regulation. This is wrong because the free market will never "solve" the cybersecurity problem. Instead, the free market is what determines how valuable cybersecurity is in the first place. I once gave a talk where I asked "Raise your hand if cybersecurity is your highest priority", then "Raise your hand if you use wifi", then "Raise your hand if you think your wifi is secure". The results were predictable: people raised their hands on the first two, but not the third. That's because people lie. They claim security has infinite importance, but behave as if it's a tradeoff. The free market captures this true value; government regulation doesn't. When government starts regulating cybersecurity, we'll start complaining about it in much the same way we complain about the TSA and the Patriot Act.
I could spend days talking about freakonomics and cybersecurity, but this gives you a taste.