Experienced Points: How Lizard Squad Stole Christmas

By Shamus Posted Tuesday Dec 30, 2014

Filed under: Column

This week I try to explain the Lizard Squad attack on PSN and Xbox Live.

To be clear: We don’t know for sure if Lizard Squad really staged the attack. Or if they did, it’s possible that the guys claiming to be Lizard Squad weren’t them. Or if it was, it’s possible the “squad” is really just one guy. It’s all secrets and hearsay. But for the purposes of clarity we’ll just accept the events as presented and talk about them assuming everything is true.

This hacking[1] is interesting stuff. What if you had the power to make international news without physically hurting anyone? What if you could do something that would get the world talking? I imagine this is a big part of the motivation for stunts like this. It’s the equivalent of being able to spray graffiti on a global billboard. You can’t take personal credit without going to jail, but you can watch the world respond to your actions.

Which is to say: Harsh penalties won’t be enough to keep people from doing this. People regularly risk their lives for thrills, or to prove a point, or just to see if they can do something. There will always be people willing to stage attacks like this, so the only thing you can do is build robust systems, secure them properly, and make them tolerant of attacks. So, you know, the exact opposite of what we’ve been doing so far.

I think it’s interesting that Lizard Squad went after both PSN and XBL. That certainly made the attack more difficult, since it required them to divide the botnet between two targets, thus halving its power. I imagine this was to make it clear that this wasn’t a protest attack. If they had gone after just XBL, people would have been speculating about their motives. Was this done by Playstation fanboys? Is this to protest Microsoft in some way? As it is, it seems like, “Because they could” is the most plausible reason for the attack.

 

Footnotes:

[1] This entire attack involved both hacking and DDoSing.




48 thoughts on “Experienced Points: How Lizard Squad Stole Christmas”

  1. Lazlo says:

    So obviously it was a conspiracy perpetrated by Nintendo and Valve.

    1. Wide And Nerdy says:

      I was going to say revenge of the Glorious PC Master Race but I like yours too. Maybe they’re getting revenge for all the recent bad console ports to PC.

  2. Eldiran says:

    Good article, as usual!

    Knowing very little about DDoS attacks, something I’ve been wondering for a long time: is there a reason that servers can’t mitigate some of the damage by refusing repeat requests when traffic is too high? Couldn’t the server log visitor IPs and refuse to serve an IP if it makes a request too soon after the previous one?

    Is the overhead of running that too much of a drag on the server to be worth it? Or perhaps that kind of system is already in place, and the botnets are just so vast that they have enough fresh IPs that they can still keep the network down?

    EDIT: well I just read about IP spoofing, so I guess that is one reason why such a system wouldn’t work…
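Eldiran’s idea, for the curious, can be sketched as a per-IP sliding-window limiter. This is a hypothetical Python illustration, not anything PSN or Xbox Live is known to run; and as the EDIT notes, spoofed source addresses (or a big enough botnet) sail right past it:

```python
import time
from collections import defaultdict

class PerIPRateLimiter:
    """Refuse an IP that sends more than max_requests in any window of seconds."""

    def __init__(self, max_requests=3, window=1.0):
        self.max_requests = max_requests
        self.window = window
        self.hits = defaultdict(list)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        # Keep only the requests still inside the window.
        recent = [t for t in self.hits[ip] if now - t < self.window]
        self.hits[ip] = recent
        if len(recent) >= self.max_requests:
            return False  # too many requests too soon: refuse to serve
        recent.append(now)
        return True

# Five rapid-fire requests from one made-up address:
limiter = PerIPRateLimiter(max_requests=3, window=1.0)
results = [limiter.allow("198.51.100.7", now=0.1 * i) for i in range(5)]
print(results)  # [True, True, True, False, False]
```

A spoofing attacker never reuses a source address, so `hits` just grows without ever refusing anyone, which is exactly the problem the EDIT ran into.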

    1. Kian says:

      Aside from IP spoofing, there are infrastructure issues that may also make defending against these attacks more difficult.

      Before you reach the server, your request has to pass through a router. The router itself could be overwhelmed even if the server refused to answer the request. Even if your server and your infrastructure can handle the stress, large chunks of your user-base could be cut off if any node along the way buckles (if your ISP has outdated infrastructure and many of your neighbors are part of the botnet, you’ll probably be knocked out even if the service remains running).

      Fortunately, this particular danger drops off quickly. But the point remains that even if your server ignores the request, the request has to reach you for you to ignore it. And just reaching you can be enough to harm you.

      1. MrGuy says:

        This is true, though IMO if your network infrastructure has a bottleneck where the switch routing the packets (which just has to look at the IP in the header and forward the thing along) isn’t able to run as fast as the server behind it (which needs to receive the packet and actually look at the darn thing to tell whether to ignore it), then you’ve done a poor job with your network design. The thing that has to do logic should always be slower than the thing that doesn’t within your private network (where the “thing that does logic” might be the recipient server, or a “smart” firewall, or some other device, but the point is the router in front of it should be an order of magnitude faster).

        As to a DDoS attack taking down a core network device upstream of the victim’s private network, I’m skeptical. Yes, it’s a lot of traffic to a single site – way more than anticipated. But it’s flowing through a network that’s designed to take traffic to a huge number of sites. You could send 100 times the normal traffic to a single site, and you’ve maybe, maybe made a blip on what a core router is used to.

        Definitely agree boundary devices (like the cable company’s shared switch you and your 10 neighbors use to communicate to the central office) could get overwhelmed if 3 of your neighbors jump to 100% of their bandwidth usage, but those devices are designed to degrade “gracefully” (i.e. by not delivering the “standard” bandwidth) to those users. Will it affect everyone else on the circuit? Sure, but it won’t take them down. Also, most home users use 10x as much bandwidth downstream as upstream most of the time, so even pegging out the upstream connection doesn’t really hurt most other people. (If you want to mess up local routers, a far more effective attack would be to start DOWNLOADING to the botnet, not uploading from it…)

        1. Ingvar M says:

          Doesn’t have to be the router that’s the bottleneck; it may well be the link(s) from upstream ISPs. Buying bandwidth may be cheaper (per bit/s) when you buy gigabits (or terabits) than when you buy residential megabits, but those links don’t come for free.

          So if the link between PSN and the Internet is, say (pulling random numbers out of my rear, here), 100 Gbps and the botnet had a quarter of a million hosts, each machine in the botnet would only need to emit a steady stream of 0.4 Mbps to use up all the available bandwidth.

          But, basically, to combat this, Sony would have to convince other people to do one of two things: “stop sending traffic to Sony” (not obviously different for whoever happens to be on the other end of the traffic block) or “do deeper packet inspection” (this is seriously costly for routers; they’re pretty much designed to look at the recipient IP and maybe the TOS bits, and anything else ends up in “slow path” switching and can drop the packets-per-second by a factor of 10-100).

          So, yeah, with a large enough botnet, you can swamp pretty much any link, without a massively noticeable bandwidth drag locally.
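Ingvar M’s arithmetic is easy to reproduce; the 100 Gbps link and quarter-million-host botnet are his own admittedly invented numbers:

```python
# How little each bot must send to saturate the victim's link.
link_bps = 100e9        # hypothetical 100 Gbps link between PSN and the Internet
botnet_hosts = 250_000  # hypothetical quarter of a million compromised machines

per_bot_bps = link_bps / botnet_hosts
print(f"{per_bot_bps / 1e6:.1f} Mbps per bot")  # 0.4 Mbps

# Next to even a modest residential uplink, that's a rounding error,
# so no bot owner notices their machine participating.
assumed_uplink_bps = 5e6  # assumed 5 Mbps home upstream
print(f"{per_bot_bps / assumed_uplink_bps:.0%} of the assumed uplink")  # 8%
```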

    2. guy says:

      The problem is multifaceted. It’s actually entirely possible that the servers do throw out some of the malicious traffic. However, they can’t necessarily just toss repeat requests, because the client could send a repeat request because it never got a response.

      The big problem, though, is internet structure. The PSN/Xbox Live software isn’t on any of the routers between the end user and the ones owned by the corporation providing the servers. Therefore none of the routers can know whether a packet should be thrown out, because that would require knowing whether two identical messages are legitimate. Note that if you refresh a webpage, your HTTP request is likely to be identical to the first request. The packets don’t have to actually get processed by the servers; they just have to clog the path to get there. And routers only transmit packets so fast, so if they receive packets faster than they can send them, they have to store the incoming ones, and there’s only so much space. Once the router is sufficiently overloaded it will be forced to simply discard incoming packets.

      Even if that doesn’t happen, it’s still going to spend most of its time forwarding malicious packets instead of legitimate ones.
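guy’s description of a router buffer overflowing can be modelled in a few lines. Every constant below is invented purely for scale; the point is only that once packets arrive faster than they can be forwarded, the buffer fills and the router must drop traffic, legitimate and malicious alike:

```python
from collections import deque

CAPACITY = 1000           # buffer slots in the router
FORWARD_PER_TICK = 100    # packets the router can forward each tick
ARRIVALS_PER_TICK = 150   # flood: packets arriving faster than they leave

queue = deque()
dropped = 0
for tick in range(100):
    for _ in range(ARRIVALS_PER_TICK):
        if len(queue) < CAPACITY:
            queue.append(tick)
        else:
            dropped += 1  # buffer full: packet silently discarded
    for _ in range(min(FORWARD_PER_TICK, len(queue))):
        queue.popleft()   # forward a packet onward

print(f"buffer: {len(queue)}/{CAPACITY}, dropped: {dropped}")
# buffer: 900/1000, dropped: 4100
```

The steady state is a permanently full buffer shedding the excess 50 packets every tick, regardless of who sent them.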

    3. Bryan says:

      In addition to all the replies so far: when the botnet gets big enough, there *aren’t* repeat requests from any individual machine. Or at least, there aren’t on the time scales that would actually matter.

      If the botnet is ten million compromised computers hitting the server (…and I have no idea on the actual numbers in this case, but based on US/Europe population that seems vaguely plausible), then to hit the server with ten thousand requests a second (another number pulled out of thin air, but one which I imagine would crush just about anyone), you only have to have each machine hit it once every thousand seconds. So twenty minutes or so.

      On the other hand, this means that if actual customers are hitting them more often than once every 20 minutes (…which does seem like a long time given stuff like achievement notifications and whatnot else — although those might be a lot easier to handle on the server end too; I’m still just guessing here), then it’s likely they’ll go down under just plain normal load. So maybe 10k is too few to cause loss of the service? Or maybe 10 million normal subscribers is too many (though I’d imagine they have more than a million)?

      Still, the time between requests from any individual bot in the botnet is likely to be on the order of minutes. Maybe 10 seconds, maybe up to a few hours; it depends on the exact count of controlled machines and the required request rate that will take down the service. 10 seconds *might* be feasible to block repeats at, but too much higher than that and it seems like it’d get hard.

      (There’s also the huge count of newly compromised machines that pop up onto the Internet on Christmas morning. Because nobody patches up and fixes things before they start poking around, so just about every new machine is vulnerable to a whole lot of stuff. That’ll increase the botnet size too, at least a little, although it may not be fast enough to make much of a difference…)
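Bryan’s guessed numbers are simple to check; with ten million bots and a target of ten thousand requests per second, each individual bot is nearly idle:

```python
botnet_size = 10_000_000  # Bryan's guess: ten million compromised machines
target_rps = 10_000       # requests/sec assumed enough to crush the service

interval_s = botnet_size / target_rps
print(f"each bot sends one request every {interval_s:.0f} s "
      f"(about {interval_s / 60:.0f} minutes)")  # every 1000 s, ~17 minutes
```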

      1. Bryan says:

        Oh! And in addition, keeping a table of 10 million clients that you want to block is going to take a pretty large chunk of memory or time, too. If you only keep the IP, then (1) you’re going to have a lot of collateral damage when you block an ISP’s proxy or NAT box and thus all the devices behind it, and (2) it still requires ~40 megabytes for a vector-like structure, which needs a linear walk for each new request. A hashtable would be faster on lookups, but is going to take a fair bit more memory too, as it needs to stay sparse to allow constant-time lookup. It might be possible, but it’s another cost on the service’s end that needs to be taken into account.
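Both halves of that sizing estimate are easy to reproduce. Packed 4-byte IPv4 addresses give the ~40 MB figure, and measuring a Python `set` (used here purely as a stand-in for any sparse hashtable) shows the per-entry overhead Bryan mentions:

```python
import sys

n = 10_000_000  # hypothetical count of IPs to block
print(f"packed IPv4 list: {n * 4 / 1e6:.0f} MB")  # 40 MB at 4 bytes per address

# A hash-based set trades the linear walk for constant-time lookup,
# but pays far more than 4 bytes per entry to stay sparse.
sample = set(range(100_000))
table_bytes = sys.getsizeof(sample)  # size of the hash table itself
print(f"hash table alone: roughly {table_bytes / len(sample):.0f} bytes per entry")
```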

    4. nm says:

      I know I’m late to the party, but I used to work on a DDoS defense product, so I can actually speak to this.

      DDoSes come in many shapes and sizes. A really simple one is called a SYN flood. Basically, the attacker sends a bunch of “Hey, I’d like to talk to you” messages from different (probably spoofed) sources and the server responds to all of them with “Oh, okay. Here I am, ready to hear what you have to say” messages. After sending that response, the server holds the connection (which occupies some finite resources in the server) open for a few seconds. Since nobody’s on the other end, it doesn’t close the connection and free the resources for quite a while. Once those finite resources are filled up, the server can’t send any more acknowledgement messages and your service is denied.
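That SYN-flood mechanism can be simulated in a few lines. This toy model (all constants invented) parks each half-open connection in a fixed-size backlog until a timeout; since the spoofed handshakes never complete, the backlog stays full and most attempts get refused:

```python
BACKLOG = 128   # half-open connection slots the server will hold
TIMEOUT = 500   # ticks before an unanswered handshake is given up on
TICKS = 1000    # one spoofed SYN arrives per tick

half_open = []  # expiry ticks of handshakes awaiting a reply that never comes
refused = 0
for t in range(TICKS):
    half_open = [exp for exp in half_open if exp > t]  # reap timed-out slots
    if len(half_open) < BACKLOG:
        half_open.append(t + TIMEOUT)  # server SYN-ACKs and waits in vain
    else:
        refused += 1  # backlog full: this is the denial of service

print(f"refused {refused} of {TICKS} connection attempts")  # refused 744 of 1000
```

A real client trying to connect during those refused ticks is turned away just the same; defenses like SYN cookies exist precisely to avoid holding state for unverified peers.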

      Another kind of DDoS attack that requires very few resources on the attacker’s part is called slowloris. You can read about it, but it works on a similar principle of using up the limited resources on a server.

      My point is that defending against slowloris and a SYN flood require vastly different approaches, even though both cause denial of service and are launched from distributed (usually compromised) machines. Internet routing infrastructure is busy doing its job (routing) and the development of the protocols that run the Internet was largely done without much thought to how they could be exploited because it happened in the 1970s when all the computers on the Internet were run by universities and government agencies.

      That said, there are ways to protect against these kinds of attacks. They’re not 100% effective, so they’re unlikely to work against state funded attackers or really determined and clever attackers, but the people who’ve taken out Sony (and lots of other companies lately) are not those.

      1. guy says:

        There are a lot of ways to counter various methods of tying up disproportionate amounts of resources with a DDoS attack. Unfortunately, the botnets used in these attacks can number in the tens of millions of members. They can potentially bring down fairly substantial servers by behaving like legitimate clients and visiting simultaneously, especially during peak regular traffic. The local countermeasures help but are often insufficient.

        What does work is getting more computing power and bandwidth. There are services that can mirror a site to a lot of different servers and distribute traffic over them as needed, while costing much less than buying that many servers because they have lots of sites which don’t all have traffic spikes simultaneously.

  3. kunedog says:

    Imagine that instead of a mere account login and multiplayer and DRM (i.e. the normal justifications for forcing the customer to be online), you had a console on a streaming game service. It literally requires hundreds of times the bandwidth and can be rendered unplayable by sub-second interruptions in connectivity, so it’s a thousand times easier to DDoS (and more likely to fail in most every other way). This is an inherent flaw that *cannot* be fixed (i.e. by taming or removing the DRM in a patch), and there simply isn’t any way to “bypass” downloading every bit of the game’s video and audio (IIRC Campster said in the Diecast he was able to play last week on the actual game servers, as long as he wasn’t touching the PSN itself).

    No sane customer would want such a system, yet streaming services get hyped as “the future” of gaming all the time. They seem designed from the ground up to benefit the publishers and fuck the customers, just like a DRM system. Surely that’s all they really are.

    1. Ciennas says:

      “No sane customer would want it” needs revising.

      No INFORMED customer would want it.

      But that’s the problem. Marketing has more power than they should, and they make it seem like a good idea to the executives and accountants as well.

      We need to educate those who work in the companies that make these decisions.

      Nothing changes unless those who have the power to change things know better.

    2. Zagzag says:

      Streaming services do have advantages, like being able to play games that your hardware isn’t up to. I doubt that many people are actually streaming games that they could play fine without, and it seems like marketing would be mad to try and suggest that they do this. It’s a niche service, and I don’t see it going beyond that, but that doesn’t mean that it’s inherently evil like you seem to be suggesting.

      1. kunedog says:

        But isn’t there an extreme overlap between the games that stress hardware (first- and third-person shooters and sandboxes) and games that are ruined the hardest by streaming limitations (latency and degraded video)? Most puzzle and turn-based and adventure games might stream fine, but they won’t trouble a local low-end PC/console either. I honestly don’t see even a niche market for streaming.

        1. guy says:

          Well, at my college people in the dorms (but not in the property management apartments mutter grumble) can plug random computers into the wall and get gigabytes-per-second. And anyway people regularly play multiplayer FPSes with ping times around 200 milliseconds with little trouble.

          1. kunedog says:

            With a streaming game your ping now applies to your video and sound outputs, as well as your controller inputs. Many people consider IPS monitors unacceptable for (fast) gaming because of their “slow” response times of 20-30ms or more, and pings are usually worse and certainly more unpredictable than that.

            Let’s say you’re lucky enough to have a 30 Mb/s connection. Why would you want to use it to transfer your game’s video instead of, uh, a DVI cable, which is capable of 4 Gb/s? The people who developed DVI apparently understood that 1920 x 1200 pixels at 24 bits/pixel @ 60 Hz results in bandwidth well over 3 Gb/s. The people who promote streamed games seem very, very confused (at best). They’re a recipe for disaster in North America (even before you consider data caps).
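kunedog’s figures check out. Uncompressed 1920×1200 video at 24 bits per pixel and 60 Hz:

```python
width, height, bits_per_pixel, hz = 1920, 1200, 24, 60
raw_bps = width * height * bits_per_pixel * hz
print(f"raw video: {raw_bps / 1e9:.2f} Gb/s")  # raw video: 3.32 Gb/s

# To fit through the hypothetical 30 Mb/s broadband link above, a streaming
# service must compress that in real time, without adding latency:
link_bps = 30e6
print(f"needed compression ratio: {raw_bps / link_bps:.0f}:1")  # 111:1
```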

            Those of us who know anything about bandwidth and compression and (especially) latency can see the enormous technical obstacles facing a service like this, and no one has ever done anything to explain how they intend to solve them. Onlive instead did everything they could to lock out independent reviewers with NDAs and closed demonstrations. A friend of mine described it as the gaming equivalent of the perpetual motion scam, and IMO that’s spot on (except that it would still have the draconian DRM issues even if it worked perfectly).

  4. Lee says:

    “Their networks were designed by marketing more than engineers”

    Worse… Their networks are designed by accountants. The engineers say “Here’s the infrastructure we need to support what marketing is selling. It will cost $X.” Then the accountants say, “Here’s $X/4, build whatever will work most of the time.”

    1. They’re budgeted by accountants. Marketing tells them what features a network should have and what information it should manage without any knowledge about security, reality, etc.

      Though maybe that’s the “/4” part of your equation.

  5. Daemian Lucifer says:

    “Experienced Points: How Lizard Squad Stole Christmas”

    Not for me. But then, I am a member of the PC master race.

    1. Ciennas says:

      I did not complain when they hacked PSN. For I played the PC, and not on a PS3.

      I did not care when they broke Xbox Live and PSN again, because I do not play there.

      I laughed as they DDOS’d Christmas, for what is the plight of console dwelling noobs to me?

      When they finally destroyed my connection to Steam and GoG and even Origin and uPlay for good measure, there was no one to speak up for me.

      Maybe we should make a return to the LAN party structure- you can suborn the main network, but you can’t crush everybody’s good times. We’ll just route around your jacked hacking ass.

      1. David says:

        I have to respond to this, because I’m astonished that no one else has brought it up yet (somebody at the Escapist said nearly the exact same thing). Lizard Squad has, in fact, ALREADY (claimed to have) hit Steam once. It happened on December 5, and it brought Steam to its knees:

        http://www.hardcoregamer.com/2014/12/04/steam-hit-with-ddos-attacks-lizard-squad-possibly-guilty/121023/

        Now granted: Steam isn’t PSN, and you can play games on Steam without an Internet connection, but any hope of multiplayer games goes out the window. Go ahead, ask me how I know.

        1. Ciennas says:

          Relatedly, let’s say you’re trying to finish installing a game. Say Borderlands 2.

          Imagine you’re doing this to stress test the new terminal you bought.

          Imagine that it constantly fails at 89% because the servers are too busy to process the request.

          Imagine further that you want to thwahp the gentlemen who are deliberately causing the problem with a large rolled up newspaper.

        2. guy says:

          Well, ultimately there’s not much you can do about DDoS attacks happening. I mean, there’s various tricks with network architecture and content delivery networks and cloud computing to mitigate them, but ultimately they’re going to be a thing that keeps happening until end users update their antivirus software and operating systems regularly.

          Steam at least handles server downtime well enough. There’s nothing they can do about multiplayer or downloads, but it has the grace to tell you the servers are down and you can launch things in offline mode.

  6. Daemian Lucifer says:

    What we need now is a movie about some programmer dude (played by Shia LaBeouf, of course) trying to fight off this attack on his precious company using lots of buzzwords and movie hacking.

    1. Ciennas says:

      Eh. Been done. We need something more like Eagle Eye, where Shia LaBeouf was caught in the machinations of an AI that was good-aligned in the worst possible way.

      In fact, forget the villainous AI: we need more movies with actually good-aligned AIs. I’m thinking a cross between Ghost in the Shell and TRON. With a hint of LotR for good measure.

      It would be like an entertainment oriented Law and Order. And instead of mocking that concept, we’d be able to celebrate the good parts and effects of gaming culture.

      (Sort of like Emmett’s speech at the end of Lego Movie, actually.)

      Seriously. We need our creators of mainstream pop culture to acknowledge how freaking amazing the very concept of virtual reality gaming truly is: we are building whole worlds from nothing, and they both entertain and inspire and educate and help people relieve stress and urge them on to bigger and better things. What could be more cool than that?

      It would also have the side effect of forcing the otherwise good and fun Big Bang Theory to stop mocking the cultures that it stars.

      (Because somehow standard forms of entertainment are not as mockable as nerd hobbies. Football watching is unassailable, yet card games are stupid because NERD!)

      Am I rambling? Yeah.

      But seriously, I think we have had enough silly and incorrect depictions of how the magic future space toys work for now. Let’s treat this story as what it is: a bunch of attention seekers ruining the fun for EVERYONE ELSE, partly for notoriety and to force these companies to update their security, but mostly because hacking and trolling individual game matches wasn’t tickling their jerkass bones anymore.

      Maybe they made a good point. I’m soured on jerkasses breaking games because they themselves

      1. Ciennas says:

        Are broken.

      2. MadTinkerer says:

        The main problem is that writers keep falling back on the Frankenstein formula for conflict in stories where AIs are characters. Now in principle, any speculative fiction should at some point explore a possible conflict between what is being speculated and the status quo. But that shouldn’t be the only perspective, especially when that conflict is retread so much it becomes cliche.

        The other problem is that few people understand the basic principles of how the technology works. I can’t count how many times people have made the mistake of thinking that computers are not self aware just because they can’t beat the Turing Test. Anything with an operating system is by definition completely self-aware. It’s awareness of things besides itself(s), and consequently the ability to understand the desires of others, that computers have difficulty understanding.

        Additionally, computers have no desires of their own, and cannot ever have desires of their own. They can be told to pretend they have desires (the Sims, for example), but won’t ever decide to do anything they weren’t programmed to do. Skynet will never take over the world unless it’s ordered to do so. But that’s a lack of will, not intelligence. It’s not the same thing.

        1. Daemian Lucifer says:

          There are ways in which Skynet could take over the world. If it’s told to increase industrial efficiency, for example, without it having the need to preserve human life. An easily rectified loophole, but still.

          As for self-awareness, computers aren’t really self-aware. In order to be self-aware, the subject needs to distinguish itself as an individual separate from the environment. Oh sure, a computer does have various data that distinguish it from other computers (its IP address, for example), but the computer itself does not recognize this data as self-identification; it simply passes it on when authentication is required. That’s not self-awareness. Similarly, being able to scan its components and see if they are working properly is also not self-awareness.

          And I agree that there are multiple other conflicts in fiction that can be explored with advanced AI. One of the most obvious ones would be “Dey took oor jerb!”, meaning the struggles of workers to adapt to the new division of labor.

          1. MadTinkerer says:

            No, by default computers are entirely aware of themselves and completely oblivious to their environment. That’s how operating systems work.

            See: this is the thing. People think that self-awareness is a certain thing and if computers had it that computers would act differently. But the fact is that computers act the way they act because they are self aware. And they don’t care because they have no will of their own, not because they are not self-aware.

            1. Dan Efran says:

              Computers aren’t completely oblivious to their environment, and less so all the time. But…the “certain thing” that people refer to as “self awareness” isn’t simple awareness of internal state. It refers to recognition of oneself as a specific entity in the world, with limits, needs, and desires: something for the will to protect or pursue; mortality. Identification with that entity, yourself-in-the-world, and sympathy for it, is the essence of self-awareness. A proper test of it isn’t the Turing Test, or a diagnostic self-test, but a mirror. Can you recognize yourself by your actions?

              Hmmm…or a fire. Would you rescue your self?

              1. Veylon says:

                I’m convinced that “self-awareness” is just a buzzword in the world of computers. Some kind of cyber entity spreading itself around and doing stuff is a big deal regardless of whether it meets a particular definition of the concept or not.

            2. ehlijen says:

              What you say is true, but I don’t think that’s how ‘self awareness’ is defined in psychology/AI research.

              Yes, a computer can always tell you what it’s got stored at any given memory address.

              But if you plug a computer into a router configured to incorrectly redirect all outgoing packets back to their source, would your computer be able to understand that its own signals are being sent back to it (i.e. it’s looking in a mirror)? Or would it simply conclude that its requests not being answered and it being sent requests for another machine are two separate faults with the network?

              A computer knows everything about itself, but it doesn’t understand what that knowledge means without a program (outside instructions) giving it context.

              1. Bryan says:

                Well, if the computer doesn’t have *some* outside program running, it’ll just sit there like the big heatsink that it is, not doing anything at all. No packets will even hit the broken router. Depending on what you count as “outside” (BIOS, since it contains flashed instructions? just the hard drive? what about the translation layer between the x86 opcodes and the much-more-RISC-y actual execution units?), it may not even be able to change its own registers’ state…

                1. ehlijen says:

                  That’s my point. Computers think in terms of ‘this memory bit reads 1’. Humans think in terms of ‘this big lot of brain cells form my memory of lunch with bob last week’.

                  Humans are not aware of what each of their cells are doing at any given moment, and yet we like to define ourselves as self aware.

                  With computers it’s the opposite. They know exactly what each memory bit is, but we have yet to be able to successfully write a program that can critically analyse its own memory and observations and come to independent conclusions about its own existence or place in the world.

                  The term self awareness as it is being used here to describe something computers have is not the definition used when talking about AI.

            3. Daemian Lucifer says:

              First thing: when talking about computers acting differently (as in having wishes and desires), we are barely on the verge of them being intelligent*. Self-awareness comes after that.

              And again, self-awareness does not mean being able to diagnose all of your parts; it means being able to distinguish yourself and your own actions from the environment. ehlijen gave a great example: if you just send the same data back to a computer, it will not be able to tell that it sent out those data in the first place. A self-aware being knows that when it produces a sound (or smell, or image, or whatever) it is its own sound, and not something from the outside. You are confusing self-awareness with knowing one’s internals, and those are not the same things.

              *There are a few AIs that can learn, so some AIs are barely intelligent. Still dumber than your average lizard, however.

  7. AR+ says:

    What if we just made hacking legal?

    My understanding of the situation is that companies pretty much ship products as soon as they’re basically functional, even if they’re basically like cars that explode when you tap them a certain way. So now all the cars on the road are one malicious tap away from a fireball. If hacking is legal, then we could have lines of volunteers bashing all cars that roll off the assembly line w/ baseball bats 24/7, which would make them explode right away instead of later, which would raise the bar for what counts as “basically functional” and force companies to make more secure products.

    1. Ciennas says:

      Ironically, making it legal would destroy most of the motive for a lot of hacking.

      But the roadblock would be the lawmakers. In essence you’re asking them to legalize vandalism.

      You’re also asking them to invalidate the DMCA, which would kill the bill right there.

      Good thought though.

      1. Alex says:

        “But the roadblock would be the lawmakers. In essence you're asking they legalize vandalism.”

        No, the roadblock is that it’s a stupid idea. To use DHW’s analogy, it’s like encouraging fire safety by legalising arson. Even if every future software product was immune to hacking, you’d still be declaring open season on pretty much everyone until these unhackable solutions become available.

        1. Ciennas says:

          Not exactly. Most hackers on the dark side do it for thrills. Most of those thrills vanish when people stop caring.

          The Streisand Effect, basically.

          On the other hand, malicious hacking being legal does allow for even worse problems: I forgot that companies would then indulge in cyber warfare publicly.

          Yeah. Okay. A better idea is Veylon’s: anonymous hackers under the White Hat banner, certified and completely anonymous (So companies can’t silence critics or bribe or otherwise bypass the system.)

          Then they stamp their approval or scorn over the software and the company takes their feedback and rolls with it.

          But yeah, you’re right. I forgot the implications outside the one to one scale.

    2. Veylon says:

      There’s something to be said for White Hat hacking.

      I kind of wish there was something akin to Underwriters’ Laboratories for software. A particular product could have a logo on it saying “Certified Secure by XYZ Hacking Group”.

  8. DHW says:

    Here’s a crazy, out-of-the-box idea: we have some sort of large organization paid for by taxes — we could call it, say, “law enforcement” — track down the people who run botnets and put them in jail.

    I absolutely don’t understand why nobody even suggests this any more. It’s like if a gang of sociopaths was going around setting fire to people’s houses and the solution everyone — government, corporations, columnists, random people in internet comment sections, absolutely everyone — settles on is to spend millions of dollars putting every house in a giant fireproof box, as opposed to finding the arsonists and putting them in jail. And yes, I know that there are problems with international borders, with tracking people down through proxies, et cetera. These are not laws of physics; these are all human-created problems and should therefore have human solutions. I recommend we put some effort into solving them as opposed to just giving up and handing the entire internet over to every bored jerk who wants to ruin others’ day.

    1. guy says:

      A number of design decisions from when the internet was first being set up render that basically impossible, even discounting the bit where people running botnets are probably forwarding them through countries which are unlikely to allow the law enforcement of the nation where the targets reside to look through their servers. We could theoretically fix the internet protocol problems, if everyone running internet services got together and agreed to make everything much more complex and slower and break all the legacy internet code.

      It’s not like there aren’t law enforcement agencies that try to track down the people running the botnets, it just hasn’t been working well enough. Even when they track down the person behind one, more just pop up.
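
      The protocol gap guy is pointing at is concrete: an IPv4 header (RFC 791) carries a source address that nothing in the network verifies, so a flooder can write whatever they like into the “who sent this” field. A minimal sketch of the header layout (field values here are illustrative, and the checksum is left uncomputed):

      ```python
      import socket
      import struct

      # Minimal IPv4 header (RFC 791): 20 bytes, no options.
      # The source address is just a field the sender fills in -- the
      # protocol never verifies it, which is one design decision that
      # makes flood traffic so hard to trace back to its origin.
      def ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
          version_ihl = (4 << 4) | 5      # IPv4, header length 5 * 32 bits
          return struct.pack(
              "!BBHHHBBH4s4s",
              version_ihl,
              0,                          # type of service
              20 + payload_len,           # total length
              0,                          # identification
              0,                          # flags / fragment offset
              64,                         # TTL
              17,                         # protocol: UDP
              0,                          # checksum (normally computed)
              socket.inet_aton(src),      # claimed source: sender-controlled
              socket.inet_aton(dst),
          )

      # Two packets to the same target, "from" two different made-up sources:
      a = ipv4_header("203.0.113.7", "198.51.100.1", 8)
      b = ipv4_header("192.0.2.99", "198.51.100.1", 8)
      assert len(a) == len(b) == 20
      ```

      Nothing here sends anything; it just shows that the source address is ordinary sender-writable data, not something the network attests to.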

      1. Bryan says:

        …more complex, and slower, *and far less reliable* than it already is.

        Yeah, please no…

    2. Steve C says:

      Because it’s hard and expensive to do. It takes a lot of resources to investigate these sorts of things. Law enforcement will likely catch the perpetrators… eventually. That won’t stop it from happening again at the hands of someone else… who will eventually also get caught. And then the time after that, and the time after that, et cetera. The police and government cannot *stop* crime. They can only punish after the fact.

      Your house analogy is apt. People tend to build things that are unsafe without minimum standards. Not everyone and not all the time, but it happens enough. We have housing codes so that when houses do catch fire, the people inside can make it out alive. Tons of houses catch fire every single day, with plenty of extra ones on Christmas. The idea isn’t to stop houses from burning but to stop them from becoming deathtraps. Online services should not design themselves so that if a door becomes stuck, everyone inside dies. It doesn’t matter whether the door is jammed or someone is illegally blocking it. It’s an obvious and predictable problem.

      It’s not unreasonable to expect businesses to design their products without a single point of failure for obvious and predictable problems, when those problems are a certainty. It doesn’t matter whether the cause is people or circumstance. You know it’s going to happen, so make allowances for it. Your house probably won’t catch fire. Just design it with the idea that it *might* and go from there.

    3. Ciennas says:

      Partly because while they try, it is hilariously easy to compromise a networked machine and build a botnet.

      There are script kiddies out there with illegal or grey-area programs that can subvert user machines and perform whatever illegal act the script kiddies desire, no technological knowledge required.

      Presumably the coders who make these UIs are handsomely rewarded.

      So the problem stands: it is so easy that a well-trained mosquito could do it. No matter how much capital you throw at prosecuting these guys, you can’t stop them without either squashing all copies of the software or rebuilding the network from the hardware up.

      Neither of which are remotely possible.

      Remember the scorn Sony and Microsoft got for cutting off their library from the older consoles? Imagine being told that EVERYTHING YOU OWN is no longer compatible or functional on the NewNet.

      Either all the companies would have to port and rework all the code in their pipeline, or otherwise help bankroll everyone’s transition, only for it all to be rendered moot if someone anywhere in the process left behind a hole that hackers WILL find no matter what.

      In short, we’ll have to keep patching the leaks in our current boat instead of rebuilding the vessels.

      Until somebody starts… I dunno, their own colony station or whatever, where there is no infrastructure already in place, I’m just not seeing it.

      1. DHW says:

        Well, but here’s the thing. Yes, it’s (relatively) easy to download your script kiddie kit and commit your crime. But it’s also easy, in fact much easier, to commit more mundane crimes like vandalism, mugging, and breaking and entering. That doesn’t mean we don’t have police try to stop those crimes, and it doesn’t mean the effort is wasted: criminals tend to be a very small percentage of the population, and effective policing that takes them off the street makes a huge difference. It’s also fairly well established that the higher the chance of punishment, the stronger the deterrent effect. When it comes down to it, black-hat hacking is not magic. It’s just crime like any other crime, and there’s no reason to think the people inclined to commit it won’t respond to incentives as well.

        At the very least, we could _try_ instead of instantly throwing up our hands and giving the Internet to the bad guys.

        1. guy says:

          The internet simply is not designed in a way that allows attacks to be reliably traced. Quite aside from the fact that law enforcement in the country with the target frequently doesn’t have jurisdiction in the country with the attackers.

          The criminal penalties for hacking actually are substantial, and there are law enforcement divisions dedicated to enforcing them. That has utterly failed to prevent black hat hacking.
