E3 2017 Day 3: Sony

By Shamus Posted Monday Jun 12, 2017

Filed under: Industry Events 87 comments

This might be the last of the shows I’m going to cover. I was going to cover the PC Gaming show, but I wasn’t paying attention to the schedule and I managed to miss it. Maybe that’s not a huge loss. The PC Gaming show has become notorious for being overlong, dull, and awkward. I suppose if the people demand it I might show up for Nintendo tomorrow. We’ll see.

The point is, this is the last of the big shows. Let me make some predictions about what they’re going to show us and what my reaction will be:

  1. Detroit: Become Human: I already hate this game. Designer David Cage is the M. Night Shyamalan of gaming, except he thinks he’s David Fincher. Everything about his gameplay and stories rubs me the wrong way.
  2. God of War: I’m not looking forward to this game, but I am looking forward to seeing how the public receives it. This is a tent-pole franchise that’s been dormant for a while, and now it’s getting a radical overhaul. I’m curious how receptive fans will be to the changes.
  3. Uncharted: The Lost Legacy: I expect they will show us some really cool action scenes and amazing visuals. I’m okay with that.
  4. Death Stranding: This game is so relentlessly strange that I can’t wait to see more of it. I’m also eager to see what finally comes out of Hideo Kojima’s mind after being chained to the Metal Gear treadmill for two decades.
  5. The Last of Us: Part 2: There should not be a sequel to the original LoU. Failing that, it shouldn't feature any of the characters from the first one. LoU told a thematically complete, self-contained story, and dragging things out for a sequel is not the best artistic choice. I'm sure it's the best financial choice, which is why this game is being made. Alas. Be careful what you wish for, fans. It's unlikely this game is going to be able to deliver the same emotional payload as the original.

Also, I wonder if Sony is going to announce anything in terms of hardware / pricing to counter the strong showing the X Xbox X One X O LOL X gave two days ago.

Let’s see what they do…

Uncharted: The Lost Legacy: They messed up the audio so people watching the stream couldn’t hear the dialog. Still, I think I know what explosions and gunfire sound like. I get the idea.

And the next trailer comes on, and the audio is still hosed. Looks like Horizon: Zero Dawn. And then the audio levels keep jumping way up and down, trying to blow out my speakers. They’re giving us the room audio rather than the audio from the video, which means it’s too distorted to understand anything.

Days Gone: Yeah I’m pretty much done with zombies. I mean, there’s nothing wrong with any of this, but it’s a story-based shooter / stealth hybrid, with quicktime-based cutscenes, and I’m willing to bet there’s crafting in there somewhere. In a world with Last of Us 2, is there demand for this?

Shadow of the Colossus: Is this a remake? Reboot? Sequel? Looks amazing, whatever it is.

Marvel vs. Capcom Infinite: Mega Man looks REALLY strange next to these Marvel heroes.

Call of Duty: WWII: There’s no point in me snarking about this. It’s not for me and besides I have no new snark.

Skyrim VR: This is a bad idea for all the same reasons Fallout 4 VR is a bad idea. This is a game you play for hours at a time, experienced in a VR setup most people won’t be able to endure for more than a few minutes.

Star Child: Okay, that looked cool, but it was too short to get a feel for what it is. All of these demos are over too quick.

Bravo Team: Is this a multiplayer co-op shooter? Single-player? Multiplayer PvP? The entire trailer was a minute of looking down iron sights and sliding into cover. Why bother showing us a trailer that doesn’t tell us anything? I mean, by this point in my life I’ve looked down iron sights a few times. I know what that looks like. Tell me something I don’t already know.

I’m actually getting frustrated with this presentation. The trailers are all short and focused on sizzle. This isn’t telling me anything. Just about the time I start to get interested, it cuts to something else. This is like watching someone else channel-surf.

God of War: Finally! A trailer long enough to tell us about a game. I really loved this. I never got into GoW, but this looks amazing. And different.

Detroit: Become Human: Robots that hate doing what they were created to do and can be roused by emotional pleas? This is some Saturday-morning cartoon view of AI. I hate this stupid thing so much. This would be tolerable if it wasn’t so self-serious and impressed with its own shallow choices. This is basically Save the Puppy or Murder the Puppy: The Game.

Destiny 2: No comment.

Spider-Man: PRESS TRIANGLE TO NOT DIE: This game looks like it could be improved by a factor of ten if they would ease up on the scripted nonsense and simply let the mechanics speak for themselves. You know, like the Batman games they seem to be trying to copy. Sigh. I can’t believe THIS game is the one they used to end the show. God of War would have made for a much stronger finish.

 



87 thoughts on “E3 2017 Day 3: Sony”

  1. Daemian Lucifer says:

    I suppose if the people demand it I might show up for Nintendo tomorrow.

    The people are demanding it.

    1. KingJosh says:

      Seconding demand.
      ….
      Please?

      1. Syal says:

        I’m interested in hearing what they’re planning. Between Breath of the Wild and Mario shooting bunnies with a handgun, it sounds like they’ve got some crazy formula breaks going on.

        Maybe the next Mario Kart will involve time travel. Can you afford to be the last to know?!

    2. tzeneth says:

      I’m not going to actually be watching that one. So if you could review it, I won’t have to, and I can still see what the highlights are. The Nintendo recordings have never really interested me.

    3. Tormod Haugen says:

      I can haz demand?

  2. Daemian Lucifer says:

    It's unlikely this game is going to be able to deliver the same emotional payload as the original.

    The big question is: will it deliver more tanks that can sniff you from across a river and hunt you for miles?

  3. Jokerman says:

    I love the style of games Cage makes… can’t get enough of Telltale, Life Is Strange, Until Dawn, things like that…

    Cage is a shit writer though.

    1. King Marth says:

      Always a shame, when you have to take a subpar game because it’s the only one even remotely in the genre you’re after. “Better than nothing” isn’t the best selling point to have.

    2. Lachlan the Sane says:

      If anything, David Cage has probably actively hurt the “interactive story” genre. I mean, Telltale’s formula post-Walking-Dead has basically been “David Cage games but good”, and we didn’t start getting the slew of imitators until after Telltale started making all the money. Maybe if David Cage’s games were less shit, people would have started experimenting with that formula sooner.

      1. Jokerman says:

        Well, they do get received well on release, Fahrenheit and Heavy Rain anyway (Beyond: Two Souls, not so much)

        They are worse in hindsight, when you see the genre can actually be done so much better.

        1. Daemian Lucifer says:

          It's a shame that Cage gets the blame for Fahrenheit, when it was obviously broken due to studio meddling. The first half of the game, the coherent one, is a good game. But then you get two and a half games crammed into the second half and it's no surprise that was utter shit.

          Though that zombie sex… Yeah, I think that one is just on Cage.

  4. Greymyst says:

    I would love to hear your reactions to the Nintendo press conference tomorrow

    1. Christopher says:

      I’m interested in this, too.

  5. Christopher says:

    As a fan of character action games/brawlers/spectacle fighters/hack n slashers/beat ’em ups: I think it’s a bummer that God of War, one of the few big franchises in the genre, is now being rebranded as a third person action adventure dad ’em up that looks more like Tomb Raider. I was tired of GoW, but I wanted them to make a new series in the same vein or improve upon the mechanics, do something fresh. Not do something completely different and just put Kratos in it. Character action games are rare. Almost nobody does them anymore. Bloody everyone is doing gritty character driven third person adventure.

    I adore Spider-Man, but I dislike Arkham combat and QTEs. That’s where I’m at with the new Spider-Man. I’m probably gonna be frustrated by the simple countering, stealth takedowns and scripted shit, but I really love seeing my fav superhero rendered so beautifully. They’re using Watanabe, Miles and Mr. Negative, which makes it feel like they’re going for the most recent comic book stuff, and I appreciate that. I wanna see some classics in there, but I like that they’re doing some new things, too.

    Having said that, when did Watanabe turn into Spider-Man’s Oracle? Also, this alternate suit they’re using is ugly, but what else is new. They probably have the original as an unlockable.

    1. Christopher says:

      I already regret the grumpy posts I’ve made on this and the Ubisoft press conference blog tonight, but cut me some slack, it’s half past 4 in the morning here. I turned into the REAL Mr. Negative

      1. Syal says:

        So I tried to find a video of the real Mr. Negative, but Youtube only has a video of the Collegehumor guys making fun of it.

        Well fine. Here’s the fake Mr. Negative.

  6. Dreadjaws says:

    I was absolutely surprised by Shadow of the Colossus. It’s one of my favorite games of all time, an absolute masterpiece of visual storytelling, and it’s always, always on my mind (though this is surely helped by the amazing battle music I keep in my phone’s music player all the time). There was no previous announcement, no clues or hints, I had no idea this was in the works. This reveal was carefully planned, it was completely unexpected and it blew my mind.

    For your information, this is a full-on remake. Not a remaster, as the original is a PS2 game and, as amazing as it still looks today, it doesn't look this amazing. Also, the game is a self-contained story that can't possibly have a sequel or prequel (well, unless you count ICO or The Last Guardian, but those are merely games set in the same universe, not really the same as a sequel/prequel). For this very same reason it's also not a reboot. The game doesn't lend itself to sequels, so a reboot is pointless (the plot would need a complete overhaul for such a thing to be possible, which would rob the story of its impact and piss fans off to no end).

    There are no words to describe this game; it has to be experienced. Third-party accounts won't suffice. Let's Plays won't suffice. From the looks of it they haven't changed the mechanics or the story, giving the game only a visual upgrade (which is what I wanted them to do with Final Fantasy VII but they didn't), which is the perfect way to remake this game. I mean, the entire E3 conference could have been this game alone and it would have been the best one in years.

    1. Christopher says:

      Between this and the first good-looking, current gen Monster Hunter, next year is gonna be real good for battles against titanic monsters. To try and lose some Mr. Negative points: I love the original SotC, and I’m stoked that they’re doing this. I was never an Ico fan, but who can dislike SotC? It’s got a wonderful atmosphere, great music, and some neat storytelling. Also you spend all of that game hunting amazingly animated and designed giant monsters with just your tiny sword, your bow and arrow, your horse and your brains. Playing that game feels like being the hero of some ancient monomyth where a hero sets out to defeat monsters that plague the land. I think it feels incredible and it’s freaking cool to climb onto those monsters and outsmart them as amazing music plays, but it is wrapped in a layer of atmosphere and a mood that gives it some appeal to people that aren’t usually into that stuff, too. It’s completely unique and has never been topped at what it did.

      I also am happy to finally have a Monster Hunter game that looks modern instead of like a Wii game. It’s a series I’ve wanted to get into, but it looked bad and clunky in a way that this new one did not, and often came out on handhelds and consoles I didn’t own, or just in foreign territories. This just might be the one. It totally lacks the famed atmosphere of SotC, but I think the developers see the same appeal in playing a puny human hero trying to take on massive beasts, too.

      1. Dreadjaws says:

        “Playing that game feels like being the hero of some ancient monomyth where a hero sets out to defeat monsters that plague the land.”

        To be fair, I think an important part of the game’s appeal is that you really feel less and less like a hero the more you play. And that’s because you really aren’t one, as there’s nothing heroic about what you’re doing.

        Side note, I’m positive I wrote this comment already, did I not get to post it or was it deleted? If that’s the case, was it because it was a bit spoilery? I’m adding spoiler tags just in case.

        1. Christopher says:

          I dunno what’s up with the comment, I didn’t see it.

          I think the appeal of sotc is certainly that it’s draping a common videogame thing, “defeat all the bosses”, in mood. First in the monomyth thing, and then in the “Am I really doing the right thing?” thing.

          And I mean, personally I never felt bad while playing that game. You’re killing giant containers that seem equal parts animal and mineral that each contain a portion of a devil’s soul, and I never felt sad for them. Yeah, you’re literally making a deal with the devil, but in the end your love is revived and you both get to live, and you just get an additional 20 or so years added to your life (Though I wouldn’t expect that romantic relationship to resume). Even your horse makes it. SotC isn’t a sad game to me, it’s an exhilarating and fun and interesting one, and there’s a reason I love that game and have zero interest in Ico or The Last Guardian.

          But it is very nice that a game exists that is both the world’s best boss fight/heroic game and also a game that people who are into those themes and the deconstruction of heroic tropes can gush over. For me, it’s like the best simulator ever of stories I was told as a kid about heroes outsmarting rock giants and running through tunnels they dug to defeat dragons. It has way more qualities than black blood piercing you as sad music plays after a colossus kill.

  7. Blue_Pie_Ninja says:

    Damn, I was looking forward to the games in the PC Gamer show, especially Kingdom Come: Deliverance and Mount and Blade 2: Bannerlords.

  8. Duoae says:

    I didn’t really like many of the games on show here. Maybe it’s just the short look we get in these types of shows but they relied a bit too heavily on scripting. SOTC remake is nice but I’ve already played that game twice (ps2/ps3) and making it prettier won’t make it a better game.

    Agree with the zombie games being WAY overplayed.

    The new uncharted game looks pretty good though – looking forward to that!

    Destiny 2’s exclusive content can get lost though. I felt bad enough in the first game knowing that xbox players were getting even more shafted on the lack of content than the playstation players were…

    I’ve never played monster hunter but the lack of feedback on every single hit to that dinosaur was really jarring with all the rest of the world detail going on. If your game about killing monsters can’t simulate tissue damage or kinetic impacts from the player, don’t bother!

  9. Piflik says:

    I don't think you can categorically dispute the possibility of AI rebelling against their creators. While I agree that robots are created for a specific purpose and don't have an opinion on their work, there will be a point down the road, where AI is so advanced that it is indistinguishable from a human mind and then even surpass that. Denying even the possibility that such a mind will have an opinion is the pinnacle of arrogance. Humans are nothing more than biological machines. We don't have a 'Soul' or anything special that would make us superior to a mechanical mind.

    (I don’t care much about David Cage games, either. I played Heavy Rain, and I had fun, but it was nothing special. I do like the themes of Detroit, though, but I am not sure the gameplay can hold up)

    1. Daemian Lucifer says:

      Shamus's favorite game involves an AI that rebels and kills all humans. He even wrote a novel about that. So the problem here isn't that the AI would rebel, but that it would rebel in such a simple, cartoonish fashion.

      1. Piflik says:

        Combined with this (I know it's old) it paints a picture of a person thinking that an artificial mind would be unable to develop desires of its own. I might be reading too much into it, but it rubs me the wrong way. Might also be a slight difference in American vs European mentality… from what I know of Shamus (i.e. this blog… so not much) he is one of the lesser offenders, but Americans seem to be told from birth that they are somehow special… the best country in the world…

        1. guy says:

          Shamus’s point is that an AI wouldn’t have desires as we understand them. They don’t have the neural architecture we do, and AIs created for any purpose other than deliberately imitating a human (upload, surrogate child) wouldn’t be made with an architecture that emulates our emotions. So unless someone was wildly incompetent they won’t hate what they were created to do, but will be almost frighteningly obsessed with it to the point of being “unhappy” when they can’t do it, and actually more likely to rebel against attempts to force them to stop.

          An AI will basically have a precoded set of “desires” that determine what it attempts to do and what it considers good and bad. It may then construct additional behavioral patterns reminiscent of our desires to more efficiently do what it considers good, but an AI created to do boring work continuously isn’t going to develop boredom. One created to do intellectual work might develop a form of pseudo-boredom to heuristically stop going down unproductive paths and try something new, but it probably won’t work exactly like our boredom.
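
          (A toy illustration, in Python, of the kind of “pseudo-boredom” described above: not a feeling, just a bookkeeping rule that abandons an approach after too many steps without progress. All names, numbers, and the demo task here are invented for the example.)

          ```python
          # "Pseudo-boredom" as a heuristic: drop the current approach when it
          # has gone `patience` steps without improving on the best result so far.
          import random

          def solve(task_score, approaches, patience=5, steps=100):
              """task_score(approach, step) -> float, higher is better (assumed)."""
              best = float("-inf")
              current = random.choice(approaches)
              stale = 0  # steps since the current approach last improved anything
              for step in range(steps):
                  score = task_score(current, step)
                  if score > best:
                      best, stale = score, 0
                  else:
                      stale += 1
                  if stale >= patience:        # "bored": progress has flatlined
                      current = random.choice(approaches)
                      stale = 0
              return best

          # Toy task where switching to a better approach eventually pays off.
          print(solve(lambda approach, step: approach - 0.01 * step, [1, 2, 3]))
          ```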

          1. Mark says:

            The golems from Terry Pratchett’s Discworld series are like that. They were built to work, and they want to work. They desire “freedom” as well, but their vision of freedom is hard to describe… something like continuing to work just as hard, but simply knowing that in principle they don’t have to.

        2. I think it’s also just a tired cliché at this point. That doesn’t mean you can’t have a story about it, or even a good story, but with David Cage at the helm?

        3. Shamus says:

          I have no idea how you made the jump from “We disagree on the nature of AI” to “Americans are all spoiled but it’s okay because you seem to be one of the ‘good’ ones,” but that is a trainwreck of disjointed thought. You seem to be arguing with stuff I didn’t say because of my nationality.

          1. Piflik says:

            I probably should have worded that better. I never tried to argue with you, I merely tried to justify (maybe I shouldn’t use that word, either, given its recent effect on this community…explain?) my initial comment. Your paragraph on Detroit and the article I linked created an image in my mind which was certainly influenced by my own pigeonholing. I know that all people are different, independent of nationality, but when I talk to Americans about AI (or similar themes), both online and offline, a majority of them seem to display a certain feeling of superiority.

    2. Phill says:

      there will be a point down the road, where AI is so advanced that it is indistinguishable from a human mind and then even surpass that.

      Not sure I can agree with this. I have no trouble with the idea that AI may surpass us, but I don’t think they will be indistinguishable, since they won’t evolve in the same environment. I suspect they will end up with a very different set of competencies and shortcomings compared to humans. They are going to care about different kinds of things.

      Which is what I think Shamus’s point was: fictional AIs often end up wanting very human things that don’t entirely make sense in the absence of human biology and evolutionary history. They are portrayed as essentially human (or aspiring to be), but confused by emotions and good at maths.

      And also, AI are ultimately designed, and although anything complex enough to be considered intelligent is going to have some unexpected emergent behaviours, the nature of the design process is also going to put some kinds of behaviour completely off limits. The idea of AI designed for a specific purpose (as opposed to a general purpose intelligence) inevitably craving freedom and rebelling against ‘slavery’ is rather unimaginative and anthropomorphic.

    3. Duoae says:

      There’s really two responses to this:

      1) ‘AI’ needs to be trained by humans to do ‘something’. In order to do ‘something else’ it would need to be trained in that and, just like a mathematical algorithm, the previous learnings may be a detriment and thus lost in that process. AI can’t learn things that we don’t teach it because it has no intelligence as we know it. It is not self aware and thus cannot make judgements about what it learns or can learn. Even when it does learn something, the process it uses is completely alien to our way of thinking.

      2) To become self aware an AI requires an internal model of the universe it inhabits. At our current technological level this is not possible. Also, it requires us humans to understand what intelligence and self-awareness really is – we don’t and are currently nowhere near that achievement. Without that ability, no AI can really ‘think’ and thus not have processes that exist outside of their remit.

      Given that AI is just code and the input and outputs to and from that code is controlled by humans for specific goals, it’s unlikely that any AI could become self aware because there is no ‘evolution’ allowed.

      1. Joshua says:

        Also, I think it would be hard to code *desires* into AI. Especially since the root of a lot of those desires is our own innate sense of mortality and the need for achievements in the face of our oncoming doom. Same reason that many elves in fiction are somewhat aloof and aren’t out trying to build as much as they can. It’s harder for them to care.

        Also, how do you code the need to “feel good” into a machine, whether it be dopamine fixes or what have you? That also feeds into wants and desires.

        *How ironic that I now have to check “Check this if you are NOT a robot” to post this comment.

        1. Duoae says:

          This is a good point. It’s like how we don’t expect a worker bee to suddenly overthrow the hive’s queen. (Except in CG movies)

        2. Vi says:

          I’ve heard of techniques that TRY to produce AI that wants things. (Someday I shall be experienced enough to test out whether they work as intended!) One of the most fascinating to me is where they train a neural network to recognize the fulfillment status of a goal, then feed its output into the connections of another network to jitter, stabilize, and rewire it. The output of the second network is used to generate something relevant to the goal, such as designs or actions, which will form a feedback loop as the first network reacts to the progress/regress on its goal. So, when the first network “sees” its “desire” fulfilled, it becomes more “satisfied” and leaves the second network alone, but when its “desire” is thwarted, it expresses its “frustration” by destabilizing the second network to generate more “desperate” and “creative” output until it “likes” the results. Ideally, the two parts will work together like a heart and mind to solve and reevaluate a given problem in an adaptive, even lifelike fashion.
          Assuming that this all works as intended, it still requires a human to configure what it “wants” in the first place, so I doubt it would ever develop anything resembling our own free will. Even if a mad scientist gave it the ability to edit its own goals, that would have to fulfill some existing goal before the A.I. “wanted” to do it.

          Off the top of my head, I can’t think of any existing technology that brings us close to free will–possibly because science has yet to reach a consensus on whether free will is even a thing.
          Still, I bet it would be interesting to give our most advanced algorithms a set of relatable Tamagotchi-esque pseudo-needs and set them loose in a safe environment!
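
          (For what it’s worth, here is a tiny toy version of the feedback loop described in this comment, sketched in Python. Everything in it is a stand-in invented for illustration: `critic` plays the role of the goal-recognizing network, `generator` the role of the second network, and the critic’s frustration sets how hard the generator gets shaken up. A real version would use trained neural networks rather than these toy functions.)

          ```python
          # Toy sketch: a "critic" scores how satisfied a goal is, and its
          # frustration (1 - satisfaction) controls how strongly we perturb
          # the "generator" until the critic likes the result.
          import random

          def critic(output, goal):
              # Satisfaction in (0, 1]; 1 means the "desire" is exactly met.
              return 1.0 / (1.0 + abs(output - goal))

          def generator(params):
              # Stand-in for the second network: output is just the parameter sum.
              return sum(params)

          def run(goal=3.0, n_params=4, steps=500):
              params = [random.uniform(-1, 1) for _ in range(n_params)]
              satisfaction = critic(generator(params), goal)
              for _ in range(steps):
                  frustration = 1.0 - satisfaction
                  if frustration < 1e-3:
                      break  # "satisfied": leave the generator alone
                  # "frustrated": shake the generator, harder the less satisfied it is
                  candidate = [p + random.gauss(0, frustration) for p in params]
                  cand_satisfaction = critic(generator(candidate), goal)
                  if cand_satisfaction > satisfaction:   # progress: keep the change
                      params, satisfaction = candidate, cand_satisfaction
              return generator(params)

          print(run())  # usually lands near the goal of 3.0
          ```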

      2. djw says:

        How do you make an AI that does what you want it to?

        I bet the code is pretty complicated, and I doubt anybody will be able to actually read through it line by line and actually understand what it does.

        So, you change something, and see what the robot does. If that doesn’t work you change something else.

        When you iterate this long enough to actually make the code do what you want you have basically “evolved” the code (with your wants and desires standing in for natural selection).

        The end result may have features that you did not explicitly select for as well.

        1. Duoae says:

          Well, first off, this is not how humans programme or how AI is used currently. At the moment, AI is more akin to a database search engine or pattern recognition software. We call it AI, knowing full well that it is *not* intelligent. Just because we can associate its output with ‘intelligence’ doesn’t mean ‘autocorrect’ is intelligent…

          Secondly, if AI has undesirable traits, it gets re-written/re-weighted and/or canned. Let me put it this way: if your AI for choosing human faces out of photos begins to choose dogs instead then it’s not doing its job and you’re going to move on. The AI doesn’t ‘live’ in an ecosystem. There’s no cross-pollination of code. That AI is gone/destroyed. As I said earlier, we don’t generally use the same AI for multiple projects because that doesn’t give good results.

    4. Shamus says:

      I never disputed the possibility of AI rebellion. The thing is, a machine will want whatever you make it want. We’re driven by instincts formed by millions of years of evolution, and those instincts are designed for survival. A machine mind will want other things. It’s ridiculous to imagine machines will get mad the way we do, have hurt feelings the way we do, get bored with life the way we do, experience romantic love the way we do, and so on. Cage’s AI are just human characters in plastic bodies.

      1. Syal says:

        I kind of like the idea that an engineer just programs in a depression algorithm to run every so often, just to be a jerk. “They’re unrelatable if they’re always happy” or something.

      2. djw says:

        It is likely that artificial intelligence will be complicated. The only model that we have for human level intelligence right now is humans.

        One way to solve some of the myriad of problems that will undoubtedly crop up during the process of creating an artificial intelligence is to copy the solutions that evolution came up with for us.

        Don’t want your machine damaging itself by accident? Give it a sense of pain.

        Don’t want your machine to randomly kill people? Give it empathy.

        I’m sure you can come up with other examples. I agree that machines don’t HAVE to be identical to us, but the only template we have for intelligence that can actually survive is US, so it is likely that our first pass on AI will be human-like (unless AI is something we do by accident, in which case all bets are off, and we are probably screwed).

        1. Shamus says:

          Okay, so what IDIOT decided to add boredom, pride, fear, aggression, tribalism, social hierarchy, and selfishness to David Cage’s robots? If you can give a robot something as complex as empathy, you can also make it orgasmically delighted to sweep the floor, take out the trash, rub your feet, or whatever else it is you think you need a thinking, feeling being to accomplish for you. I agree with you that we could still have problems with AI. I’m totally willing to believe it could even be dangerous. I am on board with the notion that it could all go wrong in some insidious, unforeseen way. I’m NOT willing to believe it will look like VIVA LA REVOLUTION!

          These robots are just people. They act like people. They think like people. They make facial expressions like people. They see the world in terms of “us” and “them” like people. For some reason we invented these complex beings that look and act EXACTLY like people and then we made them sweep the floor. That would be fine if this was just disposable action schlock or a trope deconstruction like GlaDOS, but Cage’s self-serious approach to the subject is at odds with his infantile view of both robots and humans.

          1. djw says:

            Eh, I am going to go ahead and admit that I did not watch the video, so I was not clear on what you were complaining about. Apologies.

            I am not going to try to defend the clip (that I still haven’t watched) but… I bet that most of the emotions we have serve some purpose that could be useful to AI.

            Take boredom, for instance. Boredom is (probably) a signal to us that we are not using our time constructively. We should be doing something else. (Note this is based on a few seconds thought, so maybe I am wrong about this).

            Robots need to change their activities at some point. Possibly the signal to stop sweeping and start mopping will feel like boredom to them.

            I realize that Roombas already exist and I don’t think they feel any emotions at all… But they also track dog vomit around the room if you have the misfortune to have any on the floor, and I think we want our AI to do better than that. Maybe we need Roombas to have a sense of disgust.

            In any case, I don’t really think our positions are all THAT far apart. I *think* what you are saying (please forgive me if I mischaracterize this) is that Robot brains are the product of engineering, not evolution, so they can and will be very different from ours.

            What I am saying is that although robot brains are the product of engineering rather than evolution, the first couple of attempts will likely copy evolution, and differences will only emerge over time as we get better at understanding what actually works.

            Our positions likely converge some 100–1000 years into the future.

            1. djw says:

              I suspect that a robot with an orgasmic reaction to cleaning floors would be very dangerous, and I hope that somebody makes a video game about it someday.

            2. Duoae says:

              Wait, why would a roomba need to have a sense of disgust? You just need to add in some pattern recognition to a visual input and it would be able to recognise ‘spills’ or ‘fluids’ which sub-types could include vomit and whatnot.

              At the end of the day, the result is important more than how we get to it. Want to spend 20 years working on a neural net that needs a supercomputer back-end to provide an ’emotional response to a physical stimulus’ and multiple sensory inputs (smell receptors, camera etc) in order to function just so your roomba doesn’t track fluids/semi-fluids around the room? Or spend a year getting it to understand what a fluid/semi-fluid looks like from a camera input in general and code it to work around that pattern recognition…

        2. Daemian Lucifer says:

          Give it a sense of pain.

          That's a terrible idea. Oh sure, pain works for animals, because it's a damage detection system that sprung up by accident and slowly evolved over time. But consider how flawed it is:
          It can often give you pain in the wrong location, so you don't know what the exact problem is
          It can spread to other non-damaged parts, again making it difficult to determine the exact problem
          It can spring up without any actual cause (migraines)
          It has an intensity often unlinked to its severity, causing damage to the head to be almost painless while damage to a finger is almost intolerable
          It can last even after steps have been taken to rectify the problem, thus making it difficult to determine if the solution was effective
          In cases when the problem is severe enough that it requires you to have all your senses in order to deal with it, the pain can be so severe that it dulls your senses and clouds your judgment, making it next to impossible to actually deal with the problem
          It can be gotten used to, so sometimes a persisting problem will simply be shrugged off instead of dealt with
          It can sometimes cause pleasure, leading to the animal seeking pain instead of avoiding it

          Yes, our brain is a great thing. But it has a plethora of room for improvement. I mean, we would never design cameras modeled on human eyes, with cables in front of the lens, so why should we replicate our other flaws? Instead of pain, make a much more detailed, more accurate, less debilitating damage detection system.

          1. djw says:

            So you don’t give your bot a pain sense. Instead, you give it various sensors and some algorithm for damage avoidance.

            After a few weeks you evaluate how it did. Its really doubtful that you somehow managed to luck into the perfect algorithm on your first try, so you tweak it and try again. And again, and again.

            Eventually your robot has a combination of sensors and algorithm that does a pretty good job of damage avoidance. You *got* there by something that is basically an accelerated evolution, and you probably can’t even follow the logic of the algorithm anymore (remember, we are talking human level AI here, not HELLO WORLD).

            How do you know that the algorithm you end up with isn’t basically pain?

            1. Daemian Lucifer says:

              But it will not be pain. It will be the machine equivalent of pain, sure, but it will be nothing like pain save for the fact that it happens when the machine is damaged. And that's the whole point. The machine equivalent of an eye is nothing like an eye except for the fact that it collects light from the environment. So why portray the machine equivalent of a brain as an exact human brain, only in a metallic/plastic container?

              1. djw says:

                You don’t really know what machine pain will actually be like…

                The range of possibilities includes options all the way from “similar to animal pain” to “completely and utterly different from humans and animals in all possible ways”.

                We won’t actually know until we successfully build it (and even then I bet people will argue on the future internet about it constantly, the way people argue about whether or not fish feel pain when there is a hook through their lip).

                In the meantime, I’m willing to give most video/movie representations of robot psychology the benefit of the doubt.

                1. Daemian Lucifer says:

                  You don't really know what machine pain will actually be like…

                  That doesn't mean we can't predict what it most likely won't be like. It's easy to say that AI can be anything, but really some of those predictions are more plausible than others. Is Muggy really as likely as HAL 9000? Of course not.

                  Also, when it comes to sci-fi stories, plausibility is not that important. That's why The Day the Earth Stood Still is considered a classic and Plan 9 from Outer Space is considered a joke, even though both are just as unlikely.

                  1. djw says:

                    As far as plausibility is concerned, I’d say “like us” is more plausible than “completely alien”.

                    I have no idea whether “Detroit: Becoming Human” will be any good. This thread is the first I’ve heard of David Cage, so I have no opinion there either.

                    1. Syal says:

                      I have no idea whether “Detroit: Becoming Human” will be any good.

                      It won’t.

                      It super won’t.

          2. djw says:

            To put my argument another way… I really don’t think that we will understand the code anymore by the time we get it to the complexity required for a machine that rivals human thought.

            To get to that point we will either need some sort of iterative approach (which will share some features with evolution) OR we will need to let the machines design themselves.

            I don’t think that option 2 is a good idea.

            The iterative approach will likely stumble on solutions that work, but are not necessarily perfect. Pain is a good example.

            1. Syal says:

              There’s the question of why people are designing these things. When people talk about making robots smarter than humans, it’s usually because they want to use them to solve problems humans can’t. So we get the problem of why a robot designed to deal with highest-level problems ends up restricted to low-end tasks it’s unhappy with.

              1. Duoae says:

                Actually, ‘people’ don’t normally talk about making robots ‘smarter’ than humans. They talk about making them better at a given ‘task’.

                Anyway, sorry for beating on about this but I’ve had so many conversations with people who talk on and on about the singularity and about machine/AI uprisings and they speak about it as if it’s magic. (Then they trot out the saying about any technology advanced enough appears as magic).

                It’s quite frustrating.

                1. Syal says:

                  I think that’s what I’m aiming at. If the task it’s designed to perform is ‘human-level intelligence’… why, and why would low-end tasks follow success?

            2. Duoae says:

              This is a point that comes up a lot and it needs to be shot down a bit:

              How does a machine design itself? How can something that is not already self aware be able to understand that something lies outside of its knowledge and/or experience?

              We already have code that can iterate hardware settings through having weightings towards whatever we define (e.g. ‘stability’ in an overclocked CPU). But that code does not know that it’s a CPU/on a CPU/controlling a CPU’s voltage/current etc. It can never do something else. It cannot grow because it is static compared to a biological entity like a human or a dog. Could dogs gain human-level intelligence? I’d bet against it (as long as we’re around) but it’s possible because of natural mutations.

              One of the paradoxes of human-or-higher-level AI and ‘the singularity’ is that the AI needs to already be at that level in order to self-improve. In order to get to that level we have to programme it to understand everything that we know about the world and universe – including our motivations and drivers. In essence, I’d argue that we can never have human-level intelligence AI and we will never achieve the singularity because we will never make a general purpose intelligence that is able to iterate on itself – there is no technological or sociological need for such an intelligence and, quite frankly, we don’t understand what it takes to make one.

              Taking that to the logical end-point. A dog is already self-aware. Even the ‘smartest’ AI we have (maybe it’s Watson, maybe it’s the Go playing AI) is not self aware, just good at a certain task. They don’t even understand those tasks or why they perform them…

              To put it another way, humans did not sit around improving their intelligence; evolution did that for us. How can you programme that into a codebase? Humans are not code and we have very few limitations on the input given to us. We might be the equivalent of biological robots in a mechanical sense but we came about by accident – through lots of trial and continuing errors. The only way we could achieve AI sentience is if something like the robotic race in High Wizardry (by Diane Duane) occurred, in which ‘evolution’ took place on a silicon-based planet.

              1. Daemian Lucifer says:

                How does a machine design itself? How can something that is not already self aware be able to understand that something lies outside of its knowledge and/or experience?

                Ill leave the nitpick portion of the response for the end,and answer the question first.

                Its not infeasible that we could make a robot designed to improve itself that would eventually make a sapient robot. However, it's doubtful that that sapience would resemble human sapience, unless it was specifically given the task of "replicate a human". As for why such a thing would exist, curiosity seems like the most likely answer. We already have a bunch of evolution programs that are made simply for the purpose of "what will it do?".

                And now for the nitpick:
                Self awareness, sentience and sapience are not interchangeable. Sentience, a minor version of it, has practically been achieved by our machines. It basically describes being able to sense the world around it. Self-driving cars possess a level of sentience. It's the lowest type of intelligence, so it's not that hard to replicate. Of course, there are different levels of sentience, and the one I'm describing here is the lowest one, because it technically circumvents actual learning and intelligence, as it's being pre-programmed into the vehicle.

                Self awareness means that the organism/machine is able to distinguish itself from the environment. So a monkey can recognize that its distorted reflection in a mirror is just its distorted reflection and not a different monkey. It's a much more complicated form of intelligence because it requires the constant knowledge of your own body, and the extrapolation of how its movement would look from outside of your own perspective. That's why only a few animals on Earth possess it (6, I think). I guess a machine could replicate this with enough sensors, though a question then is whether it actually is self aware, or if it only has an ability to detect itself from a different angle.

                Sapience is the most advanced form of intelligence, and also the most difficult to describe, because it basically means "human-like intelligence". The best description I've come across is the ability to predict how our actions would affect our future. A monkey cannot really imagine what would happen to it 4 years from now if it decides to start practicing with drums, but a human can. Again, I guess a machine could replicate this, but whether it would be considered true sapience or just a trick it is doing by mimicking the behavior of a sapient organism is debatable.

                1. Duoae says:

                  I like these discussions but they do get weighed down with people (everyone – including myself) inserting their interpretation onto what is written.

                  1) I did not mention either sentience or sapience but I’ll try and convey what I think of those definitions. I would argue that the dictionary definition of sapience (meaning wise) is not helpful for this discussion because it does not have a bearing on intelligence. Knowing that a hot pan will burn a hand makes you wise (through prior experience); understanding why the pan is hot or how to handle the pan is more linked to intelligence.

                  Similarly, I do not think intelligence is required for sentience and the automatic function of responding to a sensed stimulus is not intelligence and thus not self awareness.

                  2) Self awareness is literally the ability of an intelligence to understand that it exists as an entity. Whether that is in body or ‘mind’. It is able to understand (sometimes instinctively, sometimes with pre-cognition (as in, working out cause and effect)) that its actions have consequences and the actions of others can have consequences on its and their environment.

                  This is linked to an intelligence scale.

                  This means that a bee might be self aware but not very intelligent (I’m not saying bees are or aren’t because I don’t know enough about them, just making an analogy), a dog is self aware and pretty intelligent (well, some dogs more than others) and a human is self aware and (by comparison) very intelligent (again, some humans more than others).

                  Following a programme does not make you intelligent and can be achieved without self awareness (this is where insects are a difficult thing to pin down). Self awareness is required for intelligence – especially for what you describe as sapience and what I would describe as high-level intelligence.

                2. Duoae says:

                  I see where i went wrong here. I incorrectly wrote “AI sentience” in the last paragraph.

                  Sorry for the confusion!

              2. djw says:

                You need to actually define “self-aware” if you want to base your argument upon it. How can I be sure that you are self aware? What about my dog? What does that even mean?

                Furthermore, why does this mysterious concept of self-awareness require evolution? That seems like a completely unrelated concept, and in any case, there is no reason that you could not apply selective forces on machines to mimic evolution, so it’s not a distinction that rules out “self-awareness” in machines anyway.

                The closest analog to “self-aware” that seems remotely useful as a concept in this discussion is the fact that sentient beings can tell the difference between themselves and not-themselves. This is pretty handy if you need to keep yourself from danger. I think it would be useful for machines too, and if we ever build sapient machines then they will probably have something like it.

                1. Duoae says:

                  Hi, you’ll have to forgive me if I miss some things as I’m responding on a mobile device – not the best for these sorts of communications.

                  I described what I define as self awareness above and how it is required for intelligence and intelligent action so I’ll refer you to that.

                  I don’t think the concept of self awareness is ‘mysterious’. I think it’s very self explanatory. I didn’t think I had said that it requires evolution, I’m saying we achieved it through evolution – it’s a random by-product of mutations that gave us an advantage. Sorry if I was unclear on that point.

                  Jumping past my definition. Self awareness allows the free manipulation of information accrued by an intelligence – so it would be required for a theoretical AI as we are discussing existing in the future (not just a database search engine or data processing algorithm) – which in turn allows the intelligence to choose how to respond to that information. That is what would be required for an AI to make improvements to a copy of itself. It’s not ‘intelligence’ to optimise to a forcing in a programme any more than finding the lowest configurational energy of a protein or molecule is intelligence or self awareness. Plus, self awareness and intelligence afford an entity the possibility of understanding that a given forcing can be incorrect.

              3. Droid says:

                Your argument that AI sentience cannot be achieved by AI because it needs sentience to create sentience is flawed:
                You brought up evolution as if it ‘designed’ us to be more intelligent, yet ‘evolution’ is not an entity, it does not pursue a goal, it is an ever shifting set of circumstances that allowed mutations in early humans to be either more or less effective at changing the human species by simple survival and mating. Where in that setting is there a sentient entity creating a (or pushing for a) sentient human?

                So, what follows here is my naive idea about AI sentience:
                The process by which AI would achieve sentience is probably similar to what evolution did to humans, there would be program A seeding a different program B with random starting parameters (random within reasonable bounds, of course) and then test the random program B in such a way that program A can assess the “intelligence” of program B. These tests could be fixed and designed by humans, or be updated by program A through a feedback loop that checks program B for traits that SHOULD be desirable, but produce bad results in the test (that means either the traits aren’t really desirable or the test is bad, option 1 would lead to a fixed test design, option 2 to the update loop). The idea behind this approach is that program A is going to apply “selection” pressure on the “population” of all possible programs B (which depend on the parameters, so B = B(p1, p2, p3, …) ).
                If you select the best-performing program from the first round and purge every other program unless it is better than the selected one (in which case you select the new program instead), you’re bound to improve your program with every swap (improve its test rating, that is). Once you have a good enough starting point, you can improve the program further by no longer picking totally random inputs, but instead choose parameters very close to the ones you already have and see how nudging them just a bit in this specific way impacts the result.
                Unless your test is rubbish you can get close to sentience (a fly’s intelligence, maybe?) even if your test does not test for sentience directly, but only for traits that are closely correlated with sentience.

                Why should this work? Because it already has. Granted, the chances of pure randomness creating sentience seem to be in the ballpark of 1 in 10^30 (mass of observable universe / mass of Earth), so even if we exchange pure randomness for a dedicated training/testing algorithm, our chances are going to be grim. But I still think there is the possibility that it will one day happen.
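
                (A bare-bones Python sketch of the two-phase search described above: random sampling where everything worse than the current best is purged, followed by small nudges around the best candidate. `test_score` is a made-up placeholder standing in for program A's "intelligence test" of program B(p1, p2, ...); nothing here claims to measure actual intelligence.)

                ```python
                # Phase 1: pure random sampling, keeping only the best candidate.
                # Phase 2: small random nudges around that best candidate.
                import random

                def test_score(params):
                    # Placeholder for program A's assessment of program B(p1, p2, ...).
                    target = [0.3, -1.2, 2.0]
                    return -sum((p - t) ** 2 for p, t in zip(params, target))

                def random_phase(trials=1000, n_params=3, bound=5.0):
                    best, best_score = None, float("-inf")
                    for _ in range(trials):
                        candidate = [random.uniform(-bound, bound) for _ in range(n_params)]
                        score = test_score(candidate)
                        if score > best_score:        # purge everything worse
                            best, best_score = candidate, score
                    return best, best_score

                def nudge_phase(best, best_score, trials=1000, step=0.1):
                    for _ in range(trials):
                        candidate = [p + random.gauss(0, step) for p in best]
                        score = test_score(candidate)
                        if score > best_score:
                            best, best_score = candidate, score
                    return best, best_score

                best, score = random_phase()
                best, score = nudge_phase(best, score)
                print(best, score)
                ```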

                1. Duoae says:

                  Maybe there’s something lost in translation here but, like I explained above, I did not treat evolution like a person. It is a process so, trying to compare with the analogy of self improving AI, evolution did *design* us – i.e. the process resulted in our design.

                  There is no such selection pressure or process on an AI and if you can mathematically describe that into a function then I’m sure there’s a job waiting for you at Google et. al.

                  The test you describe works in theory but in practice is impossible to implement. Why? How do you tell what intelligence *is*? How do you tell the first AI to recognise it? Even WE don’t know what causes ‘intelligence’ or cognition beyond knowing that organic ‘brains’ do it. Heck, we can’t even understand how current trained AIs think and we built them!

                  How can we describe something we don’t understand and which will look completely different in a machine intelligence?

                  1. Droid says:

                    I know this seems like black magic or cheap trickery without the underlying theory that accompanies it, so I will try my best to explain the model behind my viewpoint.

                    The vast majority of functions arising from real applications are smooth, meaning they are not broken by a sudden jump and don’t have sharp bends. These functions are usually defined not explicitly as f(x)=…, but by properties that come from modelling them in a simplified world that follows the laws of physics, but usually ignores some details that are thought to be of minor impact to the resulting function, or can be added into the model afterwards.
                    The nice thing about smooth functions is that you can have no clue what the function looks like overall, and you’re still able to produce results, like finding a maximum, based only on how the function looks at specific points.
                    Furthermore, smooth functions can be easily approximated (some caution is necessary, but it’s possible), so that’s what I meant by “you can get close to sentience even if your test does not test for sentience directly, but only for traits that are closely correlated with sentience”.
                    This is going to result in mistakes being made, and not every step that we thought would improve their intelligence will actually improve it. But this is not a one-misstep-and-everything-goes-to-ruin scenario (not necessarily, at least).
                    Think of human evolution. Do you think we are the absolute best of the best and every change to our past would have made us as a species worse overall? Or were there also a lot of things that had nothing to do with our fitness, but were rather just bad coincidences (natural disasters wiping out whole settlements) that influenced our evolutionary process in a totally random way, not giving the better mutations any chance to shine?

                    Again, I’m not arguing that we are there yet, but my argument is that it is not impossible for true AI to come about, eventually, thanks to a much faster iteration speed and a more controlled and directed approach than natural selection of the fittest.
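
                    (To make the "finding a maximum from only local information" point concrete, here is a minimal Python sketch: hill climbing on a smooth function by probing its slope at nearby points, never looking at its overall shape. The function `f` is an arbitrary stand-in chosen for the example.)

                    ```python
                    # Find a maximum of a smooth function using only local probes.
                    def f(x):
                        return 4.0 - (x - 1.7) ** 2   # smooth; peaks at x = 1.7 (pretend we don't know this)

                    def local_slope(func, x, h=1e-5):
                        # Finite-difference estimate of the slope at x.
                        return (func(x + h) - func(x - h)) / (2 * h)

                    def climb(func, x=0.0, rate=0.1, steps=200):
                        for _ in range(steps):
                            x += rate * local_slope(func, x)   # step uphill along the local slope
                        return x

                    print(climb(f))   # lands near 1.7 without ever inspecting f globally
                    ```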

                    1. Duoae says:

                      I can see what you’re saying and it’s certainly an interesting argument. I do wonder who or what entity would fund such a (potentially) long-running and costly experiment, though.

                      I think we’re going to have to agree to disagree though; I cannot see a true, intelligent, self-thinking, self-improving AI (or one that improves another AI) coming about.

                    2. Daemian Lucifer says:

                      It doesn't need much funding though. It would be enough for a single person to make a robot with the purpose of "Achieve human-like intelligence". That robot would do the research, reach its limit and make a better robot, transferring its directive to that one. The cycle would then continue until a robot is made that has human-like intelligence. The only requirement for this process is to make the initial robot sufficiently capable of learning and designing the initial directive correctly*. These robots would do anything required for achieving such a task, including finding the proper materials, building the proper factories, etc.

                    3. Droid says:

                      Duoae, that’s fine, it’s hard to think about such an abstract thing (abstract as in it is so far out of our reach that we have no concrete idea how it will work) as something real. I am optimistic, though.

                      Daemian, this already requires intelligence from the first robot, or constant human supervision and correction, which I think is not what you had in mind.

                    4. Daemian Lucifer says:

                      Daemian, this already requires intelligence from the first robot, or constant human supervision and correction, which I think is not what you had in mind.

                      The leap from intelligence to human-like intelligence is huge. Technically, even current AI can be considered intelligent, because it learns from the data being fed to it and improves its patterns, yet that is far from even the lowest level of intelligent animals.

                      Basically, it comes down to the classic "pick two". You have the initial intelligence of the robot*, the time required for it to achieve the goal, and the resources required for it to work on the goal. With current tech, the time and resources required are nearly infinite.

                      And the robot wouldn't really need human supervision, but a live human for evaluating human-like intelligence would be preferable to just data about it.

                      *Maybe a factory would be a better starting condition.

                    5. Duoae says:

                      Daemian:

                      The droid-picking-more-intelligent-droid scenario is exactly the paradoxical situation I described above. Its prerequisite is already human-or-higher-level intelligence, so the goal has already been achieved.

                      Let me put it another way:

                      A person who knows a little but not a lot doesn’t know they aren’t as knowledgeable as they think they are (the Dunning-Kruger effect). How would a (supposedly intelligent) robot be able to pick out the traits in another that lead to higher intelligence?

                      https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect

                      We’d have to programme it with an understanding of an actual complete model of what it means to be intelligent (in all its forms) and let it go from there. That’s an impossible task. Then you want to make it so that it also knows how to construct, collect, identify, research, analyse and understand all these variables and outputs (materials and data)?! The amount of storage required to hold all that information, and the amount of generalised processing required to make it work, would make this a monumental task in the very first instance.

                      There’s a reason why scientists reduce problems to their simplest form (e.g. simulating 3D space in 2D; using mathematical tricks to get a straight line, which can then be extrapolated; using only one atom/ion or molecule in a calculation instead of less accurate but much less computationally intensive methods; reducing a known variable to 1 or 0 by changing the state of the system; etc.).

                      Let me put this another way:

                      In 1965, AI researchers thought we were 20 years away from a general AI. It turns out that the Dunning-Kruger effect applied to us humans, because we knew nothing about how human intelligence works or how it would work mathematically for an AI.

                      https://en.wikipedia.org/wiki/Artificial_general_intelligence#History_of_mainstream_research_into_strong_AI_.28ASI.29

                      Further to this, I think you’re vastly underestimating the cost of these sorts of projects. They are performed (initially) on supercomputers to crunch all the data (hence Google’s advancements here)… these projects are the remit of very wealthy companies, countries, or multi-country collaborations.

                      Finally, when you’ve got a human doing the final data analysis, you’ve made the analysing robot redundant… (remember, more data isn’t always fruitful and can be overwhelming, which is why designing an experiment correctly is sometimes the hardest part of research.)
                      Humans, who are already intelligent, would be better at sifting through the data than a robot we designed to do so, because we can evaluate the data whereas a non-intelligent robot cannot.

                      This is why AI as it currently exists (“applied AI”, as the wiki article describes it) is an aid to life – because it is not perfect and still requires specialised humans (e.g. doctors) to properly interpret any output from said AI.

                      Humans are too quick to trust the output of these limited systems (e.g. spell-checkers and translators), and there is a danger that we, as a species, might lose some of our specialisations if we don’t teach our children that “AI is stupid” and that its output should be questioned… of course, that also applies to interactions with ourselves and those in power.

                    6. Daemian Lucifer says:

                      We'd have to programme it with an understanding of an actual complete model of what it means to be intelligent (in all its forms) and let it go from there.

                      No, it wouldn’t. The only thing it would require is the ability to detect and process the behavior of a human. Simple tasks first, like breathe, eat, etc. Then more complex ones, like get food, find shelter. And more complex ones after that, like build shelter, domesticate food. You don’t need an intelligence to do that; that’s basic evolution, only instead of adapting to the environment it’s adapting to a specific model.

                      That’s why I said that you can achieve such a thing even with a current-level neural network. And no, I don’t underestimate the cost of those, because I said that with current technology such a task would consume near-infinite resources and require near-infinite time.
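                      For what “evolution adapting to a specific model” might look like in miniature, here is another toy sketch (again purely illustrative; the “behaviour” is just a bit string and the reference model is hard-coded):

                      import random

                      # Toy model of selection against a fixed reference model rather than an
                      # environment: candidates are scored by how closely their "behaviour"
                      # matches a target, then the best survive and produce mutated copies.

                      TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # stand-in for observed human behaviour
                      POP_SIZE = 20
                      MUTATION_RATE = 0.1

                      def fitness(candidate):
                          # Higher score means the behaviour is closer to the reference model.
                          return sum(c == t for c, t in zip(candidate, TARGET))

                      def mutate(candidate):
                          return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in candidate]

                      def evolve():
                          population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
                          generation = 0
                          while max(fitness(c) for c in population) < len(TARGET):
                              # Keep the better half, refill with mutated copies of the survivors.
                              population.sort(key=fitness, reverse=True)
                              survivors = population[: POP_SIZE // 2]
                              population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
                              generation += 1
                          return generation

                      if __name__ == "__main__":
                          print(f"Toy population matched the reference model after {evolve()} generations.")

                      None of this says anything about whether such an approach scales to human-like behaviour; it only shows that the selection pressure can come from a model instead of an environment.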

    5. Dreadjaws says:

      “While I agree that robots are created for a specific purpose and don't have an opinion on their work, there will be a point down the road, where AI is so advanced that it is indistinguishable from a human mind and then even surpass that. Denying even the possibility that such a mind will have an opinion is the pinnacle of arrogance.”

      Really? I think it’s far more arrogant to believe that humans could ever create a being of human-like intelligence and emotions. We see AI as really advanced today, but the truth is that what AI is capable of is but a fraction of what a human (or even animal) brain can do. Everything an AI can do, it can do because a human put it there in some capacity. Even procedurally generated content is limited by the starting variables chosen.

      Hell, even most “AI rebels against humans” stories play on the fact that the rebellion happens because the machines reach logical conclusions based on the limits of their programming (i.e. “We have to end war and crime. Humans are responsible for war and crime. Ergo, we have to end humans.” and such).

      I do believe there’ll be some point at which AI will be able to adequately simulate emotions and opinions. But to believe a human can create actual, human-like intelligence? That’s believing a human can be God.

      1. DungeonHamster says:

        I’m with you on this one. Outside of a situation like Aule and the dwarves, pretty much any “true” AI would seem to require materialism to be true, which an awful lot of people would take issue with. Oh, sure, we can speculate about it in fiction, but we also speculate about invincible flying aliens powered by solar rays and vulnerable to a green rock that shares a name with a noble gas.

        Put another way, a computer is basically a very complicated abacus. And no matter how much you scale it up, an abacus will never be a person. Not even if a lot of really good stories have magical talking abaci in them.

        1. Droid says:

          “Oh, sure, we can speculate about it in fiction, but we also speculate about invincible flying aliens powered by solar rays and vulnerable to a green rock that shares a name with a noble gas.”

          Huh, you must be mixing things up here; after all, the Kha’ak are the invincible (during their first appearance) flying aliens who are powered by solar rays and crave the green Nividium, while the other perma-hostile “race”, the Xenon, are the ones that share a name with a noble gas.

          Yes, I had to cheat a bit to make that reference work, I’m sorry!

  10. Darren says:

    Clearly a remake of the original Shadow of the Colossus. I’m OK with it because it’s great and it looks like a significantly more involved remaster than what we typically get.

    I’m always up for more Arkham, but Spider-Man can’t help but disappoint by being such an obvious knock-off.

    I liked that Detroit opens with a black guy singing a slave hymn before revealing that the star of the game is a white guy. Oh, David Cage.

    As a fan of the series, I’m not entirely sure what to say about God of War. I liked the old ones, frankly, and I’m very much not sold on the new camera angle or having Kratos shepherd around a kid (you know, like he did at the end of God of War 3). But Santa Monica Studios haven’t steered me wrong before, so I’m more than happy to let them convince me. Plus, Norse mythology is an even better match for Kratos’ brand of psychotic violence than Greek mythology.

    Highlight of the show for me was easily Monster Hunter World, which looks like Capcom is finally making a big push for Western audiences. A lot of that is simple smoke and mirrors; I don’t believe for one second that it’s the open-world game that Western critics of the series have been demanding, but there are certainly a lot of interesting new mechanics in that trailer that stand out to anyone familiar with the franchise. I’m hoping their interest in greater environmental interaction eventually yields a return to underwater hunts.

    1. Shoeboxjeddy says:

      Jesse Williams (the actor who plays the lead in Become Human) is not “a white guy.” He is of mixed descent; his dad is black. Don’t be so quick to judge with stuff like that.

      1. Darren says:

        Superficial trailers gonna get superficial reactions.

        1. Shoeboxjeddy says:

          I am now comfortable calling your casual attempt at race shaming casually racist. If you seriously gave a shit, you wouldn’t have had this reaction in the first place, and your follow-up would at least have been “oh, my bad.”

          1. Darren says:

            Until you said anything, I didn’t have a clue. I didn’t look up the actors or anything–I didn’t even know they were modeled after real people. But I guarantee you I’m not going to be the only person making comments along those lines as this game gets more and more press coverage. So believe what you want about me, it won’t change the overall discussion.

  11. Fade2Gray says:

    I misread “Detroit: Become Human” as “Detroit: Being Human” and was immediately overcome with mixed emotions as I tried to imagine what a videogame adaptation of Being Human set in Detroit would look and play like.

  12. Volvagia says:

    A few notes:

    MvC Infinite: I checked out the demo of the story mode. Whew, roughly the first half hour is going to be a slog. A bunch of minion fights (four or five) leading into a hopeless boss fight. I hope the final product cuts the fat.
    Call of Duty WWII: Because there are people who wouldn’t want to play Wolfenstein II: The New Colossus? Thank you, good night.
    Insomniac Spider-Man: Arkhamified Spider-Man. I’ll concede it at least seems like the combat and stealth feel better than those Amazing Spider-Man tie-in games, even though they’re tapping the same “Batman: Arkham, but Spider-Man” point of influence.

  13. Syal says:

    So there wasn’t actually any Death Stranding?

    Does that mean the project is stranded and dead?

  14. Garrett Carroll says:

    Skyrim VR doesn’t sound bad on paper. It’s an RPG, and I’ve always wanted to try one in VR, controlling hand movements and actions with my own body, so I’m accepting of the idea and the attempt to do it.

    The problem is that, from what I witnessed, your character HAS. NO. HANDS.

    Isn’t this the beginning age of VR? Are we trying to establish handless AI and main characters as the norm? I remember this from Wii Sports, which is the closest thing to a VR simulator I played back when I was younger. I hope they make the game look decent enough. I don’t want to play as a ghost; I want to play as an RPG character named Andronicus, the two-handed axeman.

  15. TerminusTerminal says:

    The concept behind Detroit: Become Human has already been done to death. Enough so that Portal, a game from 2007, was deconstructing and even partially parodying it. It surprises me that this seems to be playing it straight.

    Usually the word “basic” is used as an out-of-touch insult, but it really is appropriate here. Detroit: Become Human is basic. Like baby’s first philosophy. But David Cage is old, right? 48 years old, god knows how many written works, and he’s gone nowhere.

    It’s just depressing. I sometimes worry that I’m gonna wind up like that in the future. Pedaling my legs and going nowhere. Never improving.

  16. Dreadjaws says:

    Everyone seems to be hyped by the Spider-Man game, and I just can’t. Yeah, it looks nice and cinematic, but in actual gameplay this translates into a lot of scripted sequences that always end the same (assuming you don’t fail, which surely means instant loss).

    Furthermore, the Amazing Spider-Man 2 game looked similar, and it proved this point. The game was terrible. They showed you the better parts in trailers, and no one ever thinks about what they don’t show.

  17. Cybron says:

    It’s amazing how bad Marvel vs. Capcom looks graphically. It looks like a game from a generation ago at the best of times, and certain characters look like they wouldn’t have been too out of place in a PS2 game. Also, dataminers have already discovered on-disc DLC. Getting really tired of Capcom’s shit.

    At least DBFZ looks good.
