
Always a strange trope when a game "in the future" has worse "in game" digital rendering than the game itself. I think the effect looks neat, but I can't help but laugh at the idea that rendering technology is so horrible in 2077 that you actually see pixels when editing video.
avatar
Merranvo: Always a strange trope when a game "in the future" has worse "in game" digital rendering than the game itself. I think the effect looks neat, but I can't help but laugh at the idea that rendering technology is so horrible in 2077 that you actually see pixels when editing video.
It's not a video ...
I'd argue it's because "editor mode" detaches you from the video source (the BD rig on whoever's doing the filming). Arguably the things directly in their focus could still be sharp and pretty, but all the things out of direct line of sight would wind up invisible. Maybe going pixelated is the happy middle ground they settled on for the visuals.
avatar
wingchild: I'd argue it's because "editor mode" detaches you from the video source (the BD rig on whoever's doing the filming). Arguably the things directly in their focus could still be sharp and pretty, but all the things out of direct line of sight would wind up invisible. Maybe going pixelated is the happy middle ground they settled on for the visuals.
BD requires a lot of space to record and a lot of processing power to record in high quality.
avatar
wingchild: I'd argue it's because "editor mode" detaches you from the video source (the BD rig on whoever's doing the filming). Arguably the things directly in their focus could still be sharp and pretty, but all the things out of direct line of sight would wind up invisible. Maybe going pixelated is the happy middle ground they settled on for the visuals.
It's just the trope to me; I mean, even some of the holo displays use a similar tactic. The devs want to make a visual distinction between "reality" and "virtual", and pixelation tends to be a favourite.

Of course, I think it would have been awesome if they had used depth of field in this situation: instead of pixelating everything out, it blurs out. This would also allow some optically interesting effects, such as how what the BD recorder is focusing on would likely have more permanence than what you as the editor focus on. So even after the BD recorder moves on, she might be thinking about an object she was looking at before, and that would remain in focus for a while longer.

I mean, I get it, but you could definitely have some better fun with this than just pixelation.
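For anyone curious what "depth of field instead of pixelation" would mean in practice: the blur amount usually comes from the standard thin-lens circle-of-confusion formula, where objects at the focus distance stay sharp and everything else blurs in proportion to how far it is from that plane. A toy sketch (all the lens numbers here are made up for illustration, not anything from the game):

```python
def blur_radius_px(obj_dist, focus_dist, focal_len=0.05,
                   aperture=0.025, px_per_m=40000.0):
    """Circle-of-confusion diameter (in pixels) for an object at
    obj_dist metres when the lens is focused at focus_dist metres.
    Standard thin-lens formula; objects exactly at the focus
    distance get zero blur, and blur grows with distance from it."""
    coc = (aperture * abs(obj_dist - focus_dist) / obj_dist
           * focal_len / (focus_dist - focal_len))
    return coc * px_per_m
```

With something like this, the editor's (or the recorder's) point of attention sets `focus_dist`, and the renderer blurs each fragment by its computed radius instead of pixelating it.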
avatar
Merranvo: Always a strange trope when a game "in the future" has worse "in game" digital rendering than the game itself. I think the effect looks neat, but I can't help but laugh at the idea that rendering technology is so horrible in 2077 that you actually see pixels when editing video.
I remember hearing in the game that V is not using the best gear for viewing and editing BDs.
Well, the tech in CP2077 isn't an upgrade from our own tech. It's based on Mike Pondsmith's vision when he created this setting (it's retro-futuristic).

There are no flash drives/pen drives, just shards (basically memory sticks), for example. Also, Johnny had his cyber-arm already in 2013 as a commercial product (even if it was a prototype). We barely see that today in 2020, and it's still not as agile and responsive as it is in the game...

You should be able to download the original CP2020 sourcebook if you own the game legitimately, so you can see for yourself how dumb some predictions were in hindsight (for example, the original setting had basically no WiFi connection to the Net, but you could install various cyber augments).
avatar
Archonsod: It's not a video ...
Truth. It's a memory. That's how memories look. At least that's how they've chosen to portray them. Think of a memory in your head. It's not clear high def 4K ultra definition lol.
avatar
wingchild: I'd argue it's because "editor mode" detaches you from the video source (the BD rig on whoever's doing the filming). Arguably the things directly in their focus could still be sharp and pretty, but all the things out of direct line of sight would wind up invisible. Maybe going pixelated is the happy middle ground they settled on for the visuals.
avatar
Merranvo: It's just the trope to me; I mean, even some of the holo displays use a similar tactic. The devs want to make a visual distinction between "reality" and "virtual", and pixelation tends to be a favourite.

Of course, I think it would have been awesome if they had used depth of field in this situation: instead of pixelating everything out, it blurs out. This would also allow some optically interesting effects, such as how what the BD recorder is focusing on would likely have more permanence than what you as the editor focus on. So even after the BD recorder moves on, she might be thinking about an object she was looking at before, and that would remain in focus for a while longer.

I mean, I get it, but you could definitely have some better fun with this than just pixelation.
Don't mention the psychological tricks used in games to make gamers feel better at something they're not, like aiming on a console with that constant magnetic pull effect towards an enemy to make them feel good about being crap.

Competitive gaming is dead because of console players.
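The "magnetic pull" mentioned above is usually just a small blend of the aim vector toward any target inside a narrow cone. A minimal sketch of the idea (the cone angle and pull strength here are invented for illustration; real games tune these per weapon):

```python
import math

def aim_assist(aim_dir, to_target, pull_strength=0.15, cone_deg=10.0):
    """Nudge a normalized 2D aim direction toward a target direction
    that lies within a small cone around the current aim.
    Targets outside the cone get no pull at all."""
    dot = aim_dir[0] * to_target[0] + aim_dir[1] * to_target[1]
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    if angle > cone_deg:
        return aim_dir  # target outside the assist cone: no pull
    # Linear blend toward the target, then re-normalize
    x = aim_dir[0] + pull_strength * (to_target[0] - aim_dir[0])
    y = aim_dir[1] + pull_strength * (to_target[1] - aim_dir[1])
    n = math.hypot(x, y)
    return (x / n, y / n)
```

Every frame the aim creeps a little closer to the target, which is exactly the "sticky" feel console players get without ever seeing it happen.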
The braindances are supposed to be depth information pulled from a single source (the person).

Similar to how Tesla cars' cameras can generate an idea of what is around them in 3D space since each pixel is judging depth.

youtu.be/HM23sjhtk4Q?t=156
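That "each pixel is judging depth" idea has a concrete form: once you have a per-pixel depth estimate, the pinhole camera model turns the flat image into a 3D point cloud, which is plausibly what a BD editor would be scrubbing through. A sketch (the camera intrinsics here are placeholders, not values from any real system):

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy):
    """Turn a per-pixel depth map (H x W, metres) into an (H*W, 3)
    point cloud using the pinhole camera model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth
    where (fx, fy) are focal lengths and (cx, cy) the principal point."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Anything the recorder never looked at simply has no depth samples, which fits the fuzzy, incomplete look of the edit mode.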
avatar
BlackKnightSix: The braindances are supposed to be depth information pulled from a single source (the person).

Similar to how Tesla cars' cameras can generate an idea of what is around them in 3D space since each pixel is judging depth.

youtu.be/HM23sjhtk4Q?t=156
That's a good point.
I don't think that's the reason though.

I'm guessing it's a stylized thing.
I was going to say it's probably for realism since you can't see what was there and would just have an idea.... but you can't see what's straight in front of them without the fuzz either when in edit mode. :-o
avatar
BlackKnightSix: The braindances are supposed to be depth information pulled from a single source (the person).

Similar to how Tesla cars' cameras can generate an idea of what is around them in 3D space since each pixel is judging depth.

youtu.be/HM23sjhtk4Q?t=156
That's actually a more interesting video than I thought it would be (I thought you were just linking to another lidar presentation). You can tell Andrej Karpathy actually knows what he is talking about and isn't reading off a rehearsed speech.

It's long been a gripe of mine how we've outfitted these vehicles with more sensors than a human actually has, yet they still can't surpass human driving capabilities. Going back to basics makes sense: we're not taking full advantage of the sensors we already have, so adding more isn't going to change that.

(Of course, we do know that people do not drive "with vision only"... but that screams of marketing telling him to say it.) It's actually quite interesting if you pay attention: the force feedback of the wheel, the g-force while turning. If you're on snow or gravel, you as the driver can feel how the vehicle changes.
Post edited December 20, 2020 by Merranvo
Oh... car autopilots... Those things are SCREAMING "accidents will happen". Because it doesn't matter how many radars, lidars, sonars, cameras, or any other possible sensors you have - the software won't be able to react to a lot of unpredictable stuff. Like when a crash is unavoidable, but bumping into one car means a few pizzas' worth of damage while bumping into another means a serious accident that could kill a passenger or someone else, maybe even creating a chain of crashes. A human driver with some experience can see that and make a decision in less than 1/10 of a second.
Or when a kid ends up almost right in front of your wheels and the only way to avoid killing him is to crash into another car with severe damage, but in that case everyone stays alive. Even making a computer tell a water puddle from some super slippery oil isn't that easy. How do you program a computer to drive dangerously and speed up instead of braking, when a human easily sees that it's the best option to avoid a collision or the car spinning out of control on some ice? No computer can get even close to human intuition based on years of experience when it comes to dealing with unpredictable, chaotic stuff.

And even for sensors - cities have a LOT of interference. Even a GPS signal can bounce off tall buildings sometimes and create an echo.

And then, if you miraculously manage to make some super complex AI that properly reacts to all of that without pushing your car's price into jet airliner range - oops... a simple error in the code. Calculating a trajectory to Mars for an automatic spacecraft is way easier than making a car drive around a busy city; the code for the whole mission is way simpler. And yet... how many of those things failed completely or partially because of a simple error in software or data, like a trajectory miscalculation or a race condition?
Post edited December 20, 2020 by Thunderbringer
avatar
Thunderbringer: Oh... car autopilots... Those things are SCREAMING "accidents will happen". Because it doesn't matter how many radars, lidars, sonars, cameras, or any other possible sensors you have - the software won't be able to react to a lot of unpredictable stuff. Like when a crash is unavoidable, but bumping into one car means a few pizzas' worth of damage while bumping into another means a serious accident that could kill a passenger or someone else, maybe even creating a chain of crashes. A human driver with some experience can see that and make a decision in less than 1/10 of a second.
Or when a kid ends up almost right in front of your wheels and the only way to avoid killing him is to crash into another car with severe damage, but in that case everyone stays alive. Even making a computer tell a water puddle from some super slippery oil isn't that easy. How do you program a computer to drive dangerously and speed up instead of braking, when a human easily sees that it's the best option to avoid a collision or the car spinning out of control on some ice? No computer can get even close to human intuition based on years of experience when it comes to dealing with unpredictable, chaotic stuff.

And even for sensors - cities have a LOT of interference. Even a GPS signal can bounce off tall buildings sometimes and create an echo.

And then, if you miraculously manage to make some super complex AI that properly reacts to all of that without pushing your car's price into jet airliner range - oops... a simple error in the code. Calculating a trajectory to Mars for an automatic spacecraft is way easier than making a car drive around a busy city; the code for the whole mission is way simpler. And yet... how many of those things failed completely or partially because of a simple error in software or data, like a trajectory miscalculation or a race condition?
Cars cannot even autopilot in 2077. If there is a vehicle parked right in front of them, they stop and wait until that vehicle is moved off the road. What hope do we have in 2020?