Discussion in 'General Discussion' started by mantasisg, Jun 7, 2021.
Just watch it.
Wow, just wow.
Achieving such graphics (at a proper resolution..) without post-process effects would be far more interesting. The above is just some kind of layer.
It's not so much a layer as it is a total reprocessing of the visual data to transfer photo-realistic characteristics onto the game's rendered frames.
This is similar to AI style transfer (where you see Van Gogh painting style transferred onto photos of cats). The major issue for using this as a way of making photo-realistic game engines is that a) it's very slow and requires a lot of computing power, and b) it will be full of visual artifacts because the AI does not (yet) know how to draw each frame consistently. Bump everything to 90 FPS / 4K and you will see so many artifacts your head will explode.
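To make the style-transfer comparison concrete, here is a minimal numpy sketch of the style representation used in classic neural style transfer: the Gram matrix of feature-map channel correlations. The feature maps below are random stand-ins, not activations from a real network, and the loss is only the style term (real pipelines add a content loss and optimize over many layers). Because this loss is computed per frame with no temporal term, minimizing it frame-by-frame is exactly what produces the flickering artifacts mentioned above.

```python
import numpy as np

def gram_matrix(features):
    """Style representation: correlations between feature channels.
    features: (channels, height, width) activation map from some conv layer.
    Returns a (channels, channels) Gram matrix, normalized by map size."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)      # each row = one flattened channel
    return flat @ flat.T / (h * w)         # channel-to-channel correlations

def style_loss(gram_generated, gram_style):
    """Mean squared difference between Gram matrices. Classic style
    transfer minimizes this (plus a content loss) independently per
    frame, which is why consecutive frames can end up inconsistent."""
    return float(np.mean((gram_generated - gram_style) ** 2))

# Random stand-ins for real conv activations (hypothetical data).
rng = np.random.default_rng(0)
photo_feats = rng.normal(size=(8, 16, 16))  # "style" source (photos)
game_feats = rng.normal(size=(8, 16, 16))   # "content" source (game frame)

loss = style_loss(gram_matrix(game_feats), gram_matrix(photo_feats))
```

A nonzero loss just means the two random feature sets have different channel statistics; an optimizer would push the generated frame's Gram matrix toward the photo's.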
The future of photorealistic game engines is to improve the scanning of textures and to improve lighting. RTX Ray Tracing is in its infancy. Using GTAV with no post-processing and using a heavy duty style transfer engine to turn those frames into a photo-realistic render is a sexy way to get the professor excited, but it's not the future of video game engines, IMO.
Disclaimer: I work in comp vision, AI and video automation tech and we do R&D with video GANs; that does not mean I'm an expert, but I do understand the hype in this field.
p.s. it is worth adding a note about the role of laser scanning in the production of visuals. If you want the visual fidelity of a kerb or bump in a road to match the FFB fidelity and fulfil its physics role, then the game engine needs the data from the laser scan to match up with the visual data from the high resolution photography (think Quixel Megascans on steroids), and it also needs to match up with the dynamic environment generation. Standing water is an example: it needs to build up over time in areas of the track where it does in real life, it needs to be able to be cleared away by cars driving over/through it - differently based on the tyre type - it needs to reflect its surroundings dynamically, and it needs to impact the physics of tyres through surface tension and its various cohesive properties, as well as be influenced by changes in the weather. In theory, the water could evaporate under a hot sun, or freeze in a winter night race.
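The standing-water requirements above can be sketched as a toy per-cell model. This is purely illustrative: no real engine works this simply, and every constant and name here is made up, but it shows the shape of the problem (accumulation from rain, evaporation, and displacement that depends on tyre type).

```python
import numpy as np

# Toy model of dynamic standing water on a track surface, per grid cell.
# All constants are invented for illustration.
RAIN_RATE = 0.5        # mm of water added per step while raining
EVAP_RATE = 0.1        # mm removed per step by sun/heat when dry
CLEAR_PER_PASS = {     # mm displaced by one car passing, by tyre type
    "slick": 0.2,
    "intermediate": 0.6,
    "wet": 1.0,
}

def step(depth, raining, passes):
    """Advance the water-depth grid one tick.
    depth:   2D array of water depth (mm) per track cell
    raining: bool, whether rain is falling this tick
    passes:  list of (row, col, tyre) car passes this tick"""
    depth = depth + (RAIN_RATE if raining else -EVAP_RATE)
    for r, c, tyre in passes:
        depth[r, c] -= CLEAR_PER_PASS[tyre]   # tyre-dependent clearing
    return np.clip(depth, 0.0, None)          # depth can't go negative

track = np.zeros((4, 4))
track = step(track, raining=True, passes=[(1, 1, "wet")])   # rain + one wet-tyre pass
track = step(track, raining=False, passes=[])               # sun comes out
```

The physics coupling (surface tension, reflections, FFB) would then read from the same `depth` grid the laser-scan-aligned renderer uses, which is the matching-up problem described above.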
I am pretty certain that UE5 and other game engines that are approaching photorealism, along with Ray Tracing, are going to allow developers and artists to bring a whole new level of visual fidelity to games. But, it's going to take another generation of computing power before we'll see 90+ FPS 4K, and racing games are one of the genres where higher FPS (and refresh rates) means faster lap times, all things being equal.
None of this means that AI cannot be used to improve visuals. For example, onboard cameras from various 24h races could be used to train neural networks to improve lighting. Ultimately, I think you will find that lighting (shaders, etc) has a greater impact on our perceptions of "photorealism" than the source textures applied to the car and environment models. There are companies dedicated to providing photorealistic environments for training self-driving cars. If a self-driving car "sees" a reflection of itself in a huge shop window, does it know that it's looking at its own reflection or if it's looking at another car coming towards it? Achieving that level of fidelity is a huge (and important) task.
Self driving car after seeing its own reflection
On topic, I watched the video and I liked the improvements for sure. But can something like this be done in already resource-intensive simulators such as rF2?
@prceurope Yes, though I imagine it would probably be far from usable. Although, as said in the video, only parts of this algorithm could be used, so for example just for grass shading maybe.
Yes, which is why I wrote:
One of the most difficult things for people who have never raced a car (especially at night) to understand is that what we see through cameras is not what a driver sees. And what a driver sees is also influenced by the windshield/screen, dirt and grime, glasses, and whether the driver is using a visor or not. So, the training data that's used to produce an ML shader is going to be biased unless the camera footage is first normalized to represent what the human eye actually sees (btw, it's the same for GoPro microphones, which do not capture cockpit sound as accurately as dedicated sound-modelling equipment does). Otherwise, you get a great shader for an onboard-camera look that doesn't correspond to reality.
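As a small illustration of what "normalizing the camera footage" could mean, here is a gray-world white-balance sketch in numpy. This is just one common heuristic (assume the average scene color is neutral gray), not what any actual training pipeline uses; the frame data is synthetic.

```python
import numpy as np

def gray_world_balance(img):
    """Gray-world white balance: scale each channel so its mean matches
    the overall mean, under the assumption that the average scene color
    is neutral gray. A crude first step toward normalizing biased
    camera footage before using it as training data.
    img: float array (H, W, 3), values in [0, 1]."""
    channel_means = img.reshape(-1, 3).mean(axis=0)  # per-channel mean
    gray = channel_means.mean()                      # target neutral level
    balanced = img * (gray / channel_means)          # rescale each channel
    return np.clip(balanced, 0.0, 1.0)

# Hypothetical onboard frame with a strong blue cast.
rng = np.random.default_rng(1)
frame = rng.uniform(0.2, 0.6, size=(8, 8, 3))
frame[..., 2] *= 1.5          # exaggerate the blue channel
frame = np.clip(frame, 0, 1)

corrected = gray_world_balance(frame)
```

After correction the three channel means coincide; a real pipeline would also need to handle exposure, gamma, and sensor response, which is where the "camera vs. human eye" gap really lives.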
Overall, super exciting tech in general, but one that is going to be more helpful to filmmakers than to game engineers in the near term. Again, it could be super useful to automate the application of shaders/lighting and to recolor textures (think AI colorizers).
Not yet. Years from now you will be able to select a style and you will be able to turn every car in rF2 into something from Disney's Cars if you really want to.
Not to be negative, but I don't think that looked like reality; it looked like a low-quality, low-resolution video camera with incorrect white balance. Our brain perceives it as looking "real", but that is because it looks like it was shot by a camera, and we associate stuff that was shot by a camera with looking real. But it doesn't look like "real life".
Not sure if that makes sense or not because I'm confusing myself now, but I will use this photography analogy. I spend every day trying to make my photos look as "real" as possible (I'm a photographer). When I say "real", I mean as close as possible to the way the human eye sees a scene. The thing is, the harder you try to do that, the faker it looks. People have an in-built preconception of what a photo looks like, so if you put huge effort into taking a photo and making it look much more like real life, people automatically perceive that as looking fake, because it can look too good. The unedited image, however, looks totally "real", but in reality the unedited image looks NOTHING like the way my eyes saw the scene in real life, due to the limitations of a camera.
So my point is, the software made the image look "real" as in "video camera real", but not real as in actual high-resolution "real life" real. I assume that would be orders of magnitude more difficult. It seems cool, but I would have to see it in HD and with correct white balance to make up my mind.
Need quantum computers, these computers are lame as hell.
Personally I can settle for a sim/game that looks as good as PC2 or ACC or Driveclub.
Other areas of the sim matter much more to me anyway. rF2 doesn't look bad at all; some of the screenshots look amazing tbh.
rF2 on the updated PBR tracks looks pretty good. If you like to play around with Reshade, it can look even more pleasing to your own eye.
ACC looks really nice. AC with Sol looks really nice. AMS2 looks really nice. rF2 looks pretty good. Just make it less than 10 mins to load the Nordschleife with 10 cars and I'm a happy man.
I had included a few screenshots before in which I had adjusted brightness and contrast. I removed those images because I found out that on my other (much better) monitor the brightness and contrast looked a lot better in the original. But I would still have boosted some colors, like greens and reds.
The low-quality thing may be related to the video being uploaded to YouTube and then reuploaded multiple times. The white balance or exposure may not have been perfect in what the scientists collected themselves, either.
But overall, to me it looked a ton more like reality. It indeed looked more like it was shot by a camera with imperfect values, as you say, than seen with one's own eyes. But we still perceive things shot by cameras as real, even with messed-up parameters, because the whole view will always be "wrong" consistently. All of the colours of all the things in the scene will match one another in realistic ways, not to mention things like realistic reflections, shadowing, lighting and various small details.
And finally, colors are not even necessary at all for images to be perceived as realistic. If I remember correctly, it is because our brain prefers values over color: basically, you can switch colors however you like, and if you keep the values right, stuff will still look correct. That's also why black and white images look true:
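The "values over color" point can be sketched with the standard Rec. 709 luma weights, which say how much each RGB channel contributes to perceived brightness. The hue-shifted example below is illustrative hand-rolled arithmetic, not proper color science: it rotates the channels but rebalances each pixel so its brightness value stays the same, which is exactly the "switch colors, keep values" idea.

```python
import numpy as np

# Rec. 709 luma coefficients: contribution of each channel to perceived
# brightness ("value"). Green dominates; blue barely counts.
LUMA = np.array([0.2126, 0.7152, 0.0722])

def to_values(img):
    """Collapse an RGB image (H, W, 3) to its brightness values (H, W).
    Keeping these values constant keeps the black-and-white version,
    and much of the perceived realism, intact."""
    return img @ LUMA

rng = np.random.default_rng(2)
img = rng.uniform(size=(4, 4, 3))   # hypothetical image data

# Rotate the channels (R->G->B->R), then rescale each pixel so its
# luma value matches the original: different colors, same values.
shifted = img[..., [2, 0, 1]]
scale = to_values(img) / to_values(shifted)
shifted = shifted * scale[..., None]
```

Converting both images with `to_values` gives identical black-and-white renderings even though the colors differ.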
It did look much more realistic I totally agree. If you took the same footage from the game, slightly reduced the sharpness, added some slight video artifacting, and adjusted the colors/reduced saturation, it'd probably look very similar to the AI image. But I guess if the AI can do that dynamically in real time, then that is pretty cool.
I never really used any reshade in any game tbh. as long as it looks good enough i'm happy.
I agree on loading times though. The UI itself is quite slow and sluggish.
But it cannot - and yet it's still pretty cool.