rendering peripheral vision vs. straight on
I noticed that asteroids/objects tend to render in the periphery of the view, but then fade/unrender toward the middle... that may be a bug? but likely it's a distance/rendering feature?
I'd like to suggest a tunable rendering option for the client that allows us to set the depth of rendering vision. Large asteroids should render until they are specks, not disappear.
Maybe a slider that lets us set the depth of field (depth of render? distance of render?) for objects. It'd be useful for people who have more GPU horsepower and want a more immersive experience.
Also, gamma should be adjustable for those whose monitors/panels display darker.
https://en.wikipedia.org/wiki/Gamma_correction
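Under the hood I'd imagine it's just a power curve applied at the end of the render pipeline, so a slider could drive something like this (a rough sketch on my part, not actual client code):

    #include <cmath>

    // Hypothetical gamma control: apply a user-set gamma to a linear
    // channel value in [0, 1]. gamma = 1.0 leaves the image unchanged;
    // gamma > 1.0 brightens the midtones for darker panels.
    float applyGamma(float channel, float gamma) {
        return std::pow(channel, 1.0f / gamma);
    }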
+1 for the distance scalar; sometimes it's pretty noticeable, especially in recordings
"that may be a bug? but likely it's a distance/rendering feature?"
Yeah, it's an optimization technique, not a bug. Instead of computing the true distance to each point and determining whether it's within the rendering limit, you instead examine the individual coordinates after everything's been transformed into the viewport's coordinate space. So basically, imagine that you're inside an invisible box. Everything that lies inside the box renders, and everything outside does not. The box is fixed to your head, so when you turn, it turns with you. The distance to the corners is farther than the distance to the side directly in front of you, so as you turn, those corners sweep through regions that are outside the box when you look directly at them.
Using a sphere instead of a box would give nicer results, but it would also be less efficient.
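For the curious, the two tests look roughly like this (a sketch with made-up names, assuming positions have already been transformed into view space with the camera at the origin):

    #include <cmath>

    struct Vec3 { float x, y, z; };  // position in view space

    // Box-style cull: compare each coordinate against a fixed limit.
    // Cheap (a few comparisons), but the cutoff reaches farther toward
    // the corners than straight ahead, which is why things pop in and
    // out at the edges of the view as you turn.
    bool insideBox(const Vec3& p, float limit) {
        return std::fabs(p.x) <= limit &&
               std::fabs(p.y) <= limit &&
               std::fabs(p.z) <= limit;
    }

    // Sphere-style cull: a true distance check, uniform in every
    // direction, but it costs a dot product per object (comparing
    // squared distances at least avoids the sqrt).
    bool insideSphere(const Vec3& p, float limit) {
        return p.x * p.x + p.y * p.y + p.z * p.z <= limit * limit;
    }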
Having more options would be nice, but I'm assuming the devs already intend to do something to better support rendering massive objects from a distance, given their interest in eventually having much larger sectors.
We do plan to change a lot of things that will impact all of this. What the OP is describing are Z-buffer precision issues, impacted by the view frustum that Pizzasgood describes. We're most likely moving to an impostor rendering system for distant objects, which will inherently condense the Z-buffer precision to a much closer area, and also make more complex scenes faster to render on all devices.
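To put rough numbers on the precision problem (illustrative near/far values below, not what the game actually uses):

    #include <cstdio>

    // A standard OpenGL-style perspective projection maps view-space
    // distance d (between the near plane n and far plane f) to a
    // normalized depth in [-1, 1]:
    //   z_ndc = (f + n)/(f - n) - 2*f*n / ((f - n) * d)
    double ndcDepth(double d, double n, double f) {
        return (f + n) / (f - n) - 2.0 * f * n / ((f - n) * d);
    }

    int main() {
        const double n = 1.0, f = 100000.0;  // illustrative planes
        const double distances[] = { 1.0, 10.0, 100.0, 1000.0, 100000.0 };
        for (double d : distances)
            std::printf("d = %8.0f  ->  z_ndc = %+.6f\n", d, ndcDepth(d, n, f));
        // Nearly all of the depth range is consumed in the first hundred
        // units; everything farther fights over the remaining sliver of
        // precision. Impostors sidestep this by keeping real geometry
        // within a much closer depth range.
        return 0;
    }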
Neither of these is a "new" idea; they're quite old. But game tech is a moving target of solving problems the way that works best for the goals of the specific developer, and our needs are evolving.
Plus, while these specific ideas are quite old, the implementation methodologies keep evolving, and modern APIs (like Vulkan) give us access to on-GPU rendering that opens up a lot of new possibilities. A constant in graphics is that the old becomes new again, re-invented with new opportunities and available technology.
We'll be considering changing how we handle gamma as well. I want to basically redo all of the game's lighting, color correction, and final render processing, which will also include future support for HDR devices. But that's a whole other discussion.
Inc, so... as for a tunable Z/depth... not right now, but maybe in the future? Or do you think it'll always be a fixed value for everyone?
What you're asking for will not be necessary, or relevant, given the changes I wrote about above. The whole use-case will go away.
Out of curiosity, will the future changes also impact the way the star fields are rendered? As it is now, if you zoom in your field of view to see a distant ship or station, the stars get proportionally bigger (albeit fuzzy). This wouldn't really happen in reality, since the distance to a star is many orders of magnitude greater than to a station 3,000 meters away. A subtle, but probably noticeable, bit of realism if corrected for.
It depends. For stars that are point-sprites, they could be rescaled to remain points at any zoom level.
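Roughly speaking, something like this (a hypothetical sketch, assuming the sprite's on-screen size currently scales with the zoom factor):

    #include <cmath>

    // Keep a star point-sprite at a constant pixel size while the
    // player zooms (narrows the FOV). FOVs are in radians.
    // Hypothetical sketch, not actual engine code.
    float starPointSizePx(float projectedSizePx,
                          float defaultFov, float currentFov) {
        // How much a narrower FOV magnifies everything on screen.
        float zoom = std::tan(defaultFov * 0.5f)
                   / std::tan(currentFov * 0.5f);
        // The projection already made the sprite 'zoom' times larger,
        // so divide that back out; the star stays a fixed-size point.
        return projectedSizePx / zoom;
    }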
But, for stars on the background texture, which we use for areas of high density, that could be a bit more involved. The two types tend to be mixed in together, except on low-end devices, where the point-sprites are "baked" onto the background texture.
We're aware it's unrealistic, but it's a tradeoff like so many things. At some point that might be improved, but I care more about making things look better when "not zoomed in", at the moment.