Forums » Suggestions
Placeholders for objects outside the Rendering range.
I propose that static sprites be used to replace objects that exceed the rendering distance. This would allow players to see distant objects without worrying about the performance hit.
For example, currently when you arrive in a sector with groups of asteroids spread far apart, you may not see all of the asteroids in the sector, and if you're looking for some ore, you may miss half a sector's worth of decent asteroids.
If sprites were implemented, you would enter the same sector and see in 3D the asteroids that are within the rendering distance, but asteroids outside of it would be shown as an asteroid image, roughly scaled to the same size/shape as the asteroid it represents (which would also have to take into account your actual distance from the asteroid). As you move closer to the asteroid sprite and the asteroid enters the rendering distance from your ship, the sprite will fade away, revealing the actual 3D asteroid.
As you move away from a given asteroid and it eventually passes outside the rendering distance, a sprite representing that asteroid will fade back in. Lighting could potentially be calculated on the fly, if desired, so that the sprite blends in more with the sector and the actual asteroid. Since same-type asteroids look very much alike, one or two sprites could be made per type (for variation). These would be generic sprites and would be scaled/lit depending on the asteroid and the sector environment.
This concept can be applied to any object in the game and would provide a more functional and better looking solution for the optimization issues surrounding the previous larger rendering distance.
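To make the fade-in/fade-out idea concrete, here is a minimal sketch of the crossfade logic. The distance values and names are made up for illustration; nothing here reflects VO's actual engine.

```python
# Hypothetical sketch of the sprite fade described above.
# RENDER_DIST and FADE_BAND are made-up tuning values, not game constants.

RENDER_DIST = 5000.0   # distance at which full 3D rendering stops
FADE_BAND = 500.0      # width of the crossfade region beyond that

def sprite_alpha(distance: float) -> float:
    """Return sprite opacity in [0, 1] for an object at the given distance.

    Inside the rendering distance the sprite is invisible (the real 3D
    model is drawn); beyond it the sprite fades in over FADE_BAND units.
    """
    if distance <= RENDER_DIST:
        return 0.0
    if distance >= RENDER_DIST + FADE_BAND:
        return 1.0
    return (distance - RENDER_DIST) / FADE_BAND
```

Run every frame per object, this gives a smooth handoff between the 3D model and its sprite instead of a visible pop.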
I propose that, instead of Sprites, we use Pixies.
This sort of technology is called Imposters: basically doing a render-to-texture job on distant objects, even beyond the far z clipping plane. However, this is not trivial to implement. We'll consider it, though. It would be useful for very large objects that are very far away, something that will become more prevalent as we move forward.
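One reason the render-to-texture approach is non-trivial: the baked texture is only valid from roughly the viewpoint it was rendered at, so the system has to decide when to re-bake. A common heuristic is to re-render the imposter once the viewing angle has drifted past a tolerance. A rough sketch (the threshold and function names are hypothetical):

```python
import math

# Made-up tolerance: re-bake the imposter texture once the view
# direction to the object has rotated more than ~5 degrees.
ANGLE_TOLERANCE = math.radians(5.0)

def needs_rebake(last_view_dir, current_view_dir) -> bool:
    """Compare two normalized view-direction vectors; return True if
    the angle between them exceeds the tolerance."""
    dot = sum(a * b for a, b in zip(last_view_dir, current_view_dir))
    dot = max(-1.0, min(1.0, dot))  # clamp to guard acos against rounding
    return math.acos(dot) > ANGLE_TOLERANCE
```

Each re-bake costs an off-screen render pass, so tuning this threshold is part of what makes imposters real engine work rather than a quick patch.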
I seem to remember Homeworld 2 having this...
I don't remember them from Homeworld 2, but Guild Wars uses them extensively.
I was under the impression that they were relatively simple to implement. What makes them non-trivial?
I was thinking that instead of creating the sprite directly from the rendered object in realtime, you could just take a couple of images of the object in question and save them as textures to use and just switch between the image and the rendered object. If you're only going to be using this for larger objects, you wouldn't get the game size increase that you might get if you were doing this to every object in the game.
it's non-trivial because it would take a long ******* time to implement.
rant
the one thing that no one who posts in this forum realizes is how much code it takes to do the simplest of things. To set this up would be the equivalent of someone building an entire addition to their house. It's a significant modification to the engine that would take a lot of work and testing across all 3 platforms. It's not minor at all; it'd take a lot of math and code, and a ton of trial and error.
/rant
it's an excellent suggestion, but it would take at least a month of hard work, if not more (I'm guessing)
Given that no one here (to my knowledge) has seen the actual backend code of VO (besides the devs themselves of course) then yes, you would be correct in stating that we probably don't know what would be involved for them to change or add something into their application.
With that said, implementing a simple version of this (one that only draws from a pre-loaded image, not one that bakes an already rendered 3D object onto a 2D surface) wouldn't take a whole lot of coding. "Check the distance of the player. If the distance is greater than the rendering drop-off, place the 'imposter' at the location of the 3D object and stop rendering the object." Since they already have a means of checking the player's distance, and 3D objects are already being unrendered at the drop-off distance, you would just need to place a sprite object of some sort at the coordinates of the 3D model (or thereabouts) and start rendering it.
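The distance check quoted above boils down to something like this. All names and the drop-off value are hypothetical; VO's actual backend is unknown to anyone here.

```python
# Sketch of the simple swap logic: beyond the drop-off distance, draw a
# pre-made sprite instead of the 3D model. DROP_OFF is a made-up value.

DROP_OFF = 5000.0

def choose_representation(player_pos, obj_pos) -> str:
    """Return 'model' or 'sprite' depending on the player's distance
    from the object (Euclidean distance between 3D coordinates)."""
    dist = sum((p - o) ** 2 for p, o in zip(player_pos, obj_pos)) ** 0.5
    return "model" if dist <= DROP_OFF else "sprite"
```

Of course, this only covers the selection step; the hard parts the devs mention (billboarding the sprite toward the camera, scaling, lighting, and doing it efficiently for thousands of objects) sit underneath it.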
Testing on all 3 platforms would of course take some time. Setting the placeholder images aside, though, I doubt it would take more than the average addition/change to implement and test. A lot of the tech/code needed to pull this off is already implemented, I would imagine. This is, of course, assuming that changing one line of their codebase won't require changing 30 other lines somewhere else. If they did a more complex system like baking a 3D model onto a 2D surface, it would probably take more time/code to implement, but most of the groundwork would likely be there too.
Now with all of that said, I didn't ask what made them non-trivial because I was being a cocky fool (which is what I assume brought on your rant! :D) I was asking because I'm honestly interested!