Wednesday, May 18, 2016

Terrain Synthesis

This is just a teaser. We are still working on this, but we already have some results that are good enough to show. This is not about where terrain types appear (that was covered here and here), but about how a particular terrain type is generated.

We want to make procedural generation as accessible as possible. Just like a movie director who shows a portfolio of photos and concept art to the CGI team and simply says "make it look like this", we want the creator to be able to remain entirely clueless about how everything works.

This is how it feels to create a new terrain type. You provide a few pictures of it and we take it from there:


This system builds a probabilistic model based on the samples you provide. That is enough to get an idea of the base elevation. On top of that, several natural filters are applied. It turns out we do know a bit more about this landscape: we know how dry it is and what the average temperature is, among other things. The only fact we are missing, and have to ask about, is how old you think the terrain is. The time scales range from hundreds of millions of years to billions of years. (If you believe your terrain is 6000 years old, we cannot accommodate you at the moment.)
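
To give a rough flavor of the idea, here is a minimal sketch of what an exemplar-driven pipeline like this could look like. This is not our actual implementation: the statistics it fits (a Fourier amplitude spectrum plus an elevation histogram), the toy thermal-erosion filter, and every function and file name below are illustrative stand-ins only.

import numpy as np

def fit_model(sample):
    # Two statistics of a 2D heightfield: the Fourier amplitude spectrum
    # (feature scale and roughness) and the sorted elevations (height distribution).
    spectrum = np.abs(np.fft.fft2(sample))
    elevations = np.sort(sample.ravel())
    return spectrum, elevations

def synthesize(model, seed=0):
    # New terrain whose spectrum and elevation histogram roughly match the sample's.
    spectrum, elevations = model
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(spectrum.shape))
    terrain = np.real(np.fft.ifft2(spectrum * phase))  # taking the real part is a shortcut
    order = np.argsort(terrain.ravel())
    matched = np.empty(terrain.size)
    matched[order] = elevations  # rank-based histogram matching
    return matched.reshape(spectrum.shape)

def thermal_erosion(terrain, age_years, talus=0.01):
    # Toy "natural filter": move material downhill wherever the slope exceeds the
    # talus threshold. Older terrain gets more iterations, hence smoother relief.
    steps = int(np.clip(age_years / 1e8, 1, 50))  # hundreds of Myr up to a few Gyr
    t = terrain.copy()
    for _ in range(steps):
        for shift in ((0, 1), (1, 0), (0, -1), (-1, 0)):
            diff = t - np.roll(t, shift, axis=(0, 1))
            move = np.where(diff > talus, 0.25 * (diff - talus), 0.0)
            t -= move
            t += np.roll(move, (-shift[0], -shift[1]), axis=(0, 1))
    return t

# Example: one sample photo in, one aged terrain out ("mesa_sample.png" is made up).
# from PIL import Image
# sample = np.asarray(Image.open("mesa_sample.png").convert("L"), np.float32) / 255.0
# terrain = thermal_erosion(synthesize(fit_model(sample)), age_years=8e8)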

You can provide one or more sample pictures. The more pictures you provide, the better, but just one picture is often enough. Ready to see some results? The following terrains were synthesized out of a single photo in every case (do not mind the faux coloring; it is only there to identify the different terrain layers for now):




Providing multiple samples creates some sort of mix, similar to how you find features from both mother and father in their kids:
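
Sticking with the sketch from above, one crude way to get this "parents and kids" behavior is to blend the fitted statistics of two exemplars before synthesizing. The mix_models helper and the weight w are made up for illustration, and this assumes both samples were resampled to the same resolution:

import numpy as np

def mix_models(model_a, model_b, w=0.5):
    # Blend two fitted exemplar models (spectrum, sorted elevations) element-wise.
    spec_a, elev_a = model_a
    spec_b, elev_b = model_b
    spectrum = (1.0 - w) * spec_a + w * spec_b    # blend feature scale / roughness
    elevations = (1.0 - w) * elev_a + w * elev_b  # blend the height distribution
    return spectrum, elevations

# e.g. child = synthesize(mix_models(fit_model(mom), fit_model(dad), w=0.4))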


This works with any kind of image. It could be some fancy concept art as seen below:


The natural filters in this case added some realism to the concept, and eroded some of the original hill shape. This could be avoided if you are after a more stylized look. But if you are short on time, and want to prototype different realistic terrains, the ability to quickly sketch something and feed it to the generator is a big help.

Of course, you can still look under the hood and tinker with generation frequencies, filter parameters, etc. You can still import terrain from Digital Elevation Models or from third-party software like World Machine. The key here is that you no longer have to.
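
For anyone curious about what "under the hood" could look like, here is a hypothetical example: loading a heightmap exported from World Machine (or any DEM rasterized to a 16-bit grayscale PNG) and overriding a few knobs. The parameter names and the file name are invented for illustration, not our actual interface, and the erosion call reuses the toy filter from the sketch earlier in the post.

import numpy as np
from PIL import Image

# A heightmap exported as a 16-bit grayscale PNG, normalized to the 0..1 range.
heightmap = np.asarray(Image.open("ridges_wm.png"), dtype=np.float32) / 65535.0

overrides = {
    "base_frequency_m":  600.0,  # dominant feature wavelength in meters
    "octaves":           6,      # detail layers stacked on the base model
    "erosion_age_years": 5e8,    # the same "age" question the generator asks
    "aridity":           0.7,    # 0 = wet, 1 = bone dry
    "mean_temp_c":       12.0,
}

# aged = thermal_erosion(heightmap, overrides["erosion_age_years"])  # toy filter from above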

I'd be glad to go into the details of how this works if you guys are interested. Just let me know. I still owe you Part 2 of the continent generation series. That should come shortly.

29 comments:

  1. Great, when is it coming out?

    1. We are aiming for summer 2016.

    2. Living in the southern hemisphere is confusing =P.

    3. Yes, I keep forgetting about you guys. Before the end of September 2016.

    4. Don't worry, I never thought about southern hemisphereans before I moved here either =P. I'm pretty sure everyone in the southern hemisphere is used to everything on the internet being targeted at people in the northern hemisphere =P. I was just joking a bit.

    5. I sincerely hope it comes out before September! I'd love to start making the game I have planned for August, but each time you post about awesome new stuff in the coming update and I don't see it released, I'm like: "Meh, I'll wait for the update so I don't have to start over when it's out."
      Oh well... guess I'll wait again.

  2. Looks awesome! I've always thought about how interesting it would be to dive into pictures and the worlds they describe. How much harder would this be with buildings/artificial things?

    1. I think it would be very hard, but possible. We have worked with architectural grammars for a while now. I wonder if there is a way to "see" structure in pictures and match it to an existing vocabulary of grammars. Once you have recognized what the thing is, you can use the same grammars to synthesize more of it.

      I guess it can be worked out in reverse. You can have a large set of grammars created by people who understand the subject. Using the grammars, you can build a much larger visual training set for a classification engine. When you feed an image to the classifier, it will tell you which grammar comes closest. Once you have identified the grammar, you can synthesize as much as you want.

    2. Kinda reminds me of facial recognition; you'd need a program that recognises parts of a building. Maybe a job for machine learning? =P.

    3. Let's say that we run a genetic algorithm on building grammars and try to fit them to building photographs from a database. Would it learn how to build any existing building on its own, or is the solution space too large?

    4. Definitely machine learning, and the training set could be created by grammars. The solution space is large but manageable. Also, the storage requirements are small, as each grammar run can be discarded after it is fed to the learning system.

    5. That's really cool. If you can nail automatically generating grammars from 3D shapes like buildings, you will have an IMMENSELY useful tool. Google Street View could have seamless integration/transitions with Google Maps. 3D content creation could start shifting back toward traditional art forms that are captured through pictures/videos.

    6. That would be awesome, but it is not exactly what I said.

      Consider an automatic translator. It cannot infer the grammar rules of English just from reading English text; somebody had to define the English grammar for it beforehand. The English grammar is a fairly small set of information, so it is not a big deal.

      I suggested something along those lines, where shape grammars would be pre-fed to the system. If instead the system also has to figure out the grammars from what it sees, it becomes a much more difficult problem.

      Look at how they trained the system for Microsoft Kinect. The system had prior knowledge of the grammar, that is, how the body is divided into different parts and how each part connects to the others. This allowed the system to be trained across a large number of sample poses. Now, if the system also had to figure out the shape grammar of a human body, it would be a much taller order.

    7. I think something similar has been done here: http://www-sop.inria.fr/reves/Basilic/2016/NGGBB16/

    8. Thanks for the link.

      This is exactly the system I described above, just not applied to photographs. It has a preset library of grammars and will find the grammar and its parameters from the input sketch.

    9. As much as it'd be very cool to get a 'filter/base' structure type of thing from feeding in pictures of terrain (and existing buildings), I can see that it might produce accurate but not functional or eye-pleasing structures. I think we need a bit of manual effort for now, and that's not a bad thing.

    10. Hmm, selecting the grammar automatically would be hard, but what might be more practical is selecting the grammar manually and having the system fill out the palette and parameters based on the picture.

  3. This is awesome, of course you should go into the details!

  4. Can you feed 360° pictures into the system? It would be neat if you could specify things like a mountain range in front of you, plains beside you, and a river running under you, all with a single picture.

    1. It is a good suggestion; we did not think of 360° views. This system will not see rivers, BTW.

  5. Can I just say that I absolutely love your concept art =P.

  6. I am loving all this news!!

    Instead of having 3-5 instances of a rock placed everywhere on the terrain, could this technically make all rock instances unique when generating the world?

    1. Well, not exactly this system. It deals with larger terrain features, in the 10 m and up range.

      Making every rock unique is appealing, but then you are faced with large memory/download constraints. There hasn't been a game released in 2016 that does not rely heavily on instancing.

      Our long-term plan is for geometry and texturing to be unique and streamed from the cloud, but the industry may not be there yet.

  7. I'm a little appalled that you didn't use the Windows XP background for a teaser photo ;) Very cool demo nonetheless.

    1. Good idea, we will make sure to try it. Maybe for the video post :)

  8. "If you believe your terrain is 6000 years old we cannot accommodate you at the moment."
    Actually, I'm a creationist interested in this stuff and I can see how to do that. Terrain emulation of Noah's flood and post-flood processes should not be too hard.
    Massive, largely unmixed erosion and deposition (smoothing) with some big volcanism: subaerial, then subaqueous.
    Followed by Coriolis-driven eddy erosion (at flood peak depth), then large-scale sheet erosion, and then valley erosion as the water recedes into sinking ocean basins/rifts.
    At or between each stage we would have some accelerated continental drift, large-scale folding of wet sediments, block faulting and metamorphism.
    Then topped off with some post-flood glaciation, loess deposition (wind-blown glacial dust) and more volcanism.

    It would be complex, since the flood models are not as simplistic as some may think, but it could be done. It would in most cases be redundant, since it would produce largely the same result as some of your current work. What we think looks right is what the flood gave us.

    1. You describe a model that is much more precise than anything we currently do. I think the more we can simulate, the better, so these are great suggestions.

      If you are running physically based simulations, I do think you will need to hack some of the math if you want a 6000-year simulation span to produce the range of features we see on Earth. Being a virtual world, I guess it is not a big deal anyway, but if you are following the laws of physics, the fewer exceptions you make, the more maintainable your codebase will be.

  9. This looks awesome! I just discovered your blog today and I've been looking through all your posts... you really do some amazing things!

    I wanted to implement something like your terrain synthesis and I was wondering how you do that. Do you have any tips, links or anything else to point me in the right direction?

    Thanks a lot if you take the time to answer this, even if it's an old post! I'll keep looking at all your blog posts from now on :)
