GAN-Generated Terrain for Game Assets
Yogendra Sisodia

Yogendra Sisodia, Director, Department of Machine Learning, Conga, Thane (Maharashtra), India.

Manuscript received on 25 September 2022 | Revised Manuscript received on 28 September 2022 | Manuscript Accepted on 15 October 2022 | Manuscript published on 30 October 2022 | PP: 1-3 | Volume-2 Issue-6, October 2022 | Retrieval Number: 100.1/ijainn.F1060102622 | DOI: 10.54105/ijainn.F1060.102622

© The Authors. Published by Lattice Science Publication (LSP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Multimedia applications, such as virtual reality models and video games, increasingly rely on the ability to automatically generate and author realistic virtual terrain. In this paper, the author proposes a pipeline for a realistic two-dimensional terrain authoring framework powered by several generative models applied sequentially. Two-dimensional role-playing games benefit from the ability to create multiple high-resolution terrain variants from a single input image and to interpolate between terrains, while keeping the generated terrains close to the real-world data distribution.
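To make the idea of sequentially applied generative models concrete, the following is a minimal sketch (not the author's implementation) of such a pipeline: a latent GAN generator produces a coarse terrain layout, a pix2pix-style image-to-image generator refines it into a textured tile, and interpolating between latent codes moves smoothly between terrain variants. All class names, layer sizes, and resolutions here are illustrative assumptions.

```python
# Illustrative sketch of a chained generative pipeline for 2D terrain;
# module names, shapes, and resolutions are assumptions, not the paper's models.
import torch
import torch.nn as nn

class CoarseGenerator(nn.Module):
    """Noise vector -> coarse 1-channel terrain layout (e.g., a heightmap sketch)."""
    def __init__(self, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.ReLU(True),  # 1x1 -> 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(True),    # 4x4 -> 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(True),     # 8x8 -> 16x16
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh(),           # 16x16 -> 32x32
        )
    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

class Pix2PixStyleRefiner(nn.Module):
    """Coarse layout -> RGB terrain tile (stand-in for a trained pix2pix generator)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, 1, 1), nn.ReLU(True),
            nn.Conv2d(64, 3, 3, 1, 1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

# Apply the generative models one after the other, as the abstract describes.
g_coarse, g_refine = CoarseGenerator(), Pix2PixStyleRefiner()
z_a, z_b = torch.randn(1, 128), torch.randn(1, 128)

# Interpolate between two latent codes to blend between terrain variants.
for alpha in torch.linspace(0.0, 1.0, steps=5):
    z = (1 - alpha) * z_a + alpha * z_b
    terrain = g_refine(g_coarse(z))  # (1, 3, 32, 32) RGB terrain tile
```

In practice, each stage would be trained separately (the refiner on paired sketch/terrain images, in the spirit of Pix2Pix), and the interpolation step is what keeps transitions between generated terrains smooth rather than abrupt.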

Keywords: Deep Learning, Generative Adversarial Networks, Pix2Pix, Procedural Content Generation, Terrain Generation.
Scope of the Article: Deep Learning