
Random Polyhedral Scenes:

An Image Generator for Active Vision System Experiments

John Tsotsos Laboratory for Active and Attentive Vision

We present a Polyhedral Scene Generator, a system that creates a random scene from a few user parameters, renders the scene from random viewpoints, and produces a dataset containing the renderings and corresponding annotation files. We believe this generator can help in understanding how a program might parse a scene when it has multiple angles to compare. Faced with an ambiguous scene, people typically move their head or change their position to view the scene from different angles and to observe how it changes as they move; the research field that studies this is called active perception. The generator presented here is designed to support research in this field by producing images of scenes with known complexity characteristics and with verifiable properties with respect to the distribution of features across a population. It is therefore well suited to research in active perception, without requiring a live 3D environment and a mobile sensing agent, as well as to comparative performance evaluations.

The purpose of this system is twofold:

  1. Dataset Creation - The user can render many random views of a randomly generated polyhedral scene or of an uploaded 3D scene in Wavefront format (.obj).
  2. On Demand - As in an active vision task, the user can set the next view to gather another angle of the scene. This is available through this web page and the web API.
For further information please read the associated paper [pdf].
For a quick introduction please watch the video.

Submit a Request

On Demand

  1. ID:
  2. // No ID? Please submit a "Dataset Creation" request first.
  3. Random or defined camera    
  4. // For defined camera: consider using the "Camera Setter" on the right.
  5. X: Y: Z:
  6. // Please enter the camera position.
  7. Qw: Qx: Qy: Qz:
  8. // Please enter the camera orientation as a quaternion (a conversion sketch follows after this list).
  9. Lighting Conditions        
  10. // Define your lighting. Check "Rendering Setup" on the right.
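
For a defined camera, the orientation is given as a unit quaternion (Qw, Qx, Qy, Qz). If you know the point the camera should look at rather than the quaternion itself, one common conversion is sketched below in Python with NumPy. This is an illustration only: the helper name look_at_quaternion and the forward/up axis convention are our assumptions, so check the paper and the GitHub Page for the generator's exact camera convention.

    # Sketch: derive a camera quaternion (Qw, Qx, Qy, Qz) from a position and a
    # look-at target. The axis convention below (Z up in the world, camera
    # looking along its -Z axis) is an assumption, not the generator's
    # documented convention.
    import numpy as np

    def look_at_quaternion(position, target, up=(0.0, 0.0, 1.0)):
        position = np.asarray(position, dtype=float)
        target = np.asarray(target, dtype=float)
        up = np.asarray(up, dtype=float)
        forward = target - position
        forward /= np.linalg.norm(forward)
        right = np.cross(forward, up)
        right /= np.linalg.norm(right)
        true_up = np.cross(right, forward)
        # Rotation matrix whose columns are (right, true_up, -forward).
        m = np.stack([right, true_up, -forward], axis=1)
        # Standard matrix-to-quaternion conversion (no 180-degree special case).
        qw = np.sqrt(max(0.0, 1.0 + m[0, 0] + m[1, 1] + m[2, 2])) / 2.0
        qx = (m[2, 1] - m[1, 2]) / (4.0 * qw)
        qy = (m[0, 2] - m[2, 0]) / (4.0 * qw)
        qz = (m[1, 0] - m[0, 1]) / (4.0 * qw)
        return qw, qx, qy, qz

    # Example: camera at (4, 0, 2) looking at the origin.
    print(look_at_quaternion((4.0, 0.0, 2.0), (0.0, 0.0, 0.0)))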

Dataset Creation


    // Use our generator to create a polyhedral scene or upload your own (.obj only).

  1. Number of Objects
  2. // Values between 1 and 180 are possible. The result may differ by +/- 10%.
  3. Scene Layout        
  4. // Choose whether objects are separate from each other, touching, or intersecting.

  5. // Upload your scene (Wavefront .obj only; a quick format check is sketched after this list).
  6. Lighting Conditions        
  7. // Define your lighting. Check "Rendering Setup" on the right.
  8. Number of Views
  9. // Choose the initial number of views you want (dataset size).
  10. Email address
  11. // Once your dataset is ready, we will send you the download link to this address.
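
Since the upload accepts Wavefront .obj files only, a quick local sanity check before uploading can save a failed request. The Python sketch below simply counts vertex and face records; the file name my_scene.obj is a placeholder, and this is our suggestion of a plausibility check, not the generator's own validation.

    # Sketch: check that a file looks like a Wavefront .obj scene by counting
    # vertex ("v") and face ("f") records. A plausibility check only.
    def summarize_obj(path):
        vertices = faces = 0
        with open(path) as fh:
            for line in fh:
                fields = line.split()
                if not fields:
                    continue
                if fields[0] == "v":
                    vertices += 1
                elif fields[0] == "f":
                    faces += 1
        if vertices == 0 or faces == 0:
            raise ValueError(f"{path} does not look like a Wavefront .obj scene")
        return vertices, faces

    # Example with a placeholder file name.
    v, f = summarize_obj("my_scene.obj")
    print(f"my_scene.obj: {v} vertices, {f} faces")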

How to Use the System

If you want to find out how to use the Polyhedral Scene Generator, please watch the video below. The video walks you through both main features of the system: dataset creation and issuing on-demand requests.

Using the API

The Polyhedral Scene Generator also provides an API for on-demand requests. The user can choose between a random and a pre-defined camera position, and between fixed and homogeneous lighting conditions. An example using Python can be found in the paper [pdf].
Much more information on how to use the API, including examples for MATLAB and Java, can be found on our accompanying GitHub Page.
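
For illustration, an on-demand request might look like the Python sketch below, built on the requests library. The endpoint URL and JSON field names are placeholders of ours, not the documented interface; the paper [pdf] and the GitHub Page describe the actual API.

    # Sketch of an on-demand request over HTTP. The endpoint and field names
    # are placeholders; consult the paper and GitHub Page for the real API.
    import requests

    ENDPOINT = "https://example.org/polyhedral/api/on-demand"  # placeholder URL

    payload = {
        "id": "YOUR-SCENE-ID",                # ID from a prior Dataset Creation request
        "camera": "defined",                  # or "random"
        "position": [4.0, 0.0, 2.0],          # X, Y, Z
        "orientation": [0.6, 0.0, 0.4, 0.0],  # Qw, Qx, Qy, Qz
        "lighting": "fixed",                  # "fixed" or "homogeneous"
    }

    response = requests.post(ENDPOINT, json=payload, timeout=60)
    response.raise_for_status()

    # Assumption: the service returns the rendering directly; the real API may
    # instead return a job status or a download link.
    with open("view.png", "wb") as fh:
        fh.write(response.content)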

How to cite

If you use this work in one of your publications, please cite us:

Markus D. Solbach, Stephen Voland, Jeff Edmonds and John K. Tsotsos.
"Random Polyhedral Scenes: An Image Generator for Active Vision System Experiments"
arXiv preprint arXiv:1803.10100 (2018).
[pdf] [bibtex]

Notice

  • We provide no guarantees for system performance.
  • The system is provided as is.
  • The system is developed for research purposes only; commercial use is not permitted.
  • For comments, problems, and questions, please send us an email (replies are not guaranteed).
  • If you use any aspect of this system, we respectfully request that all resulting publications cite this paper as stated in the section How to cite.