Virtual environments show great promise for training. Although such synthetic environments project homeomorphic physical representations of real-world layouts, it is not known how individuals develop mental models of such environments. To evaluate this process, the present experiment examined the accuracy of triadic representations of objects that had previously been learned under different conditions. The layout consisted of four differently colored spheres arranged on a flat plane. These objects could be viewed in either a free-navigation virtual environment condition (NAV) or a single-body-position virtual environment condition (SBP). The first condition allowed active exploration of the environment, while the latter allowed the participant only a passive opportunity to observe from a single viewpoint. Viewing condition was a between-subjects variable, with ten participants randomly assigned to each condition. Performance was assessed by the response latency to judge the accuracy of a layout of three objects over different rotations. Results showed linear increases in response latency as the rotation angle increased from the initial perspective in the SBP condition; the NAV condition showed no such effect of rotation angle. These results suggest that spatial knowledge acquired from virtual environments through navigation is similar to that acquired through actual navigation.