The human brain is able to generate guidance strategies in unknown environments and improve performance over multiple trials. Such capabilities are challenging to implement in autonomous systems. This research uses a computational framework based on a spatial value function and its invariants, termed interaction patterns, to investigate human planning and learning in unknown environments. A simulation system was used for human guidance experiments in a simulated obstacle field that was unknown to the subjects before the experiments. The system recorded vehicle trajectory, control inputs, and human gaze over multiple trials between specified start and goal locations. The human guidance policy is evaluated against an optimal control baseline formulated using interaction patterns extracted from a Dubins optimal solution. Differences in the subjects' control characteristics explain why exploratory behavior varies from one individual to the next. The authors use gaze spatial density and an information gain model based on gaze location data to investigate the visual cues that support guidance and navigation.