[Lab images: 4D-RGB model created by Vinoculer; 3D model created by Vinoculer; 3D model created by Vinobot; Vinobot and Vinoculer]

Welcome!

The Vision Guided and Intelligent Robotics Lab (ViGIR Lab) is affiliated with the Department of Electrical Engineering and Computer Science (EECS) at the University of Missouri in Columbia. The ViGIR Lab was first established in 2003 at the University of Western Australia and moved to the University of Missouri in 2005.

In the lab, we conduct research in areas such as Computer Vision, Pattern Recognition, and Robotics. We develop models to represent the world as perceived by cameras and other sensors, and we devise algorithms that use these models to extract real-time information from a sensory network in order to guide robots through various tasks. Our main goal is to build new Human-Robot Interfaces that can be used in robotic assistive technology, augmented reality, automation, tele-operation, and related applications.

The ViGIR Lab is currently located on the first floor of Naka Hall (formerly Engineering Building West) and is equipped with 8 dual-core, dual-headed workstations and a 16-core server running Linux. The computers share over 20 TB of disk space via NFS, interconnected by two 1 Gbit/s networks. The networks also interconnect two NVIDIA servers (with multi-core GTX 480 and C1060 GPUs) and 9 embedded vision-sensor devices that control various FireWire and Kinect cameras spread throughout the lab. The cameras are used to create a Virtual/Augmented Reality environment and to control an industrial Kawasaki UX150 robot. Other equipment, such as two P3DX mobile robots and one Husky mobile robot, one MICO and one JACO2 robotic arm, a power wheelchair, a 3D structured-light scanner, and a linear slide, is also available for various research projects.