WPI's Robo-Ops Run

We know that some of you may not have been able to watch the livestream of us competing yesterday. Fortunately, we were able to record the livestream so you can watch it any time you want. The video below shows our run through the entire competition, from beginning to end, with our official start time occurring at the six-minute mark. It’s pretty action-packed, but if you want to see specific highlights, check out our channel at http://www.justin.tv/wpiteamoryx/videos. Over the next few days, we will be going through the video and picking out the most exciting portions of our run.

Testing the Arm

We just finished making some modifications to the arm and decided to try picking up some rocks and dropping them into a makeshift sample storage bin. The rocks in the video below are the same five rocks Oryx 1.0 picked up in last year’s competition.

Though this was a somewhat controlled test, we are hoping to do a full Robo-Ops simulation later this week, including a live stream of our driver station. Stay tuned for more information on exact dates and times.

ROS EPOS Package

Thanks to generous donations from Maxon, our rover exclusively uses Maxon motors and EPOS2 controllers. While there is already a ROS package that can control EPOS controllers, we found that its capabilities are somewhat limited. Furthermore, it does not fully utilize the simple, dynamic reconfigurability that ROS allows. For this reason, we decided to make our own wrapper, designed to be highly user-configurable.

Some of the benefits of our EPOS package include:

  • Ability to control any number of motors without recompiling code. We load the motors and their parameters at runtime, so adding another motor is as easy as passing an additional argument to the program (see the sketch after this list).
  • YAML-based parameter initialization. By using the YAML parameter format that ROS commonly uses, you can store each motor’s parameters in a text file. This makes it easy to modify or add motor parameters without touching a single line of code.
  • Dynamic reconfiguring of parameters. Using the ROS dynamic_reconfigure package, the user can modify a number of parameters (such as acceleration/deceleration profiles) while the program is running. This makes it possible to finely tune motor parameters without having to restart the program.
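
To make the first two points concrete, here is a minimal sketch of pulling a list of motors and their parameters from the ROS parameter server at runtime. This is not the actual EposManager code; the node name, parameter names, and YAML layout are made up for illustration.

    // Example YAML, loaded onto the parameter server with "rosparam load motors.yaml":
    //   motors: [left_drive, right_drive]
    //   left_drive:  {acceleration: 10000, max_velocity: 5000}
    //   right_drive: {acceleration: 10000, max_velocity: 5000}
    #include <ros/ros.h>
    #include <string>
    #include <vector>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "epos_param_example");
      ros::NodeHandle nh;

      // The list of motor names is read at runtime, so adding a motor only
      // means editing the YAML file, not recompiling.
      std::vector<std::string> motors;
      nh.getParam("motors", motors);

      for (size_t i = 0; i < motors.size(); ++i)
      {
        int acceleration = 0, max_velocity = 0;
        // Each motor's parameters live under its own namespace in the YAML file.
        nh.getParam(motors[i] + "/acceleration", acceleration);
        nh.getParam(motors[i] + "/max_velocity", max_velocity);
        ROS_INFO("Loaded %s: accel=%d, max_vel=%d",
                 motors[i].c_str(), acceleration, max_velocity);
      }
      return 0;
    }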

While the code is almost complete, we still need to do some testing to make sure there aren’t any hidden bugs. In the meantime, here is a link to the code repository for EposManager.

Additionally, we have received a number of requests from both universities and private entities for more information on how we got the EPOS library working under Linux. Because this seems to be a common problem, we will be adding an EPOS-specific tab to our website detailing all of the challenges we have encountered and how we eventually solved them. If you have any questions about getting the EPOS to work under Linux, please let us know and we will try to help.

Video Bandwidth Testing

One of the major challenges with Oryx 1.0 was providing a useful, high-quality video stream over a limited-bandwidth network. Due to the low upload speed of Verizon 3G, coupled with the lack of video compression, Oryx 1.0 was only able to stream 240p video at a few frames per second. While the operators were still able to view the rover’s surroundings, the resulting video feed was not ideal.

This year, we are using Theora, an open-source video compression format, to significantly reduce the required bandwidth while still providing a high-quality stream. Theora cuts video bandwidth through a combination of spatial and temporal compression. The spatial side works in a manner similar to MJPEG compression: it analyzes each image and reduces the quality of the portions that the human eye is less sensitive to. To reduce bandwidth further, Theora compresses the video temporally by breaking each frame into multiple subimages and comparing each of them against the corresponding subimages of the previous frame. Only the subimages that differ from the previous frame are transmitted, resulting in less data being sent across the network.
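
To make the temporal compression idea a little more concrete, here is a much-simplified sketch of the same principle. This is not Theora's actual algorithm, and the block size and change threshold are arbitrary illustration values: each frame is split into fixed-size blocks, and only the blocks that differ from the previous frame would need to be transmitted.

    // Simplified block-based temporal compression, for illustration only.
    #include <opencv2/opencv.hpp>
    #include <vector>

    // Returns the blocks of the current frame that changed enough since the
    // previous frame to require re-encoding; unchanged blocks can be skipped.
    std::vector<cv::Rect> changedBlocks(const cv::Mat& prev, const cv::Mat& curr,
                                        int blockSize = 16, double threshold = 5.0)
    {
      std::vector<cv::Rect> blocks;
      for (int y = 0; y + blockSize <= curr.rows; y += blockSize)
      {
        for (int x = 0; x + blockSize <= curr.cols; x += blockSize)
        {
          cv::Rect roi(x, y, blockSize, blockSize);
          // Absolute difference between the two frames, averaged over the block.
          double diff = cv::norm(curr(roi), prev(roi), cv::NORM_L1) / roi.area();
          if (diff > threshold)
            blocks.push_back(roi);  // this block is "new" and would be sent
        }
      }
      return blocks;
    }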

To test the network load of our video feed, we streamed high-definition video at 10 fps with no compression, with MJPEG compression, and with Theora compression, and monitored the bandwidth of each stream over 30 minutes. As seen in the graphs below, the raw stream required about 11 MB/sec, far more than is available over a 3G or 4G network. MJPEG compression brought this down to roughly 500 KB/sec, which still pushes the network bandwidth to its limit. The Theora stream, however, averaged only 22 KB/sec, well within our bandwidth requirements. The resulting video is still of high enough quality to drive and operate the rover without demanding a high-speed internet connection.

Bandwidth Data For Different Compression Formats
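
If you want to run a similar measurement yourself, the node sketched below logs the bandwidth of a single stream once per second. It assumes the frames arrive as a ROS sensor_msgs/CompressedImage topic with a hypothetical topic name; the raw and Theora streams use different message types, so treat this as an illustration rather than our exact test setup.

    // A minimal bandwidth logger, assuming the stream is published as a ROS
    // sensor_msgs/CompressedImage topic (the topic name is hypothetical).
    #include <ros/ros.h>
    #include <sensor_msgs/CompressedImage.h>

    static long g_bytes = 0;

    void frameCallback(const sensor_msgs::CompressedImage::ConstPtr& msg)
    {
      // Accumulate the payload size of each incoming frame.
      g_bytes += static_cast<long>(msg->data.size());
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "bandwidth_logger");
      ros::NodeHandle nh;
      ros::Subscriber sub = nh.subscribe("camera/image/compressed", 100, frameCallback);

      ros::Time window_start = ros::Time::now();
      while (ros::ok())
      {
        ros::spinOnce();
        if ((ros::Time::now() - window_start).toSec() >= 1.0)
        {
          ROS_INFO("Bandwidth: %.1f KB/sec", g_bytes / 1024.0);
          g_bytes = 0;
          window_start = ros::Time::now();
        }
        ros::Duration(0.01).sleep();
      }
      return 0;
    }

In practice, the rostopic bw command will report the same number for any topic without writing a node.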

Colored Object Detection

Even though we do not yet have a drivable robot, it is never too early to begin programming its basic controls. One of our major design requirements is for the rover to assist the operator in picking out objects of interest, which in our case are rocks of various colors. To accomplish this, we are using a C# wrapper for the popular OpenCV library. For rock detection, we look at the RGB values of each pixel in the image and run a simple color filter to find pixels of a certain color. Pixels of similar color in similar locations can then be grouped together as “blobs”. The centroid of each blob can then be found and overlaid on the original image to alert the user when a rock of interest has been found.
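
Below is a rough sketch of that pipeline. It is written against OpenCV's C++ API purely for illustration (our code goes through the C# wrapper), and the color bounds are placeholder values rather than the thresholds we actually use.

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main()
    {
      cv::Mat frame = cv::imread("frame.png");   // hypothetical input image
      if (frame.empty())
        return 1;

      // 1. Simple color filter: keep only pixels inside a BGR range
      //    (placeholder bounds for "reddish" pixels).
      cv::Mat mask;
      cv::inRange(frame, cv::Scalar(0, 0, 120), cv::Scalar(80, 80, 255), mask);

      // 2. Group neighboring filtered pixels into "blobs" (contours).
      std::vector<std::vector<cv::Point> > contours;
      cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

      // 3. Compute each blob's centroid and overlay it on the original image.
      for (size_t i = 0; i < contours.size(); ++i)
      {
        cv::Moments m = cv::moments(contours[i]);
        if (m.m00 > 0)  // ignore degenerate blobs
        {
          cv::Point centroid(static_cast<int>(m.m10 / m.m00),
                             static_cast<int>(m.m01 / m.m00));
          cv::circle(frame, centroid, 5, cv::Scalar(0, 255, 0), -1);
        }
      }

      cv::imwrite("detections.png", frame);
      return 0;
    }

A minimum-area threshold on m.m00 (the blob's area) would be an easy way to reject single-pixel noise before alerting the driver.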