Thursday, December 19, 2013

VIS198 - MPIScope for Distributed Graphics Applications

Within the context of our data visualization project, my role was mostly to develop a framework with which the other members of the team could implement their graphical ideas. To accomplish this I wrote a Python module that handles data retrieval and the basic structure of distributed rendering, and provides a simple hook for other people to plug their renderers into. Also provided is a hook for a parsing function used to transform the retrieved datasets into a smaller or refined data set before they are passed to the renderer itself. By abstracting away data retrieval and inter-node communication, the framework lets rendering classes be implemented without concern for where the data comes from or the distributed nature of the cluster. The Python module itself is available here.

The module is divided into two main source files and three classes, the most important of which is the MPIScope class in MPIScope.py. This is the user-facing class that handles initialization and setup of the program. When provided with a rendering object and a dictionary mapping keys to URLs to retrieve data from (and optionally a parse function and a delay time), the MPIScope object handles the communication between nodes and the synchronization of the different displays.

A simple example is provided with the module and is repeated here.

from mpiscope import MPIScope
from mpiscope import DummyRenderer

urlList = { "gordon" : "http://sentinel.sdsc.edu/data/jobs/gordon"
          , "tscc" : "http://sentinel.sdsc.edu/data/jobs/tscc"
          , "trestles" : "http://sentinel.sdsc.edu/data/jobs/trestles"
          }

def parse(data):
    print(data)
    return data

mpiScope = MPIScope(DummyRenderer(), urlList, parse, delay=60)
mpiScope.run()

In this example a new MPIScope instance is created from a DummyRenderer, a dictionary of URLs for supercomputers at the SDSC, a very basic parse function that simply acts as the identity, and a delay of 60 seconds. In this case the parse and delay arguments are largely redundant; parse only prints the data it receives for debugging purposes, and the delay defaults to 60 seconds anyway. After this instantiation the program is started by calling mpiScope.run().

The parse function here requires some elaboration. The "data" it receives is simply the urlList dictionary with each URL replaced by a dictionary of keys and values corresponding to the JSON data retrieved from that URL. The parse function can then be used to filter or reduce this data set before it is given to the renderer. A final thing of note is that the parse function, along with the data retrieval itself, runs on a separate thread from the renderer. As a result, if a data set is sufficiently large and your hardware sufficiently weak, it may help performance to do this data reduction inside the parse function rather than in the renderer, since that effectively offloads the processing to another thread.
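As an illustration, a parse function that trims the retrieved data down to just what a renderer needs might look something like the sketch below. The "jobs" field and the derived "job_count" are hypothetical, since the exact shape of the JSON returned by the sentinel.sdsc.edu endpoints isn't described here; the point is only that parse sees a dictionary keyed by the same names as urlList.

# A rough sketch of a parse function that reduces the retrieved data.
# The "jobs" field is hypothetical; the real names depend on the JSON the
# sentinel.sdsc.edu endpoints actually return.
def parse(data):
    reduced = {}
    for system, payload in data.items():   # keys: "gordon", "tscc", "trestles"
        jobs = payload.get("jobs", [])      # hypothetical field name
        reduced[system] = {
            "job_count": len(jobs),
            "jobs": jobs,
        }
    return reduced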

For our purposes the MPIScope class seems to fulfil its purpose well enough, but we haven't had a chance to test our work on the actual Raspberry Pi cluster we are targeting. Some of the implementation details are somewhat sketchy, but until it is running on the target hardware I don't believe it is worth attempting to optimize. For the moment the public interface is fairly simple, but one pain point I've already noticed is the lack of a simple way of retaining old data sets when new data is obtained. The workaround for now is to have the renderer object store the older data sets itself inside its draw method, but this is far from ideal. At this point the problems I see with the module don't seem serious enough to justify complicating the class to fix them.
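To make that workaround concrete, a renderer could simply keep a small history list on itself and append to it in draw, roughly as sketched below. The history list, its length cap, and the assumption that draw receives the parsed data as an argument are all illustrative choices of mine, not part of the module.

class HistoryRenderer:
    # Sketch of the stopgap: the renderer keeps its own history of data
    # sets, since MPIScope only ever hands it the newest one.
    def __init__(self):
        self.history = []

    def draw(self, data):
        self.history.append(data)
        if len(self.history) > 10:
            self.history.pop(0)  # keep only the ten most recent data sets
        # ... render using both data and self.history ...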

The other important file in the module is the DummyRenderer file and class. While this class shouldn't be used in a real application, it does provide an example of how a renderer class passed to MPIScope should look. The comments in the file are fairly self-explanatory, but essentially the start method should create a graphics context, draw should do any actual drawing based on the data set, and flip should tell your graphics library to display the newly rendered buffer. A rough sketch of a renderer following this shape is included at the end of this post.

This project was a large learning experience for me for two major reasons. Firstly, programming a cluster of computers was entirely new to me, and what I know now I learned while working on this module. Secondly, and perhaps most importantly, this was my first time working on a real team programming project with a clear division of labor. I'm used to having complete creative control over my projects, so learning to let others do their part without interfering took effort on my part. Another part of this challenge was having to think more deeply about the API I created than I would if I were working alone; not being able to break things whenever I wanted required more thought about my changes and how they would affect the other project members. This need for communication and cooperation made things more difficult at first, but eventually made the project less stressful, as I learned to focus on my parts of the project rather than the project as a whole.
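As promised above, here is a minimal sketch of what a renderer written against the start/draw/flip interface might look like, using pygame purely as an example backend. The method names come from the description of DummyRenderer, but the assumption that draw receives the parsed data as an argument, the pygame specifics, and the "job_count" field are mine and may not match the module's exact signatures.

import pygame

class BarChartRenderer:
    # Illustrative renderer following the start/draw/flip shape of
    # DummyRenderer; any library that can open a window and swap buffers
    # would work in place of pygame.

    def start(self):
        # Create the graphics context.
        pygame.init()
        self.screen = pygame.display.set_mode((640, 480))

    def draw(self, data):
        # Do the actual drawing based on the (parsed) data set.
        self.screen.fill((0, 0, 0))
        x = 10
        for system, info in data.items():
            height = min(info.get("job_count", 0), 400)  # hypothetical field
            pygame.draw.rect(self.screen, (0, 200, 0),
                             (x, 440 - height, 40, height))
            x += 60

    def flip(self):
        # Tell the graphics library to display the newly rendered buffer.
        pygame.display.flip()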

Tuesday, May 14, 2013

Simplifying the Joysticks' Wiring

Our Parallel Pong game uses a couple of Zippyy joysticks, which have been connected via a breadboard. In order to free the joysticks from the bulky breadboard and ensure more secure wiring, we acquired a small Adafruit PermaProto board and a ribbon cable to bring the Raspberry Pi's GPIO pins out directly to the board.

PermaProto Board and ribbon cable

Tuesday, April 23, 2013

Raspberry Pi Cluster Rack Sketch


If you've looked at the earlier posts, you probably noticed the spaghetti of wires and Raspberry Pis laid out in front of the display wall. +ET Parreira has certainly mentioned it to me, after he had to set up Pong after Triton Day. That's what happens when you're feeling your way around a new piece of hardware and just trying to get the code working.

Thursday, April 18, 2013

Parallel Pong on Raspberry Pis

When building a cluster computer, you need software to run on it. We thought that games would make a great demonstration, and this led us to embark on making the greatest game to ever come to distributed programming: Pong.

Everyone needs one of these

Wednesday, April 17, 2013

Interfacing Zippyy Joysticks with the Raspberry Pi

In a later post, you'll read about a parallelized version of Pong painted across a grid of computer screens. This sub-project was to hook joysticks up to a controlling node in the Pong cluster and give it a classic arcade-cabinet feel. The only difference is that it is displayed on 15 screens! Until we got this working, we had to watch an AI enjoy the game.
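The joysticks are read through the Pi's GPIO pins, since arcade sticks like these are typically just four microswitches. As a rough illustration of the idea (not our exact wiring), polling the four directional switches with the RPi.GPIO library might look like the sketch below; the pin numbers are placeholders, not the ones we actually used.

import time
import RPi.GPIO as GPIO

# Placeholder BCM pin numbers for the joystick's four switches; the pins
# we actually wired the Zippyy joysticks to may differ.
PINS = {"up": 17, "down": 18, "left": 22, "right": 23}

GPIO.setmode(GPIO.BCM)
for pin in PINS.values():
    # Each switch pulls its pin to ground when pressed, so enable the
    # internal pull-up and treat a low reading as "pressed".
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    while True:
        pressed = [name for name, pin in PINS.items()
                   if GPIO.input(pin) == GPIO.LOW]
        if pressed:
            print("joystick:", ", ".join(pressed))
        time.sleep(0.05)  # poll at roughly 20 Hz
finally:
    GPIO.cleanup()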

Monday, April 8, 2013

UCSD Triton Day 2013

This Saturday, we moved the Sandbox into SDSC's lobby for Triton Day, UCSD's open house for newly admitted students. We set up the OptIPortable that we've been using to try out building tiled displays with Raspberry Pis, since Erik's code (details in a later post) has reached the working-demo stage. We thought it would be cool to let next year's students see what kind of things they can find on campus. Some of them had worked on serious electronics projects in high school, including an autonomous underwater vehicle and a NanoRacks experiment.

The SDSC Sandbox undergraduates (seated) working with Raspberry Pis. From left: Amy, Alex, and Erik.