The purpose of assignment 3 was to parse XML data in Processing, using existing XML data containing x-positions, y-positions, widths, and heights. This data is turned into boxes and drawn with the draw() function. Using the Box class, we walk through the data, pick out the information relevant to each box, and use getChildCount() to figure out how many boxes to make. The println() function prints the information already provided in the XML data to the Processing console. The beauty of XML over traditional spreadsheet tables is that it is readable by both computers and humans. For example, a box declared as <box id="1" height="40" width="50"></box> would, through the script, be drawn as a box 50 wide by 40 tall. The script can be found here, with screenshots below.
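To illustrate the same idea, here's a minimal plain-Java sketch using the standard DOM parser; Processing's own XML class offers analogous methods (getChildren(), getInt()), so treat this as a stand-in rather than the actual assignment code. The element and attribute names follow the box example above.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class BoxParser {
    // Count <box> elements, analogous to getChildCount() in Processing.
    public static int countBoxes(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            return doc.getElementsByTagName("box").getLength();
        } catch (Exception e) {
            return -1; // malformed XML
        }
    }

    public static void main(String[] args) throws Exception {
        String xml = "<boxes>"
            + "<box id=\"1\" x=\"10\" y=\"20\" width=\"50\" height=\"40\"/>"
            + "<box id=\"2\" x=\"70\" y=\"20\" width=\"30\" height=\"30\"/>"
            + "</boxes>";
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        NodeList boxes = doc.getElementsByTagName("box");
        for (int i = 0; i < boxes.getLength(); i++) {
            Element b = (Element) boxes.item(i);
            // println()-style echo of the attributes already in the data
            System.out.println("box " + b.getAttribute("id") + ": "
                + b.getAttribute("width") + "x" + b.getAttribute("height"));
        }
    }
}
```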
My idea for the final project is highly interactive. I was thinking of creating a new game that requires all four players, each connected with a Kinect camera and a TV. The players become actors in a greater game, which is projected in the center of Crown Hall. The image above is a quick sketch of the setup. Each corner of the building will have a booth, designating that player's role in the game. In terms of programming and Processing, the cameras will accept all visual input and each TV will output what the other players see. However, none of the players will see what the audience sees.
The purpose of the game is to work together to overcome a certain challenge. With no audio feedback, the players focus on miming their instructions to the other players, much like Charades. The screen, I’d imagine, would be divided into four quadrants. The audience watches the four players in Center Core via a projector onto a screen, or perhaps onto the floor itself.
In terms of the game’s structure, I haven’t considered what types of challenges the players would endure together. I figure that it’d be some sort of sequence of puzzles, much like Myst. For example, player 1 may need to hold open a door while player 2 runs in to get a key from a cave, but since it’s so dimly lit inside, player 3 and player 4 hold mirrors to reflect the light. The challenges can be altered so the game doesn’t require four players at all times.
I imagine the most challenging aspect of this project, should we venture my way, is coordinating the four different inputs into one “base”, and then programming the game so that each of the four inputs controls different variables. In a simplified version of the game, I imagine a similar setup, but each player takes a turn making their body into a certain shape (either their own creation or one suggested by the game), and it is the other three players’ responsibility to match the pose or lose.
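The core of that simplified pose-matching mode could be a small comparison function. Here's a hypothetical Java sketch: poses are reduced to arrays of joint angles (a big simplification of real Kinect skeleton data), and a "match" is an average difference under some tolerance. All names and the scoring scheme are my assumptions, not a worked-out design.

```java
public class PoseMatch {
    // Each pose is a list of joint angles in degrees — a hypothetical
    // simplification of Kinect skeleton data.
    public static double poseDistance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += Math.abs(a[i] - b[i]);
        }
        return sum / a.length; // mean absolute angle difference
    }

    // A pose "matches" when the average difference is within tolerance.
    public static boolean matches(double[] target, double[] attempt, double tolDeg) {
        return poseDistance(target, attempt) <= tolDeg;
    }
}
```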
Suggestions in the comments would be helpful.
With all the attention Cloud Gate’s been receiving lately with the “Luminous Field” installation, I felt it necessary to make it my topic in my Image City course. The assignment was to cover a contemporary issue, object, or topic, and to make an image piece accompanied by a brief 250–500 word essay. Below are my iterations:
We were also asked to make a video to accompany these slides. My iteration of the Bean as a social gathering place, through the selective use of Flickr photos (with creative commons licensing!) can be found here.
Assignment three addresses the input of XML data and its output as triangle information. Through the use of classes, I am able to call functions within a particular class, kept in a separate attached file, to streamline the retrieval, update, and output process. Below is some of the script used to generate these triangles.
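The class pattern in question, sketched in plain Java (names hypothetical; in Processing this class would live in its own tab): the main sketch only calls the class's methods, which keeps the retrieval, update, and drawing logic in one place.

```java
public class Triangle {
    float x, y, size;

    Triangle(float x, float y, float size) {
        this.x = x;
        this.y = y;
        this.size = size;
    }

    // Update the position each frame; the main sketch never
    // touches the fields directly.
    void update(float dx, float dy) {
        x += dx;
        y += dy;
    }

    // Area of an equilateral triangle with side length `size`.
    float area() {
        return (float) (size * size * Math.sqrt(3) / 4);
    }
}
```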
One of the things that we covered in class is incorporating movement and mouse input. While the original class assignment was intriguing in its potential, I felt that it lacked color. The three images below show, with the code above, how you can add color to the Processing program by declaring three variables (the RGB values) and setting a function to randomize the colors chosen.
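The three-variable color trick amounts to picking a random value per channel. A minimal Java sketch of the idea (in a Processing sketch this would just be three calls to random(255)):

```java
import java.util.Random;

public class RandomColor {
    // Pick a random RGB triple, as the sketch does with
    // its three color variables.
    public static int[] randomRGB(Random rng) {
        return new int[] {
            rng.nextInt(256), // red
            rng.nextInt(256), // green
            rng.nextInt(256)  // blue
        };
    }
}
```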
To explain: the first image is the last remaining of the three original triangles. The second image shows that by clicking the mouse, the user generates a series of triangles that emerge from the click position; notice how each triangle is a different color. The third image shows that once the mouse is released, the triangles move outward and disappear from the screen.
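That emerge-and-disappear behavior boils down to giving each triangle a velocity pointing away from the click and removing it once it leaves the canvas. A hypothetical Java sketch of one such particle (the real sketch's variable names may differ):

```java
public class Particle {
    float x, y, vx, vy;

    // Spawn at the click position (cx, cy), moving outward
    // at `angle` radians with the given speed.
    Particle(float cx, float cy, float angle, float speed) {
        x = cx;
        y = cy;
        vx = (float) (Math.cos(angle) * speed);
        vy = (float) (Math.sin(angle) * speed);
    }

    void update() { // called once per frame
        x += vx;
        y += vy;
    }

    // Once offscreen, the sketch can drop the particle.
    boolean offscreen(int w, int h) {
        return x < 0 || x > w || y < 0 || y > h;
    }
}
```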
A copy of the script can be found here.
This interesting post illustrates the potential of creating dynamic art using static photographs. The artist, Nobuhiro Nakanishi, takes pictures of a landscape and laser prints them onto acrylic, as the original post points out. The effect of movement occurs as the viewer moves past the piece, observing the subtle differences in each frame. I found this especially interesting since I’m taking a class on image and the city, and our most recent assignment was to create a video clip using photographs. I really like the composition of this piece, and I feel that it allows the audience to look at the pictures from different angles, a hemispheric perspective on a series of photographs. In terms of future implications, this type of art can influence the way we present our drawings and how users interact with space through this medium. We let the audience take a tour of the tour we’ve designed for them, removing some freedoms (by choosing the photographs ourselves) while leaving them free to look at the slides as they please.
My biggest inspiration was a blog post by a fellow student found here that illustrates several beautifully executed examples of using Processing. For the second assignment, the project was aimed at utilizing a for loop; essentially, making the program repeat itself until certain parameters are reached. First, I began playing with a script called “Box Fitting Img”, which takes an existing picture and uses different sized squares to recreate that image.
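The for-loop idea in miniature, as a plain-Java sketch (names hypothetical): repeat until a parameter — here, the square count — is reached, computing each square's x position along a row.

```java
public class GridDemo {
    // A for loop that repeats until `count` is reached,
    // producing the x position of each square in a row.
    public static int[] columnX(int count, int spacing) {
        int[] xs = new int[count];
        for (int i = 0; i < count; i++) {
            xs[i] = i * spacing;
        }
        return xs;
    }
}
```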
I began with an image found on Google. I found this one to be pretty interesting:
The script was relatively robust; it had a couple of errors in referencing the image file, but besides that minor inconvenience, it ran perfectly. The script begins by creating an empty canvas with a white border, draws five squares of relatively large widths, and expands them to a predetermined size. It defines a Box class that, much like the one we wrote in class, simply draws a box.
The first boxes are created and placed on the canvas. As the script runs, it checks pixels using the checkPixel() function and runs through the entire image (specified in the script). The beauty of this script is that the bulk of the for-looping happens in the expand() function, which takes the dimensions of existing squares and then checks for collisions and obstructions.
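A guess at the shape of that logic, in plain Java: the real script's expand() is more involved (squares grow in all directions and also sample the image), but the core is an axis-aligned overlap test inside a growth loop. Everything here — names, the grow-down-and-right simplification — is my assumption.

```java
public class BoxFit {
    // Axis-aligned overlap test between two squares {x, y, size}.
    static boolean overlaps(int[] a, int[] b) {
        return a[0] < b[0] + b[2] && b[0] < a[0] + a[2]
            && a[1] < b[1] + b[2] && b[1] < a[1] + a[2];
    }

    // Grow a square 1px per step (down-and-right only, for simplicity)
    // until it would hit another square or the canvas edge.
    static int expand(int[] box, int[][] others, int w, int h) {
        while (box[0] + box[2] < w && box[1] + box[2] < h) {
            box[2]++;
            for (int[] o : others) {
                if (overlaps(box, o)) {
                    box[2]--; // undo the step that collided
                    return box[2];
                }
            }
        }
        return box[2];
    }
}
```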
The boxes continue emerging in groups of five at random points in the image, constantly checking (using for loops) for obstructions. As the canvas fills with squares, I noticed several areas with no squares drawn in. I think this is because the squares get exponentially smaller and can’t be seen unless the screen resolution were even higher. The other beauty of this script is that it is user-generative: if I wanted a different iteration, I simply clicked the mouse and another set would begin to draw.
Notice the changes between these two iterations. The square in the middle of the second one is clearly an area that has been overemphasized.
Now, to apply this to my script, I wanted to take our class example and elaborate on the collision bit: create these generative squares (predetermined at 500) and have them regenerate on each mouse click. As of now, clicking sets the canvas back to 0, but I can’t seem to figure out the draw function or the collision functions. Hopefully keeping this tabbed as a “work in progress” will remind me to come back to it.
Edit – February 18, 2012
After consulting with my computer science major friend, I have resolved all issues! The file now creates all squares through a series of for loops and if statements. The beauty behind this script is that by creating a white border, it is able to detect collisions and avoid them. Through the expand() function, we can visualize the relationship between squares: by adding or subtracting the number of starting squares, we see the relationship between early and later squares, where the later squares are significantly smaller than the older ones. You can find the script here.
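One way to read the white-border trick, sketched in plain Java (the actual script checks pixel colors; here a boolean grid stands in, and the function name is hypothetical): pixels inside the border band count as occupied, so hitting the canvas edge falls out of the very same check used for square-to-square collisions.

```java
public class BorderCheck {
    // Treat any pixel within `border` of the edge as already occupied,
    // so expanding squares stop at the canvas edge with the same test
    // they use for collisions with other squares.
    static boolean occupied(boolean[][] filled, int x, int y, int border) {
        int w = filled.length, h = filled[0].length;
        if (x < border || y < border || x >= w - border || y >= h - border) {
            return true; // border band: always "occupied"
        }
        return filled[x][y];
    }
}
```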
In summary, this video is a stop-motion short film composed of several photographs taken from cameras hooked up to a styrofoam apparatus of sorts. The Canadian teens used a professional weather balloon filled with helium for lift, and also included a cell phone with GPS for tracking purposes. Their efforts were astounding, something that I wish I could have conjured while in high school. I think it’s interesting to see what kids are doing with technology, and the future of turning ideas into actions. They were reimbursed for their materials, and will probably get a full ride to a prestigious college. I like this kind of creativity, and hope that soon more kids will become empowered to carry out their ideas big and small.