California Love is an experimental VR experience that explores time (the 70s, 80s, and 90s) through a series of videos created during each decade.
For our midterm, Mai and I were interested in using songs to change our environment. Eventually we grew interested in using songs that had sampled other songs. We found ourselves experimenting a lot with this project. We came across the song California Love by Tupac and found that he had sampled two songs: Roger Troutman’s California Love and Joe Cocker’s Woman to Woman.
Something interesting we found was that Tupac created his song in the 90s, Roger Troutman in the 80s, and Joe Cocker in the 70s. Realizing this, we decided to use these decades as themes for each of our scenes. We then found footage from each decade in an attempt to ground the user in that era.
WEEK 3 & 4
A Re-creation of Home
Below is a street view of the house and neighborhood I grew up in for seven years of my life. I have some really great and some very traumatic memories of this place. I wanted to reconstruct this environment. So, I decided to search my address in Google Maps. In the street view, I took several snapshots of the block I grew up on.
In Unity, I tried to recreate my neighborhood with free assets from the asset store. Below are some snapshots of what I created.
Below is a 3D scan of myself that I will include in the environment.
The next steps are:
Animate the model of myself so that he can walk within the environment
Create destinations within the environment that will be trigger points for memories
Make 3D models of the actual houses
This video includes a visual representation of Keerthana’s write up.
Here I’ve come to live a lie
To forget where I lay
And the reasons for I cry
Here I’ve come, away from the concrete array
Amongst the city that never sleeps, I see
The resilience of nature evidently
Of bustle and hustle, a sea
But it is this that I seek so ardently
With music touching my face
And a wind chill warming my cheek
As I walk, I race
The leaves across a creek.
For then, I forget
All that I regret
Of everyone I lost
And everything it cost
I walk back with a smile
That I turned this wheel
It’ll get me going for a while
And this is how I feel
West 4th St. Station, waiting on the train. Here are my observations:
I am standing next to one of the wooden benches, waiting for the A train to go home. I live uptown. There are two dudes sitting on the benches. One is Black with long dreads and the other appears to be Hispanic. Around me are people waiting on their trains. Some are talking with one another and others are sucked into their phones. There is only one person who is reading a book.
The E train just arrived and many are getting on it. The A train arrived a few seconds later.
I created a video using 360 footage of me painting in my studio. It’s really my bedroom, but it also operates as my studio. One way I thought about creating presence was to seemingly give the viewer a body. I did this by attaching the Ricoh Theta to my face. Since the video is 360, I covered one of the cameras (using a soft cloth, careful not to scratch the lens). This limited the viewer to a 180-degree view. Also, by covering one of the camera lenses, it seems as if the viewer is wearing a hoodie or jacket. Noticing this, I put on a jacket in hopes of making the experience more immersive and realistic.
“Misundersthood” is an interactive hoodie installation/narrative that addresses the issue of police brutality. Users will be instructed to put on a hooded sweatshirt (hoodie). Once the hood is flipped up, an audio clip will play. Users will then hear a story about a young Black boy’s encounter with a police officer. This piece aims to spark both an inner and outer dialogue with users and the public about the issue at hand. This project was inspired by an actual encounter I had with a police officer when I was thirteen years old.
This week I thought through how flipping up the hood of the sweatshirt would activate the sensor.
Things to think about:
What is the threshold of the sensor when the hood is off versus when it is on?
Where is the best place to put the flex sensor?
How can I embed headphones within the hood?
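One way to reason about the first question is a threshold with hysteresis, so the audio doesn’t flicker on and off when the sensor reading hovers near a single cutoff. Below is a minimal sketch of that logic in plain JavaScript; the threshold values are hypothetical and would come from calibrating the actual flex sensor with the hood up and down (on the Arduino side this logic would sit around an analogRead call).

```javascript
// Hood-detection logic with hysteresis: two thresholds instead of one,
// so a noisy reading near the cutoff doesn't toggle the audio rapidly.
// The numbers 600 and 400 are hypothetical placeholders for calibration.
const HOOD_ON_THRESHOLD = 600;  // reading above this => hood is up
const HOOD_OFF_THRESHOLD = 400; // reading below this => hood is down

function makeHoodDetector() {
  let hoodUp = false; // current state, held between readings
  return function update(reading) {
    if (!hoodUp && reading > HOOD_ON_THRESHOLD) hoodUp = true;
    else if (hoodUp && reading < HOOD_OFF_THRESHOLD) hoodUp = false;
    return hoodUp; // true => play the audio clip
  };
}
```

Readings that fall between the two thresholds leave the state unchanged, which is exactly the behavior you want while someone is adjusting the hood.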
In thinking about the audio piece, I found myself questioning whether I should include sound effects and whether they would add to the feeling of immersion.
For example, in the narrative I talk about the police pulling up next to me. I am considering including the sounds of the engine.
I believe there is something important in telling personal narratives, and I wish to include my narrative in this piece. When I was in the seventh grade, I got stopped by a police officer. I will never forget the way it happened. I had just bought some candy from the store and was riding home on my bike. Shortly after I turned the corner onto the street where I lived, a white police officer pulled up next to me, demanding that I get off my bike. I hadn’t done anything. After I refused, she aggressively yelled at me to get off again. As I stood on the curb with my hands behind my head, she frisked me. I don’t remember much after that.
My story ended with me being able to go home. But it very well could have had a different ending. So this project aims to highlight my story as well as the very possible turn it could have taken.
I have spent the past week trying to figure out what narrative I wish to portray.
The idea changed a bit when I realized that, too often, the same stories are told. So, I wanted to tell a lesser-known story. I am still trying to figure out whose story I wish to tell.
I have also gained inspiration from the lunch counter exhibit at The Center for Human Rights in Atlanta, GA. Below are images from this exhibit.
The idea I had was to have a person stand in front of a hoodie with a monitor embedded in the face of it. As the viewer looked into the hood, they would see their own reflection. Then the face of a young Black boy would appear, and he would tell the story of how his life was taken from him.
Dream: To allow people to one day walk in the shoes of a Black person, providing them with insight into our lives, both the beauty and the hardships.
Vision: Create an entire body of work around hoodies and what they symbolize when placed on the Black male.
Goal: Project idea: “Hooded” (a tentative name)
As a storyteller, I am always trying to figure out how to place a viewer in someone else’s shoes. The idea I have is to create an installation piece that highlights an aspect of the Black male experience: wearing a hoodie. When this garment is placed on the Black body, what transformation occurs? Often, such men are criminalized and stripped of their humanity. Media portrayal has shaped this narrative.
I wish to highlight and challenge these notions. From a historical standpoint, I want to understand the process of being hooded (of putting on a hood). Imagine walking into a room and seeing a hoodie dangling from the ceiling by wires, positioned in the center of the room. As you approach the hoodie, you are instructed (either through visuals or audio) to put on the piece of clothing. So you unhook the jacket from the wires and put it on. When you flip the hood up onto your head, you are taken on a journey about someone who lost their life due to racial injustice. Embedded within the hoodie are headphones that play the audio, and there are sensors that detect when the hood is placed on your head.
Here is a sketch of the hoodie with the headphones.
Mai and I decided to work together for our midterm. We both had an interest in including videos in our project and creating characters in tilt brush.
The type of VR experience we decided to go with is the Ghost without impact. In this experience, the user is accompanied by a cat that takes them on a journey down a long path. There are two large walls on either side of the user. The walls will have videos playing on them; the content is still being determined. At the end of the path, the cat will stop and the user continues on into another world. This is the first scene of this storyline.
This is a street view of the house and neighborhood I grew up in for seven years of my life. I have some really great and some very traumatic memories of this place. I wanted to reconstruct this environment.
So, I decided to search my address in Google Maps. In the street view, I took several snapshots of the block I grew up on.
Ideally, I want to place a 3D scan of myself within this environment, and when I walk to certain areas, I would activate some memories.
Interaction in Unreal
For this week I attempted to create an interaction where the user is able to collect coins. In the game, the user moves around by teleporting. I realized that this was a technical limitation: to collect the coins, I would have to teleport through them, which was not effective.
Videos and snapshot coming soon.
The Four Ways of Storytelling in VR
Observant Passive – the most traditional form of media
Participant Passive – a participant is placed in a creator’s world
Observant Active – an observer lacking any identity who is placed within a story
Participant Active – a participant is placed in a world and has the ability to manipulate and influence the storyline.
Super Hero Scene
I created this environment in Unreal. Above are three screenshots of my character within this world. My idea was to think of this as the start of a game. The goal would be to get to the top of the mountain where the mansion is located. The character would be able to go from house to house by jumping on a series of floating panels.
I had the chance to try VRChat and while conceptually, I think it is a great idea, my experience fell short of my expectations. Imagining a virtual environment potentially filled with avatars where one could interact with real people sounds entertaining. When I entered into the chat there were about 3 other people there, and all of them were relatively young. Their conversations were immature. One of the only ways to communicate with others is by talking. Also, I did not like the way the user can maneuver within the game.
The murder of 15-year-old Latasha Harlins and the beating of Rodney King sparked controversy, which ultimately led to the 1992 LA riots. But in the midst of this chaotic moment, two Los Angeles gangs long divided by misunderstanding, territory, and color agreed to a truce.
“1992” allows users to experience a rare moment in South Central’s gang history, a peace treaty between Bloods and Crips through augmented reality (AR). They were determined to come together for a common cause, to stand up against police brutality.
For my final I want to continue with the project on gangs in LA. I particularly want to focus on two major years: 1992 and 2016. In both years, the Bloods and the Crips took the initiative to have a truce. Growing tired of the killings of many Black and Latinx people, they decided that they needed to establish peace amongst one another.
This project aims to depict these stories through animated characters and found videos that address the LA riots and the two peace treaties.
The videos below serve as my inspiration.
With all the hype around Black Panther, I decided to do this assignment around this topic. Btw, the movie was dope!!!! I saw it twice.
The object I chose to augment was a remote. When the remote is tracked, a TV appears and the trailer for Black Panther plays.
If I had more time, I would have liked to incorporate virtual buttons that let you flip through movie trailers as you change the channel.
If you can Augment Anything
The Soundtrack/Black Panther Album for the movie Black Panther just dropped this morning, and of course Kendrick Lamar was selected to do it. Kendrick and Black Panther are culturally important to the Black community, especially now. So, if I could augment anything, it would be an interaction between Kendrick Lamar and Black Panther. They would probably be fighting side by side against villains. Kendrick’s character would be Kung Fu Kenny from his album Damn.
This video shows me using a virtual button to rotate a cube. In my next step I would want to use it to animate the characters in my previous project.
Multiple AR Trackers in Unity
This shows me using two images as AR trackers.
Commonly, gangs are known for their association with guns, drugs and violence.
This augmented reality piece aims to highlight a point in history when the Bloods and Crips united during the Los Angeles riots of 1992, an alternative narrative about gangs. This is an ongoing project in its early stages.
This project was created using Unity, Vuforia, Mixamo and Adobe Fuse.
Augment something in your home or any place (with any program that you are comfortable with)
For this assignment I augmented one of my drawings. There is an animation of a bird sitting on a streetlight looking around, with an airplane flying in the background. I also included audio of a bird chirping and a plane flying.
Write down 3 ideas for projects that you think act as socially engaged art practice, one paragraph each. These are thought experiments, and do not have to be something you can do right now.
Idea 1: The Alley-way Project
This topic came about right after class when Lauren, Ellen and I were in conversation about this assignment. We extended the discussion about public spaces and who has access to them.
I thought of the idea of transforming spaces to change how people feel while they are in them. We have this common perception of an alley as a narrow, unsafe, dirty, and sometimes dark place. The idea is to change this notion. So the overarching question is: how can we make an alleyway inviting to others? What can we add to or subtract from them to make people want to walk through them? Our idea was to possibly fill them with things that one would find at a festival: music, food, and much more.
Idea 2: My Shoes, Your Shoes, Our Shoes
The idea is to be able to step into the shoes of other people. This will be done in a public space, possibly on the street or in a park. Participants will be given chalk to outline their shoes on the ground. After outlining their shoes, they will be asked to type a message on their phone that is reflective of their personal narrative. An augmented reality (AR) tracker will somehow be associated with their particular outline and message, and placed in front of the outline they created. Participants will use some type of app on their phone that detects the AR trackers, so they will be able to both tell their story and experience the stories of others. Once another user steps into the outline of a person’s shoes and uses the app to view the tracker, they will be able to read what that person wrote. In this way people will be figuratively, or at least partially, stepping into the shoes of others.
Question: Would including audio in this project have a better impact?
How can we make it simple enough that large crowds can participate?
Idea 3: Living Walls
This idea aims to highlight what exactly New Yorkers like and dislike about their particular neighborhoods. There will be multiple portable walls placed in various neighborhoods around NY. The walls will initially be blank (white), and they will be left in each neighborhood for about two weeks. After that time, all of them will be collected and displayed in a public art show. The goal is to spark a conversation about the similarities and differences across communities that have been disconnected by affluence. Affluence will be operationalized further.
For this assignment I decided to create a game in unity.
This is the first iteration of the game layout. I had a difficult time figuring out how to make the blue plane with the red planes. If I had more time, I would stick with this idea. The direction I would go in would be towards an Assassin’s Creed-style game. That was one of my favorite games. Ideally, I would want the main character to be a Black woman.
I also enjoyed the mobile app Subway Surfers. There are many, many games that are very similar.
For this assignment I challenged myself to use already-existing footage. Specifically, I used green-screen footage that I compiled from YouTube. I believe there is something interesting in using ready-made materials and repurposing them.
Because of this, I was restricted in how I could tell a narrative. My goal was to try to tell a seamless narrative. However, I learned a lot from this process. For example, I had to rethink my shots and angles.
How did I do it?:
With my current interest in the topic of racial injustice, I knew, from a general perspective, what my project would be about.
Green-screen footage was perfect because many of the videos I came across stood alone; they were not contextualized. It was when I added background images and other objects into the same frame that these materials began to tell a story.
I was extremely limited in what I could do with the footage because my After Effects skills were limited. After learning a few tips and tricks, my footage began to come together.
At first, I contemplated whether the video needed audio. I was interested in the silent nature of the story. I also wanted the viewer to be alone with his or her thoughts.
I then revisited this thought. Again, because there were some limitations, I decided music had the power to fill the gaps. At least, that was my take on it. In the end, I believe the music complemented the piece.
If I had more time I would have made the film more. . .
I used this week to compile green-screen footage that I would use in my animation. I knew I wanted to develop a piece around police brutality. I did a simple YouTube search, and that is where I found all of my green-screen videos.
For this assignment I created an animation of my character (it doesn’t have a name yet) being hit by a ball. The character first assumes that since the ball is initially traveling at a fast speed, the impact will cause a lot of pain. However, as the ball approaches, it suddenly slows down and the impact is low. The character is “blown,” or upset, that its expectation was not the reality. So, after the impact of the ball hitting its face, the character says “WOW.”
The main challenge I had with this assignment was establishing a new position for each part of the character’s face that I chose to move. For example, when I tried to move the eyebrows, they would move, but that became their permanent location; they did not move to the new position once I played the animation.
Our group decided to make a video of us traveling from our hometowns to ITP. We wanted to make sure each of our travels were uniquely different.
Final projects are a creative idea inspired by the concepts in this class. There is no requirement to use a particular aspect of programming. The idea and your enjoyment and interest in the idea is what counts. Some things to remember.
Keeping things simple and small in scope is a plus. If your project idea is a big one, consider documenting the larger idea but implementing just a small piece of it.
Also think about making a final project for a small audience, even one single person like a family member or friend. . . or yourself. This can be a good way to focus your idea and design process. “Generalizing” the idea can come later (or maybe not at all.)
Final projects can be collaborations with anyone in any class.
Final projects can be one part of a larger project integrated with Physical Computing or another class.
For my final project I decided to create a visual to the song HiiiPower by Kendrick Lamar.
How it works:
There is a boombox and a flat-screen TV preloaded onto the screen, and there are six buttons on the boombox. As you press each button, a section of the song plays. As the song plays, the lyrics appear as well. In addition to the audio and text, pictures relevant to the lyrics play on the TV.
This is how the screen will first appear.
Once each button is pressed, the TV screen displays a visual that is specific to the lyrics that are playing.
Once you press another button, the same will happen.
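The button-to-section mapping described above can be sketched as a small lookup table. This is a plain-JavaScript sketch of the logic only; the cue times and labels are hypothetical placeholders, and in the actual p5 sketch pressing a button would also cue the audio and swap the TV image.

```javascript
// Each of the six boombox buttons maps to one section of the song,
// with a start time (to cue the audio) and a label (to pick the TV visual).
// All values here are hypothetical placeholders.
const sections = [
  { button: 1, startSec: 0,   label: "intro" },
  { button: 2, startSec: 30,  label: "verse 1" },
  { button: 3, startSec: 75,  label: "hook" },
  { button: 4, startSec: 105, label: "verse 2" },
  { button: 5, startSec: 150, label: "verse 3" },
  { button: 6, startSec: 195, label: "outro" },
];

// Given a pressed button number, return the section to cue, or null.
function sectionForButton(button) {
  return sections.find((s) => s.button === button) || null;
}
```

Keeping the cue points in one table like this makes it easy to retime sections without touching the playback code.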
Create a sketch with one or more of the following. Feel free to add DOM elements to a previous sketch.
Pre-defined HTML Elements
Pre-defined CSS Styles
HTML Elements generated by your p5 sketch
Some kind of mouse interaction with an HTML Element using a callback function you write.
If you are feeling ambitious, try replacing a DOM element with a “physical sensor!”
Questions you might ask yourself while working on the above.
When does it make sense to define HTML elements in index.html?
When does it make sense to “generate” HTML elements with code in p5?
When does it make sense to apply styles in code with the style() function vs. predefined styles in style.css?
The idea this week is to explore re-organizing your code. It is 100% legitimate to turn in a version of a previous assignment where nothing changes for the end user, but the code has been restructured. You may, however, choose to try a new experiment from scratch. Aim to keep setup() and draw() as clean as possible, and do everything (all calculations, drawing, etc.) in functions that you create yourself. Possibilities (choose one or more):
Break code out of setup() and draw() into functions.
Use a function to draw a complex design (like this) multiple times with different arguments.
Write a function that returns the result of a mathematical operation that you need to do several times in your code.
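As an illustration of that last idea, here is a small sketch of a reusable math helper in plain JavaScript. It re-maps a value from one range to another, the kind of calculation that tends to repeat inside draw(); note that p5 ships its own map() function, so this is just a sketch of the same idea.

```javascript
// Map a value from an input range [inMin, inMax] to an output
// range [outMin, outMax]. A helper like this replaces the same
// arithmetic copy-pasted in several places in a sketch.
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}
```

In a p5 sketch you would call it from draw(), e.g. to turn a mouse position into a color or a size.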
In general this week, you should work with rule-based animation, motion, and interaction. You can use the ideas below or invent your own assignment. Start by working in pairs/groups as determined in class. Try pair programming: one person at the keyboard, the other keeping the overall picture. Can you divide an idea into two parts and combine those parts? Can you swap sketches and riff off of your partner’s work? You can post together or break off and complete the assignment individually.
Try making a rollover, button, or slider from scratch. Compare your code to the examples below. Later we’ll look at how this compares to interface elements we’ll get for free from the browser.
One element that changes over time, independently of the mouse.
One element that is different every time you run the sketch.
(You can choose to build off of your week 1 design, but I might suggest starting over and working with one or two simple shapes in order to emphasize practicing with variables. See if you can eliminate all (or as much as you can) hard-coded numbers from the sketch.)
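The core of a from-scratch rollover or button is a hit test. Here is a minimal sketch of that test in plain JavaScript; in a p5 sketch the first two arguments would be mouseX and mouseY, and the rectangle would be described by variables rather than hard-coded numbers, in the spirit of the exercise above.

```javascript
// Rectangular hit test: is the point (mx, my) inside the rectangle
// whose top-left corner is (x, y), with width w and height h?
// A rollover checks this every frame; a button checks it on mousePressed.
function isOver(mx, my, x, y, w, h) {
  return mx >= x && mx <= x + w && my >= y && my <= y + h;
}
```

Once this returns true, the sketch can change the fill color (rollover) or toggle a state variable (button).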
Write a blog post about how computation applies to your interests. This could be a subject you’ve studied, a job you’ve worked, a personal hobby, or a cause you care about. What projects do you imagine making this term? What projects do you love? (Review and contribute to the ICM Inspiration Wiki page.) In the same post (or a new one), document the process of creating your sketches. What pitfalls did you run into? What could you not figure out how to do? How was the experience of using the web editor? Did you post any issues to GitHub?
For the final I decided to create an ITP robot. This print is composed of multiple parts: the head, the body, two arms, a foot, and a base. The only part that is detachable is the head. I also included two LEDs for the eyes.
I decided to create a new design because I was not truly satisfied with my previous idea.
The first challenge I had with the final project was coming up with a design that I was happy with. After several drafts, I found myself satisfied with this one. Secondly, while I was printing, several printers broke, and they happened to be the ones I was using. I also had complications with my print. During the printing process, parts of the print rose from the bed of the printer, causing it to move as the printhead added new layers.
For this assignment I decided to make a 2-piece print that could snap together. I chose to use Basquiat’s face.
The first step was to mirror his face. I also decided to adjust some of the points from the drawing by turning on the points. I mirrored the shape across the y-axis.
From here I extruded the curve and created a solid. Next I created a box and placed it within both parts of the face. Then I performed the “BooleanUnion” command and joined the left half of the face with the box.
Afterwards, I subtracted the volume of the box from the right half of the face (BooleanDifference). The result is below (a hole in the right side of the face).
Below is the original 3D print (far left) and two models that I printed of the above object.
I printed a total of three prints because the first two had an overhang at the location where the left print is supposed to insert into the right. To accommodate for this problem, I included supports so that there would not be a resulting overhang.
I had fun with this week’s assignment. I 3D printed a sketch that I previously created in Illustrator. In order to make this sketch printable, I had to make it a solid. I did this by extruding the curve (“ExtrudeCrv”) and making that a solid.
Below is the image I had drawn in Illustrator. It is an image of both Jean-Michel Basquiat (left) and Tupac Shakur (right).
Below is the second version of the image. I wanted to make Tupac look more realistic.
From that image I was able to trace the outline of each face and create a shape. Next, I patched the surface to make it 2D (“Patch”). Then I extruded the surface to make it a solid. (“ExtrudeSrf”).
The hard part was done, and it was time to print. I loaded the model into the Cura software, then sent it off to the 3D printer. Below is the final product.
My challenge with this project was that some of the 3D sketch did not print. This was because the width in some places was not large enough to be printed by the printer.
This week’s assignment was to model an object (can be useful or just for fun) that is composed of at least 2 parts that fit together.
The object I created for this assignment was a refrigerator. The first step was to create a box. I knew this would be the foundation of my fridge.
My next step was to create the handles of the fridge. I did this by creating three smaller boxes. I mirrored each of them across the axis where I had split the box. Then I joined each of the handles, respectively, to either the top or bottom half of the solid. (Note: I think it would have been easier to create the complete top handle, group those solids, and then mirror the handle.)
Then, I created a line. My hope was that this would be used to split the fridge in half. I then realized that this would not work since the object had to pass through the box. So, I created a rectangle and then carried out the “Patch” command, which filled the rectangle. Next, I positioned the new surface in the middle of the refrigerator so that it would bisect it (horizontally). And then I carried out the “BooleanSplit” command, which cut the solid in half.
After that I had to figure out how could I make the two pieces fit together. To do this I decided to create another box within the two boxes. Once I completed that, I joined the top half of the fridge to the box that was inside both sections of the fridge.
Then, I decided to perform the “BooleanDifference” command to make the “fit” happen. What I realized was that once I did this, one of the solids would be deleted. So, I decided to duplicate the piece that would be deleted (copy and paste in the same exact position). When I did this, one section was deleted and the other remained, satisfying the assignment.
The first assignment for this class was to create a 3D sketch in the program Rhinoceros that can be printed. I enjoyed the challenge of this assignment. For this sketch, I decided to draw a car. The difficulty I had with this assignment was figuring out how to make my image 3D.
The below image shows the 4 major stages of my 3D model from a top, perspective, front and right view.
This image is a close-up of the “Perspective” view. The first stage (top left) was a wireframe outline. Initially, I was not sure how to create the car in 3D. So, I figured I would start by drawing the outline. I then moved to stage two (top right) of the sketch. Here, I used shapes that I was familiar with to reflect the sketch that I drew. I assumed that I could create various solids, join them, and then trim them so that they would look like my wireframe outline. I found this hard to do and thought there could be an easier way. Therefore, I moved to the next phase.
In the next stage (bottom left) is the actual sketch that I would use to print. I had a bit of help. I used the first sketch and joined shapes to close the object. Lastly, I moved to stage 4 (bottom right), where I just enlarged the model so it would be larger when printed.
Two users step up to a halo of light in front of two pressure-sensor mats labeled “Yes” and “No”. This setup is positioned in front of a large-screen TV.
The users are presented with a brief description of the project.
The users are presented with directions to run in place for 2 seconds on the “Yes” mat, or to stand still for 2 seconds on the “No” mat.
The users answer a series of yes-or-no questions. Each “yes” answer signifies privilege.
The users are presented with a final calculation of privilege in the form of a percentage. It’s here where they find out how close their starting line is to their finish line, i.e. how big of a head start do they have in life.
We tell the users that privilege is a powerful tool that can be used to help others and encourage them to discuss their experience.
Finally, we provide small cards with contact information for various organizations where they can use their privilege in a constructive way.
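The final calculation in the steps above could be as simple as the percentage of “yes” answers; this is a hypothetical sketch in plain JavaScript, since the exact scoring formula isn’t specified here.

```javascript
// Hypothetical scoring for The Starting Line: privilege as the
// percentage of "yes" answers. `answers` is an array of booleans,
// one per question (true = answered "yes" on the mat).
function privilegePercentage(answers) {
  if (answers.length === 0) return 0; // avoid dividing by zero
  const yesCount = answers.filter(Boolean).length;
  return Math.round((yesCount / answers.length) * 100);
}
```

The resulting percentage is what determines how far each player’s starting line sits from the finish line.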
The image above shows our system diagram.
Our Arduino code is written on our computer and uploaded to the microcontroller (serially).
We have four analog sensors and one digital sensor connected to the Arduino.
The input from the sensors is sent to the Arduino (via wires) and then to the computer (serially).
This information is then sent to p5 (via p5 serial control), which is displayed on a TV monitor.
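On the p5 side, the serial step boils down to parsing each incoming line into sensor values. The sketch below assumes the Arduino prints one comma-separated line per reading (four analog values followed by one digital value); that line format is an assumption, and it depends on how the Arduino sketch actually prints its values.

```javascript
// Parse one serial line like "312,287,455,301,1" into the four
// analog readings and one digital reading. Returns null for any
// malformed line so the sketch can simply skip it.
function parseSensorLine(line) {
  const parts = line.trim().split(",").map(Number);
  if (parts.length !== 5 || parts.some(Number.isNaN)) return null;
  return { analog: parts.slice(0, 4), digital: parts[4] };
}
```

In the actual sketch, a function like this would be called from the serial library’s data callback each time a full line arrives.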
For our final documentation, I’d like to present it visually with a rough cut of our trailer video. We intend to refine it with additional footage from the show and other users, but for now, here is the Starting Line in its (currently) working entirety:
This week we made a lot of progress with our project.
Lauren and I went to Canal Rubber to get material to laser cut. At first we were going to use a yoga mat and cut it to our needs. However, many yoga mats are made with vinyl, and that material cannot be used in the laser cutter. So, we had to rethink our material. Rubber was the most economical and efficient.
Before cutting the rubber material, we first used paper as a precautionary measure. Then we play-tested our game.
After we cut the mats, we still had trouble with our code. With some help, we fixed it. Then we moved our game to an area where we would have more space and a larger monitor. We did several rounds of user testing.
After user testing, we had to consider the following:
One of the questions was, “I have never been racially profiled.” Some users were confused by this question because it includes the negative “never.” Therefore, we are considering a new way to frame this question.
Things to figure out:
How to print “yes” and “no” on the screen when the users respond.
What the final fabrication will be so that the wires are better connected. (Currently we are using alligator clips to connect the wires to the sensors.)
When placing our sketch in full screen, the image and the progress bar shift. As a result, parts of the images are cut off.
Final Project Progress
Play-testing our project was extremely helpful in that it allowed us to see how we could improve our overall idea.
We came up with the name for our project, “The Starting Line.”
We decided that a motion sensor would not be the best sensor to use. So, we moved to using digital input rather than analog. Our current thought is to use two mats, one for each player as our sensors. The mat will be divided into three parts (see below). The top left would be for answering “yes,” the top right for answering “no” and the bottom for a home state. There will be no sensor in the home portion.
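The three-zone mat could be read as two simple digital switches, sketched roughly like this (the 1 = stepped-on encoding and the function name are my assumptions):

```javascript
// Hypothetical mat reader: the top-left zone answers "yes",
// the top-right answers "no", and the bottom home zone has no sensor.
// yesZone / noZone are digital reads: 1 when stepped on, 0 otherwise.
function matState(yesZone, noZone) {
  if (yesZone === 1) return 'yes';
  if (noZone === 1) return 'no';
  return 'home'; // neither sensor pressed: the player is in the home state
}
```

Because the home zone has no sensor, "home" is simply the absence of any reading, which keeps the wiring down to two inputs per player.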
Below is our systems diagram
We had to consider the following:
How to use minimal instruction while remaining clear about how we want users to interact with our game
How to make our questions culturally universal
How many questions to ask
What would be the best sensor
How to not shame anyone
Come up with an initial design for your final project’s user interface and develop a plan to user test your design with users with an interactive sketch.
Lauren and I decided to partner for our final. The idea we developed was to create a game based on one’s privilege or lack thereof. This will be a 2-player game in which the only way to advance is if you are privileged.
This is how it works:
There will be a monitor in split-screen mode. The players will stand in front of the monitor and be prompted to answer questions.
If they answer “yes” to a question, they are to run in place. A motion detector will sense their movement, and as a result they advance forward in the game. On the flip side, if they answer “no,” they are to remain in place and stay where they are in the game.
A potential question that they may be asked is, “Did you grow up with books in your house?”
Another question may be, “Have you ever studied abroad?” As each question is answered each user will know exactly where they are in the race based on an indicator.
At the end of the game we want the users to take away something from their experience. If they choose to, they can provide us with their email and they will receive a list of organizations that they can get involved in.
Questions for the class:
If both users receive the same question and can see one another, does this affect them in any way? Their behavior, feelings, etc.
For example, if player 1 is male and player 2 is female and the question is “I am a man,” how will each react if one sees the other advancing or remaining in place?
Should there be a border/divide between the two until the game is over? In this case, neither would know which question the other person is receiving.
How do you feel about not knowing the intent of the game?
How do we make a time sequence where the game goes from one phase to the next?
How to start
***I’ve been having trouble uploading any type of images for the last two weeks. That is why there aren’t any for this week or the previous week.
Final Project Concept!
The idea that I developed for the final is called “We Were Given Wings.” This project is meant to commemorate the many innocent black lives that have been lost to racial discrimination and white supremacy. This will be done by highlighting three to five people who lost their lives to such injustices.
I would suspend a hoodie from the ceiling using wires. Inside the hoodie will be a monitor that plays a short video about each of the innocent people who lost their lives. (The hoodie is a symbol of the black body and how it has been criminalized throughout history.)
Beneath the hoodie there will be three to five shoes that the participant can “step” into. When they step into a specific shoe, a video will play.
On the floor beneath the hoodie will be a wooden platform, and on the platform will be a number of shoes corresponding to the number of videos. Under each shoe will be a sensor that activates once the viewer steps onto it. This will begin the video.
The viewer would be able to step into any shoe they choose. Once they step out of the shoe, the video will automatically stop.
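The step-in/step-out behavior could be sketched as a small piece of trigger logic (the array encoding and function name here are my assumptions for illustration):

```javascript
// Hypothetical trigger logic: `sensors` is an array of shoe-sensor
// readings (1 = someone is standing in that shoe, 0 = empty).
// Returns the index of the video to play, or -1 to stop playback.
function activeVideo(sensors) {
  return sensors.indexOf(1);
}
```

Checking the whole array each frame means stepping out of a shoe (all zeros) naturally stops the video without any extra state.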
Midterm Project !
For this project I worked with Erin Cooney to build a water sound box. The idea of this application came about because we had a shared interest in sound.
Using the water sound box:
The sound box is created by placing water and a speaker inside a transparent container. We incorporated NeoPixel LEDs beneath the container to make it more visually appealing. There is a switch that turns on the sound and the LEDs, which change color in a circular motion. We used p5.Oscillator in p5.js to create the sound. There are two photo sensors placed on either side of the box. The photoresistor on the left controls the frequency of the sound, and therefore the waves the waterproof speaker produces in the water. The other sensor controls the speed of the NeoPixel LEDs.
As the user’s hand approaches the left sensor, the frequency decreases, producing more visible vibrations in the water. As their other hand approaches the right sensor, the speed at which the LEDs’ colors change increases.
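The sensor-to-sound mapping can be sketched with a linear re-map like p5's map() function; here is a self-contained version (the 100 to 1000 Hz output range is an illustrative assumption, not our actual tuning):

```javascript
// Linear re-map, like p5's map(): translate a reading from one
// range into another. A hand covering the photoresistor lowers the
// reading, which maps to a lower oscillator frequency and therefore
// slower, more visible waves in the water.
function mapRange(v, inMin, inMax, outMin, outMax) {
  return outMin + (v - inMin) * (outMax - outMin) / (inMax - inMin);
}
```

In the sketch, the result of something like `mapRange(sensorValue, 0, 1023, 100, 1000)` would be passed to the oscillator's frequency each frame.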
Below is the first sketch we created.
Below is the second iteration.
Below is the circuit for this application.
For this lab I used a joystick to control the x- and y-position of an ellipse. This application was pretty straightforward because I used the lab as a guide. I am still having trouble understanding what some of the code means, which is the primary reason I relied heavily on the lab for this application. One of the problems I had was figuring out what values to place in the x and y arguments for the ellipse so that the ellipse would move when I moved the joystick.
I did not realize that nums was an array, and I spent about an hour trying to figure out why it did not work. After some help, I realized that I should have indexed into nums (e.g., nums[0] and nums[1]) instead of using just nums.
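The array confusion can be illustrated with a small sketch of the parsing step, assuming the joystick values arrive as one comma-separated line (the line format and function name are my assumptions):

```javascript
// Hypothetical parse of a serial line like "512,300" from the joystick.
function parseJoystick(line) {
  const nums = line.trim().split(',').map(Number); // nums is an array
  return { x: nums[0], y: nums[1] };               // index it, not the whole array
}
```

The returned x and y would then be the first two arguments to `ellipse()` in p5's draw loop.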
Rubber Band Gun
The assignment for this week was to create an application using a servo motor. I created a rubber band gun.
This project began with learning the functionality of the servo motor. I took a lot of time researching projects online to better understand servos and their limitations: they can only turn 180 degrees. Based on this limitation, I noticed that orienting the motor horizontally would constrain me one way, and orienting it vertically another.
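The 180-degree constraint shows up in code as a mapping from the control input into the servo's legal range; here is a rough sketch, assuming a 10-bit reading (0 to 1023) such as a potentiometer (the function name and mapping are my assumptions):

```javascript
// A hobby servo can only sweep 0-180 degrees, so any control value
// must be scaled and clamped into that range before being written out.
// Assumed input: a 10-bit analog reading (0-1023).
function toServoAngle(reading) {
  const angle = Math.round(reading * 180 / 1023);
  return Math.max(0, Math.min(180, angle)); // clamp to the servo's range
}
```

On the Arduino side, the equivalent result would be handed to the Servo library's write() call.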
After hours of research and dead ends, I landed on the idea of creating a rubber band gun. Below are my beginning sketches.
I first searched the internet to see if I could find an Illustrator template of a gun. I was not able to, so my next step was to find an image of a gun and trace its outline in Illustrator.
Below is the Illustrator outline of the gun.
The next step was to figure out where the servo motor would go. I figured the grooves of the gun would make a perfect place. I also had to figure out the dimensions of the servo motor so that I could cut out the negative space, and I had to do the same for the breadboard.
Next, I laser cut the layers. I had to go back and hand cut the negative space of the gun just so there could be enough space to conceal the wires within the gun.
This is the product.
Servo Motor and Potentiometer
1 Mini servo motor
9 layers of cardboard (laser cut)
6 rubber bands ( 5 for holding the layers of the gun together and 1 to shoot)
1 voltage regulator (used to hold the rubber band in place)
Pick a piece of interactive technology in public, used by multiple people. Write down your assumptions as to how it’s used, and describe the context in which it’s being used.
For this assignment, I partnered with Stephanie Paige since we both had a hard time figuring out what interactive technology to use. We decided to observe how people interacted with their environment as they played an Augmented Reality game called “Zombies GO!” This game was played on an iPad.
In this game the user’s reality is partially transformed as they fight to survive a zombie apocalypse. While in game mode, the user can see both their actual surroundings and zombies as they uproot themselves from the ground. What I observed was participants rapidly turning left and right in an effort to shoot and kill their enemies.
One of the troubles I witnessed was that the app is extremely sensitive to motion. So when participants would turn one way to find the zombies, they would often have difficulty hitting their target. It is hard to aim in the game.
The easiest part of this game was having fun. People found the game to be quite entertaining. They often laughed if they shot a bystander by accident (a person who happened to be in the view of the camera). Also, learning how to play the game was quite simple. All one has to do is hold the iPad and turn to look for zombies, then tap anywhere on the screen to shoot. There is a crosshair centered in the middle of the iPad.
Lighting an LED
I used my basic understanding of circuits to create a complete circuit and make an LED light up. In order to make this project work, I first had to understand Ohm’s law (V = I * R).
V = voltage, I = current and R = resistance
The challenge I had with this project was identifying which resistor to use, mainly because there are so many. Another difficulty was selecting a resistor with the correct wattage rating. Other than these challenges, this project was fairly straightforward.
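Choosing the resistor comes straight from Ohm's law rearranged for resistance; here is a small sketch of the calculation (the 5 V supply, 2 V forward drop, and 20 mA target current are example values, not measurements from my circuit):

```javascript
// Ohm's law rearranged: R = (Vsupply - Vforward) / I.
// The LED drops its forward voltage, and the resistor must absorb
// the rest at the desired current.
function ledResistor(vSupply, vForward, amps) {
  return (vSupply - vForward) / amps;
}
// e.g. ledResistor(5, 2, 0.02) gives 150 ohms; in practice you round
// up to the next standard value on hand, such as 220 ohms.
```

Rounding up rather than down keeps the current slightly below the target, which is the safe direction for the LED.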
1 Solder-less breadboard
1 Voltage regulator (7805)
1 Long red wire (eventually cut into smaller pieces)
1 Long black wire (eventually cut into smaller pieces)
1 DC Power jack ( female) [eventually soldered]
1 DC Power jack (male)
1 9V battery
1 9V battery connector
1 Pair of wire strippers
1 Pair of needle-nose pliers (for bending wires)
1 Soldering iron
1 Hot glue gun
1 Set of helping hands
Creating a Switch
The instructions for this lab were to create a switch. I did not want to make a traditional switch, and that was the hard part. I looked around the shop to see what was available and decided to check the junk shelf. I eventually found a fan that I thought about using, but after connecting it to a power source, I realized it was not strong enough to blow copper tape. So I resumed my quest for that “something.” I had worn sunglasses that day and saw them in front of me. When I picked them up to move them out of the way, I noticed the interaction between the two pieces. This sparked the idea for the above project.
The overall challenge of this project was getting the copper tape to hold the red and black wires in place. Since the wires were not very malleable, it was hard to orient them into a position that aligned with the glasses.
The materials from the previous project are needed here.
1 Additional red wire
1 Additional black wire
1 Pair of sunglasses
1 Piece of copper wire tape (eventually cut into two pieces)
What is Interaction
With respect to this class, physical interaction is the two-way process of at least two actors (human and non-human) engaging with one another via touch. Good physical interaction calls for interplay that includes more than just the use of hands; in some cases, the entire body. A good example of digital technology that is not interactive is the television. I suppose more recent “smart TVs” can be considered somewhat interactive because they can respond to the user’s voice, but I chose the television as my example because, although it entertains people, it does not engage them in a two-way exchange.