Raven Serenity Glover

2023

https://ravenserenity.artstation.com/blog

END 2022

Journal 22

4.26.22

Renders galore! All the time spent on this project between this update and the last went into renders. I set up many of the shots to be put onto the render farm, while Liz worked on her own shots and the compositing on her computer. Below is a big collection of various test renders from before they were sent to the farm or edited by Liz.

Journal 21

4.22.22

MotionBuilder refinements on all other takes. Pretty much every take we still had to convert to Maya was done on this day. I'm getting used to the MotionBuilder workflow, but I still feel like the program is an obstacle between the motion data and our animation.

Journal 20

4.17.22

MotionBuilder refinement on the full dance, female edition.

I spent most of today trying to get the female dance down. I found out that in MotionBuilder, the IK pull settings can be used to move the entire Mixamo rig to align with the mocap rig. This was very useful for refining the sections where Meng lies on her back and crawls on the ground. Unfortunately, this method makes it harder to keep the feet planted on the ground.
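For my own reference, here's a minimal sketch of adjusting the same pull settings through MotionBuilder's Python SDK instead of the Character Settings UI. The property names vary between versions, so this just searches the current character's properties for anything with "Pull" in the name; maxing the foot pull to 100 is an assumption for illustration, not the value we actually used.

```python
# Minimal sketch: tweaking IK pull settings via pyfbsdk instead of the UI.
from pyfbsdk import FBApplication

character = FBApplication().CurrentCharacter  # the Mixamo-retargeted character
if character:
    # Property names differ between MotionBuilder versions, so search for them
    for prop in character.PropertyList:
        if "Pull" in prop.Name and not prop.IsReadOnly():
            print(prop.Name, prop.Data)
            # Assumption: raising the foot pull helps keep the feet planted
            if "Foot" in prop.Name:
                prop.Data = 100.0
```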

Journal 19

4.14.22

MotionBuilder prep + animatic crunch + booleans

Journal 18

4.13.22

Mocap day.


Journal 17

4.11.22

Today I modeled a Henry Moore statue that may or may not make it into the final piece. The Henry Moore statues interested me because they have human-like forms, so one may fit somewhere as a play on objectification/body dysmorphia/dysphoria.


Journal 16

4.6.22

Today Liz and I worked on creating a shader that’s inspired by the Final Girls art piece we found during our in-class tour of the Columbus Museum of Art.

We used Arnold’s example Rim Shader as a base for our shader. https://docs.arnoldrenderer.com/display/A5AFMUG/Rim+Shader

In order to get the yellow rim, we tweaked the color blend node’s color.

To get the transparency, we reused the same color blend node as a grayscale input for the transmission weight.
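For anyone trying to rebuild this, here's a rough Maya Python sketch of the wiring. It's not our exact graph (the colors and which blend channel feeds transmission differ), just the facingRatio -> blendColors -> aiStandardSurface idea:

```python
import maya.cmds as cmds

shader = cmds.shadingNode('aiStandardSurface', asShader=True, name='rimShader')
info = cmds.shadingNode('samplerInfo', asUtility=True, name='rimInfo')
blend = cmds.shadingNode('blendColors', asUtility=True, name='rimBlend')

# facingRatio is 1 facing the camera and falls toward 0 at grazing angles,
# so color1 is the interior color and color2 becomes the yellow rim
cmds.setAttr(blend + '.color1', 0.0, 0.0, 0.0, type='double3')
cmds.setAttr(blend + '.color2', 1.0, 0.9, 0.1, type='double3')  # approximate yellow
cmds.connectAttr(info + '.facingRatio', blend + '.blender')

# The blend drives the surface color...
cmds.connectAttr(blend + '.output', shader + '.baseColor')
# ...and one channel of the same blend doubles as a grayscale transmission weight
cmds.connectAttr(blend + '.outputR', shader + '.transmission')
```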


Journal 15

4.6.22

Over the weekend, I prototyped a couple of the mirror (identity) imagery concepts we were considering. I think this concept could be used in one of the scenes we were planning, where multiple characters separate from a single body. Each separated character could show up in their own mirror (and their mirror only).

Suggestions: Pose, Move figure instead of mirror -> different actions

Journal 14

4.2.22

Visual Concepts for “Empathy” project.

Concepts for having intersections between models create emptiness (representing agenderness); see the boolean sketch after these notes.

Transparency between models being gray (Material based on the Final Girl art piece)

Interactions with the mirror

Through the mirror. Reversed dance in the mirror. Different visuals on either side of the mirror. Mirror with no reflections.


Reverse animation, forward animation, interaction in the middle.
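A note to self on the emptiness-at-intersections concept: Maya's booleans can carve the overlap out directly. A minimal sketch with placeholder spheres standing in for the character meshes (op=2 is a boolean difference):

```python
import maya.cmds as cmds

# Two overlapping placeholder bodies standing in for the character meshes
body_a = cmds.polySphere(name='bodyA')[0]
body_b = cmds.polySphere(name='bodyB')[0]
cmds.move(0.6, 0, 0, body_b)

# Boolean difference: wherever the two models intersect, bodyA is hollowed out
result = cmds.polyCBoolOp(body_a, body_b, op=2, name='emptiness')[0]
```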

Journal 13

3.30.22

For Assignment 3, I once again want to do more in the animation space. I'm interested in experimenting further with VR animation, but I could also see myself working in other mediums. Learning the motion capture process could prove a valuable experience for my thesis as well.

Since I'm very much interested in the immersive potential of LED virtual production spaces, another outlet I could work on is creating an animated space to be used in virtual production. I'm not sure how the technologies work, but I think this could fall under the use of greenscreen in real time, as I imagine that would basically prototype how it could look and feel on an LED set. As far as the imagery and how I would achieve it, I'm still very unsure, but I think it could serve as a big learning experience.

After meeting with my advisors, the most important component for this final project would be to start with my conceptualization and have that drive the technology I use. As such, I would like to explore methods of visualizing marginalization and identity. Specifically, one concept to drive my exploration is the fact that Filipino languages do not have words to describe nonbinary genders. I see animation (and 3D graphics in general) as a medium that can embody the emotions attached to this unique issue. In doing so, I would hope to create an empathic response.

My interest in using the technologies described above comes from my outlook on their empathetic potential:

  • VR animation having a personal quality through its 3D brush strokes.
  • Motion capture being able to be utilized to embody a person's experience, transferring it into a digital and replicable form.
  • Virtual production having the ability to immerse a person in an environment, impacting their perspective.

These empathetic qualities would allow me to explore gender identity by letting the audience experience the introspective exploration involved in discovering one's gender.

Journal Entry 12

Spring break

During the break I spent the majority of my time creating my 2D animations, setting up render layers for the OSC render farm, rendering out the animation, and compositing it in Nuke. The 2D animation was challenging in terms of finding the right workflow for frame-by-frame animation, as this was my first time doing 2D animation. Since the movements were relatively simple, I didn't necessarily have to redraw every frame, but even so, each frame still took me about 15 minutes. Luckily we decided to do the 2D animations at 12 FPS (half of the 3D animation's frame rate), but that was still around 60 frames. Finding out the projections worked just like I anticipated was a huge relief.

The render layers were set up so that the background is on its own layer, the lights and characters are on their own layer, the 2D animation projections have a layer, and the fog has its own layer. This process made it really easy to get renders out early on, as I didn't need to wait for the 2D animation to be done in order to render three of the major layers. It also made the Nuke compositing process a breeze. I was so relieved to see the renders come together so well in Nuke. A lot of this project was "we'll do this process and it SHOULD look like this," and thankfully that was actually the case (and it NEVER is).
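For reference, the layer setup can be rebuilt with Maya's Render Setup Python API. A minimal sketch, with hypothetical naming patterns standing in for our actual scene names:

```python
import maya.app.renderSetup.model.renderSetup as renderSetup

rs = renderSetup.instance()

# One layer per element; the name patterns below are placeholders
for layer_name, pattern in [('background', 'BG_*'),
                            ('charsAndLights', 'CHAR_*'),
                            ('projections2D', 'PROJ_*'),
                            ('fog', 'FOG_*')]:
    layer = rs.createRenderLayer(layer_name)
    collection = layer.createCollection(layer_name + '_col')
    collection.getSelector().setPattern(pattern)  # members matched by name
```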

Journal Entry 11

3.8.22

This week we started working on creating our character rigs and animating them. We also did some experiments with the lighting of our scene, which I think we’ll need to continue to tweak as we go.

Journal Entry 10

3.2.22

Today I made a very basic prototype of what I imagine the pacing of our animation will be. It comes out to about 25 seconds, with 7-8 seconds eventually being 2D animated. Since we'd only be animating our own characters, it comes out to about 5 seconds (5 s × 12 FPS = 60 frames) of animation. We're going to look into using a vector-based 2D animation program, which will hopefully still give us the painterly aesthetic we're looking for. We're going to try to get everything exported and rigged tonight, and hopefully we'll be able to work on the 2D animation over the weekend. That way we'd be able to set up Arnold and start rendering everything out on Wednesday next week.

Journal Entry 9

2.26.22

This week we got some good feedback on our project. After Kyoung's suggestion to make sure the objects are presented in a personal manner, I've thought of two ways we could do that. One would be to model as many objects as possible in Quill; this adds imperfections that give them a personal feeling. The other would be to model in Maya and use vertex painting (the same process Quill uses) to add textures.

One of the suggestions Emily made about unifying all of our objects stuck with me. Then, after seeing how Liz and Thomas used Mixamo for their project, I had an idea. We could make "characters" out of all of our objects and assign them a very basic rig in Mixamo. Then we could animate these objects coming together in order to show them "coming to life." This represents that although the disconnection is there, everything is still a part of us.

One thing we’re going to try and experiment with is:

  • Setting up the cameras in Maya, with the characters in place
  • Playblasting our animation with the characters
  • Animating over that playblast in 2D
  • Projecting the 2D animation back into the 3D space using projection mapping.

In theory, this should just work because the camera will line up perfectly, but we’re crossing our fingers for this.
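Here's roughly what the projection step looks like in Maya Python; the camera and file names are placeholders:

```python
import maya.cmds as cmds

# Perspective projection node linked to the same camera we playblasted from
proj = cmds.shadingNode('projection', asUtility=True, name='anim2dProj')
cmds.setAttr(proj + '.projType', 8)  # 8 = perspective
cmds.connectAttr('shotCamShape.message', proj + '.linkedCamera')  # placeholder camera

# The drawn-over 2D frames come back in as an image sequence
frames = cmds.shadingNode('file', asTexture=True, name='anim2dFrames')
cmds.setAttr(frames + '.fileTextureName', 'anim2d.0001.png', type='string')  # placeholder path
cmds.setAttr(frames + '.useFrameExtension', 1)  # advance the sequence per frame
cmds.connectAttr(frames + '.outColor', proj + '.image')
```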

Journal Entry 8

2.22.2022

Over the weekend I put together a 3D prototype of our project using primitives so that we have a general idea of what we can do. I've also worked on making props for the project. I made a 3D sand bottle in Maya, which I'll need to texture to get it to look right. I also experimented in Quill, where it was a little challenging to get the exact model I wanted, as depth perception was hard to judge in the software. The wooden elephant I made looks okay, but I think I can get better at making these in order to improve the project. I also need to figure out how to make the textures show up in Maya. We're also experimenting with projection mapping to have a 2D/3D crossover; since Megan has more experience in 2D and I have more experience in 3D, this will let both of us have new experiences.
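One lead I want to try for the Quill textures: Quill stores its paint as vertex colors, and Arnold can read a mesh's color set through an aiUserDataColor node. A sketch under that assumption (the mesh and color set names are placeholders, and the attribute names may vary by MtoA version):

```python
import maya.cmds as cmds

mesh = 'woodenElephantShape'  # placeholder name for the imported Quill mesh
print(cmds.polyColorSet(mesh, query=True, allColorSets=True))  # list the real set names

# Ask Arnold to export the mesh's vertex colors at render time
cmds.setAttr(mesh + '.aiExportColors', 1)

# Read the color set back in the shading network
reader = cmds.shadingNode('aiUserDataColor', asUtility=True, name='quillColors')
cmds.setAttr(reader + '.attribute', 'colorSet1', type='string')  # assumed set name

shader = cmds.shadingNode('aiStandardSurface', asShader=True, name='quillShader')
cmds.connectAttr(reader + '.outColor', shader + '.baseColor')
```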

Journal Entry 7

2.17.2022

Megan and I had a productive conversation during yesterday's class. It seems like our thesis directions have gone down similar paths, so our projects were already pretty related, which made conceptualizing a lot easier. We both knew we wanted to work in the animation space, possibly a game space. We also had similar themes we wanted to explore, identity being a main one. Since we both have Asian-American backgrounds, I tried to see if she felt a similar disconnection from her heritage's culture that I did. We kept throwing ideas back and forth for the best ways to represent this, thinking of creating a 15-30 second animation using both 2D and 3D mediums.

With our theme being "disconnection," I was trying to think of the best way to visually represent it. I originally kept thinking about robots, which could work, but I didn't want to risk them taking too much of our time to model and rig. Eventually I landed on the idea of a shattered room, inspired by imagery from the game Psychonauts 2. The idea is that our identities are a mishmash of several different cultures as a result of being Asian American. Megan played off this idea well, adding that we could have a uniform area that represents "our culture," with the areas outside it becoming more and more disconnected. With this idea in place, we have a more object-oriented approach that we can enhance with 2D animation later on. I really like the direction we're going, and I'm interested in seeing where this path takes us.

PROJECT 1 ENDS

Journal Entry 6

2.7.2022

I just spent all day getting most of the online aspects of our project working. Now there's a win condition, a lose condition, a timer, and UI markers for each match. I also set it up so that in the final build, the "scanning" player will not be able to see the shapes (but will still see the connections made). The next steps will be to add the visuals, figure out how to let each player choose their role, and bring everything together. I'm hoping and praying that everything just works in AR, because I haven't been able to test it on my own.

Journal Entry 5

2.5.2022

After meeting with Scott on Monday, I asked him about networking in Unity. Adding networking to our project would grow the scope significantly: instead of having one device that tells the players they win, all devices would be able to know certain variables. This would allow not just all devices to know when a puzzle has been solved, but would also allow multiple layouts to be involved.

Scott put me in contact with Matt Hall, the programming student who helped with the networking end of The Woods. On Wednesday, I spent about an hour and a half with Matt after class in a Zoom call making a working prototype scene that was able to send one variable between devices. Now that I had that, I spent all day Thursday implementing the same techniques in our game. There were some challenges unique to how our game was set up. Since some functions relied on a boolean that gets set on and off immediately, I had to make the host wait a little before turning the boolean off. The reason is that the server tick rate updates more slowly than the game itself, which means actions that happen in only one frame likely don't get transferred to the other devices. With this workaround, every device gets updated. Once that was in place, I was able to set up the game so that all three devices showed a green/red "particle" depending on whether they got right or wrong answers.
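The gist of the workaround, sketched in Python rather than our actual Unity C# (the tick and frame rates here are made-up numbers):

```python
import time

TICK_INTERVAL = 1.0 / 20   # assumed 20 Hz server tick rate
FRAME_INTERVAL = 1.0 / 60  # assumed 60 FPS game loop

puzzle_solved = False  # the flag the other devices watch for

def flash_flag_one_frame():
    """Broken version: the flag is up for a single frame, so a replication
    loop that samples at the slower tick rate can miss it entirely."""
    global puzzle_solved
    puzzle_solved = True
    time.sleep(FRAME_INTERVAL)
    puzzle_solved = False

def hold_flag_past_one_tick():
    """Workaround: the host holds the flag longer than one network tick,
    so the replicator is guaranteed to sample it before it clears."""
    global puzzle_solved
    puzzle_solved = True
    time.sleep(TICK_INTERVAL * 1.5)
    puzzle_solved = False
```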

I was only able to test it on my own computer on Thursday, but when our group went into our breakout room on Friday to present, we found out that even with three computers in three different places, the devices were able to send and receive data.

I was elated.

We got some good input on how to approach moving forward. I especially liked Juan's idea of using the physical runes of the cube as a basis for some of the puzzles. Thomas's suggestion about visually communicating the rules was also a great idea.


Journal Entry 4

1.30.2022

I tried to implement multiplayer, but I was having trouble understanding the components necessary for it to work. I'm going to try to get some extra help later on so I can hopefully get it working. On the bright side, the puzzle now fully works for one of our scenarios, just not in AR space. I did some testing with Vuforia, and I was able to get one touch to happen in AR, using two markers to detect the two cubes. In the final version, a lot more markers will need to be detected at once.

Journal Entry 3

1.25.2022

I've begun adding some very basic logic to our project. So far, I just have the ability to set a puzzle piece's color (red, blue, yellow, white) and whether or not it's blinking. For the most part, these will be the main identifiers for our game. I still need to implement the batteries and determine the number of batteries.
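Sketched in Python as a stand-in for the actual Unity components, the piece state is just this (the batteries field is the part I haven't implemented yet):

```python
from dataclasses import dataclass
from enum import Enum

class PieceColor(Enum):
    RED = 0
    BLUE = 1
    YELLOW = 2
    WHITE = 3

@dataclass
class PuzzlePiece:
    color: PieceColor = PieceColor.WHITE
    blinking: bool = False
    batteries: int = 0  # still to be implemented
```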

Journal Entry 2

1.22.2022

When we were brainstorming our game further yesterday, I began to question whether our original concept would be that engaging for the players involved. Originally we wanted one person to hold both boxes and have the person with the phone communicate, in AR, what they need to do and how they'd need to rotate the boxes. The problem is that the person with the boxes wouldn't be that engaged, because they're just being told how to rotate them. In the previous class, Liz came up with the idea that the two players each hold one box, both using their phones. This idea made more sense to me, but there was still the problem of adding to the puzzle element.

I was eventually reminded of the game "Keep Talking and Nobody Explodes," where the rules for the puzzle are worked out through constant communication. This would make the game more engaging because the players have to communicate what the other person has (since they're only able to see the other person's cube through their phone in AR). This way both players are trying to figure out the condition for their puzzle, and the added time pressure means they'd have to develop a shared code of communication. I sketched out a basic concept of how this game would play, and we kept building on the concept over time. I think we have a pretty good idea of how we'll approach the rest of the project thanks to that experience.

Journal Entry 1

1.16.2022

This week I'm trying to brainstorm some ideas for Team Joker. I haven't been able to come up with concrete ideas yet, but I imagine the area that most overlaps with our interests would be a game space. Interactivity seems to be something everyone in our group has an interest in, albeit with different levels of experience.
