360 Films was our first project. When we talked about interactive films, it reminded me of a Netflix interactive film I had seen recently, Black Mirror: Bandersnatch, which fascinated me with its branching storyline and the interactive experience of being in control of the story, so I was initially inspired by that film.
Story Line of Bandersnatch
I then wrote my own storyline based on the film’s structure and the project requirements.
After setting up all the scenes, I started testing with the first-person controller, adjusting its size and speed.
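For context, this is not Unity code (Unity uses C#); it is just a hypothetical sketch in TypeScript of the two values I kept tweaking, the controller’s size and its movement speed, and why speed gets multiplied by frame time:

```typescript
// A generic first-person controller reduced to the two parameters I tuned.
interface FirstPersonController {
  position: { x: number; y: number; z: number };
  radius: number;    // the capsule "size" used for collisions
  moveSpeed: number; // metres per second
}

// Scaling unit input by speed and frame time (dt) keeps the walking
// speed the same regardless of frame rate.
function move(c: FirstPersonController, inputX: number, inputZ: number, dt: number): void {
  c.position.x += inputX * c.moveSpeed * dt;
  c.position.z += inputZ * c.moveSpeed * dt;
}
```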
I ended up adjusting the colour temperature of the light to create the feeling of a sunset.
I then imported the collated models into Unity and added the first-person plug-in.
Wall obtained by splicing and the Boolean modifier
After confirming the positions of the models, I started editing the terrain.
I started by dividing the terrain into sections: grass, stone tile, road, asphalt, and playground. Then I added texture maps, normal maps and reflection maps to each of them.
Stone tile floor
No Normal Map / With Normal Map / Top view
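The reason the normal map changes the look so much is that diffuse lighting depends on the surface normal at every pixel; a minimal sketch of Lambert shading (in TypeScript, with made-up sample values) shows the idea:

```typescript
type Vec3 = [number, number, number];

const dot = (a: Vec3, b: Vec3) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

const normalize = (v: Vec3): Vec3 => {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
};

// Lambert diffuse: brightness in [0, 1] from the angle between
// the surface normal and the light direction.
const lambert = (n: Vec3, l: Vec3) => Math.max(0, dot(normalize(n), normalize(l)));

const light: Vec3 = [0, 1, 1];

// Without a normal map, every pixel of flat ground uses the same
// up-facing normal, so the whole surface shades identically.
const flatShade = lambert([0, 1, 0], light);

// With a normal map, each pixel gets its own tilted normal, so
// neighbouring pixels shade differently and the tiles look embossed.
const mappedShade = lambert([0.4, 0.8, 0.2], light);
```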
To keep the scene from looking empty, I added a lot of detail, such as trees, guardrails, road shoulders, and barricades.
As I mentioned before, most of the models I purchased were in 3ds Max format, so before importing them into Unity I opened them in 3ds Max first. But as I hadn’t used 3ds Max before, I simply exported them as FBX files and then imported the FBX files into Maya for adjustment.
But when I imported this FBX model into Maya, for some unknown reason, the following error occurred.
I then tried changing some settings when exporting, but it still didn’t help, so I had to try to fix it in Maya. This is the result of my repair: I fixed the missing faces by copying the complete faces and then reassigning a material to the parts where the material was missing.
I then tried to import the repaired model into Unity, but there was a problem with missing texture maps. Despite spending a lot of time on this, in the end I had to abandon this model and find another one. It seems to me that the main cause of the mapping errors was the complexity of the model: with thousands of faces, it is difficult to locate every texture in the folder when the model is imported into Unity.
It’s a real shame, as it looked perfect in Maya.
After that I downloaded another model with fewer faces; although its level of detail is not as high, it imports into Unity without errors.
I started searching for 3D models corresponding to the three places I had researched. The main tools I used for this part were Quixel Bridge and Taobao (Taobao is a Chinese online shopping platform, similar to Amazon, on which you can buy almost anything). I mainly used Quixel Bridge to download texture maps and scanned models, and I bought 3D models of the main buildings on Taobao. However, the models I bought on Taobao were not always in a Unity-compatible format (most of them were 3ds Max source files), which caused me a lot of trouble later on.
When I heard about the project’s requirement to create a virtual world representing your hometown, it took me back to my life in Beijing. At first many remembered places popped into my head, such as the Sanlitun shopping street that I often visited, the Great Wall, one of the most famous places in the world, the Forbidden City, and so on. But considering the difficulty of presenting them in Unity, and how best to represent the scene, I decided to make some places that were not so huge. After a round of filtering, I settled on the Temple of Heaven, the CCTV Headquarters and my middle school, all of which represent my hometown and hold many years of memories for me.
Next, I started looking for references to these three places and made a mood board.
Mood Board
After that I sketched out what I had envisaged, drawing out where each place was and the route of travel.
Based on my experience with the earlier test project, I was more comfortable with the final production, but I still encountered a lot of difficult problems. I won’t go into detail about the parts that have already been described; here I will only describe the parts that differ from the previous version.
Because the patterns to be made this time were not just an even division of the face into four parts as in the test project, I needed the UV layout of the face to know exactly where my patterns would be distributed on it, which would make drawing them much easier. So I went to the SparkAR website and downloaded this face UV template.
I then put this UV image into Photoshop and drew the patterns on it according to the size of the face to get these masks.
The next steps are the same as in the test project; the only difference is that I replaced the black No. 0 material with a texture.
I used Midjourney AI to create this texture, a portrait of a cyborg.
Midjourney
Then, again, I put this picture into Photoshop to make it the same size as the UV map.
Finally, the most exciting moment of all: I added these images to SparkAR and created the effect of a flipped face by moving and rotating the face meshes.
But before that, I ran into a few more tricky problems.
I found that when one face mesh was placed over another, the face mesh underneath would be blocked. While I was thinking about how to make only the non-transparent parts of the upper mesh cover it, a feature came to my rescue: the Alpha Test (a control that discards pixels with opacity less than the cutoff threshold), which solved this problem very well.
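Conceptually (this is my own sketch of the rule, not Spark AR’s internals), the alpha test works like this: any pixel below the cutoff is discarded outright rather than blended, so the mesh underneath stays visible through those regions:

```typescript
type Pixel = { r: number; g: number; b: number; a: number };

// Returns null for discarded pixels: nothing is drawn there at all,
// so the face mesh underneath shows through. Kept pixels are drawn
// on top and cover whatever is behind them.
function alphaTest(pixel: Pixel, cutoff: number): Pixel | null {
  return pixel.a < cutoff ? null : pixel;
}
```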
After that, when I turned to the back of the flipped face, it was transparent; I then found that turning on double-sided rendering copies the texture from the front to the back, which solved the transparency problem on the back nicely.
After solving these problems, here is the final result.
Before making the final AR avatar, I made a test version, because I wanted to try out the effect I envisaged first to see if it would work. With what I learned in class about making a 3D glasses model track the face, and some tutorials I watched on YouTube, I made the following test version of the split face.
First, I created a face tracker, then added five face meshes (0–4) underneath it. At the same time I created five materials (0–4) and assigned each material to the face mesh with the matching number. I then set the texture of materials 1–4 to the tracked face, and made material 0 black, because I wanted to create the sense that the original face is missing.
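I did all of this in the Studio UI, but the mesh-to-material assignment could also be scripted. Here is a sketch, assuming Spark AR’s JavaScript/TypeScript scripting API and the object names from my own scene (‘faceMesh0’…, ‘material0’… are my names, not fixed API names):

```typescript
const Scene = require('Scene');
const Materials = require('Materials');

(async function () {
  // Pair each numbered face mesh with the material of the same number.
  for (const n of [0, 1, 2, 3, 4]) {
    const mesh = await Scene.root.findFirst('faceMesh' + n);
    const mat = await Materials.findFirst('material' + n);
    mesh.material = mat;
  }
})();
```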
After that, I drew four PNG layers in Photoshop that divide a canvas into four parts (the black parts are transparent layers).
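The same four masks could in principle be generated in code instead of drawn by hand; a hypothetical sketch, where 1 means opaque and 0 means transparent:

```typescript
// Quadrants numbered like the materials: 1 top-left, 2 top-right,
// 3 bottom-left, 4 bottom-right.
function quadrantMask(width: number, height: number, quadrant: 1 | 2 | 3 | 4): number[][] {
  const mask: number[][] = [];
  for (let y = 0; y < height; y++) {
    const row: number[] = [];
    for (let x = 0; x < width; x++) {
      const q = (y < height / 2 ? 0 : 2) + (x < width / 2 ? 1 : 2);
      row.push(q === quadrant ? 1 : 0);
    }
    mask.push(row);
  }
  return mask;
}
```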
Then I put each of the four PNG images into the alpha setting (a control that masks the alpha channel of a texture) of materials 1–4, so that the face is divided equally into four parts.
After that, I moved the four face meshes to the positions I wanted by moving and rotating them. (Rotation in SparkAR is very difficult to use: I don’t know how to set the pivot to the position I want instead of the world or object coordinates, whereas Blender has the very useful 3D cursor for exactly this.)
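For completeness, the same move/rotate step can also be done from a script rather than the Studio gizmos, which sidesteps the awkward rotation handles; a sketch, again assuming Spark AR’s scripting API and my scene’s object names:

```typescript
const Scene = require('Scene');

(async function () {
  // 'faceMesh1' is the name from my scene, not a fixed API name.
  const mesh = await Scene.root.findFirst('faceMesh1');
  // Transforms are local to the parent face tracker, so these values
  // offset and rotate the quarter-face relative to the tracked face.
  mesh.transform.x = 0.03;                // metres
  mesh.transform.rotationZ = Math.PI / 6; // radians
})();
```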
When I was given the subject of this virtual world artefact, I associated it with my favourite genre of literature and film: cyberpunk. For example, the Blade Runner film series, the anime series Ghost in the Shell, and the game Cyberpunk 2077. So I chose to set my virtual world in a cyberpunk setting.
Prostheses are an important element of the cyberpunk world, as they have been used since their inception as tools to repair people’s disabilities. The first facial prostheses were used to help disfigured soldiers repair their faces after World War I, and modern prosthetic technology is also used in the medical field, for example in artificial hearts. In the cyberpunk future, people choose to wear cyberware for a variety of reasons: necessity, self-enhancement, entertainment, or fashion. Cyberware is as much a part of that culture as tattoos and mobile phones are of ours: a means of cultural expression, a symbol of fashion, and an efficient and convenient tool, such as the mantis blades in Cyberpunk 2077. At the same time, the cyberpunk world is an era in which everything is connected by a vast information network, so some people have modified parts of their bodies, some have kept only their brains and mechanised everything else, and some have turned themselves entirely into cyborgs. Almost all human beings have been modified to some degree and have a body with a port (at the back of the neck) for connecting to the network; for them, the body is simply a computer terminal, a container for the soul. I think it is highly likely that this will happen in our own world, where Elon Musk is working on his brain-computer interface, and I believe this technology will be used in the near future.
In Ghost in the Shell, the heroine, Motoko Kusanagi, is a full-body cyborg, with a human appearance but a cold prosthetic body throughout; even her brain is a machine that holds a human soul and memories (similar to the concept of the Relic chip in Cyberpunk 2077). So the question arises: is the cyberized me still ‘me’? As the title Ghost in the Shell suggests, which is the real me, the GHOST or the SHELL? And how can I prove the existence of ‘I’? With this question in mind I created this AR avatar: when I use it, my facial skin flips open like the geisha robot’s in Ghost in the Shell, revealing the machine structure inside. I wanted to search for the definition of the human through this process of transforming from a human exterior to a mechanical interior.