As discussed in the previous post, a tutorial was added at the beginning of each segment. Below is a sample of the tutorial card.
Tutorial at the beginning of the 2nd segment
The question cards were also updated to match the overall design. Below is a sample.
Question card sample
As for the ending/tutorial for segment 3, a more seamless video was created to simulate the user browsing a website that displays the instructions. Below is a screenshot of the video.
Video Screenshot
At this stage, the project has been completed and will proceed to the user testing and review phase.
As discussed in the previous post, the method of interactivity was changed. Below is the most recent prototype of the project.
Customization and a part of the Interview Segment
Informational Segment
Although the prototype is more or less final, there is still room for refinement. Tutorial instructions will be added in the white slot at the beginning of every segment. This will be added in the upcoming weeks (above are images of its copywriting). In addition, some changes were made in this prototype version:
Additional dock in the Interview UI
Minor exposure edit on the interview footage
Questions and answers replaced with a plain info card in the informational segment
During this project’s production, I encountered an error in its interactivity feature. Kindly read this post for further explanation.
In the later weeks, I decided to try combining everything in one scene instead of separating each interview section into its own scene. This method of interactivity is invoked using the SetActive method, similar to the one created previously for the informational segment of the project.
Each object is set inactive until invoked by a script attached to the current object. The script varies with the choices provided; however, the invoked GameObjects (assigned through the Inspector window) remain more or less the same. The function in the general script (ScriptB) might interfere with the options provided by deactivating the object that should stay active, hence the need for an additional variant. For example, objects with one choice use ScriptA, and objects with two options use ScriptB.
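To illustrate the idea, below is a minimal sketch of what the two-option variant (the ScriptB role) could look like. The class name, field names, and button bindings are placeholders I am using for this example, not the actual scripts in the project.

```csharp
using UnityEngine;

// Minimal sketch of a two-option handler (placeholder names, not the project's actual ScriptB).
// The target GameObjects are assigned in the Inspector and stay inactive until a choice is made.
public class TwoChoiceHandler : MonoBehaviour
{
    [SerializeField] private GameObject optionAObject; // activated when the first option is chosen
    [SerializeField] private GameObject optionBObject; // activated when the second option is chosen

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.JoystickButton0))      // assumed binding for choice A
        {
            Activate(optionAObject);
        }
        else if (Input.GetKeyDown(KeyCode.JoystickButton1)) // assumed binding for choice B
        {
            Activate(optionBObject);
        }
    }

    private void Activate(GameObject target)
    {
        target.SetActive(true);      // show the chosen branch
        gameObject.SetActive(false); // hide the current card so it cannot trigger again
    }
}
```

A one-option variant (the ScriptA role) would be the same script minus the second branch, which is why keeping them as separate variants avoids the interference described above.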
While experimenting with the controller settings, I found that the button values differ per device. For instance, on PC the value for the A button is “joystick button 0,” whereas on an Android phone it is “joystick button 4”. This was later resolved by testing each button value. Below is a simple note of the values on PC and Android.
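As a rough sketch of how this difference could be handled in code, the snippet below branches on the build platform. Only the A-button values (button 0 on PC, button 4 on the Android phone tested) come from my notes; everything else is an assumption for illustration.

```csharp
using UnityEngine;

// Sketch of a per-device button map (illustration only, not the project's actual code).
// Only the A-button values are taken from my testing notes: joystick button 0 on PC,
// joystick button 4 on the Android phone used.
public static class ControllerMap
{
    public static readonly KeyCode ButtonA =
#if UNITY_ANDROID && !UNITY_EDITOR
        KeyCode.JoystickButton4; // Android phone
#else
        KeyCode.JoystickButton0; // PC / Editor
#endif
}

// Usage example: if (Input.GetKeyDown(ControllerMap.ButtonA)) { /* confirm the selection */ }
```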
As most of the assets have been completed, what’s left for me to do is stitch them together and determine the flow. For now, I have completed the first (customization) segment of the project. Below is a video of it.
During the scene management stitching process, I encountered a problem: the camera flipped its axis upon a scene change, making the view appear upside down. Below is a video of the error.
Error message
Settings fix
I changed the Device Tracker settings, and this seems to have fixed the majority of the issue (refer to the image above). However, a tiny glitch can still be seen upon each scene change due to a target invocation error. Below is an image of the error.
error message
Unfortunately, no tutorial on how to fix the glitch was found, and many other Unity users experience the same error.
This blog post is an update to this post. As sketched and described in that post, below is the final look and feel of the printed attributes.
However, there are some revisions to the package content. Firstly, the introduction card will not be scannable, as it is intended for users to read and find out more about the project through the assets given. Secondly, the sizes are slightly altered for a more balanced and polished look when the assets are stacked together. Lastly, the illustration cards will be postcards instead, so that users can utilize them for other purposes. Other than that, it should be noted that some of the copywriting is still a placeholder, and additional texture overlays might be added.
Cover, Introduction, Poster
CV
Laptop
Post Cards
Below is a sample of how it would look stacked together.
Last week I did the shoot for the Interview segment of the project. You can view the raw footage here. The videos were shot online through Skype due to the Covid-19 lockdown in Singapore (which was extended until June 1st). Given that situation, shooting in person with a real camera was not possible.
After sharing the footage and an update with my lecturer, Mr. Michael Loo, he mentioned that the footage is quite dark and laggy. This might affect the final outcome, as importing and exporting videos causes file compression, which could worsen the quality further. Thus, a reshoot might need to be scheduled. However, I still wanted to try working with the available footage. Below is an attempt at it.
To make it more believable, I added a glitch and lag effect to the video, along with a little static noise for the audio. It should be noted that the videos above are still a rough edit. In the near future, I will be adding a call screen and editing the UI to make it look more convincing.
As previously mentioned in this post, this project will consist of 3 parts: customization, interview, and additional information. After some consideration, I decided that the project needs to look appealing and interesting when presented. Thus, I decided to package the campaign with physical attributes. Kindly refer to the images below for reference.
The package includes an introduction card (which can be scanned through MR), a CV (for the customization part), a fake laptop for the set, and 4 illustration cards (which can be scanned through MR for more information). Below is a simple sketch of them.
As mentioned prior to this blog post, the project will be a mix of AR and VR. To combine the two, I decided to come up with a full Mixed Reality concept and add a small “customization” part at the beginning. Below is a simple sketch of the project flow.
To summarize, this project will be separated into three main parts.
1. Customization – Users will be given a series of questions/instructions with two choices each. These choices will play a role during the interview (a rough sketch of how the answers could be carried over appears after this list). In this case, users will scan an object or image that represents the question/instruction given. Below is the flow of each choice/question.
2. Interview – Users will have the chance to experience 3 different scenarios, corresponding to their choices in the previous part. Below are the flow, questions, and scripts. Each scenario will last less than 10 minutes. You can view the script here.
Furthermore, it should be noted that the Interview will be staged as if conducted via Skype. This change of setting is due to the Covid-19 situation and the inability to book a set during the nationwide lockdown. Other alternatives were considered; however, this was decided to be the most time-efficient and effective option. Below is a sample of how the project will look.
During the final presentation, a proper physical set will be set up to provide the viewer with an immersive experience. In addition, a physical button clicker/controller will be provided for users to choose the options displayed on screen.
3. Informational – In this section, users will be able to gain more information about the issue through a questionnaire. The content and flow of this part will be similar to the prototype mentioned in the previous post, just in the form of MR.
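Since the choices made in the customization part decide which of the three interview scenarios plays, the answers will need to be stored somewhere that survives the transition between parts. Below is a hypothetical sketch of one way this could be done in Unity; the class, the way a choice is stored, and the rule that maps answers to a scenario are all assumptions for illustration, not the project’s actual implementation.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical holder for the customization answers (illustration only).
// A static class keeps the data alive across scene changes without extra setup.
public static class SessionChoices
{
    // true = first option, false = second option, in the order answered.
    private static readonly List<bool> choices = new List<bool>();

    public static void RecordChoice(bool choseFirstOption)
    {
        choices.Add(choseFirstOption);
        Debug.Log($"Recorded choice {choices.Count}: option {(choseFirstOption ? "A" : "B")}");
    }

    // Example rule only: count the "first option" picks and clamp the result to
    // one of the three interview scenarios (0, 1 or 2).
    public static int ScenarioIndex()
    {
        int firstOptionCount = choices.FindAll(c => c).Count;
        return Mathf.Clamp(firstOptionCount, 0, 2);
    }
}
```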
In conclusion, some changes had to be made due to the current international Covid-19 situation. However, I believe these changes will improve the project. I hope to be able to present it and let users experience it.
This blog has been left idle as there was not much progress. I will be compiling and writing up notable progress from the previous term in the upcoming weeks. Throughout Term 3, I focused more on the pre-production and AR aspects of the project. Below is a video of the pre-final prototype.
It should be noted that there will be 4 artworks in the final project.
However, the prototype above will not be utilized in the end product, as during the final week our lecturer, Mr. Michael Loo, gave a couple of inputs regarding the project. He suggested that, in order to optimize the project for users, it would be better to combine both the AR and VR concepts to tie it all together. Throughout the break, I managed to compile and finish the technical and prototype skeleton of the whole project. What is left for me to do this term is to shoot the VR video, edit it, and stitch it together.