Squint/Opera recently produced a short film to help launch Bjarke Ingels' design for the final World Trade Center tower. The project is potentially one of the most complex urban renewal projects in the world, given its many political, emotional and architectural dimensions. The film featured some ambitious techniques, including green-screen, motion-controlled footage combined with 3D animation. Here we discuss some of the technical challenges we faced and how we overcame them.
The green-screen shoot with the MILO motion control rig
Scenes shot within the building depict occupants going about their daily activities: from the bustle of the newsroom, to staff taking a break on a lavish garden terrace, to a window cleaner working at dizzying heights at the top of the tower.
All these scenes were built in 3ds Max, but to attain the realism we were aiming for, we decided to shoot real-life actors for the main characters. This presented a production challenge: the camera performs quite complicated circling movements, which are difficult to control by hand, so we opted for a MILO motion control rig paired with a RED Dragon camera, an exciting Squint first.
Due to time and budget constraints, we knew we would have to work quickly on our one-day shoot to complete the eight shots required. The 3D team broke down each shot and supplied data for each camera move, which was then tested.
We have a lot of experience with green-screen shoots, so we were comfortable we could achieve a clean key that would let the comp team composite the footage quickly. That, along with the RED Dragon shooting 6K full frame, gave us a very high-quality image to work with.
The idea was then to load the data from the MILO crew into 3ds Max, giving us a camera that, with some post-processing, would match the live plate. Even though we were confident the motion-controlled footage would deliver what we needed, we were also very aware that we were trying to do a lot in one day!
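Getting the rig data into 3ds Max essentially means turning a per-frame transform export into camera keyframes. Below is a minimal Python sketch of the first parsing step, assuming a simple CSV export with per-frame translation and rotation columns; real MILO exports differ by facility and exporter, and the actual keyframing would happen through 3ds Max's scripting interface (MaxScript or `pymxs`).

```python
import csv
import io

# Hypothetical per-frame export from the motion control rig: frame number,
# translation (tx, ty, tz) and rotation (rx, ry, rz). Treat this exact
# column layout as an assumption for illustration only.
SAMPLE_EXPORT = """frame,tx,ty,tz,rx,ry,rz
1,0.0,150.0,-400.0,0.0,0.0,0.0
2,2.5,150.0,-399.0,0.0,0.5,0.0
3,5.0,150.1,-398.0,0.0,1.0,0.0
"""

def parse_moco_export(text):
    """Parse the rig export into a list of per-frame camera keys."""
    keys = []
    for row in csv.DictReader(io.StringIO(text)):
        keys.append({
            "frame": int(row["frame"]),
            "position": (float(row["tx"]), float(row["ty"]), float(row["tz"])),
            "rotation": (float(row["rx"]), float(row["ry"]), float(row["rz"])),
        })
    return keys

keys = parse_moco_export(SAMPLE_EXPORT)
# Inside 3ds Max these keys would be set on a camera via the scripting
# API; here we just confirm the data round-trips cleanly.
print(len(keys), keys[0]["position"])
```

From here, each key would become a position/rotation keyframe on the 3ds Max camera, with any unit or axis conversion the rig's coordinate system requires.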
Footage in 3ds Max
With this in mind, we had a plan B in case technical issues meant we had to track the shots ourselves instead of using the motion control data within 3ds Max.
For every shot, we placed tracking markers on stands where they would not interfere with the actors. We then ran another pass with only the tracking markers, stands and anything else we had handy, so that if necessary we could track that camera pass and use the resulting data to build a matching camera in 3ds Max.
This proved very worthwhile: a couple of shots didn't work out well from the motion control data, so we decided to fully track them in either PFTrack or Nuke.
However, we didn't rely solely on the motion control camera to achieve these shots. We knew some of the characters sat far back in the shot, so we could film a static version of them and place them on cards in Nuke's 3D space. As there was little to no parallax on these characters, this worked very well!
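One way to sanity-check the cards-for-distant-characters decision is to estimate how much on-screen parallax a camera translation produces at a given depth. The sketch below uses a small-angle pinhole approximation with illustrative lens and sensor numbers (the real shoot's values would differ); the point is that the shift falls off linearly with distance.

```python
def parallax_px(baseline_mm, depth_mm, focal_mm, sensor_width_mm, image_width_px):
    """Approximate horizontal image shift (in pixels) of a point at
    `depth_mm` when the camera translates `baseline_mm` sideways.
    Small-angle pinhole model: shift_on_sensor = focal * baseline / depth.
    """
    shift_mm = focal_mm * baseline_mm / depth_mm
    return shift_mm * image_width_px / sensor_width_mm

# Assumed numbers for illustration: a 35 mm lens on a ~30.7 mm-wide
# sensor at 6K, with the camera translating 0.5 m during the move.
near = parallax_px(500, 5_000, 35, 30.7, 6144)    # character 5 m away
far = parallax_px(500, 100_000, 35, 30.7, 6144)   # character 100 m away
print(round(near, 1), round(far, 1))
```

A character twenty times farther away moves twenty times less on screen, which is why a flat card holds up for background characters but not foreground ones.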
We also used a simple old-school technique: placing a character on a turntable and spinning them at a pace matching the 3D camera's rotation. When the footage was placed on a card in Nuke and rephotographed with the 3D camera (which also translates), it gave the impression of the camera moving up and around the character. This was a great, cheap cheat that freed up time for the more complicated shots.
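Matching the turntable speed to the camera move is simple arithmetic: if the CG camera orbits the character through a given angle over the length of the shot, the turntable needs to spin at the same angular rate in the opposite direction. A quick sketch, under the simplifying assumption that the rotational component of the move dominates:

```python
def turntable_rpm(orbit_degrees, shot_frames, fps=24.0):
    """Rotational speed the turntable needs so that spinning the actor
    in front of a locked-off camera matches a CG camera orbiting the
    actor through `orbit_degrees` over `shot_frames` frames.
    (The turntable spins the opposite direction at the same rate.)
    """
    seconds = shot_frames / fps
    degrees_per_second = orbit_degrees / seconds
    return degrees_per_second * 60.0 / 360.0  # deg/s -> RPM

# e.g. a 90-degree orbit over a six-second (144-frame) shot:
print(turntable_rpm(90, 144))  # 2.5 RPM
```

The numbers here are illustrative; on set the rate would be dialed in per shot from the camera data, and small mismatches can be absorbed by retiming the plate in comp.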
The final film
On reflection, we were pleased that we got both the footage and the data we needed to complete the film, whether directly from the MILO or by using the MILO's repeatable passes to speed up our tracking results.
So, here is the final film. What do you think? If you have a burning question about the production, please feel free to ask in the comments section below.