Field Test in Photogrammetry

This is a summary of my field test in photogrammetry. For the full project details, click here: mno613-field-test-white-paper

Photo = light, gram = drawing, metry = measurement.

In a Nieman Lab article published in December 2016, photogrammetry is cited as one of several emerging technologies expected to transition in the new year from passive video experimentation to fully immersive experiences. Newsrooms across the country will be able to fully implement photogrammetry, ambisonics and stereoscopic rendering. Given how easy it is to use old technology to make something new, it makes sense that such emerging technology will become established technology at the turn of the new year. Photogrammetry has been around for some time, most recently for constructing maps and topographic landscapes; it is only with the use of three-dimensional technology that photogrammetry has earned a bigger place within the media landscape. The recent evacuations from the Syrian city of Aleppo, for example, could be told in a more immersive way and perhaps move a larger population of readers to a call for action.

We learned about many new and emerging types of technology in class, and while I wasn't necessarily ignorant of them, I had never delved into the technology until this class. Learning about virtual reality, augmented reality, 360 and 3D video, photogrammetry, sensors and drones was quite eye-opening given my background in television sports and journalism. What most impressed me was the speed at which these technologies were becoming more common and easier to use.


My goal for this project was to use two of the emerging technologies we covered to demonstrate a more immersive storytelling experience. I chose photogrammetry to capture a museum exhibit and model it in 3D, with annotations, to tell a more immersive story about the subject. To cast a wide net, I chose a popular exhibit at the National Constitution Center in Philadelphia called Signers' Hall, which consists of 42 life-sized statues of the founding fathers who signed the Constitution. I would use photogrammetry to capture the exhibit and make it accessible to more people regardless of where they live or their socioeconomic level, and I would do so using equipment common to most people: a smartphone (an iPhone 6), a desktop computer, and free educational access to the Autodesk ReMake and Sketchfab software programs.


The statues that comprise this particular exhibit are life-size bronze, so I knew I would have to make some shading adjustments. I also realized that most of the statues were my height, 5 feet 6 inches, and I had not brought or requested a stepladder to get shots from above them. I began taking test shots of a group of three statues to see how the overlap among the three would translate when I brought the photos into the Autodesk ReMake software, and then how difficult it would be to clean up the models in Sketchfab.

Charles Pinckney, Charles Cotesworth Pinckney and John Rutledge. Photo taken at the National Constitution Center, Philadelphia.

This became challenging: not only was I taking a lot of pictures, I also had to crawl around on the floor and contort myself around the limbs of these three statues, which were posed as if engaged in a debate. The key lesson from several tutorials on the Autodesk YouTube channel is that for a successful, detailed model, pictures must not only be in focus and evenly lit; there must also be 40% overlap between all the pictures so the point cloud can be accurate. Additionally, depending on how much detail you want to capture, photos should be taken five degrees apart as you shoot around the object, from above and below. The first test run alone resulted in over 200 photographs. Based on the arrangement of the statues within the exhibit, I then decided to focus on two statues that stood alone: William Blount and our current celebrity, Alexander Hamilton. Since I had access to the exhibit for as long as I needed, I also decided to tackle the Benjamin Franklin group, which consisted of five statues surrounding a table at which Franklin was seated. This was the most challenging group to photograph properly, so I concentrated on Franklin (seated) and Gouverneur Morris (leaning over Franklin), with the primary focus on Franklin.

Example of the raw photos, taken at varying distances and at 5-degree intervals. Photos taken at the National Constitution Center in Philadelphia.
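The five-degree rule makes the photo count easy to estimate, and it shows why a single test run quickly passes 200 shots. Here is a back-of-the-envelope sketch; the three-ring setup and the roughly 63-degree iPhone 6 horizontal field of view are my own assumptions, not figures from the Autodesk tutorials:

```python
# Rough shot-count and overlap estimates for a photogrammetry orbit.
# Assumptions (mine): a full 360-degree orbit with a shot every
# 5 degrees, repeated at three heights (low, eye level, high),
# and a phone camera with a ~63-degree horizontal field of view.

def shots_needed(step_deg=5, rings=3, span_deg=360):
    """Total photos for full orbits repeated at several heights."""
    return (span_deg // step_deg) * rings

def angular_overlap(step_deg=5, fov_deg=63):
    """Approximate fraction of one frame shared with the next shot."""
    return max(0.0, (fov_deg - step_deg) / fov_deg)

print(shots_needed())               # 216 photos: three 72-shot orbits
print(round(angular_overlap(), 2))  # ~0.92, well above the 40% minimum
```

At five-degree steps the frame-to-frame overlap is far above the 40% floor, which is why the tutorials treat the spacing, not the overlap, as the thing to control.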

Once the photos were transferred and uploaded to Autodesk ReMake, it was fairly easy to construct the 3D model and process the information. I credit using the ReMake program, as opposed to the 123D Catch app, for the ease of transfer and construction. Next, I saved the 3D model and imported it into Sketchfab, which was a challenge only because I needed more space than my free educational account provided. After getting the necessary space to upload all my models, it took a couple of tutorials to figure out how to orient, light and shade them. I still have a lot to learn, but for the time period given for this project, the result came out pretty well.

Screenshot of the initial upload of the Alexander Hamilton (center screen) photos. The exhibit room is partially reconstructed even though the photos were primarily of Hamilton.


To determine the feasibility of using 3D technology to tell stories, I constructed my virtual Alexander Hamilton, complete with annotations, and shared it on my Facebook page, asking anyone to share their impressions. I wanted my target audience to be a mix of people in the journalism industry and everyday people, so I identified a cross section of my Facebook friends: professional television journalists, camera operators, photographers, everyday people and a couple of librarians. The last group was chosen because of the historical nature of my project and the fact that librarians have been tuned in to the digital age since the debut of electronic readers. The overall reaction was how cool the technology was and surprise that it could be done with still pictures. Nearly all respondents felt immersed in learning about Alexander Hamilton and felt the annotations brought another level of immersiveness: not only could they see what each annotation was explaining, they could view it from different angles.


This technology is really effective when it comes to documenting and telling historical accounts. It is a much more immersive way to teach, which is why we see more and more virtual and 3D storytelling from the likes of National Geographic and Smithsonian, as evidenced in their digital magazines. For my purposes, this use of photogrammetry and 3D technology was effective. With more time to develop my skills in cleaning up my models and building a virtual scene for the subjects to live in, these two technologies would exceed my expectations. Being a video person, I would love to go into videogrammetry.


Improvements to communications infrastructure and Internet speeds would bring photogrammetry to news organizations on a more mainstream level. With the capabilities of so many mobile devices and applications that enable technology such as photogrammetry, the question becomes how fast the processing power of these devices can become standard enough that anyone with a smartphone can construct a 3D scene, as I did with my iPhone 6, with minimal transfer or data issues.


Technology like photogrammetry and 3D modeling will definitely become the norm when it comes to storytelling for journalists. We have already crossed the threshold, with The New York Times and BBC News implementing story coverage in that format. As mentioned before, National Geographic, National Geographic Travel and Smithsonian have already become go-to sources for immersive storytelling via their digital magazines. The challenge is whether more news organizations become aware of the capabilities and availability of this kind of technology and, if they are, whether they can find storytellers able to use the software effectively. Beyond news, photogrammetry and 3D technologies will become tools for preserving historical artifacts, from the Seven Wonders of the World to monuments and historical buildings threatened by extremism.

As of 2018, software improvements combined with greater drone accessibility have brought photogrammetry front and center in agriculture, mining, construction and inspections. The most notable use is by The New York Times VR team covering the recent volcanic eruptions in Guatemala and Hawaii. In the gaming world, high-quality scanned assets contributed to the first immersive first-person interactive story released by none other than Unity. With regard to historical preservation, we now see photogrammetry used to freeze a time capsule of culture by including street clutter such as fire hydrants, bollards and road signs.

The quality of photogrammetry software alone has improved enormously, which can only foreshadow what another two years will produce.


Summers, N. June 6, 2018. "'Inventory' Preserves Street Clutter With Photogrammetry." Retrieved from

Palladino, T. June 21, 2018. "New York Times AR Coverage of Guatemala Volcano Disaster Shows AR Isn't Ready for Breaking News." Retrieved from

Walford, A. 2007. Photogrammetry. "What is Photogrammetry?" Retrieved from

Soto, R. December 13, 2016. Nieman Lab Predictions for Journalism 2017. "VR Moves from Experiments to Immersion." Retrieved from

Caughill, P. December 22, 2016. Futurism. "This New Drone is Powerful Enough to Carry You and a Friend." Retrieved from

Krewson Wertz, P. September 19, 2016. "Digital Photography: The Future of Small-Scale Manufacturing?" Retrieved from

Sketchfab Tutorial. June 18, 2015. "How to Set Up a Successful Photogrammetry Project." Retrieved from


Reality Capture: The New Camera Phone

Reality capture technology has come quite a long way from what we know from movies like Avatar, Lord of the Rings and King Kong.

Film images: Avatar (2009); King Kong (2005).

Nowadays there are 3D capture applications available for your smartphone that allow anyone to capture an object in 3D. There are even more apps available for download that will take that 3D file and animate it. These are amazing times when it comes to technology.

Often we cheer the innovation of such technology and how cutting-edge or beneficial it is for sharing information, telling stories or providing a unique experience. But what about the long-term ramifications? When it comes to gaming, 3D and virtual reality are the name of the game.

Kit Harington from HBO's Game of Thrones is featured in the latest video game, Call of Duty: Infinite Warfare (release date November 4, 2016).

But what about everyday life? What about allowing anyone the ability to capture video for 3D? The question becomes one of privacy and ownership of a person's likeness. Much as when cameras started appearing on cellphones and the issue of a person being photographed without their knowledge became an ethical discussion, easy access to 3D and virtual reality apps and software is raising the same concern again. What if someone mistakenly makes a 3D capture of another person publicly available? What happens to that person's reasonable expectation of privacy? What if that person is a celebrity? Who then has control of their likeness, and is there any recourse for inappropriate or illegal use of it?

Not long ago (nine years, to be exact), one of the television stations I worked at began using digital avatars of its on-air news anchors and meteorologists. Their digital selves were made to walk onto the corner of your computer screen or television set and tell you the weather forecast or notify you of breaking news. Most of the time, though, they were promoting the station's programming. This new digital presence didn't last long, because the on-air personalities had concerns about what their likenesses would be used for beyond what they had agreed to, and let's not forget the basic issue of compensation. How do you compensate a person for their likeness? Royalty fees? What happens when those on-air personalities move on to other networks? How can they know that their digital selves have been deleted?

3D capture and virtual reality are definitely some very fun and creative outlets that can make a huge difference in medicine, education or even specific storytelling. However, unlike cameras on cellphones and the now ubiquitous selfie, treating 3D and virtual capture in the same way would be detrimental and controversial.

360 video or Virtual Reality?

Most of my posts on this blog are in response to assignments for my graduate degree in communications. I am specializing in journalism innovation, so we talk and learn about all things technology and how it affects legacy media (old media) companies and new media (social). Within that discussion come a lot of ethical considerations, and many times we end up talking about sci-fi books or movies. I never thought I would talk at length about Demolition Man in graduate school. Needless to say, I will be bringing it up again (wait for it).

This blog post is supposed to address how 360 video and virtual reality will affect my future or current career. Well, it is already affecting my career, which for the last 15-plus years has been television news and sports. It wasn't long ago that we technical directors took 2D video and, through video manipulation and the use of angles, tricked the human eye into seeing a 3D effect move across the screen. Then came HD television screens, which had all the on-air talent scrambling for MAC makeup and an airbrushed tan but ended up not being that bad: yes, it was a much clearer picture, but you couldn't see down to every pore on a person's face as was claimed. Then there was the brief period when television news stations captured the likenesses of their main on-air anchors so their mini versions could walk out on your computer desktop, or during your favorite daytime show, and tell you the latest breaking news or weather update. That promotional feat lasted about as long as it wasn't annoying (not very long).

Since then, technology has improved so much in the area of 360 and virtual video that there may be a real use for it. In my field, I could see it used for special events like Fourth of July fireworks: 360 video cameras on a drone as fireworks launch into the air would be pretty "spectacular," as we so often like to call them. Another special event is the Olympics; imagine being able to watch Katie Ledecky speed through her events from the bottom of the pool, or watching the World Cup as if you were standing in the middle of the field.

Are you talking about fluid transfers?

Using technology that can bring events so up close and personal is a serious thing. From a journalism perspective, careful consideration needs to be given to when to use 360 or virtual reality video to convey information. It should not be used for death, destruction or the manipulation of a person or people. Privacy rights are a formidable ethical issue, as is disclosure of what the virtual story's subject is. It is important that those choosing to transport themselves to a place of stress understand the ramifications. Whether the viewer is experiencing a virtual roller coaster or a natural disaster, care has to be taken to avoid putting stress on their health. In the movie Demolition Man (I told you to wait for it), virtual and augmented reality have replaced human connection so completely that people live in a sterile and "clean" world.

I hope that sci-fi prediction does not become reality.

The Automation of Media

One technology that often gets overlooked in conversations about emerging technology is automation in the media. We first saw this happen in the radio industry in the late 1980s, when radio stations were able to program music using software such as ENCO. This allowed smaller radio stations to stay within their limited budgets and not have to hire on-air talent. Of course, once automation was proven to work, it led to widespread use in all radio markets and resulted in the elimination of the radio DJ or personality who introduced songs and segments and riffed about anything under the sun.

Once radio became automated, the technology advanced enough for television stations to be automated as well. First, running commercials became automated; then automation began encroaching on live productions like newscasts. The late 1990s and early aughts brought the introduction of ParkerVision, the first television automation system, which also came with serious bugs. ParkerVision was eventually purchased by broadcasting equipment giant Grass Valley, and much development went into making automation smoother and more intuitive. The benefit, of course, was allowing smaller-market television news stations to deliver higher production value without adding more manpower, while staying within their shrinking budgets.

KEYT HD upgrade by Utter Associates.

As with all automation, improvements in the interface, software and device communication resulted in staff reductions from medium markets all the way to the top five markets of New York, Los Angeles, Chicago, Philadelphia and Dallas-Fort Worth. Control rooms that used to require 12 to 15 people to put out a fast, high-production-value live broadcast were reduced to two. Directors no longer directed a show for camera shots, pacing or continuity; instead they had to code shows within computer software parameters. "Directing" was reduced to hitting a space bar to get to the next event (story), and creativity was replaced by computers.

Innovation Is In the Air

So I'm two semesters away from getting my master's degree in journalism innovation from Newhouse, and this semester we are getting into the "innovation" part of journalism. It's pretty gnarly, because to me it's like every sci-fi movie has come true or is about to.

Now I'm not talking about all the Star Trek stuff like the communicator (cell phones), the PADD (iPad) or the replicator (3D printers). To me the most innovative thing is still on the horizon but getting closer to mainstream use: Air Touch technology, currently made possible by a Taiwanese company. The possibilities for this technology are endless. One in particular is how it could change and help journalists in the field, whether in a conflict zone or at a natural disaster. Air Touch could help EMTs and other emergency responder organizations as well. The question is how reliable the technology would be in those areas and how it would be connected (satellite vs. terrestrial towers).