Field Test in Photogrammetry

This is a summary of my field test in photogrammetry. For the full project details, see mno613-field-test-white-paper.

Photo = light, gram = drawing, -metry = measurement.

In a Nieman Lab article published in December 2016, photogrammetry is cited as one of several emerging technologies expected to transition in the new year from passive video experimentation to a fully immersive experience. Newsrooms across the country will be able to fully implement photogrammetry, ambisonics and stereoscopic rendering. Given how easy it is, using old technology to make new, it makes sense that such emerging technology will become established technology at the turn of the new year. Photogrammetry has been around for some time, most recently for constructing maps and topographic landscapes; it is only with the use of three-dimensional technology that photogrammetry has earned a bigger place within the media landscape. The recent evacuations from the Syrian city of Aleppo, for example, could be told in a more immersive way and perhaps move a larger population of readers to a call for action.

We learned about many new and emerging types of technology in class, and while I wasn't necessarily ignorant of them, I had never delved into the technology until this class. Learning about virtual reality, augmented reality, 360 and 3D video, photogrammetry, sensors and drones was quite eye-opening for me, given my background in television sports and journalism. What most impressed me was the speed with which these technologies were becoming more common and easier to use.


My hypothesis for this project was that two of the emerging technologies we covered could be combined to demonstrate a more immersive storytelling experience. I chose to use photogrammetry to capture a museum exhibit and model it in 3D with annotations to tell a more immersive story of the subject. To cast a wide net, I used a popular exhibit at the National Constitution Center in Philadelphia called Signers' Hall. This exhibit consists of 42 life-sized statues of the founding fathers who signed the Constitution. I used photogrammetry to capture this exhibit and make it accessible to more people regardless of where they live or their socio-economic level, and I did so using equipment common to most people: a smartphone (iPhone 6), a desktop computer, and free educational access to the Autodesk ReMake and Sketchfab software programs.


The statues that comprise this particular exhibit are life-sized bronze, so I knew there would be some shading adjustments to make. Another realization was that most of the statues were the same height as me, 5 feet 6 inches tall, and I did not bring or request a stepladder to get shots from above them. I began taking test shots of a group of three statues to see how the overlap between the

Charles Pinckney, Charles Cotesworth Pinckney and John Rutledge. Photo taken at the National Constitution Center, Philadelphia.

three would translate when I brought them into the Autodesk ReMake software, and then how difficult it would be to clean up the models in Sketchfab. This became a little challenging: not only was I taking a lot of pictures, it also required me to crawl around on the floor and contort myself around the limbs of these three statues, which were posed as if engaged in a debate. The key I learned from several tutorials on the Autodesk YouTube channel is that for a successful, detailed model, pictures must not only be in focus and evenly lit, but there must be 40% overlap between all the pictures for the point cloud to be accurate. Additionally, depending on how much detail you want to capture, photos should be taken five degrees apart as you shoot around the object, above and below. This resulted in over 200 photographs for the first test run. Then, based on the arrangement of the statues within the exhibit, I decided to focus on two statues that stood alone: William Blount and our current celebrity, Alexander Hamilton. Since I had access to the exhibit for as long as I needed, I decided to tackle the Benjamin Franklin group. This consisted of five statues surrounding a table at which Franklin was seated. It was the most challenging group to photograph properly, so I concentrated on Franklin (seated) and Gouverneur Morris (leaning over Franklin), with the primary focus on Franklin.
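Those two guidelines, 40% overlap and shots every five degrees, imply a rough photo budget before you even start shooting. A quick back-of-the-envelope sketch (the function and the three-ring count are my own assumptions for illustration, not numbers from the tutorials) shows how the count climbs past 200:

```python
# Rough shot-count estimate for an orbit-style photogrammetry capture:
# one photo every N degrees around the subject, repeated at several
# heights ("rings") to cover the object from above and below.
# (Illustrative only; ring count and spacing are assumptions.)

def shots_needed(degrees_apart=5, rings=3):
    """Estimate how many photos a full orbit capture requires."""
    per_ring = 360 // degrees_apart   # e.g. 360 / 5 = 72 shots per ring
    return per_ring * rings

print(shots_needed())        # 3 rings at 5-degree spacing -> 216 shots
print(shots_needed(10, 2))   # coarser capture: 10-degree spacing, 2 rings -> 72
```

At three heights with 5-degree spacing, you land at 216 frames, which lines up with the 200-plus photos of the first test run. The 40% overlap requirement is what forces the spacing that tight.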

Example of the raw photos taken at varying distances and at 5-degree intervals. Photos taken at the National Constitution Center in Philadelphia.

Once the photos were transferred and uploaded to Autodesk ReMake, it was pretty easy to construct the 3D model and process the information. I credit using the ReMake program, as opposed to the 123D Catch app, for the ease of transfer and construction. Next, I saved the 3D model and imported it into Sketchfab, which was a challenge only because I needed to somehow get more space than my free educational account provided. After getting the necessary space to upload all my models, it took a couple of tutorials to figure out how to orient, light and shade them. I still have a lot to learn, but for the time period given for this project, the results came out pretty well.

Screenshot of the initial upload of the Alexander Hamilton (center screen) photos. The exhibit room is partially reconstructed even though the photos were primarily of Hamilton.


To determine the feasibility of using 3D technology to tell stories, I constructed my virtual Alexander Hamilton, complete with annotations, and shared it on my Facebook page, asking anyone to share their impressions. I wanted my target audience to be a mix of people in the journalism industry and everyday people, so I identified a cross section of my Facebook friends: professional television journalists, cameramen, photographers, regular everyday people and a couple of librarians. The last group was chosen because of the historical nature of my project and the fact that librarians have been tuned in to the digital age since the debut of electronic readers. The overall reaction was how cool the technology was and amazement that it could be done with still pictures. Nearly all respondents felt immersed in learning about Alexander Hamilton and felt the annotations brought another level of immersiveness: not only could they see what each annotation was explaining, they could view it from different angles.


This technology is really effective when it comes to documenting and telling historical accounts. It's a much more immersive way to teach, which is why we see more and more virtual and 3D storytelling from the likes of National Geographic and Smithsonian, as evidenced in their digital magazines. For my purposes, this use of photogrammetry and 3D technology was effective. I think with more time to develop my skills in cleaning my models and building a virtual scene for the subjects to live in, these two technologies would exceed my expectations. Being a video person, I would love to go into videogrammetry.


Improvements to communications infrastructure and Internet speeds would bring photogrammetry to news organizations on a more mainstream level. With so many mobile devices and applications capable of supporting a technology such as photogrammetry, the question becomes how fast the processing power of these devices can become standard, to the point where anyone with a smartphone can construct a 3D scene, as I did with my iPhone 6, with minimal transfer or data issues.


Technology like photogrammetry and 3D modeling will definitely become the norm when it comes to storytelling for journalists. We have already crossed the threshold, with The New York Times and BBC News implementing story coverage in that format. As mentioned before, National Geographic, National Geographic Travel and Smithsonian have already become go-to sources for immersive storytelling via their digital magazines. The challenge becomes whether more news organizations become aware of the capabilities and availability of this kind of technology and, if they are, whether they can find storytellers able to use the software effectively. Beyond news, photogrammetry and 3D technologies will become tools for preserving historical artifacts, whether documenting the Seven Wonders of the World or saving monuments and historic buildings from the hands of extremism.

As of 2018, software improvements combined with greater drone accessibility have brought photogrammetry front and center in agriculture, mining, construction and inspections. The most notable journalistic use is by The New York Times VR team in covering the recent volcanic eruptions in Guatemala and Hawaii. In the gaming world, high-quality scanned assets contributed to the first immersive first-person interactive story released by none other than Unity. As for historical preservation, we now see photogrammetry used to freeze a time capsule of culture by including street clutter such as fire hydrants, bollards and road signs.

The quality of photogrammetry software alone has improved enormously and can only foreshadow what another two years will produce.


Summers, N. (2018, June 6). "Inventory" Preserves Street Clutter With Photogrammetry. Retrieved from

Palladino, T. (2018, June 21). New York Times AR Coverage of Guatemala Volcano Disaster Shows AR Isn't Ready for Breaking News. Retrieved from

Walford, A. (2007). Photogrammetry. "What is Photogrammetry?" Retrieved from

Soto, R. (2016, December 13). Nieman Lab Predictions for Journalism 2017. "VR Moves from Experiments to Immersion." Retrieved from

Caughill, P. (2016, December 22). Futurism. "This New Drone is Powerful Enough to Carry You and a Friend." Retrieved from

Krewson Wertz, P. (2016, September 19). "Digital Photography: The future of small-scale manufacturing?" Retrieved from

Sketchfab. (2015, June 18). How to Set Up a Successful Photogrammetry Project [Tutorial]. Retrieved from


News in the Age of New Media

The last several months have been quite an education. As a jaded, cynical member of the television media, I have come to accept the crazy new world not only of social media as a news source but of the newfangled technologies that allow social media to be a news source. I attribute this acceptance to the graduate school program I am closing out (hallelujah) and its latest course in emerging technologies.

Among the many new technologies disrupting news gathering are 360 video and virtual reality. Both provide a more immersive experience of news stories and current events. 360 video in particular will have the bigger impact, as it is easily accessible to anyone with a smartphone and a Facebook account. This "surround" video allows the viewer to move around in a real 2D environment that can feel three-dimensional. Add some surround sound and you've got quite the presentation of an event. Imagine 360 coverage of a political rally with the accompanying sound. It would be real-time documentation without any bias other than what the viewer brings to the story.

Virtual reality (VR), on the other hand, is not just for video gamers anymore and is already becoming the hot new way to cover certain newsworthy events. Unlike 360 video, VR requires software to construct the virtual environment on top of the extra skill needed to capture the real-life subject matter. This would be an easy adaptation for large, long-established news organizations like The New York Times but would not be practical for smaller news outlets. Additionally, VR is a medium that demands careful consideration of which news events are rendered in virtual reality, given the possible effects of war or crime coverage on a viewer's mental or physical health. VR would have a wholly different effect when used for educational purposes, whether to help treat phobias like a fear of heights or to bring historical characters and events to life, as the Smithsonian is working toward.

For the future, 360 video is much better suited to news gathering and VR to a more controlled educational or game environment. After all, even with video games there is the understanding that the viewer or gamer is entering a false environment.


Road Closures by Drone

Unmanned aerial vehicles, or drones, have been making the news off and on for their use in journalism and in videography, especially in movies. Drones can capture a landscape from a wide variety of heights and are much more versatile than a jib or crane camera. Personally, I have always thought of drones in this respect: getting that money shot of a scene from low to high in the sky. The results are always beautiful and often breathtaking. An area where I don't often think of drones being used is in conducting journalism.

I came across a perfect example where drone footage would enhance a story: the closure of a major Philadelphia road artery for much-needed repairs. Currently, the news article uses a capture from Google Maps to show the length of road set to be closed. Drone video would instead show not only how busy this road is but also the level of repairs it has needed for some time. Actual video would also generate more views for the news story, since it would be a unique look at this stretch of road.

From a regulatory standpoint, using a drone for journalism counts as commercial use and requires passing the Part 107 exam administered by the Federal Aviation Administration. However, were one to use a drone recreationally to check out Lincoln Drive and how the repairs are going, one would have to keep the drone within line of sight and below 400 feet.

The potential viewers that drone video would bring could be worth the extra effort of getting certified by the FAA to use drones for journalism and, thus, commercial use.

Storytelling Sensors

A lot of complicated or abstract issues can be made into easy-to-understand stories through visualizations or by collecting data. One such issue is the ongoing drought in the southwestern and southeastern United States. To illustrate drought levels, the SparkFun Soil Moisture Sensor can be used to gather data on conditions in the southwest. The sensor would be plugged into an Arduino and would light up if moisture was present. A sensor array could then distinguish the amount of moisture on a scale of 1-5 using the SparkFun LED 8×7 array. With some code, each level of moisture concentration can be indicated by its own column of LEDs, showing differences between drought areas.
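The heart of that idea is a binning step: take the sensor's raw analog reading and map it onto the 1-5 scale the LED array would display. On the real hardware this logic would live in an Arduino sketch in C, but the idea is easy to sketch in Python; the 0-1023 range matches a typical Arduino analog input, and the equal-width bins are an illustrative assumption, not calibrated values:

```python
# Sketch of the moisture-to-level binning described above. On the real
# hardware this would run in an Arduino sketch reading the SparkFun
# sensor's analog pin (0-1023); the equal-width bins here are
# illustrative, not calibrated thresholds.

def moisture_level(raw_reading, max_reading=1023, levels=5):
    """Map a raw analog reading to a drought level from 1 (dry) to 5 (wet)."""
    raw_reading = max(0, min(raw_reading, max_reading))  # clamp to sensor range
    # Split the sensor's range into equal bins, one per LED level.
    return min(levels, raw_reading * levels // max_reading + 1)

for reading in (0, 250, 512, 800, 1023):
    print(reading, "->", moisture_level(reading))
# 0 -> 1, 250 -> 2, 512 -> 3, 800 -> 4, 1023 -> 5
```

Each level could then light a different number of columns on the 8×7 array, so a glance at the display shows how dry one monitoring site is compared with another.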

Virtual Concessions

Now that this long presidential campaign, full of inappropriate comments, accusations and threats, is over, I started thinking about the swift about-face both "establishment" party politicians took. The calm platitudes from the former reality TV star turned President-elect, and then the tasteful call to unite from the former Secretary of State, got me thinking how different it must be for the journalists covering the candidates: they would see these two people "off the air" while traveling and while interacting with staff and voters.

What if 360 cameras were taken on the airplanes of the presidential candidates to show what goes on while in transit? Viewers could see how reporters cover a campaign and how candidates interact with those reporters. This could be the new way of getting to know a candidate running for office, not just the edited and prepared candidate that we get now.

An opportunity for a virtual reality story could come right after the election, when the President-elect meets with the President to talk about the role. Imagine being virtually present in the Oval Office as the two men address the press and answer questions about how their meeting went. Another: virtual reality coverage of the political rallies each candidate holds in every state during the campaign. What better way to show the true climate of a rally, see how many people are in attendance, or feel the energy at these events?

One thing is for sure: it may help show the true climate of an election and be a more accurate predictor than traditional polling or focus groups.

360 video or Virtual Reality?

Most of my posts on this blog are responses to assignments for my graduate degree in communications. I am specializing in journalism innovation, so we talk and learn about all things technology and how it affects legacy media (old media) companies and new media (social). Within that discussion come a lot of ethical considerations, and many times we end up talking about sci-fi books or movies. I never thought I would talk at length about Demolition Man in graduate school. Needless to say, I will be bringing it up again (wait for it).

This post is supposed to address how 360 video and virtual reality will affect my future or current career. Well, it is already affecting my career, which for the last 15-plus years has been in television news and sports. It wasn't long ago that we technical directors took 2D video and, through video manipulation and the use of angles, tricked the human eye into seeing a 3D effect move across the screen. Then came HD television screens, which had all the on-air talent scrambling for MAC makeup and an airbrushed tan but ended up not being that bad. Yes, it was a much clearer picture, but you couldn't see down to every pore on a person's face as was claimed. Then there was the brief period when television news stations captured the likenesses of their main anchors so that mini versions could walk out onto your computer desktop, or during your favorite daytime show, and tell you the latest breaking news or weather update. That promotional feat lasted about as long as it wasn't annoying (not very long).

Since then, technology has improved so much in the area of 360 and virtual video that there may be a real use for it. In my field, I could see it used for special events like Fourth of July fireworks: 360 cameras on a drone as fireworks launch into the air would be pretty "spectacular," as we like to say so often. Another special event is the Olympics. Imagine being able to watch Katie Ledecky speed through her events from the bottom of the pool, or watching the World Cup as if you were standing in the middle of the field.

Are you talking about fluid transfers?

Using technology that can bring events so up close and personal is a serious thing. From a journalism perspective, careful consideration needs to be given to when 360 or virtual reality video should be used to convey information. It should not be used for the death, destruction or manipulation of a person or people. Privacy rights are a formidable ethical issue, as is disclosure of what the virtual story's subject is. It is important that those choosing to transport themselves to a place of stress understand the ramifications. Whether the experience is a virtual roller coaster or a natural disaster, care has to be taken to avoid stressing the viewer's health. In the movie Demolition Man (I told you to wait for it), virtual and augmented reality have replaced human connection so much that people live in a sterile and "clean" world.

I hope that sci-fi prediction does not become reality.

3D, Virtual and Augmented Reality

We live in exciting times, and scary times. Technology and innovation have never been more cutting-edge. Who would have thought humankind would be living in artificial worlds through video games and mobile games, and even using them to combat mental illness?

When I hear 3D and augmented reality, I think of video games like Call of Duty and mobile games like Pokemon Go. It was not until recently, with the continuing improvements in wearable VR like the Oculus, that I thought a traditional "good" could come from an artificial world. Researchers are using VR to help those with acrophobia (fear of heights), fear of flying and other mental barriers that prevent a person from normal activity. That's the exciting part. The scary part is the possible use of virtual reality and 3D for reporting stories. It would seem very few types of stories would benefit from such technology, perhaps only those worth bringing the reader intimately into the story environment: opening night at the Metropolitan Opera or Cirque du Soleil, or transporting viewers to the civil war going on in Syria. This is where I think some guidelines will eventually have to be set for journalists and content creators.

With enough patience and computer processing power, anyone can make a virtual world of reality or fantasy. The question becomes what is the context and for what purpose.

The very intimacy of artificial reality, both virtual and augmented, and even of 360 video, can bring traumatic events front and center, causing the viewer anxiety and stress and even triggering a response that may be detrimental.