This is a summary of my field test in photogrammetry. For the full project details click here: mno613-field-test-white-paper
Photo=light, Gram=drawing, metry=measurement.
In a Nieman Lab article published in December 2016, photogrammetry is cited as one of several emerging technologies expected to transition in the new year from passive video experimentation to fully immersive experience. Newsrooms across the country will be able to fully implement photogrammetry, ambisonics and stereoscopic rendering. Given how easy it is to use old technology to make something new, it makes sense that such emerging technology will become established technology at the turn of the new year. Photogrammetry has been around for some time, most recently for constructing maps and topographic landscapes; it is only with the use of three-dimensional technology that photogrammetry has earned a bigger place within the media landscape. The recent evacuations from the Syrian city of Aleppo, for example, could be told in a more immersive way and perhaps move a larger population of readers to a call for action.
We learned about many new and emerging types of technology in class, and while I wasn't necessarily ignorant of them, I had never delved into them until this course. Learning about virtual reality, augmented reality, 360 and 3D video, photogrammetry, sensors and drones was quite eye-opening for me, given my background in television sports and journalism. What most impressed me was the speed with which these technologies were becoming more common and easier to use.
My hypothesis for this project is that two of the emerging technologies we covered can be combined to deliver a more immersive storytelling experience. I chose photogrammetry to capture a museum exhibit and model it in 3D, with annotations, to tell a more immersive story of the subject. For this project I decided to cast a wide net and use a popular exhibit at the National Constitution Center in Philadelphia called Signers' Hall. This exhibit consists of 42 life-sized statues of the founding fathers who signed the Constitution. I will use photogrammetry to capture the exhibit and make it accessible to more people regardless of where they live or their socioeconomic level, and I will do so using equipment common to most people: a smartphone (an iPhone 6), a desktop computer and free educational access to the Autodesk ReMake and Sketchfab software programs.
The statues that comprise this exhibit are life-size bronze statues, so I knew there would be some shading adjustments to make. Another realization was that most of the statues were my height, 5 feet 6 inches, and I had not brought or requested a stepladder to get shots from above them. I began taking test shots of a group of three statues to see how the overlap among the three would translate when I brought the photos into the Autodesk ReMake software, and then how difficult it would be to clean up the models in Sketchfab. This became a little challenging: not only was I taking a lot of pictures, I also had to crawl around on the floor and contort myself around the limbs of three statues posed as if engaged in debate. The key lesson I learned from several tutorials on the Autodesk YouTube channel is that for a successful, detailed model, pictures must not only be in focus and evenly lit, there must also be at least 40% overlap between adjacent pictures for the point cloud to be accurate. Additionally, depending on how much detail you want to capture, photos should be taken about five degrees apart as you shoot around the object, from above and below. This resulted in over 200 photographs for the first test run. Then, based on the arrangement of the statues within the exhibit, I decided to focus on two statues that stood alone: William Blount and our current celebrity, Alexander Hamilton. Since I had access to the exhibit for as long as I needed, I also decided to tackle the Benjamin Franklin group, five statues surrounding a table at which Franklin was seated. This was the most challenging group to photograph properly, so I concentrated on Franklin (seated) and Gouverneur Morris (leaning over Franklin), with the primary focus on Franklin.
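The capture rules above (at least 40% overlap, shots roughly five degrees apart, passes at multiple heights) can be turned into a quick back-of-the-envelope shot count. This sketch is my own illustration, not part of the Autodesk tutorials, and the 65-degree horizontal field of view is an assumed value for a typical smartphone camera:

```python
# Estimate how many photos a capture session needs, given the
# "five degrees apart / 40% overlap" rule of thumb.

def shots_per_ring(step_deg=5):
    """Photos needed for one full 360-degree ring around the subject."""
    return 360 // step_deg

def overlap_fraction(step_deg=5, fov_deg=65):
    """Approximate overlap between consecutive shots: each angular step
    consumes step_deg of the camera's horizontal field of view."""
    return 1 - step_deg / fov_deg

rings = 3  # e.g., low, eye-level and high passes around a statue
total = shots_per_ring() * rings
print(total)                          # 216 photos, consistent with "over 200"
print(round(overlap_fraction(), 2))   # ~0.92, comfortably above the 40% minimum
```

At five-degree spacing the overlap is far above the 40% floor, which is why the tutorials treat the angle rule as the practical constraint: meet it and the overlap requirement takes care of itself.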
Once the photos were transferred and uploaded to Autodesk ReMake, it was fairly easy to construct the 3D model and process the information. I credit the ReMake program, as opposed to the 123D Catch app, for the ease of transfer and construction. Next, I saved the 3D model and imported it into Sketchfab, which was a challenge only because I needed more space than my free educational account provided. After getting the necessary space to upload all my models, it took a couple of tutorials to figure out how to orient, light and shade them. I still have a lot to learn, but for the time allotted for this project, the results came out pretty well.
INTERVIEW AND METRICS
To determine the feasibility of using 3D technology to tell stories, I constructed my virtual Alexander Hamilton, complete with annotations, and shared it on my Facebook page, asking anyone to share their impressions. I wanted my target audience to be a mix of people in the journalism industry and everyday people, so I identified a cross section of my Facebook friends: professional television journalists, cameramen, photographers, everyday people and a couple of librarians. The last group was chosen because of the historical nature of my project and the fact that librarians have been tuned in to the digital age since the debut of electronic readers. The overall reaction was how cool the technology was and surprise that it could be done with still pictures. Nearly all respondents felt immersed in learning about Alexander Hamilton and felt the annotations brought another level of immersion, because not only could they see what each annotation was explaining, they could view it from different angles.
This technology is really effective for documenting and telling historical accounts. It is a much more immersive way to teach, which is why we see more and more virtual and 3D storytelling from the likes of National Geographic and the Smithsonian, as evidenced in their digital magazines. For my purposes, this use of photogrammetry and 3D technology was effective. With more time to develop my skills in cleaning up my models and building a virtual scene for the subjects to live in, these two technologies would exceed my expectations. Being a video person, I would love to go into videogrammetry.
Improvements to communications infrastructure and Internet speeds would bring photogrammetry to news organizations on a more mainstream level. With so many mobile devices and applications capable of supporting technology such as photogrammetry, the question becomes how fast the processing power of these devices can become standard, to the point where anyone with a smartphone can construct a 3D scene, as I did with my iPhone 6, with minimal transfer or data issues.
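To put those transfer and data issues in concrete terms, here is a rough timing calculation. The per-photo file size and the connection speeds are my own assumed figures (an iPhone 6 JPEG typically runs a few megabytes), not measurements from the project:

```python
# Rough estimate of how long it takes to move a photogrammetry
# photo set off the phone at different connection speeds.

def transfer_seconds(n_photos, mb_per_photo, mbps):
    """Transfer time in seconds; mbps is megabits per second."""
    total_megabits = n_photos * mb_per_photo * 8  # 8 bits per byte
    return total_megabits / mbps

photos = 200      # roughly the size of one capture session
size_mb = 2.5     # assumed average JPEG size for an iPhone 6

print(round(transfer_seconds(photos, size_mb, 10)))   # ~400 s on a 10 Mbps link
print(round(transfer_seconds(photos, size_mb, 100)))  # ~40 s on a 100 Mbps link
```

The order-of-magnitude gap between those two numbers is the difference between a workflow a newsroom can use on deadline and one it cannot, which is why connection speed matters as much as device processing power.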
FUTURE OF PHOTOGRAMMETRY
Technology like photogrammetry and 3D modeling will definitely become the norm when it comes to storytelling for journalists. We have already crossed the threshold with the New York Times and BBC News implementing story coverage in that format. As mentioned before, National Geographic, the Smithsonian and National Geographic Travel have already become go-to sources for immersive storytelling via their digital magazines. The challenge is whether more news organizations become aware of the capabilities and availability of this kind of technology and, if they are, whether they can find storytellers able to use the software effectively. Beyond news, photogrammetry and 3D technologies will become tools for preserving historical artifacts such as the Seven Wonders of the World and for saving monuments and historical buildings from the hands of extremism.
As of 2018, software improvements combined with greater drone accessibility have brought photogrammetry front and center in agriculture, mining, construction and inspections. The most notable journalistic use is by the New York Times VR team covering the recent natural disasters, the volcanic eruptions in Guatemala and Hawaii. In the gaming world, high-quality scanned assets contributed to the first immersive first-person interactive story released by none other than Unity. As for historical preservation, we now see photogrammetry used to freeze a time capsule of culture by capturing street clutter such as fire hydrants, bollards and road signs.
The quality of photogrammetry software alone has improved enormously and can only foreshadow what another two years will produce.
Summers, N. (2018, June 6). "'Inventory' Preserves Street Clutter With Photogrammetry." Engadget. Retrieved from https://www.engadget.com/2018/06/06/oddviz-street-objects-photogrammetry-collage/
Palladino, T. (2018, June 21). "New York Times AR Coverage of Guatemala Volcano Disaster Shows AR Isn't Ready for Breaking News." Next Reality. Retrieved from https://mobile-ar.reality.news/news/new-york-times-ar-coverage-guatemala-volcano-disaster-shows-ar-isnt-ready-for-breaking-news-0185386/
Walford, A. (2007). "What is Photogrammetry?" Photogrammetry.com. Retrieved from http://www.photogrammetry.com/index.htm
Soto, R. (2016, December 13). "VR Moves from Experiments to Immersion." Nieman Lab Predictions for Journalism 2017. Retrieved from http://www.niemanlab.org/2016/12/vr-moves-from-experiments-to-immersion/
Caughill, P. (2016, December 22). "This New Drone is Powerful Enough to Carry You and a Friend." Futurism. Retrieved from https://futurism.com/this-new-drone-is-powerful-enough-to-carry-you-and-a-friend/
Krewson Wertz, P. (2016, September 19). "Digital Photography: The future of small-scale manufacturing?" Phys.org. Retrieved from http://phys.org/news/2016-09-digital-photography-future-small-scale.html
Sketchfab. (2015, June 18). "How to Set Up a Successful Photogrammetry Project." Retrieved from https://blog.sketchfab.com/how-to-set-up-a-successful-photogrammetry-project/
My big final project for one of my graduate classes is to conduct a field test using an emerging technology to tell a story. There is a lot of emerging technology out there, and for some of it I did not see an effective purpose in accurately telling a story; but that is why we go to school, to learn. I have since changed my mind about the value of virtual reality, 360 video, voice-activated artificial intelligence (Siri and Alexa), drones and even streaming video like Facebook Live.
I decided to conduct my field test using virtual reality to share the story of Philadelphia, specifically the National Constitution Center, where visitors can walk among the founders of our country. Philadelphia is chock-full of historical landmarks, museums and founding history, and some of it goes unnoticed because there are so many hidden gems. I chose Signers' Hall, where visitors can sign the Constitution alongside the 42 founding fathers present at the original signing on September 17, 1787. Signers' Hall is one of the most popular exhibits of the National Constitution Center and would not only serve to tell the story of each founding father but would also serve as an interactive way of promoting the Center across the country.
Accomplishing this will be a challenge, and I fully expect several issues in scanning each statue and building my virtual environment, since I will be using free versions of Sketchfab, Unity and Autodesk. Another challenge will be planning the time it will take to conduct the scans and then to build the VR components. All are challenges worth tackling to bring something historical to life.
Reality capture technology has come quite a long way from what we know from movies like Avatar, Lord of the Rings and King Kong.
Nowadays there are 3D capture applications available for your smartphone that allow anyone to capture an object in 3D. There are even more apps available for download that will take that 3D file and animate it. These are amazing times for technology.
Often we cheer the innovation of such technology and how cutting-edge or beneficial it is for sharing information, telling stories or providing a unique experience. But what about the long-term ramifications? When it comes to gaming, 3D and virtual reality are the name of the game. But what about everyday life? What about allowing anyone the ability to capture another person in 3D? The question becomes one of privacy and the ownership of a person's likeness. Much as when cameras started appearing on cellphones and the issue of a person being photographed without their knowledge became an ethical discussion, easy access to 3D and virtual reality apps and software is raising the same concern again. What if someone mistakenly makes a 3D capture of another person available publicly? What happens to that person's reasonable expectation of privacy? What if that person is a celebrity? Who then has control of their likeness, and is there any recourse for inappropriate or illegal use of it?
Not long ago (nine years), one of the television stations I worked at began using digital avatars of its on-air news anchors and meteorologists. Their digital selves would walk onto the corner of your computer screen or television set and tell you the weather forecast or notify you of breaking news. Most of the time, though, they were promoting the station's programming. This digital presence did not last long, because the on-air personalities had concerns about what their likenesses would be used for beyond what they had agreed to, and let's not forget the basic issue of compensation. How do you compensate a person for their likeness? Royalty fees? What happens when those on-air personalities move on to other networks? How can they know that their digital selves have been deleted?
3D capture and virtual reality are fun and creative outlets that can make a huge difference in medicine, education and even certain kinds of storytelling. However, treating 3D and virtual capture as casually as we treat cellphone cameras and the now-ubiquitous selfie would be detrimental and controversial.
Most of my posts on this blog are responses to assignments for my graduate degree in communications. I am specializing in journalism innovation, so we talk and learn about all things technology and how it affects legacy media (old media) companies and new media (social). Within that discussion come a lot of ethical considerations, and many times we end up talking about sci-fi books or movies. I never thought I would talk at length about Demolition Man in graduate school. Needless to say, I will be bringing it up again (wait for it).
This post is supposed to address how 360 video and virtual reality will affect my current or future career. Well, it is already affecting my career, which for the last 15-plus years has been television news and sports. It wasn't long ago that we technical directors took 2D video and, through video manipulation and the use of angles, tricked the human eye into seeing a 3D effect move across the screen. Then came HD television screens, which had all the on-air talent scrambling for MAC makeup and an airbrushed tan, but which ended up not being that bad: yes, the picture was much clearer, but you couldn't see every pore on a person's face, as was claimed. Then there was the brief period when television news stations captured the likenesses of their main on-air anchors so that a mini version could walk out onto your computer desktop, or into your favorite daytime show, and tell you the latest breaking news or weather update. That promotional feat lasted about as long as it wasn't annoying (not very long).
Since then, the technology has improved so much in the areas of 360 and virtual video that there may be a real use for it. In my field, I could see it used for special events like Fourth of July fireworks: 360 cameras on a drone as the fireworks launch into the air would be pretty "spectacular," as we so often like to say. Another special event is the Olympics: imagine watching Katie Ledecky speed through her events from the bottom of the pool, or watching the World Cup as if you were standing in the middle of the field.
Are you talking about fluid transfers?
Using technology that can bring events so up close and personal is a serious thing. From a journalism perspective, careful consideration needs to be given to when 360 or virtual reality video should be used to convey information. It should not be used to exploit death or destruction, or to manipulate a person or people. Privacy rights are a formidable ethical issue, as is disclosure of what the virtual story's subject is. It is important that those choosing to transport themselves to a place of stress understand the ramifications. Whether the viewer is experiencing a virtual roller coaster or a natural disaster, care has to be taken to avoid stressing the viewer's health. In the movie Demolition Man (I told you to wait for it), virtual and augmented reality have replaced human connection so completely that people live in a sterile and "clean" world.
I hope that sci-fi prediction does not become reality.
We live in exciting times, and scary times. Technology and innovation have never been more cutting-edge. Who would have thought humankind would be living in artificial worlds through video games and mobile games, and even using them to combat mental illness?
When I hear about 3D and augmented reality, I think of video games like Call of Duty and mobile games like Pokemon Go. Not until recently, with the continuing improvements in wearable VR like the Oculus, did I think a traditional "good" could come from an artificial world. Researchers are using VR to help those with acrophobia (fear of heights), fear of flying and other mental barriers that prevent a person from normal activity. That's the exciting part. The scary part is the possible use of virtual reality and 3D for reporting stories. It would seem that very few types of stories would benefit from such technology, perhaps only those worth bringing the reader intimately into: the opening night at the Metropolitan Opera, say, or Cirque du Soleil, or transporting viewers into the civil war going on in Syria. This is where I think guidelines will eventually have to be set for journalists and content creators.
With enough patience and computer processing power, anyone can make a virtual world of reality or fantasy. The question becomes what is the context and for what purpose.
The very intimacy of artificial reality, whether virtual, augmented or even 360 video, can bring traumatic events front and center, causing the viewer to feel anxiety and stress, and even triggering a response that may be detrimental.