MITRE's Tech Futures Podcast

NASA at the Edge

June 09, 2022 MITRE Season 1 Episode 1

Fresh off his Ph.D., a gifted engineer harnesses the power of Edge Computing to bring NASA onto the web.

By Brinley Macnamara

Guests: Dr. Nitin Naik, Mark Ambrefe and Mike Vincent

Brinley Macnamara (host) (00:00):

Hello and welcome to MITRE's Tech Futures Podcast. I'm your host Brinley Macnamara. At MITRE, we offer a unique vantage point and objective insights that we share in the public interest. And in this podcast series, we showcase emerging technologies that will affect the government and our nation in the future. Today, we're talking about this story of how a very special MITRE engineer used edge computing to bring NASA onto the web. Through my storytelling, I hope to inform you about what edge computing is and how it is immediately applicable to MITRE's U.S. government sponsors.

Brinley Macnamara (host) (00:36):

But before we begin, I want to sincerely thank Dr. Kris Rosfjord, the Tech Futures Innovation Area Leader in MITRE's Research and Development Program. Last year, Dr. Rosfjord provided the funding for me and a fellow Networks Engineer, Michael Vincent, to research emerging trends in cloud computing, which, ironically, we discovered revolved heavily around edge computing. Recently, she asked me if I'd be willing to create a podcast series as a way to tell the world about our edge computing project, as well as other Tech Futures research projects, and she quickly provided the funding and support to make it happen. Now, without further ado, I bring you MITRE's Tech Futures Podcast, episode number one.

Brinley Macnamara (host) (01:34):

In the early 2000s, Dr. Nitin Naik had just completed his PhD in computer science. He was an aggressive forward thinker and poised to take on a new challenge, especially one that involved the web. So when NASA offered him the job of overhauling the space agency's web infrastructure, he jumped at the opportunity.

Dr. Nitin Naik (01:53):

NASA actually asked me to come over to Washington, D.C. and be the Associate Chief Technology Officer responsible for all the web. I basically built a whole team to enhance NASA's web presence in 2003. And when I joined NASA, NASA had almost 3,000 webpages, sorry, web entry points, and we had close to about six million webpages. So we started creating an organization structure. And so we launched nasa.gov with a whole new infrastructure using, I don't know if you're aware of Akamai, which is a global caching mechanism.

Brinley Macnamara (host) (02:45):

Oh, yeah.

Dr. Nitin Naik (02:46):

Yeah. So Akamai was small then, and there was a competitor to Akamai called Speedera, so we contracted with Speedera. And technically, that was the first, I would say, instantiation of cloud computing at the edge, where basically they enabled us to have a small presence in the data center but reach a wide audience with static content across the web.

Brinley Macnamara (host) (03:18):

The first major stress test of this technology came in 2003, when the Space Shuttle Columbia, carrying a crew of seven astronauts, disintegrated as it reentered the Earth's atmosphere, killing everyone on board.

Dr. Nitin Naik (03:32):

We had the Columbia tragedy and the NASA website, actually it was the day we launched it, we were getting close to about 50,000 hits a second. And because of Speedera caching, we could sustain that traffic.

Brinley Macnamara (host) (03:47):

Less than a year later, Speedera was put to the test again when NASA decided to live stream the Mars rover landing. Dr. Naik and his team worked with Speedera to ensure this historic event could be streamed over the internet for the entire world to watch in real time.

Brinley Macnamara (host) (04:14):

So, what do content delivery networks have to do with edge computing anyway? Well, if you define edge computing simply as the delivery of internet services from endpoints that are physically closer to end users than a typical origin server would be, then content delivery networks, whose purpose is to cache web content so that end user requests can be handled closer to the users themselves, are a perfect example of edge computing in the so-called wild. The irony is that NASA was live streaming video of its Mars rover over its CDN, Speedera, before AWS even existed and before the term "cloud" had become part of the day-to-day vernacular of every software developer on Earth.
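The caching idea described above can be sketched in a few lines of Python. This is a purely illustrative toy, not Speedera's or Akamai's actual design: an edge node serves a cached copy of origin content until a time-to-live (TTL) expires, so repeat requests never travel back to the origin server. The names (`EdgeCache`, `fetch_from_origin`) are hypothetical.

```python
import time

class EdgeCache:
    """Minimal sketch of a CDN edge node: serve cached copies of
    origin content until a time-to-live (TTL) expires."""

    def __init__(self, fetch_from_origin, ttl_seconds=60):
        self.fetch_from_origin = fetch_from_origin  # callable: url -> content
        self.ttl = ttl_seconds
        self.store = {}  # url -> (content, expiry timestamp)

    def get(self, url):
        entry = self.store.get(url)
        if entry and entry[1] > time.time():
            return entry[0], "edge-hit"        # served near the user
        content = self.fetch_from_origin(url)  # round trip to the origin
        self.store[url] = (content, time.time() + self.ttl)
        return content, "origin-miss"
```

Under this model, only the first request for a page hits the origin; every request within the TTL window is an "edge-hit", which is how a small data center presence can absorb tens of thousands of requests per second for static content.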

Brinley Macnamara (host) (04:53):

But if edge computing is nothing new, then why is it such a hot buzzword in this day and age? To answer this question, I have to take you back to around the time the Mars rover was reaching its destination. In the early 2000s, an online bookseller was growing its business and running into a challenge that nearly every software company faces at some point: developers were frustrated with having to repeat the same tasks to provision infrastructure every time they wanted to build a new service. So a group of them got together and solved this internal problem by developing a set of APIs with intuitive interfaces for provisioning and managing infrastructure.

Brinley Macnamara (host) (05:30):

It wasn't long before they identified an opportunity to monetize these APIs by offering them, along with the abstracted compute in their data centers, to customers who were facing the same engineering challenges. They called the service the Elastic Compute Cloud, or EC2 for short. Incredibly, what started as a need to solve an internal problem at Amazon is now a multi-billion-dollar industry that has ushered in a revolutionary model for application development and hosting, also known as cloud computing. And the appeal of limitless scalability in the so-called cloud has led to its global adoption. Nevertheless, there is one major catch: most of the commercial cloud is closed source, so cloud computing customers must place a great deal of trust in their cloud service providers to get security right. Officials in the upper echelons of government agencies, including Dr. Naik, were quick to raise these concerns to AWS in early talks about the government transitioning workloads to the cloud.
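The core idea, replacing repeated manual provisioning with a single API call, can be sketched as a toy in Python. Every name here (`ComputeCloud`, `run_instance`, `terminate_instance`) is hypothetical and deliberately simplified; this is an illustration of the "infrastructure behind an API" pattern, not the real EC2 interface.

```python
import uuid

class ComputeCloud:
    """Toy sketch of infrastructure-as-an-API: one call stands in
    for all the manual steps of racking and configuring a server.
    Hypothetical names; not the actual EC2 API."""

    def __init__(self):
        self.instances = {}  # instance id -> metadata

    def run_instance(self, instance_type="small"):
        # Provision a "server" and hand back an opaque identifier.
        instance_id = f"i-{uuid.uuid4().hex[:8]}"
        self.instances[instance_id] = {"type": instance_type, "state": "running"}
        return instance_id

    def terminate_instance(self, instance_id):
        # Release the capacity; this elasticity is the "E" in EC2.
        self.instances[instance_id]["state"] = "terminated"
```

The design point is that the caller never sees the hardware: a developer asks for capacity by type, gets back an identifier, and releases it when done, which is what made the model both self-serviceable internally and sellable externally.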

Dr. Nitin Naik (06:31):

The concern was security. At that time, I was working at the IRS, and the IRS handles tremendously sensitive PII data. And so actually, I attended the meeting with the CISO of the IRS, and we peppered AWS with all the questions of, "Okay, how can I ensure that the data is secure?" "How can we ensure applications are secure?" And at that time, they had limited answers; they were not fully evolved. They were more on the notion of virtualization as the selling point.

Brinley Macnamara (host) (07:15):

These security concerns were prevalent enough to inspire a group of engineers to develop a totally open-source alternative to AWS. These engineers, all from NASA, met each other in an exclusive commune outside of Silicon Valley known as the Rainbow House. And as the story goes, their idea for a completely open-source platform for spinning up on-premises clouds spawned from late-night conversations at the Rainbow House. They quickly moved to translate this idea into code, Python code, to be more specific.

Mark Ambrefe (07:45):

You're probably going to start seeing more of OpenStack and OpenShift going forward with your private cloud infrastructure, i.e., something that walks and talks like Amazon Web Services but, at the end of the day, is running on your own hardware. And actually, what's really cool is that OpenStack in particular was originally designed by Rackspace and NASA to compete with AWS. So if you run something like that, you're going to get a pretty close fundamental experience out of OpenStack.

Brinley Macnamara (host) (08:30):

That's Mark Ambrefe talking; he's a fellow Networks Engineer. He first got into networking in high school and always has interesting opinions about new trends in networking and cloud technology.

Mark Ambrefe (08:42):

So something like that could conceivably be a pretty good middle ground to say, I want cloud, but also, I'm not sure how much I want Amazon monopolizing my data.

Brinley Macnamara (host) (09:00):

So I'm wondering if you're aware of OpenStack and if you are, if you think it's a legitimate competitor to AWS?

Dr. Nitin Naik (09:09):

Yes, I am aware of OpenStack and sure it could be a competitor, but I think that is part of the evolution of the whole technology industry. I think we tend to get better, and better and better, and we always look for efficiency and we always look for the next sort of operating model that is coming about.

Dr. Nitin Naik (09:32):

So, you even hear of data centers being immersed in the ocean. There will always be evolution in technology, evolution in processes to help with that. And I think ultimately now, we are going to also see evolution. We have seen evolution in data handling because, ultimately, data is now the gold for the agency; it's no longer the applications, because you get platform as a service in the cloud.

Brinley Macnamara (host) (10:05):

The growth of open-source technology like OpenStack for hosting on-premises clouds is a clear sign that virtualization technology has matured enough for cloud customers, including MITRE's sponsors, to start considering hosting their own API-accessible clouds in their own data centers at the edge. Some argue that this development could dramatically enhance privacy by giving cloud customers back control over where their data lives. That said, I remain skeptical that this utopian reality will ever see the light of day, as public cloud providers like AWS have moved equally fast to implement the security controls necessary to make their infrastructure usable by almost every U.S. government agency.

Brinley Macnamara (host) (10:46):

Moreover, our sponsors' risk tolerances have evolved in tandem. Moving forward, they'll be increasingly concerned with variables like the cost and performance of making their data and services accessible, rather than where those data and services actually live. My former Project Leader and Lead Networks Engineer, Michael Vincent, emphasized this new normal in a conversation we had recently.

Michael Vincent (11:10):

The physical assets, where they are, is becoming much less important. On the edge side, it's always about, I guess, where the computer is, as long as you can get it where it needs to be and you can get connectivity, and there are so many different ways to get connectivity nowadays with 5G and the access speeds that you can provide there.

Michael Vincent (11:31):

It's moving much more towards the services over top of it and the data that needs to be moved to and from, and analyzed and crunched, and asked for answers and given answers from.

Brinley Macnamara (host) (11:44):

Yeah.

Michael Vincent (11:44):

It's much less about sort of where the network is, because the network can go anywhere nowadays. It's becoming much easier to do that than 20 years ago when I first started this.

Brinley Macnamara (host) (12:00):

This podcast was written by me. It was produced and edited by Dr. Kris Rosfjord, Dr. Heath Farris, and myself. Our guests were Dr. Nitin Naik, Mark Ambrefe, and Michael Vincent. The music in this episode was brought to you by Ooyy, Trevor Kowalski, and Truvio. We'd like to give a special thanks to Dr. Kris Rosfjord, the Technology Futures Innovation Area Leader for all her support. Copyright 2022, MITRE PRS # 21-2930, February 8th, 2022. MITRE: solving problems for a safer world.