Geoff: And these days, it's commonplace, it's woven its way into the fabric of our lives in ways that are totally insidious and completely invisible.
Caroline: From Cobalt at home, today in Portland and also Australia, this is "Humans of InfoSec," a show about real people, their work, and its impact on the information security industry. My guest today is Geoff Huston. Geoff has been working on the Internet since the early '80s, and in his own words, did his bit to set up the Internet in Australia, as well as to set up the early global Internet in the academic and research community. He's been doing Internet networking ever since, and now works on infrastructure security, including DNS, the Web PKI framework, and routing security, just to name a few. You can find Geoff's various writings and presentations at www.potaroo.net. Geoff also taught me just moments ago that potoroo is in fact an adorable animal, a bit similar to a kangaroo, a wallaby, a bandicoot. Geoff, welcome to our podcast.
Geoff: Well, thank you very much. Let me just quickly talk about the potoroo for a second.
Caroline: Oh yes, please.
Geoff: It's a very small animal. It's like a little mouse, but it's carnivorous. Under threat, like many of Australia's small marsupials these days, but certainly a very rare and very quirky animal. And for some reason, like quokkas and quolls, I happen to just like potoroos, which is why my website got named that way.
Caroline: Geoff, you have been working in the world of IT for more than 40 years. I have an extremely broad question to start out with, which is, how have you observed the Internet technology and the industry around it evolve in this time?
Geoff: I suppose in a lifetime, that's a fascinating change. I started out at a time when real computers were massive pieces of steel built in sort of customized rooms with, you know, raised floors and customized air conditioning and so on. And these beasts cost, you know, anything from a few hundred thousand dollars up to a few million, had a bevy of what we called operators at the time, and were hideously, you know, rare. Universities had one, governments had maybe a few. And so certainly in the late '70s and early '80s, when I was starting out as a student, if you really wanted to work on computers, like so many others of my peers, you tended to be in universities, sometimes in hospitals and sometimes in government departments, but it was kind of rare.
So over the last 40 years, computers got faster and smaller. And even from the early days, Gordon Moore of Intel with Moore's law was quite accurate when he predicted that the number of transistor gates on a silicon wafer would double approximately every 18 months, and it has. And what this really meant was that the price and power of computing just kept on shrinking. That was something that, you know, we knew was gonna happen. What we didn't really believe was the consumerization of computing where we would embed computers so deeply that they became invisible and they became ubiquitous. The whole Dick Tracy watch syndrome, and I am betraying my age, seemed unbelievable in the 1970s and is now a commonplace watch today that literally everyone wears. The whole idea of carrying a supercomputer in your pocket, otherwise known as a mobile phone, absolutely unbelievable. And so these days when we're surrounded by 30, maybe 40 billion devices on this planet, maybe more, that's something that was impossible to wrap our heads around in the 1980s when computers were rare, had a bevy of humans tending them, and quite frankly, absorbed a huge amount of human attention.
And these days, it's commonplace, it's woven its way into the fabric of our lives in ways that are totally insidious and completely invisible. It's like the infrastructure of running water coming from a faucet or a tap in your home. The amount of engineering and technology to make that water drinkable and permanently available is indeed stunning. Metallurgy, hydraulics, geology, physics, engineering, all play their role, and yet the answer is you turn on the tap and water flows. And in the same way, computing is becoming just incredibly invisible and assumed. And that's been a remarkable, a remarkable change. The other thing that's sort of remarkable in the same way is that earlier industries absorbed a huge amount of the workforce. A hundred and twenty odd years ago, one of the largest employers in the United States was the emerging U.S. Steel. Tens of thousands of workers. AT&T, the telephone company at its height, I think absorbed around half a million workers, an extraordinary number at the time.
Does the IT industry employ as many people? Certainly, it seems not to be the case. In some ways, this industry, although it touches the lives of literally everyone, is actually far less human-intensive, but still has the same capital intensity. So it's not that everyone is a programmer, it's not that everyone works for, you know, one of the behemoths in this industry, but nevertheless, its output touches literally everybody on the planet. Those are changes that we could never, ever have guessed at. You read Moore's law, you might have believed Moore's law, but what it really meant? Nah, no one really knew. It required a leap of imagination which was just forbidding. So yes, it's changed a lot. We have changed from being an esoteric bunch of nerds working, you know, in universities and down in the basements into working on stuff that's just the mainstream of today's life. And that is completely unexpected and just a little bit scary at the same time.
Caroline: Geoff, I could listen to you speak for hours and hours. I don't know if it's the Australian accent or kind of the poetic storytelling way in which you communicate, but I find myself just transfixed. A couple of follow-up questions. So one is you mentioned Moore's law, and I'm curious to know, do you think Moore's law gets to a place where it stops or where it slows down, or do you think it's just gonna keep on going for the next 40 years?
Geoff: Yeah, you know, 40 years of doubling every 18 months is just incredibly, incredibly forbidding. And at some point, I think we all think the laws of physics might get in the way. But what we've done even in the last few years has been unbelievable. We started doing silicon wafers by a simple process of almost photography, visible light, where you'd create a mask, shine a bright light through that mask onto a silicon wafer, lay down another layer of metal oxide, and do it again and again and again with more and more layers. But those tracks were quite thick, because visible light doesn't have a very small wavelength. And so to make the chips, or at least the individual tracks, even smaller, instead of using visible light we started using X-ray lithography. With smaller-wavelength light, you can get ever finer etchings on the silicon.
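To put that "40 years of doubling every 18 months" into rough numbers, here is a quick back-of-the-envelope calculation in Python. The 18-month figure is the one quoted above; the result is only an order-of-magnitude illustration, not a claim about any particular chip.

```python
# Back-of-the-envelope: how much growth does 40 years of doubling
# every 18 months imply? (Illustrative only.)
years, doubling_period = 40, 1.5
doublings = years / doubling_period            # about 26.7 doublings
growth = 2 ** doublings                        # roughly 1e8
print(f"{doublings:.1f} doublings -> about {growth:.1e}x more transistors")
```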
And then we keep getting smaller and finer, and smaller and finer. Seven nanometers, and a nanometer is 10 to the minus 9 of a meter. It's incredibly small, a tiny fraction of the width of a human hair. State of the art is now five, and we're looking at three. Pretty soon we're starting to get tracks on silicon that are a few molecules wide, and you sort of think, can you go any smaller? Really? Really? And oddly enough, it keeps on delivering in ways that I suppose the previous generation of physics wouldn't have thought possible. In our quest to put these computers in ever smaller devices, we actually use less power. You know, you can't contain that much heat in a big wafer with lots and lots of chips. So in using less power, you can actually get away with finer tracks and even higher density.
So current chips these days are heading up towards 10 billion individual gates and still getting larger. And it's not just the conventional design of computers that's changed. It's not just that there's an arithmetic and logic unit and a program counter, the PC, you know, the standard classic computer design from, geez, the 1940s. We've gone into fuzzy logic. We've gone into current chips which use neural arrays in silicon to actually create a whole new kind of programming, which is less about determinism and linear program flow and more about posing potential questions which are solved in fuzzy ways in parallel. And so these days, chips are not what you thought of even a generation ago, and the whole issue of reduced instruction sets and so on. The programming paradigms of the '80s and '90s are very quickly falling apart, because what you can do with 10 billion gates on a computer chip is vastly different to what you could do with an early Pentium chip of, say, 20 years ago.
So there's this tension, I suppose, between hardware and software. And the hardware folk got smaller, but didn't really change the fundamentals until about 5 to 10 years ago, when they started experimenting with these neural arrays in hardware and started actually doing custom-designed hardware, which took over some of the harder challenges in software and addressed them in uniquely different ways. So now, when you see your mobile phone with this amazing augmented-reality ability to take a live video feed and annotate, animate, and change it, you are using hardware, not just software, to make that happen, and in ways that, as I said, were inconceivable even a decade ago. So will it continue, was your question. Each time, I think, we look at a generation and its bounds of silicon, we kind of go, "Nah, nah, can't do that," yet we do, yet we achieve weird things.
And some of this is actually, we're still exploring down a very narrow space in a very wide domain. We're still using metal, we're still using, you know, the normal techniques of the old transistor gate from the late 1940s, 1947, '48, and just making it smaller. Oddly enough, as humans, we use a completely different system, biological. It's still electrical, but completely different in its fundamental physical building blocks. And the amount of compute flexibility and power that is in a human brain makes all of these efforts with chips somewhat, you know, antiquated and inflexible. And we haven't even explored that area of, if you will, carbon-based computing as distinct from silicon-based computing. And maybe in the next 40 years, because that's an awfully long time, we might start exploring down these different paths rather than trying to make the metal tracks ever finer, because, yes, physics will get in the way eventually. You can't conduct electrons through ever narrower tunnels without at least using some metallic substrate, and it's gotta stop sooner or later.
Caroline: I continue to just bask in the artistry of the way in which you express yourself. And I am imagining and visualizing the beautiful evolution of this thing, which is technology, computing technology over the past few decades and into the next few decades. Geoff, I now wanna kind of pivot a bit and talk to you about the roles and responsibilities of humans. You had made a comparison about humans working in different industries, and the relatively small number of humans who work on technology today and that strong contrast between the number of people who are working on it and the number of people whom it touches. You and I began our relationship writing emails to each other and you had written to me, "We are drowning in a sea of poor code, weak systems, escalating complexity, and absolutely no commercial incentive to fix anything. We seem to be relying more and more on stuff that is just not worthy of even a single fraction of the trust we imbue in it." Now, Geoff, I'd love to ask you about your experience and your role in establishing the Internet in Australia. And maybe I'll ask you a big question which I can't wait to hear your response to, which is why is all of it so darn insecure? How did we miss the mark so severely?
Geoff: Okay, so kind of two big questions, and I suppose my role in establishing the Internet is kind of easy in so many ways because, you know, what happened in Australia was oddly enough no different to what happened in America and what happened in Europe at the time, and I'm gonna say 1980s. The folk who knew about how to channel bits through wires were the telephone companies. They made, I was gonna say 100% of their money on voice calls, but that's not true. In many cases, they made a huge amount of their money on fax calls, if you remember the fax. They were quite happily churning along, making huge amounts of money, way more than it cost to run a telephone system. And there was certainly a growing lack of patience, particularly with AT&T in America. And eventually, the antitrust issue reared its ugly head in that country, and AT&T agreed to a divestiture.
They would take one company, one large telephone company, and create a number of smaller companies, the Baby Bells. They regionalized America, they created fiefdoms, they weren't allowed to compete with each other, they each had a local geographic monopoly, and the bit between them, the inter-regional traffic, was handled by a trunk operator, which was the vestige of AT&T. Now, the problem was that each of those local companies was making a lot of money. And the real question was, well, what do we do with it? We can't do takeovers of our neighbors. Dear old Judge Greene, who was overseeing that divestiture, wouldn't allow that. So we couldn't compete within the U.S. So what would we do? And what they did do was use other institutional instruments internationally, in particular the World Bank and the International Monetary Fund, to place incredibly strong pressure on other countries to undertake a similar process of taking telephony out of the hands of semi-government or even government monopolies.
And we entered an era of deregulation where rather than just having a national telephone operator, we started to have competitors fronting up. But the issue was, and this was the odd bit, competing on telephone calls was exceptionally difficult. And quite frankly, consumers didn't really get it. So the folk who came up as competitors were trying to take slices of the market off the incumbent, but it wasn't really that satisfactory. On the other hand, computers were getting smaller and smaller and more and more affordable, and computers needed to talk because without it, they're just glorified typewriters. But once you actually allow them to communicate with each other, they can share their data, they can share their information, they can share what people enter in. And all of a sudden, if you let them talk, you unleash this astonishing ability to create an environment where the computers themselves are talking.
Now, normally, the telephone company would've said, "Ah, yes, but you gotta pay our prices and obey our rules and our protocols," which basically insisted on: we are going to build a network that's good for voice and pretty shocking for everything else, including computers. Whereas over in the computer land, we were starting to build campus-wide networks. We were taking those dedicated machine rooms and pushing them out over 10, 20 kilometers, or in your case, 5 to 10 miles, and creating dedicated wiring that didn't just go at the speed of a human voice but went at millions of bits per second. All of a sudden, these new and more powerful and faster computers could speak across these dedicated local networks at astonishing speeds. They could actually move enough information, a real image, a real audio recording, in real time. Now, that was interesting, but at the same time in the U.S. and elsewhere, there was an emerging new breed of top-end computers, because while Apple, Compaq, and a whole bunch of other folk were getting into the personal computer, taking both business and early residential, there was another area of computing, and Seymour Cray in the U.S. was one of the more famous names, trying to make the fastest computer ever, the biggest computer, the one that could process the most.
And the U.S., in a program in the mid to late 1980s, decided that they would buy six of these and spread them across the United States and have them as a collective resource for the academics and researchers within the U.S. But there are a lot more than six universities, a lot more than six locales. So how do you take one computer, or six centers, if you will, and have them accessible across the country? Well, you could call up AT&T and say, "Give us some phone lines." But that would be hopeless, because the phone network wasn't fast enough. It was built for voice, not computers. And so in an astonishing piece of foresight, the National Science Foundation took the program which had been sort of sleepily working away in the Advanced Research Projects Agency of the Department of Defense and said, "Can we borrow that technology and use it to hook up computer centers, supercomputer centers?"
Can we actually make this work for the entire research community of the U.S.? And the answer was, surprisingly, A, yes, and B, in terms of the NSF, it was astonishingly cheap: $40 million, and that was the information revolution for you. One of the wisest $40 million the U.S. government ever spent in the 20th century, I think. Now, everyone else in other countries looked at this going, "Hmm, we can do it too." And what we actually saw in the late 1980s was this emergence of computer-based networking. So, that was one thing, country by country, but within the next, you know, blink of an eye, the answer was, "Well, the telephone lines go across the world, and the submarine cables certainly did, so yes, we can do that too. Why can't we hook up these computer networks?" We started to go to, originally, the satellite providers, but also to the submarine cable folk, going, "Can we lease some circuitry off you?"
And the answer was, "Well, very expensive." We said, "That's okay, we just need capacity." And arms were twisted, money was paid, and we started then hooking things up. So by 1989, the Internet, as we sort of know it now, was taking shape. There were many wires across the Atlantic in the various submarine cables and there were circuits going U.S. to Australia, Japan, South Korea, Singapore, New Zealand. We were starting to hook things up. The National Science Foundation's network was actually only a very small project, and by 1995, the funding was over, but it did its job. The aim of research is to actually create the future, is to actually create industries that are self-sustaining, create viable enterprises and businesses. And by 1995, the Internet no longer needed subsidies, no longer needed, "Oh, well, it's free to use," nothing like it. It had actually penetrated well into business and was busy penetrating all those home computers.
The era of the dial-up modem and the dial-up Internet was well and truly with us. And so the formative work in building these academic and research networks, in connecting universities, very quickly became the first of the Internet service providers. And it was by about '95, '96 that the telephone companies started to wake up and go, "You mean to say there's another use for our wires other than voice?" The answer was, "Well, yes, but you might even be too late to jump on." So we started to create an infrastructure that very quickly outgrew the voice network and grew in a different way. Telephone handsets are dumb, they're just speakers and microphones, there's nothing else in them. All of the intelligence and security of the phone system is inside the network. It's the network that's the expensive thing. It's the network that actually demands all those people to run it.
Computer networking is different, because the computers at either end are extraordinarily capable. And that was the one piece of insight that happened way back in the 1960s: that instead of creating state in the network, building a special-purpose road for each conversation and then tearing it down when you hang up, what if you allow every element of that conversation, every word, to be individually wrapped up in bits with the address of the destination in the header of that packet, and splay those packets out into the network almost as independent entities? Packet 1, packet 2, packet 3, and simply have the network treat each of those packets independently and get them closer to their destination however it can. Packets might get lost, packets might get reordered, and that's okay, because at the other end isn't a human with very limited capability to sort of recreate the data in their brain into a signal. Instead, it's a computer, which is endlessly patient and endlessly capable of taking the sequence 1, 3, 5, 2, 4 and recreating 1, 2, 3, 4, 5, you know, in microseconds, milliseconds, whatever.
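As a minimal sketch of the idea Geoff is describing, here is how a receiving computer might put out-of-order packets back together. The packet format is entirely hypothetical; real protocols such as TCP do this with sequence numbers and a great deal more care.

```python
# Hypothetical packets: (sequence number, payload). On a real network each
# would also carry the destination address in its header.
packets = [(1, b"Hel"), (3, b"wor"), (5, b"!"), (2, b"lo "), (4, b"ld")]
# They arrived as 1, 3, 5, 2, 4 -- the network made no ordering promises.

# The endlessly patient computer at the edge reorders them by sequence number.
reassembled = b"".join(payload for seq, payload in sorted(packets))
print(reassembled.decode())  # -> "Hello world!"
```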
So all of a sudden, you didn't need power in the network, all you needed was brute capacity and nothing else. Where you actually placed all the intelligence in the Internet was outside the network in the computers at either edge, right? So all of a sudden, you are in a different paradigm. Now, what drives these computers at either edge? Well, software, it's all software. And this I suppose is where things get a little bit scary because while the hardware of computing has just gone leaps and bounds, just absolutely astonishing, taking simple concepts and replicating them a billion-fold, software is still largely a human activity. It's all about encoding a logic flow. And we haven't got any better. As we create more and more complex systems, our ability to understand the system and to ensure it behaves in ways that we envisaged and predicted is endlessly compromised.
Why does Apple or, you know, Android put out software updates every few weeks? Because they wanna make it better? I don't think so. Because yet more problems have emerged with the mind-boggling complexity of the software code that actually sits inside these devices, because we're still using the same coding languages. We're still reliant on building ever-larger systems and having humans try and understand the logic flow. And to make it just slightly worse: originally, our code was linear, start here, end there, loop a bit in the middle. But now, of course, we've gone asynchronous, we're doing threading, where we're trying to create little codelets that respond to individual circumstances, we're multitasking, just like human brains do. And so instead, what we're creating is this astonishing complexity of code where individual elements of code can be activated in unpredictable ways. There is a difference between complex and complicated. Complicated is just hard to understand.
But as you understand it, you can see how it works. Complex is an entirely different word.
Complex is a system that is capable of emergent behavior, capable of acting in ways that were inconceivable in the original design, because complex is, well, complex. And once you start creating complex systems, you actually start to get exposed to areas where you never quite envisaged the way it behaves. This would be fine, except these very systems are open, accessible, and used by all kinds of folk with all kinds of motives. And while it's been astonishing as to what we can do, the threat landscape is astonishing as to what it can do at the same time. These days we don't think of computer viruses as fun little things to play with, and oh, it's just kiddies having fun. I can take over the traffic system of, I don't know, San Francisco and make all the traffic lights go green at the same time, or red. You don't want that to happen. Water supply, electricity, the fundamentals of the way we actually drive our society are now on the Internet and are now driven by these systems which are simply not up to the job.
The best example I think I can give you is actually back to Apple and its iPhone, because this is one of the first of the examples where not only did they take away all the buttons, they took away all the programmability. It's very hard to actually see the underlying core, the Unix box at the heart of that iPhone. Yet if you could, and if you could get exposed to it, you could load your own apps. You don't need to buy your apps through the Apple store, you can do what you want. If you jailbreak your iPhone, then all of a sudden, Apple loses a huge amount of potential revenue.
So it's in Apple's commercial interest to keep these phones hermetically sealed. And it's in the interest of others to make sure that every time there's a new release of iOS, it gets cracked, it gets broken, you know, the jailbreak is out there. And so far the track record is perfect, in that every release has been broken. The best efforts of a company, a large company whose very commercial interests lie in preventing this, have not managed to make it impossible to do so. Their failure rate is complete. Why is that? Software isn't just, "I wrote a piece of code." That happened back in the '40s, maybe not since then. Because I wrote a library and you used it, you made another library and then Bob used it, and Jim used it, and Joe used it, and Carol used it, and now let's use that, and we start doing what I suppose science has done all the time: build on the work of others.
But the problem is that if there are any kind of flaws or gaps in that towering edifice of building on the work of others, it's very easy to make it all come crashing down. And this is almost fundamental: no one writes a complete environment. We use libraries for almost every single part of this, certainly for the hard bits. And the real question is, how good are those libraries? And the answer is, I've only got 40 hours of work in the week and I've gotta get the code done by Thursday, and I'm already out of time. I'm just gonna use this library. And yes, I hope they've done their work, because my work then depends on it. And that happens all the time. And so all of a sudden, we're kind of relying on everyone else's work. How can we fix that? Well, it'll take me a year or two to do a complete audit of my code, another year for your code, etc.
Who's gonna pay for that time? Who's gonna actually spend the time and pause and actually go through program correctness? And even if we spent all that work, would we find all the bugs? The answer is no. And nobody cares. Nobody cares. The classic example of this is actually the padlock in your web browser. You go to your local internet bank, dubdubdub.mybank.whatever, and up comes the splash page you are used to, and a friendly gray padlock appears in the browser bar. Obviously, it's my bank. Or is it? Why do you think it's your bank? Well, I've typed stuff into my browser and stuff appears on my screen. Why wouldn't it be my bank? Well, you are relying on a whole bunch of people doing their job perfectly. Not well, not really well, perfectly. Because underneath this system is a set of cryptographic operations where your bank went to somebody, and you don't know who, and paid them some money, and that entity created, if you will, a digital puzzle, a certificate. And that certificate is part of that certificate issuer's set.
Your phone trusts the certificate issuers, of which there are only really about 1,000 on the planet, to do everything perfectly. Now, if they ever get lied to or hoodwinked, or take a bribe, then you are in trouble, because that's no longer your bank, that's someone who's lied to somebody. So, some years ago, your browser trusted the Chinese Network Information Center to never lie, which was a fine thing to do, I guess, until the Chinese decided to do a favor for their mates in Egypt and mint a fake certificate, a fake piece of cryptography, that translated to google.com. And the Egyptians said, "We wanna see what our citizens are saying in Gmail. If we trap all the outgoing traffic from Egypt and siphon it off into our little black box and pretend to be Google, because we've got a certificate that their browsers trust, we're cool. We can get away with this, and we can sniff everyone's usernames and passwords." Oh dear.
When this happened, the Chinese got removed from being part of the trusted set of folk who can issue certificates, and fair enough too, but it's not the only case, it's one of many. There was the tax office in Holland that trusted an entity called DigiNotar, who got severely hacked because their systems were online, and fake certificates were minted. And in this case, the Iranians used it to spy on their citizenry. Not a good outcome. So, while we trust it implicitly, you're trusting in the actions of a whole bunch of folk you've never met and never will to never, ever lie about operations that you have no knowledge of. Now, if that sounds a lot like black magic, you're probably right. And that, you know, is the complete foundation of what you call security on the Internet.
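For a concrete feel of the trust arrangement Geoff describes, here is a small Python sketch: your machine ships with a set of certificate authorities it trusts unconditionally, and a TLS connection succeeds only if the server's certificate chains back to one of them. The host name is just an example and the snippet assumes network access; it is an illustration, not a security tool.

```python
import socket
import ssl

# Load the trust anchors the local system trusts unconditionally.
context = ssl.create_default_context()
print(len(context.get_ca_certs()), "certificate authorities trusted by default")

# The handshake below only succeeds if the server presents a certificate
# that chains back to one of those authorities (example host, port 443).
host = "www.potaroo.net"
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        issuer = dict(pair[0] for pair in cert["issuer"])
        print("certificate issued by:", issuer.get("organizationName"))
```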
It's not very good. Could we fix it? Possibly. Do we want to? Oh, we're talking about a system used by billions of people every single day. How do you forklift a completely new piece of, you know, machinery into that? Most of this is based around prime number cryptography: the fact that a very, very big number is extraordinarily difficult to break into its prime number factors, until you have a quantum computer, and as soon as you have a quantum computer, that takes a few milliseconds at most. Everything we do today is basically reliant on someone else not making quantum computers work. Gee whiz, feeling even better now. So, you know, in our haste to build a new world in the last 10 years, and we've built an extraordinary world, we've cut corners, we've made assumptions, we've simply taken a system that was very small and scaled it up well beyond its engineering tolerance. And then we wonder why it's looking a bit creaky and groany. The answer is it's just too big for what it's doing. You know, what we're trying to make it do is well beyond its original design tolerance. We're pushing it too hard and we have no motivation to make it any better. That's a problem.
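A toy example makes the "prime number cryptography" point concrete. This is textbook RSA with deliberately tiny numbers (real keys use moduli thousands of bits long); the security rests entirely on nobody being able to factor n back into p and q.

```python
# Toy RSA with tiny primes -- illustration only, never use numbers this small.
p, q = 61, 53
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent (modular inverse; Python 3.8+)

message = 65
ciphertext = pow(message, e, n)          # encrypt with the public key
assert pow(ciphertext, d, n) == message  # decrypt with the private key

# Anyone who can factor n recovers p and q, recomputes d, and reads everything.
# With 61 and 53 that's trivial; with 2048-bit primes it's infeasible today,
# unless, as Geoff notes, a large enough quantum computer arrives.
```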
Caroline: Yes. Yes, it is. Geoff, I think that so many folks who listen to this podcast want to help, want to do good, want to defend, want to protect. And what advice might you give to folks who are working in the field or who might be thinking about getting into the field?
Geoff: I work on the topic of infrastructure security, which is around the areas of routing, the domain name system, and some parts of the PKI. And my advice to folk who are particularly working in applications is: assume that I'm really bad at my job. Do not count on routing security to bail you out. Do not count on the Domain Name System to always deliver the right answers. Do not even count on the Web PKI. Try and build your application to work in a uniquely, toxically hostile environment, where infrastructure security is almost a contradiction in terms. Try to survive without relying on others if you really wanna make an application that actually has some resilience and integrity, because outsourcing that and saying, "Well, if I send my packet to the right IP address, obviously the answer is believable," is such a bad assumption in so, so many ways. Don't do that.
Try and create systems where the data is signed, and the folk who receive your information, that data, can check its authenticity, currency, and completeness; that no one else in the middle has lied about it, altered it, or even observed it. Try and make systems that stand up on their own and don't rely on the work of others in order to function with integrity. And so, if you will, infrastructure security is, and should be assumed to be, a failure, because others should really not rely on it being perfect all the time. What they really should rely on is that it's imperfect at the times they never want it to be, and try and create the alternatives, the mechanisms that shore up the application, going, "Even if my packet got redirected, I don't trust the answer." So systems like security for the DNS, signing your domain name, and having clients actually validate those signatures make the DNS so much better than if you don't.
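As a minimal sketch of "sign the data itself", here is an example using the third-party Python cryptography package and an Ed25519 key. It assumes the receiver already obtained the sender's public key through some trusted path, which is exactly the hard part the surrounding discussion is about, and the record contents are made up for the demo.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher signs the record it hands out (key generated here for the demo).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

record = b"name=potaroo.net;address=203.0.113.1;expires=2030-01-01"  # hypothetical data
signature = private_key.sign(record)

# The receiver checks authenticity and integrity before using the data,
# regardless of which path or middleware the record travelled through.
try:
    public_key.verify(signature, record)
    print("signature valid: the data is exactly what was published")
except InvalidSignature:
    print("signature invalid: altered in transit, do not trust it")
```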
And it's really disappointing to see that less than 10% of domain names are signed, and not one of them is my bank's, by the way. And it's kind of, why aren't you doing that? Don't you care about me? Why do you think that the infrastructure just works when it clearly doesn't? And so, you know, some of these bottom lines are about: don't trust anyone else, and that's almost the paranoia that you need in any aspect of InfoSec. And don't assume that everyone else is motivated by the same motivations as you, working honestly and doing a good job. They might be doing a good job, but it might be totally antithetical to what you are trying to achieve. It's a big Internet out there and we're all connected. And so in some ways, the answer is, by all means try and secure it up, but if you're at the application level, assume absolutely nothing.
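One way to see whether the signatures Geoff is talking about actually protect a given name is to ask a validating resolver and check the AD (Authenticated Data) flag. Here is a rough sketch with the third-party dnspython package; the name and the resolver address (Google's 8.8.8.8) are just assumptions for the example.

```python
import dns.flags
import dns.message
import dns.query

# Ask a validating public resolver for an A record, with DNSSEC requested.
query = dns.message.make_query("potaroo.net", "A", want_dnssec=True)
response = dns.query.udp(query, "8.8.8.8", timeout=5)

# The AD flag is set only when the resolver validated the DNSSEC signatures.
if response.flags & dns.flags.AD:
    print("answer was DNSSEC-validated")
else:
    print("unsigned or unvalidated: you are trusting the path, not the data")
```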
So what do we see now in the world around us that is actually reacting in these ways? If you've ever looked at the latest versions of iOS, you'll see private relays coming up with Apple. What they're now doing is actually creating secure channels through the infrastructure and trying to make sure that middleware can't interfere, sort of meddle with the data, or even observe it in ways that were not intended. And because they're doing a double wrapping, not even the endpoints know who's asking what question; it's obscured from each of them. Sure, that sounds like an extraordinary length to go to, but if you are carrying the valuable information of everybody, you can't be too careful, can you? And this is where that set of architecture is heading: assume nothing, trust nobody, make sure you validate the answers before you use them.
The same thing with the DNS and with routing: believe nothing. And the dear old Web PKI, oh dear, oh dear, oh dear. The DNS folk came up with an interesting answer, because the Web PKI is a very, very odd form of security. I get a domain name from my friendly domain name registrar. Hi, this is my name, it's potaroo.net. And then I go off to somebody else, somebody completely different, who wasn't the person who gave me that domain name. And I say, "If I pay you money, will you say that this is my domain name?" And they say, "Well, pay first." Okay, here's my credit card. "Okay, yeah, Geoff, that's your domain name." How did they know? Because I've told them. What if it wasn't my domain name but I paid them money anyway? Not good. Not good. And it's been hopelessly abused time and time again.
Why aren't these bits of cryptographic assurance inside the domain name system to start with? Why did we need a somewhat tawdry, corrupted, and antiquated third-party commentary on the name system as the foundation of today's Internet security? Well, we don't, we shouldn't. We should create a better mechanism that has a tighter framework of authenticity. What if I put these certificates into the DNS itself, so that not only does it have IP addresses, this is my name, this is its IP address, but also this is my name and this is the public key that I'll be using for services that use that name? This is how you know it's me and not somebody else you're talking to. Sounds easy, right? Well, big Internet, lots of players. Change is hard. I wish it was easy. How do we make it better? By not assuming that everyone's doing a good job today, and by making sure that we keep on pushing to do it better. Because the assumption that everyone's doing a good job today seems to be broken minute by minute with every single hack, every single zero-day exploit, every single weakness that gets exploited. So, if we are gonna do a better job, you know, the first thing is: don't assume that we're doing a good job today, because we're not.
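The "certificates in the DNS itself" idea Geoff sketches exists today as DANE, with TLSA records binding a service's certificate or public key to its name. Here is a rough look-up sketch with the third-party dnspython package; the name below is purely illustrative, and relatively few domains publish TLSA records.

```python
import dns.resolver

# TLSA records live under a service label, e.g. _443._tcp for HTTPS.
name = "_443._tcp.www.example.com"  # hypothetical name for illustration
try:
    answers = dns.resolver.resolve(name, "TLSA")
    for record in answers:
        # Each record ties the certificate (or its public key) to the DNS name.
        print(record)
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
    print("no TLSA record published for this name")
```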
Caroline: Geoff, thank you so much. I am so grateful to you for spending this time with me today. I'm so grateful to your family member and to my teacher who connected us. Thanks for being with me, for telling me about what you think, for sharing your story. And thanks for teaching me about the potoroo, which I did not know about, and which I will tell my four and seven-year-old children about. They will be very delighted. Geoff, thank you.
Geoff: Thank you. It's been a pleasure.
Caroline: "Humans of InfoSec" is brought to you by Cobalt, a Pentest as a Service company. You can find us on Twitter @humansofinfosec.