Trust

Discussions about cybersecurity frequently include the word trust. It may appear on its own or as part of phrases like "Trust No One," "Zero Trust," or "Trust but Verify"... I recently spent some time thinking about trust and how it relates to cybersecurity, and the more I thought about it, the more I realized that a deep dive into who we are trusting (or not trusting) was well worth the effort.

Please note: This article may get a bit scary, but that's not the point. The point is to understand how things work at a deeper level so you can make informed decisions.

First, before we cross over into the cybersecurity side of things (because that gets complicated quickly), let's look at a basic example of how we decide to trust something: a wooden ladder.

In the simplest case, if I go out into the forest, cut down a tree myself, mill lumber from it, check that it's sound, design and build the ladder myself using appropriate techniques and materials, and safety-test it before I use it... I don't really need to trust anyone other than myself (assuming I'm any good at all the skills I just listed). Of course, hardly anyone would do all of this themselves. The far more likely scenario is that you hop down to the local hardware store and purchase a ladder. When you do it this way, you're actually making several decisions involving trust:

  • You're trusting whoever milled and supplied the wood to have selected appropriate materials
  • You're trusting that the ladder's design was appropriate and tested, including that the weight rating is correct
  • You're trusting that it was built to specification using the correct materials
  • You're trusting that it wasn't damaged during transport

When you think through even this simple example, you're actually trusting a lot of different companies, and probably a certification body or two... and that's just a ladder. But we certainly want our ladder to work correctly and remain safe through normal use, so that trust becomes pretty important.

Now, let's see how this sort of trust plays out in the world of computers and technology (hint: it's exponentially more complicated). It's so much more complicated that I struggled to come up with any example remotely close to the ladder scenario. The best I could come up with is a single USB cable (yes, the cable you charge your phone with).

A basic USB 2.0 cable is just a small bundle of wires (power, ground, and a data pair) with a specific type of connector on each end. In theory you could make one entirely yourself, but you would need to somehow source the raw materials for wire and plastic, have a metal forge, plastic fabrication equipment, and a ton of other precision tools and skills (good luck with that). So instead, we buy a manufactured USB cable and again make a bunch of trust assertions:

  • We trust the cable is designed and built to be fit for purpose (it won't short out our phone and the wires are attached to the right connections)
  • We trust it will function structurally (not break apart with normal use)
  • We trust that it's not a fire-hazard
  • We trust that the materials are not toxic to us
  • We also trust that no one added hidden circuitry to the cable that secretly captures our data or manipulates the equipment we plug it into (yes, rogue USB cables really are a thing!)

... and suddenly just because it's technology, we already had to have more trust for a simple cable than we did for a ladder.

Now, if that got you thinking, remember this was the simplest example I could think of. 

Let's look at something much more complicated: a one-line computer program that you write yourself!

For the uninitiated: when you first learn a new programming language or are testing a new device, it's common practice to have it print out "Hello, World!". So, in keeping with that tradition, let's look at everything we are trusting when we create our own single-line program in the Python programming language that does exactly this.

Create a new plain text file and type in the line (don't copy and paste it, I'll discuss why later):

print('Hello, World!')

Then save this as hello.py and run it using the Python interpreter. The output looks like this:

Hello, World!

Amazing! But let's think about what trust assertions we made for something as artificially simple as this program:

  • We trusted that the keyboard connected to our computer was working and fit for purpose
  • We trusted the text editor we used to create the file
  • We trusted that when we saved the file it would only create a new file and not impact any other data without some kind of warning
  • We trusted that the python language would interpret the code and function as intended
  • If you followed my directions and typed the line in yourself, you trusted those directions too
  • We trusted that Python was actually installed correctly on the computer (in this case I installed it myself - but that may not always be the case - and it means I trusted that I installed the real Python and not a fake version laced with malware)
  • We trusted that the command-line program (PowerShell in this case) was doing only what it should
  • We trusted that Python would run correctly on the computer's operating system (Windows 11 in this case)
  • We trusted that running this program wouldn't damage our computer, corrupt any data, cause any crashes, etc.
  • We trusted that the display showed us the correct output

(we actually trusted a great deal more than just this... but you get the idea...)
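One of those trust assertions - that we installed the real Python and not a fake - is something we can partially verify ourselves, because publishers like python.org post a SHA-256 hash for each download. Here's a minimal sketch of how that check works; the file and expected hash below are placeholder demo values, not real python.org values:

```python
import hashlib

# Stand-in for a downloaded installer. In real life this would be the
# file you just downloaded; here we fabricate one so the sketch runs.
with open("installer.bin", "wb") as f:
    f.write(b"hello")

# The hash the publisher says the genuine file should have. This value
# is simply the SHA-256 of b"hello" for the demo, not a real published hash.
expected = "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"

# Hash the file we actually have and compare.
with open("installer.bin", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()

print("checksum OK" if actual == expected else "MISMATCH - do not install!")
```

If the computed hash matches the published one, the file you have is bit-for-bit the file the publisher released. Of course, you are now trusting the website that published the hash - trust never disappears entirely, it just moves.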

That got a lot more complicated pretty quickly, and we haven't even scratched the surface yet. The reality of most computer programming is that, similar to the ladder and USB examples, hardly anyone writes everything completely from scratch. Instead, developers take pieces of code written by others, packaged up to save time, and connect them together. These packages are called libraries.

Libraries of computer code allow developers to save a great deal of time that would otherwise be spent writing code for tasks that are either very common or very complicated and require special skills. What trust assertions does using a simple library add to our mix? Suddenly we are trusting every single developer helping to write that library - their skills and their intentions. And that's just for a simple library. Complex libraries use OTHER libraries and have multiple developers, so the required trust increases exponentially. Now consider this: most modern software of average size and complexity is composed of hundreds or even thousands of libraries.
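You can watch this chain reaction happen even with Python's built-in libraries. A minimal sketch: import one "simple" standard-library module and see what else quietly comes along for the ride (the exact list varies by Python version):

```python
import sys

# sys.modules lists every module currently loaded. Snapshot it first.
before = set(sys.modules)

import json  # one seemingly simple standard-library import...

# ...quietly pulls in several more modules, each written and maintained
# by other people whom we are now implicitly trusting.
new_modules = sorted(set(sys.modules) - before)
print(f"'import json' loaded {len(new_modules)} additional modules:")
for name in new_modules:
    print(" -", name)
```

On a typical install you'll see entries like json.decoder, json.encoder, and json.scanner appear - and json is a tiny library. A third-party package can pull in dozens or hundreds of these.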

Now is about when I get the question:

"So, let me get this straight. You're saying we need to trust every single developer maintaining every single library built into any software I install and use on my system(s) - including all the companies involved, in addition to the one that eventually put it all together, and in addition to the platform(s) it all runs on and the network(s) used for communication?"

My answer: "Yep. Actually, it's a lot worse than that, because all those pieces keep changing; they can be bought or sold, developers come and go, it's all in flux, and you may not want to trust some of the governments in control of where the data flows either. AND you also need to trust all the hardware components, networks, and everything else involved in actually getting and using the software."

This is where I point back to the beginning of the article... to that bit where I said it might get scary. Now you know what I mean!

All of this is so that you understand how things are actually built and how they function. I've really only scratched the surface, and it only gets worse as you dig deeper. The key takeaway is that the software, systems, and technology we use hardly ever come from a single "source"; they are almost entirely assembled from a myriad of sources, both hardware and software, and for us to truly trust the end result, we are implicitly trusting all the ingredients.

Remember in the "Hello, World!" Python example when I said we actually trusted a great deal more than what I listed? When you think about libraries, consider that both Microsoft Windows and Python are modern software, written with libraries, so they also have many implicit trust assertions built in - far too many to list. The same is true for your smart TV, your car, your phone, and any technology that runs on software (which is basically all of it these days).

Just one more example (then I promise we'll move out of the gloom and into what we can do about all this). When Microsoft (a company most other companies generally trust - yes, the logic there is debatable, but still, they do) implemented their "new" Outlook app, they identified over 750 other companies that would have access to some or all of the user data associated with that app (see: https://mspoweruser.com/772-third-parties-can-access-your-outlook-data-allows-microsoft-to-read-the-emails/ ). So even if you trust Microsoft, do you trust all those others too?

So does this mean I can't trust anything?

That's not really the right question to ask. Instead, it means you need to make a more informed decision about what you are trusting and what information you are trusting it with.

Help, this is all overwhelming - what should I do?

If you feel a bit overwhelmed, you're not alone. The question of how to deal with these sorts of supply-chain issues causes grief even for professionals who have made protecting against this sort of thing their life's work. Governments, corporations, and individuals all struggle with it.

The good news is that there are many things that you can do to reduce risk for yourself, and it mainly comes down to mindset and how you make the decision to trust something or not. 

Key Takeaways:

  • Before you install anything (an app or a program), think about everything above. Then consider the following:
    • Weigh the benefits and utility of the application against the potential risks. Consider what it might have access to (your data, photos, location tracking, the ability to listen to conversations). See if you can find any reviews from sources you know. Consider that brand-new, never-before-seen things are more difficult to trust because there will be much less information available.
    • Use all these factors to make a conscious decision about whether or not to trust it, and don't install it unless you can.
  • Actually read the privacy policy and End User License Agreement!
    • Yes, I know, this is a big ask and can take some time, but you might be surprised at what you find in there. Several sites exist (e.g., https://tosdr.org/ ) that summarize complex privacy and license policies into plain language and highlight concerns. Or try using your preferred AI tool to simplify the process by asking it to summarize the documents (keeping in mind that AI output is frequently wrong, so you may not want to trust that either!).
  • Delete things you no longer need.
    • This may seem obvious, but it's easy for old, unwanted applications, browser extensions, add-ons, and the like to build up on your phone and computer over time. Once we stop using them, it's easy to forget about them. But as long as they are still there, you are still trusting them. Simply deleting the things you no longer need - or at least removing their access to your stuff - instantly and easily eliminates a lot of risk.
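For Python users, taking stock of what's accumulated is only a few lines away. Here's a minimal sketch using the standard library's importlib.metadata to list the packages installed in the current environment - every one of them a standing trust decision:

```python
import importlib.metadata

# List every package installed in this Python environment. Each entry is
# third-party code we implicitly trust every time it runs.
installed = sorted(
    (dist.metadata["Name"] or "unknown")
    for dist in importlib.metadata.distributions()
)

print(f"{len(installed)} installed packages, each one a trust decision:")
for name in installed[:10]:  # show the first ten as a sample
    print(" -", name)
```

If anything in that list makes you ask "why is this still here?", that's exactly the moment to remove it.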

In the end, I hope this didn't scare you, but instead made you think about trust - which was the point. We need to consider what we trust in order to stay safe. Nothing is foolproof, but the more you know, the better decisions you can make.