Docker Security Risks: GUIs + Xorg

Recently I wrote a post where I was running a GUI application in a docker container. I did so because I couldn’t be confident of the software’s origins and thought it’d be best not to take any chances. What other potential exploits does this leave one vulnerable to, and how can one best protect oneself?


But isn’t everything running in Docker secure?

First things first, let’s talk about what kind of security assurances docker tries to provide and under what circumstances those assurances would be considered null and void.

One of the major selling points of containers is the various forms of isolation that they provide (here’s Solomon’s list from DockerConEu). Docker’s strategy is to lock down as many avenues between containers and the host as it can. From here it lets you decide whether or not your application needs them and lets you open some of these avenues at your discretion.

This means that you’re free to provide containers with access to things like /var/run/docker.sock which means they can control the docker engine running on the host. People do this all the time, e.g., if they’re running Continuous Integration software inside a container that wants to execute build plans in other containers. That doesn’t mean that it’s particularly safe if you don’t trust code running in those build plans. Processes in these containers could use this to become root on the host (here’s a pretty succinct explanation of how this works), although it’s my understanding that the new support for user namespaces in Docker 1.10 nips this in the bud.
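As a concrete illustration of why that socket is dangerous, here's a minimal sketch (the `docker:cli` image name is an assumption; any image with a docker client would do):

```shell
# Any container with the host's Docker socket mounted can drive the host's engine.
# WARNING: this is effectively root-equivalent access to the host.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli docker ps   # lists ALL containers on the host, not just this one
```

From there, starting a new privileged container with the host's filesystem mounted is one command away.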

This brings me back to running GUI applications in Docker. To display the GUI they need to be able to talk to the X server (Xorg) running on your host through a socket file: /tmp/.X11-unix. Unfortunately, that socket also provides access to a smorgasbord of things that can be used against you.

The problem is that Xorg is in charge of more than just what gets displayed on the screen; it also handles input from the keyboard and mouse. It does have a security layer, but it’s kinda tacked on and doesn’t support fine-grained control over which resources are accessible.

What can I do about it?

So how do we know when access to certain resources is a bad idea? I don’t believe it’s an exaggeration to say that every piece of software that serves a practical purpose also comes with potential security implications. Security omniscience (knowing every facet of the software we run, understanding how it relates to security, and how these facets interrelate) is impractical; for this reason, security omnipotence (the power to be 100% secure) is impossible.

The best we can do is to constantly seek clarity on what it is that our software does and (if at all possible) how it does it. I’ve found that as a result of adopting this policy, an intuition regarding security will naturally begin to develop. It all boils down to this: never stop asking questions.

So the next question to ask is…

How can I fuck over the end-user?

After poking around for a bit I realised that containers with access to Xorg could indeed do some scary things (at least while the container is still running).

So I decided I’d throw together a few demos of containers that spy on/manipulate their host. These are really basic! They’re purely to remind you that just because you’re running something in a container doesn’t mean you’re not exposed to potential attacks.

Screenshots

ImageMagick is used a lot on Linux to take screenshots, and it does this by interacting with X. Ergo: if your container has access to X on the host, then it can screenshot the host (and without any particular form of warning).

Here’s a demo Dockerfile that takes a screenshot with ImageMagick’s import command and displays the resulting image with feh:
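A minimal sketch of such a Dockerfile (the Debian base image and exact package names are assumptions):

```dockerfile
FROM debian:jessie

# imagemagick provides the `import` command; feh is a lightweight image viewer
RUN apt-get update && apt-get install -y imagemagick feh

# Grab the host's entire root window via the shared X socket, then display it
CMD import -window root /tmp/screenshot.png && feh /tmp/screenshot.png
```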

Here are the commands needed to get this working:
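Something along these lines (the `screenshot-demo` image tag is just a placeholder):

```shell
# Allow local connections to the X server (this loosens X's own security!)
xhost +local:

# Build the image, then run it with the X socket mounted and the display passed in
docker build -t screenshot-demo .
docker run --rm \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e DISPLAY=$DISPLAY \
  screenshot-demo
```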

You’ll notice that the only additional things this container needs are: telling xhost to allow access from localhost, mounting the X socket, and providing the display number. These are all things you would need to do to allow any GUI to run from the confines of a container.

Toggling capslock every 20 minutes

This example installs xdotool, which can be used for (among other things) simulating keypresses. Then it creates a cron job that uses xdotool to toggle Caps Lock every 20 minutes.
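A sketch of the idea (the schedule fires at minutes 0, 20 and 40 of every hour; hard-coding `DISPLAY=:0` in the cron entry is an assumption — it must match the display shared from the host):

```dockerfile
FROM debian:jessie

RUN apt-get update && apt-get install -y xdotool cron

# Every 20 minutes, send a Caps_Lock keypress to the host's X server
RUN echo '*/20 * * * * root DISPLAY=:0 xdotool key Caps_Lock' \
      > /etc/cron.d/capslock

# Run cron in the foreground so the container stays alive
CMD ["cron", "-f"]
```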

Clearly I’m some kind of deranged lunatic for coming up with something this nefarious. Use responsibly.

The docker commands for running this container are pretty much the same, so I won’t bother including them from now on.

Peeking at the clipboard

This one is ridiculously easy! xclip is a tool that gives read and write access to the various clipboards provided by X. That includes the regular clipboard (Ctrl+C/Ctrl+V), used in the example here, as well as the selection clipboard (highlight/middle mouse button) if you’re on Linux.
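From inside a container with the X socket mounted, it boils down to two one-liners:

```shell
# Print whatever is currently on the host's regular clipboard
xclip -selection clipboard -o

# ...or quietly replace its contents
echo 'pwned' | xclip -selection clipboard -i
```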

Scripting mouse movements

Our old friend xdotool can also be used to script mouse movements! Here’s an example of it moving to certain co-ordinates (15, 15) and clicking; it’s just a minor modification of the Caps Lock example:
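The core of it is a single command (run inside the container, with DISPLAY set as before):

```shell
# Move the pointer to (15, 15) and fire a left click (button 1)
xdotool mousemove 15 15 click 1
```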

On its own this could be fairly random and useless, but if we were to combine it with the screenshots we took above, we could get a reasonable idea of where we’d like to click to cause the most damage.

What else?

We could go further and grant access to other stuff like /dev/input/* (log data from keyboard + mouse), /dev/snd (speakers + microphones) or /dev/video0 (web cams). The possibilities are endless.
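Granting that kind of access is a one-flag affair (the device paths are the usual Linux defaults but may differ on your host; `some-image` is a placeholder):

```shell
# Hand the container the host's sound devices and first webcam
docker run --rm \
  --device /dev/snd \
  --device /dev/video0 \
  some-image
```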

With all of this we could put together a RAT!… But at the risk of being put on some kinda watch-list, I think I’ll just leave that as an exercise for the reader…

Conclusions

The purpose of this article is to inform you of what you’re opening yourself up to when running GUIs in containers. Don’t do it when you’re working on anything sensitive on the host!

If you don’t actually need to interact with the GUI itself (maybe you’re considering your testing options) then you might get some of the benefits of containerisation by running these applications headless (a lot of people containerise Selenium tests this way).

Silver lining: programs running in containers need some kind of exploit to escape the confines of that container (like the /var/run/docker.sock thing I talked about at the beginning). This means that most of the time we can be assured that as soon as we stop the container running our hypothetical RAT, it will be unable to continue spying on us (it’s the equivalent of pulling the plug).


Running QtCreator in Docker

Update: I wrote a follow up post detailing potential security risks of running GUIs in containers, check it out here.

The first thing I ever tried installing from source was Qt. For those who don’t know, Qt is a cross platform framework for writing GUI applications.


That was then…

I was in college, I had a MacBook at the time, and I was toying around with writing GUIs with Python/Tkinter. I kept fighting against Tkinter’s limitations and finally decided it was time to find something with more power.

All I remember was putting together the endless list of dependencies and sitting through lectures while they were compiling. Clearly I hadn’t discovered Homebrew at this point. From what I can tell, Qt and its Python bindings (PyQt) were added to Homebrew around the same time I needed them in 2009, go figure!

… This is now!

This week I decided to take the latest Qt for a spin and see what I’d been missing all these years!

I found their wiki, which points to the official download page where you can get an installer for QtCreator with instructions on how to chmod/run it. This made me feel a bit uncomfortable for a number of reasons (no signed deb/rpm, no https, and ’twas a binary, so no source and no insight into what the installer actually does).

I thought that it’d be a nice idea to run the installer inside a container. This isn’t a perfect buffer from potential threats of executing untrusted code (see security notes below) but at least it’s a first step.


Here’s my plan:

  • Build a base image (qt:base) with all dependencies installed and a copy of the installer.
  • Run installer in container, go through installation wizard, the container should exit when everything is finished.
  • Commit that container to another image (qt:installed).

I hate installation wizards. Maybe I’ve been spoiled by the wealth of scriptable packaging and configuration tools available on Linux. Hopefully, as I learn more about how QtCreator works, I’ll find a way to automate away the configuration steps (hell, I might even find the time to write a Dockerfile that builds it from source so that I can save other people the trouble!)

In the meantime, I’ll have what I need: an image that I can run QtCreator from whenever I have any GUI development on hand. Let’s get to it!

Building QtCreator inside a container

Here’s the Dockerfile I wrote:
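A sketch of what it contains (the base image, installer filename, and dependency list are assumptions; the last four packages are the ones mentioned under “Problems encountered” below):

```dockerfile
FROM ubuntu:14.04

# X/GL libraries QtCreator needs, plus the xkb/xslt/gstreamer packages
# that fixed the keyboard and missing-library problems described below
RUN apt-get update && apt-get install -y \
    libfontconfig1 libx11-xcb1 libgl1-mesa-glx libgl1-mesa-dri \
    libxkbcommon-dev libxcb-xkb-dev libxslt1-dev \
    libgstreamer-plugins-base0.10-dev

# The installer downloaded from the official Qt download page
COPY qt-opensource-linux-x64.run /opt/
RUN chmod +x /opt/qt-opensource-linux-x64.run

CMD /opt/qt-opensource-linux-x64.run
```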

Here are the commands to get things working:
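Following the three-step plan above, they look something like this (image and container names match the plan; the rest is a sketch):

```shell
# Allow local connections to the X server
xhost +local:

# 1. Build the base image with dependencies + the installer
docker build -t qt:base .

# 2. Run the installer; the container exits when the wizard finishes
docker run -it --name qt-installer \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e DISPLAY=$DISPLAY \
  qt:base

# 3. Commit the finished container to a new image
docker commit qt-installer qt:installed
```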

Problems encountered

When I was trying to get this working, there were a few things that broke straight away.

This error in particular appeared straight away and meant that both the installer and QtCreator wouldn’t accept keyboard input at all:

xkbcommon: ERROR: failed to add default include path /usr/share/X11/xkb
Qt: Failed to create XKB context!
Use QT_XKB_CONFIG_ROOT environmental variable to provide an additional search path, add ':' as separator to provide several search paths and/or make sure that XKB configuration data directory contains recent enough contents, to update please see http://cgit.freedesktop.org/xkeyboard-config/ .

When launching QtCreator it also warned me with pop ups about a few libs it couldn’t find like libxslt and libgstapp.

I was able to solve these problems by adding a few more packages to the list that are installed in the Dockerfile: libxkbcommon-dev, libxcb-xkb-dev, libxslt1-dev and libgstreamer-plugins-base0.10-dev.

Security Concerns

As for the security implications of running untrusted code in a container: Docker is evolving so fast right now that it’s difficult to stay on top of them.

My first thought was that it might be a bad idea to execute code as root inside the container, given that it had access to stuff I didn’t fully understand: /tmp/.X11-unix, /dev/shm and /dev/dri.

To understand the risks a bit better I had to find some reading material.

The Docker docs have a page which gives a broad outline of security concerns. This was a good read, but it didn’t address my own concerns beyond suggesting that I run as a non-root user. I’ve tried this, but I need to find a way to allow installation as a non-root user without adding the complexity of multiple docker build steps (things are bad enough as they are). I’d say it just requires some permission changes inside the container; I’ll poke around and update here when I’ve got it working.

It took me a while to find a decent article with the details I was looking for, written by someone who knows what they’re talking about. Stéphane Graber discusses some of the problems with access to X11 and things in /dev on his blog. Essentially, GUIs running in containers can still eavesdrop on the host while the container is running, but once the container is off, nothing you installed in there will continue running. FYI: examples of eavesdropping permitted by giving untrusted code access to X include key logging and the ability to take screenshots.

If anyone has suggestions regarding container security, please reach out to me! Container security is a pretty cool topic and I’m really interested to hear your thoughts!

More about GUIs in containers

For anyone who hasn’t seen ’em yet, Jessie Frazelle (a core contributor to Docker) has some great blog posts and conference talks about running GUI applications inside containers! I looked to these to get some inspiration, you should check ’em out too!

Aquameerkat also had a great article that gives a crash course on the X server, what displays are, and how to run headless GUIs in docker containers for testing purposes (not what I was doing here but interesting all the same!).


Weekly Notes – 18/7/15

This is the first post in a series that I’ll (hopefully) send out every week. It’ll cover the problems I have during the week as well as how I overcome those issues.

Hopefully this will be of some use to people out there, leave me a comment if you find any of this useful!

Docker machine + docker compose

So I’ve been playing around with docker-machine while working on a pet project of mine: daftpunk.

For those of you that aren’t familiar: docker-machine is a tool to manage hosts running the docker-engine (the server part of docker that actually manages containers). You can use it to create VMs locally, using VirtualBox or VMware, that run boot2docker. It can also create instances on various cloud providers and have them all set up and ready to go in minutes. It’s been really handy; I’ve used it to create an EC2 instance to host all the components of my project as containers.
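The workflow looks roughly like this (the machine names are placeholders; the amazonec2 driver reads AWS credentials from the environment):

```shell
# Create a local VirtualBox VM running the docker engine
docker-machine create --driver virtualbox dev

# Create an EC2 instance the same way
docker-machine create --driver amazonec2 daftpunk-prod

# Point the local docker client at the remote engine
eval "$(docker-machine env daftpunk-prod)"
docker ps   # now talks to the EC2 host
```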

Because daftpunk sports a web front end (written using Flask), I thought it would be cool to mount all the Flask-related code as a shared volume. That way I could make changes locally and they would be synced to the container hosting the Flask app in AWS. Since changes to a Flask app reload the debug server automatically, it seemed like a pretty neat way of testing out any changes I was making; at least, in theory. Here’s a mock-up of the docker-compose config I’d need to get this to work:

web:
  build: frontend
  volumes:
    - frontend:/opt/frontend
  ports:
    - "5000:5000"

It worked just fine when I ran it against my local docker-engine, but I spent some time wrestling with the setup when I switched to AWS. The Flask server wouldn’t start, and when I investigated it seemed that the mount point in the container was empty. Eventually it dawned on me: you just can’t mount files/folders to a remote container.

This made perfect sense once I stopped to think about it for a minute. What would happen if I turned off my local box? Let’s say, for the sake of argument, that the remote container keeps the files it had; those files might be important to its operation, after all. But things would get complicated if I tried to reconnect to the remote engine. Should it try to find the same stuff on my host and sync with it? This raises questions about merging file changes, and that’s really the realm of DVCSs like Git. Too complicated. Much simpler to just leave that functionality out when working with remote docker-engines.

So what’s the best way to reload the web front end? Well, first I needed to add the front-end code when building the image instead of mounting it when running the container. In the docker-compose.yaml file I just remove the volumes section, and I add this to frontend/Dockerfile:

ADD . /opt/frontend

Then any time I want to test new changes I can use docker-compose to rebuild and redeploy only the front end container, like so:

docker-compose build web
docker-compose up web

This takes a little bit longer but I don’t have to worry about syncing files to remote hosts.
