Remapping zoom slider on Microsoft Ergonomic Keyboard

I’m a big fan of Microsoft’s “Natural Ergonomic Keyboard 4000”. I’m just not a fan of its name; it’s a mouthful! (How about for future reference we just call it the Ergo?)

The Ergo’s great for typing on and it also has a selection of feature keys that you can remap. Depending on your distro/desktop environment, you can map keys for all sorts of reasons: to replace caps lock with ctrl, to control your music player, to run scripts, etc. There are plenty of guides about remapping keys around, so I won’t talk too much about that here. Google around, and if you have questions, askubuntu.com is a great place to find answers. What I’m interested in is getting the zoom slider working!

The zoom slider sits above the space bar and between the two primary sets of keys. It’s a spring-loaded rocker switch and button that’s supposed to be used to zoom in/out (I’ve never used it in Windows so I’ve never actually seen it in action). I thought it’d be cool to remap but it doesn’t seem to work like the other keys. Turns out it needs a little fiddling.

Enabling the zoom slider

When I try to use it, the zoom slider just seems to be totally ignored. I believe this is because they’re custom keys created specifically for (and supported only by) Windows. They have no analogue in Linux. To get them working we’ll have to tell Linux that they’re equivalent to some other keys.

This is where udev steps in. We need to write rules to interpret these as keys and put them into the hardware database. Easy! Create a new set of keyboard rules like so:

$ sudo mkdir /etc/udev/hwdb.d
$ sudo vim /etc/udev/hwdb.d/61-keyboard-local.hwdb

And add this:

keyboard:usb:v045Ep00DB*
keyboard:usb:v045Ep071D*
 KEYBOARD_KEY_0c022d=pageup
 KEYBOARD_KEY_0c022e=pagedown

Hwdb rules are grouped together by device. The first two lines here identify the two different models of Ergo (the 4000 and the 7000) that our block of rules will apply to. It’s pretty obvious that these are for targeting USB keyboards, but what is the set of seemingly random characters near the end? They’re for matching specific devices by vendor/product code, and they follow this pattern:

v<vendor_code>p<product_code>

How can you be certain that your device will match this code? You can check your keyboard’s code by running:

$ lsusb | grep Ergo | awk '{print $6}'

It should appear in the form <vendor>:<product>. For my keyboard I get “045e:00db”, which matches the first line of the example as expected.

The next two lines (each indented with a single space) are the rules that tell udev which scan codes map to which keys. In this case we’re telling udev that the slider keys (scan codes 0c022d and 0c022e) are supposed to be pageup/pagedown.

For these rules to take effect, you’ll need to recompile the hwdb and reload udev:

$ sudo udevadm hwdb --update
$ sudo udevadm control --reload

At this point you might need to unplug and replug the keyboard to get it to work.
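If replugging isn’t convenient, you should also be able to ask udev to re-trigger events for the existing input devices instead (a hedged suggestion; I haven’t tested this with the Ergo, but udevadm supports it):

$ sudo udevadm trigger --sysname-match="event*"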

Identifying other keys and scan codes

Okay that’s great if all you wanna do is page up or down. Where do you go to find the names of other keys to map to? They’re all listed (in upper case with “KEY_” as a prefix) in /usr/include/linux/input-event-codes.h.
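For example, if you’d rather make the slider actually zoom, the header has entries like KEY_ZOOMIN and KEY_ZOOMOUT, and you can hunt for candidates with grep:

$ grep -i 'zoom' /usr/include/linux/input-event-codes.h

In your hwdb rule you’d then use the lowercase name without the prefix (e.g. KEYBOARD_KEY_0c022d=zoomin), though whether your desktop environment does anything useful with those key codes is another matter.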

If you want to write udev rules for other keys you’ll need a way to identify key scan codes. For that we’ll use evtest.

$ evtest /dev/input/event0

When this is running you can move the slider up and down and it should post event logs like this:

Event: time 1468247448.373417, type 4 (EV_MSC), code 4 (MSC_SCAN), value c022e
Event: time 1468247448.373417, type 1 (EV_KEY), code 109 (KEY_PAGEDOWN), value 0

If it doesn’t, kill it with CTRL+C, try the next event file (/dev/input/event1), and keep going until you see this kind of log (for me it was event10).

UPDATE: A friend of mine pointed out (in the comments below) that you don’t need to step through each of the devices manually. You can run sudo evtest without any arguments and it will display all devices as a list with short descriptions for you to choose from by simply entering the device number.

The first line of each event pair is the scan code event (“MSC_SCAN”), and the value listed (“c022e”) is the scan code you’re after.

Suggested Fixes

Depending on what distro of Linux you’re using, you might need to tweak the rules a bit. The ones I’ve shown above worked fine for me on Ubuntu 12.04, and I’ve also had success getting them working with CentOS 7. Here are two suggested changes if you run into trouble.

In newer versions of Ubuntu (15.04+) you might need to change the device matching rule to include a bus number (a four digit code, 0003 for USB devices). Here’s an example:

```
keyboard:usb:b0003v045Ep00DB*
```

In the example the scan codes started with a zero. Sometimes this doesn’t work (I’ve no idea why). Try it without the leading zero, like so:

```
KEYBOARD_KEY_c022d=pageup
```

Footnotes

  1. I originally got this working by reading this askubuntu question.
  2. I learned about hwdb by reading the man page.
  3. I got some of the nitty-gritty details of how to write hwdb rules from this Arch wiki page.
  4. To figure out how to use evtest I used this guide.

A Markdown notes server

I keep a lot of notes…

Writing notes

Any time I’m researching anything computing related, whether it’s for work or in my spare time, I keep a log of my “stream of consciousness”. This includes thoughts like “I think this is because…”, “I wonder if this is related to…”, “What the fuck is…?”.

This is great when I’ve spent a lot of time learning something new: I can look back over the notes once a week to reinforce the stuff I’ve covered. I get to see how wrong I was about some things and I gain a new appreciation for other things that might not have dawned on me the first time around. I encourage anyone to keep stream of consciousness notes while they’re in learning mode.

A few specifics: I write these notes in vim using Markdown. I typically write the date as a header and use sub-headers for breaking up a day’s thoughts into topics. Then I write down whatever thoughts occur to me as I go in a bullet-point list. If I’m googling and find helpful resources I’ll throw the links into the notes as well so I can come back to them later, rather than break the flow of my exploration.
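Here’s a made-up example of that layout (the content is invented, the structure is the point):

# 2015-07-11

## udev key remapping

* scan codes come from evtest
* hwdb rules live in /etc/udev/hwdb.d/
* helpful: http://askubuntu.com/questions/... (come back to this later)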

These aren’t strict rules you have to follow; just my own personal style. There’s nothing wrong with experimenting and finding what works for you!

Reading notes

When it comes to reviewing what I’ve written, I’ve found that reading markdown in vim sucks.

What I really wanted was to format the notes in HTML and make them look visually appealing (this makes a big difference when you’ve got an attention span as short as mine).

Hosting them from a webpage (over LAN) would be ideal because then I could check ’em from a phone or tablet when I’m away from my desk (great for recounting details during stand ups).

It should be noted that I didn’t want to use any of the note taking web apps like Evernote or Google Keep, even though they’d make note syncing really easy. Why, you ask? Because 1) vim, 2) grep, and 3) I wanna host my own notes and not expose them over the internet, because sometimes they’re sensitive work-related stuff.

What did I come up with?

Simple: a Flask app in a Docker container. It uses pandoc to convert the Markdown to HTML and it spruces things up with dashed/github-pandoc.css.
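To give you a feel for the idea, here’s a minimal sketch of that kind of app (illustrative only, not the actual code from the repo; the notes path and port are assumptions):

# Minimal sketch: serve each .md file under /opt/notes as pandoc-rendered HTML.
import os
import subprocess

from flask import Flask, abort

NOTES_DIR = "/opt/notes"  # assumption: where the notes volume gets mounted
app = Flask(__name__)

@app.route("/<path:name>")
def render(name):
    path = os.path.join(NOTES_DIR, name + ".md")
    if not os.path.isfile(path):
        abort(404)
    # Shell out to pandoc to produce a standalone HTML page
    return subprocess.check_output(
        ["pandoc", "--standalone", "--to", "html", path])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=4000)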

You can pull it from the Docker Hub and mount the folder with all your notes when you run it. No fuss, relatively little muss. Here are the commands to serve files from ~/notes at localhost:4000 if you wanna try it yourself:


docker pull nicr9/mdserver
docker run -dp 4000:4000 -v /home/$USER/notes:/opt/notes nicr9/mdserver


Want to take a closer look? The code is up on my GitHub and the latest image is on the Docker Hub! It’s just something I threw together quickly. If you have ideas for improvements I’ll be happy to look at any pull requests.

This is actually the first image I put on the hub so that was fun.

I also posted some screenshots below:

Give it a go and let me know what you think! I’d love to hear from you about your own note taking strategies.

Enjoy!


Docker Security Risks: GUIs + Xorg

Recently I wrote a post where I ran a GUI application in a Docker container. I did so because I couldn’t be confident of the software’s origins and thought it’d be best not to take any chances. What other potential exploits does this leave you vulnerable to, and how can you best protect yourself?

But isn’t everything running in Docker secure?

First things first, let’s talk about what kind of security assurances docker tries to provide and under what circumstances those assurances would be considered null and void.

One of the major selling points of containers is the various forms of isolation that they provide (here’s Solomon’s list from DockerConEu). Docker’s strategy is to lock down as many avenues between containers and the host as it can. From here it lets you decide whether or not your application needs them and lets you open some of these avenues at your discretion.

This means that you’re free to provide containers with access to things like /var/run/docker.sock which means they can control the docker engine running on the host. People do this all the time, e.g., if they’re running Continuous Integration software inside a container that wants to execute build plans in other containers. That doesn’t mean that it’s particularly safe if you don’t trust code running in those build plans. Processes in these containers could use this to become root on the host (here’s a pretty succinct explanation of how this works), although it’s my understanding that the new support for user namespaces in Docker 1.10 nips this in the bud.
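To make that concrete, here’s a quick sketch of my own (not from any of the linked posts) showing how a container holding the socket can drive the host’s engine:

# With the host's docker.sock mounted, the container can list (or start,
# or stop!) every container on the host:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:latest docker ps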

This brings me back to running GUI applications in Docker. To display the GUI they need to be able to talk to the X server (Xorg) running on your host through a socket file: /tmp/.X11-unix. This also provides them with access to a smorgasbord of things that can be used against you.

The problem is that Xorg is in charge of more than just what gets displayed on the screen; it also handles input from the keyboard/mouse. It does have a security layer, but it’s kinda tacked on and doesn’t support fine-grained control over which resources are accessible.

What can I do about it?

So how do we know when access to certain resources is a bad idea? I don’t believe it’s an exaggeration to say that every piece of software that serves a practical purpose also comes with potential security implications. Security omniscience (knowing every facet of the software we run, understanding how it relates to security and how these facets interrelate) is impractical, for this reason security omnipotence (the power to be 100% secure) is impossible.

The best we can do is to constantly seek clarity on what it is that our software does and (if at all possible) how it does it. I’ve found that as a result of adopting this policy, an intuition regarding security will naturally begin to develop. It all boils down to this: never stop asking questions.

So the next question to ask is…

How can I fuck over the end-user?

After poking around for a bit I realised that containers with access to Xorg could indeed do some scary things (at least while the container is still running).

So I decided I’d throw together a few demos of containers that spy on/manipulate their host. These are really basic! They’re purely to remind you that just because you’re running something in a container doesn’t mean you’re not exposed to potential attacks.

Screenshots

ImageMagick is used a lot on Linux to take screenshots, and it does this by interacting with X. Ergo: if your container has access to X on the host, then it can screenshot the host (without any particular form of warning).

Here’s a demo dockerfile that takes a screenshot with ImageMagick and displays the resulting image with feh:


FROM ubuntu:15.10
MAINTAINER Nic Roland "nicroland9@gmail.com"
RUN apt-get update && \
    apt-get install -y imagemagick feh
ENTRYPOINT import -window root -display :0 /tmp/0.png && \
    feh -. /tmp/0.png

Here are the commands needed to get this working:


# Build the demo dockerfile
docker build -f Dockerfile.screenshot -t xattacks:screenshot .
# Allow Docker processes to access X
xhost local:root
# Run container that takes a screenshot on the host (imagemagick) and displays it for you (feh)
docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY xattacks:screenshot

You’ll notice that the only additional things this container needs are: telling xhost to allow local access from root, mounting the X socket, and providing the display number. These are all things you’d need to do to allow any GUI to run from the confines of a container.
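A hedged tip while we’re here: that xhost grant outlives the container, so it’s worth revoking it once you’re done playing:

# Revoke the access granted earlier with "xhost local:root"
xhost -local:root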

Toggling capslock every 20 minutes

This example installs xdotool, which can be used for (among other things) simulating keypresses. Then it creates a cron job that uses xdotool to toggle caps lock every 20 minutes.

Clearly I’m some kind of deranged lunatic for coming up with something this nefarious. Use responsibly.


FROM ubuntu:15.10
MAINTAINER Nic Roland "nicroland9@gmail.com"
RUN apt-get update && \
    apt-get install -y xdotool cron && \
    apt-get clean
# "*/20" so it actually fires every 20 minutes; DISPLAY is set explicitly
# because cron jobs don't inherit the container's environment (assumes :0)
RUN echo "*/20 * * * * DISPLAY=:0 xdotool key Caps_Lock" > capslock.cron
RUN crontab capslock.cron
ENTRYPOINT cron -f

The docker commands for running this container are pretty much the same, so I won’t bother including them from now on.

Peeking at the clipboard

This one is ridiculously easy! xclip is a tool for read and write access to the various clipboards provided by X. That includes the regular clipboard (Ctrl+c/Ctrl+v), used in the example here, as well as the selection clipboard (highlight/middle mouse button) if you’re on Linux.


FROM ubuntu:15.10
MAINTAINER Nic Roland "nicroland9@gmail.com"
RUN apt-get update && \
    apt-get install -y xclip && \
    apt-get clean
ENTRYPOINT xclip -o -selection clipboard

Scripting mouse movements

Our old friend xdotool can also be used to script mouse movements! Here’s an example of it moving to certain co-ordinates (15, 15) and clicking; it’s just a minor modification of the capslock example:


FROM ubuntu:15.10
MAINTAINER Nic Roland "nicroland9@gmail.com"
RUN apt-get update && \
    apt-get install -y xdotool cron && \
    apt-get clean
# Note: the xdotool command is "mousemove", and DISPLAY again needs setting
RUN echo "*/20 * * * * DISPLAY=:0 xdotool mousemove 15 15 click 1" > mouse.cron
RUN crontab mouse.cron
ENTRYPOINT cron -f

On its own this could be fairly random and useless, but if we were to combine it with the screenshots we took above, we could get a reasonable idea of where we’d like to click to cause the most damage.

What else?

We could go further and grant access to other stuff like /dev/input/* (log data from keyboard + mouse), /dev/snd (speakers + microphones) or /dev/video0 (web cams). The possibilities are endless.

With all of this we could put together a RAT!… But at the risk of being put on some kinda watch-list, I think I’ll just leave that as an exercise for the reader…

Conclusions

The purpose of this article is to inform you of what you’re opening yourself up to when running GUIs in containers. Don’t do it when you’re working on anything sensitive on the host!

If you don’t actually need to interact with the GUI itself (maybe you’re considering your testing options) then you might get some of the benefits of containerisation by running these applications headless (a lot of people containerise Selenium tests this way).
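For what it’s worth, here’s a rough sketch of that headless approach (xvfb-run isn’t from the original posts, the image and app names are placeholders, and the image needs xvfb installed):

# Give the app a virtual framebuffer instead of the host's X socket
docker run --rm my-gui-image xvfb-run ./my-gui-app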

Silver lining: programs running in containers need some kind of exploit to escape the confines of that container (like the /var/run/docker.sock thing I talked about at the beginning). This means that most of the time we can be assured that as soon as we stop the container running our hypothetical RAT, it will be unable to continue spying on us (it’s the equivalent of pulling the plug).


Nested decorator functions in Python

When I was at PyConIE last October I was talking with an old friend about Python’s decorator functions.

He lamented how you need to google around for tutorials any time you want to write a parametrised decorator because it can be so confusing. I told him that there’s a way to do it by nesting decorator functions, which is much simpler than implementing them using classes (which seems to be the more widely known approach).

I thought I’d write up this quick blog post with some examples that will demonstrate how to do this and serve as a reference in case I forget any of this stuff myself!

Using classes

So here’s a rick rolling example using classes:


class WithoutParams(object):
    def __init__(self, func):
        """
        Constructor receives target function.
        """
        self.func = func

    def __call__(self, *args, **kwargs):
        """
        Arguments intended for target function are passed to __call__.
        From here you can call the target any way you see fit.
        """
        self.func("Never gonna give you up")


class WithParams(object):
    def __init__(self, val):
        """
        Constructor takes decorator params instead of target.
        """
        self.val = val

    def __call__(self, func):
        """
        Target function is passed in here instead.
        This is where we create a wrapper function to replace the target.
        """
        def wrapped(*args, **kwargs):
            """
            Wrapper function takes the target arguments and calls target.
            """
            func(self.val)
        return wrapped


@WithoutParams
def a(text):
    print text


@WithParams("Never gonna let you down")
def b(text):
    print text


if __name__ == "__main__":
    a("hello world")
    b("foo bar")

Using nested functions

And here’s the corresponding example which uses nested functions:


def without_params(func):
    """
    Outer function takes target function and returns a wrapped one.
    """
    def _without_params(*args, **kwargs):
        """
        Inner function takes target arguments and makes the call.
        """
        return func("Never gonna run around")
    return _without_params


def with_params(val):
    """
    If you need to take params, it's the same but wrapped in another function.
    This one takes the decorator parameters and returns a doubly wrapped function.
    """
    def _with_params(func):
        def __with_params(*args, **kwargs):
            return func(val)
        return __with_params
    return _with_params


@without_params
def a(text):
    print text


@with_params("and desert you!")
def b(text):
    print text


if __name__ == "__main__":
    a(2, 3)
    b("fizz bang")

Conclusion

In summary: class decorators are confusing because arguments and functions are sent to different places depending on the context. Nested function decorators are a neater abstraction because the base decorator is the same in both cases; if you want parameters you just wrap it in an additional function.
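One extra trick worth bolting on (not shown in the examples above): functools.wraps preserves the target’s name and docstring on the wrapper, which makes decorated functions much nicer to debug:

import functools

def without_params(func):
    @functools.wraps(func)  # keeps func.__name__ and func.__doc__ intact
    def _without_params(*args, **kwargs):
        return func("Never gonna run around")
    return _without_params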

Hope this comes in handy!


Running QtCreator in Docker

Update: I wrote a follow up post detailing potential security risks of running GUIs in containers, check it out here.

The first thing I ever tried installing from source was Qt. For those who don’t know, Qt is a cross platform framework for writing GUI applications.

That was then…

I was in college, I had a MacBook at the time, and I was toying around with writing GUIs with Python/Tkinter. I kept fighting against Tkinter’s limitations and I finally decided it was time to find something with more power.

All I remember is putting together the endless list of dependencies and sitting through lectures while they compiled. Clearly I hadn’t discovered homebrew at this point. From what I can tell, Qt and its Python bindings (pyqt) were added to homebrew around the same time I needed them in 2009, go figure!

… This is now!

This week I decided to take the latest Qt for a spin and see what I’d been missing all these years!

I found their wiki, which points to the official download page where you can get an installer for QtCreator with instructions on how to chmod/run it. This made me feel a bit uncomfortable for a number of reasons (no signed deb/rpm, no https, and ’twas a binary so no source and no insight into what the installer actually does).

I thought that it’d be a nice idea to run the installer inside a container. This isn’t a perfect buffer from potential threats of executing untrusted code (see security notes below) but at least it’s a first step.

Here’s my plan:

  • Build a base image (qt:base) with all dependencies installed and a copy of the installer.
  • Run the installer in a container and go through the installation wizard; the container should exit when everything is finished.
  • Commit that container to another image (qt:installed).

I hate installation wizards. Maybe I’ve been spoiled by the wealth of scriptable packaging and configuration tools available on Linux. Hopefully as I learn more about how QtCreator works, I’ll find a way to automate away the configuration steps (hell, I might even find the time to write a Dockerfile that builds it from source so that I can save other people the trouble!)

In the meantime, I’ll have what I need: an image that I can run QtCreator from whenever I have any GUI development on hand. Let’s get to it!

Building QtCreator inside a container

Here’s the dockerfile I wrote:


FROM ubuntu:15.10
MAINTAINER Nic Roland "nicroland9@gmail.com"

# Install lots of packages
RUN apt-get update && apt-get install -y libxcb-keysyms1-dev libxcb-image0-dev \
    libxcb-shm0-dev libxcb-icccm4-dev libxcb-sync0-dev libxcb-xfixes0-dev \
    libxcb-shape0-dev libxcb-randr0-dev libxcb-render-util0-dev \
    libfontconfig1-dev libfreetype6-dev libx11-dev libxext-dev libxfixes-dev \
    libxi-dev libxrender-dev libxcb1-dev libx11-xcb-dev libxcb-glx0-dev x11vnc \
    xauth build-essential mesa-common-dev libglu1-mesa-dev libxkbcommon-dev \
    libxcb-xkb-dev libxslt1-dev libgstreamer-plugins-base0.10-dev wget

# Download installer
RUN wget http://download.qt.io/official_releases/online_installers/qt-unified-linux-x64-online.run
RUN chmod +x ./qt-unified-linux-x64-online.run

# Run installer as entrypoint
ENTRYPOINT ./qt-unified-linux-x64-online.run

Here are the commands to get things working:


# Build base image
docker build -t qt:base .
# N.B. This is an important step any time you're running GUIs in containers
xhost local:root
# Run installation wizard, save to new image, delete left over container
docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -v /dev/shm:/dev/shm --device /dev/dri --name qt_install --entrypoint /qt-unified-linux-x64-online.run qt:base
docker commit qt_install qt:latest
docker rm qt_install
# Then you can run QtCreator with this monster of a command
docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -v /dev/shm:/dev/shm -v ~/src:/root --device /dev/dri --name qt_creator --rm --entrypoint /opt/Qt/Tools/QtCreator/bin/qtcreator qt:latest

Problems encountered

When I was trying to get this working, there were a few things that broke straight away.

This error in particular appeared straight away and meant that neither the installer nor QtCreator would accept keyboard input at all:

xkbcommon: ERROR: failed to add default include path /usr/share/X11/xkb
Qt: Failed to create XKB context!
Use QT_XKB_CONFIG_ROOT environmental variable to provide an additional search path, add ':' as separator to provide several search paths and/or make sure that XKB configuration data directory contains recent enough contents, to update please see http://cgit.freedesktop.org/xkeyboard-config/ .

When launching QtCreator it also warned me with pop-ups about a few libs it couldn’t find, like libxslt and libgstapp.

I was able to solve these problems by adding a few more packages to the list that are installed in the Dockerfile: libxkbcommon-dev, libxcb-xkb-dev, libxslt1-dev and libgstreamer-plugins-base0.10-dev.

Security Concerns

As for the security implications of running untrusted code in a container: Docker is evolving so fast right now that it’s difficult to stay on top of them.

My first thought was that it might be a bad idea to execute code as root inside the container, given that it had access to stuff I didn’t fully understand: /tmp/.X11-unix, /dev/shm and /dev/dri.

To understand the risks a bit better I had to find some reading material.

The docker docs have a page which gives a broad outline of security concerns. This was a good read, but it didn’t address my own concerns other than confirming that I should run as a non-root user. I’ve tried this, but I need to find a way to allow installation as a non-root user without adding the complexity of multiple docker build steps (things are bad enough as they are). I’d say it just requires some permissions changes inside the container; I’ll poke around and update here when I’ve got it working.
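For the record, here’s the kind of untested sketch I have in mind for the Dockerfile (the username is made up, and the UID of 1000 is an assumption that happens to match the default first user on most desktop distros):

# Create an unprivileged user and run everything as them from here on
RUN useradd -m -u 1000 qtdev
USER qtdev
WORKDIR /home/qtdev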

It took me a while to find a decent article with the details I was looking for, written by someone who knows what they’re talking about. Stéphane Graber discusses some of the problems with access to X11 and things in /dev on his blog. Essentially, GUIs running in containers can still eavesdrop on the host while the container is running, but when the container is off, nothing you installed in there will continue running. FYI: examples of eavesdropping permitted by giving untrusted code access to X include key logging and taking screenshots.

If anyone has suggestions regarding container security, please reach out to me! Container security is a pretty cool topic and I’m really interested to hear your thoughts!

More about GUIs in containers

For anyone who hasn’t seen ’em yet, Jessie Frazelle (a core contributor to Docker) has some great blog posts and conference talks about running GUI applications inside containers! I looked to these to get some inspiration, you should check ’em out too!

Aquameerkat also had a great article that gives a crash course on the X server, what displays are, and how to run headless GUIs in docker containers for testing purposes (not what I was doing here but interesting all the same!).


Golang Ramblings from a Python Dev

I’ve been spending time lately learning Go and I thought I’d throw some of my thoughts down here. As the title implies, my experience is mostly in Python, so expect lots of apples-to-oranges comparisons!

(Image taken from the golang blog.)

Tooling

I guess the first thing that jumps out is the quality of the tooling. I’ve only had a chance to play around with a few of the CLI’s features, but already I’m impressed with how simple it makes some day-to-day things.

When you’re looking at new code for the first time, the first thing on your mind is usually vendoring dependencies (I didn’t realise ‘vendor’ was a verb; ya learn something new every day). “go get …” will download and install individual dependencies, but it can also recursively scan through a project to save you the effort:

go get -d ./...

This can pull in code automagically from git repositories (mercurial, bazaar and subversion too), which is nice, and it doesn’t involve installing an extra package like pip/setuptools.

When you’re adding changes to code, don’t forget about gofmt. It takes advantage of the relative ease of parsing Go’s syntax to automate/enforce certain elements of coding style, like the use of whitespace. It’s also really easy to include in your workflow; I use fatih/vim-go, a vim plugin that (among other cool things) runs gofmt every time you save changes. If vim’s not your style (pun not intended) then it’s easy to find docs for setting up a git hook that runs gofmt before you commit.
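If you’d rather try it from the shell before wiring up your editor, these are the two flags I’d reach for:

# List files whose formatting differs from gofmt's
gofmt -l .
# Rewrite files in place
gofmt -w .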

There are some other tools that use Go’s simplicity to automate away the boring stuff. I haven’t had a lot of time to try these out, but for the adventurous: godoc “extracts and generates documentation for Go programs” and gofix “finds Go programs that use old APIs and rewrites them to use newer ones.” The latter sounds particularly ambitious! It reminds me of 2to3, which never offered any guarantees other than a helping hand; you were still expected to do most of the porting work yourself.

The next step is usually building/installing code; the “go build” and “go install” commands serve all your needs here. No need to choose between packaging libs like setuptools and distribute! The catch is that what those packaging libs provide is a standard way of declaring package metadata so that packages can be found on an index like PyPI, which leads me to…

Argh! Packaging!

So far this is the first big thing missing for Go and I’m not entirely sure how I feel about it!

On the one hand there is no need to set up and actively maintain your project listings on a centralised site; all you need to do is to choose (and choice is important to developers) which site you host your code on.

On the other hand, PyPI can really leverage that metadata and offer a powerful way to search for the package you need. If you haven’t tried browsing by package category before, give it a quick try so you can see what I mean. I can wait…

I tried looking into a similar resource for golang and most of the internet pointed me to godashboard.appspot.com/project, which seems to be down at the time of writing. After a little bit of research it seems that the listings it hosted were moved around some wikis till they found their new home at https://github.com/golang/go/wiki/Projects. Having this page in the language’s git wiki seems to make sense in terms of go’s philosophy, but it just leaves me wanting something better.

But I Digress… Lets Talk About Testing

So the compiler hasn’t found any problems in the code; it all builds without issue. Hurray!

The next thing to check is the tests, and the command for running them is (big surprise here, drum-roll please) “go test”. This quite frankly makes “python -m unittest discover” look like an afterthought.

Here’s something that shocked me when I was writing tests: there are no assertions in Go! The aim here was to remove a crutch that developers too often use to avoid thinking about proper error handling. That should be a top consideration when you’re writing server-side code; if you’re not careful with errors, requests will die and cause lots of trouble for the client.

So what do tests look like in a world without assertions? Not that different really; assertions are just if statements with some syntactic sugar to make things look a bit more formal. All you need is to write your own if statements and make calls to “t.Fatal(…)” if things look broken.
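Here’s a tiny illustrative test to show what I mean (Add is a made-up function standing in for whatever you’re actually testing):

package maths

import "testing"

// Add is a stand-in for real code under test.
func Add(a, b int) int { return a + b }

// No assert helpers: just an if statement and a call to t.Fatalf.
func TestAdd(t *testing.T) {
    if got := Add(2, 3); got != 5 {
        t.Fatalf("Add(2, 3) = %d; want 5", got)
    }
}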

I did some soul searching regarding this: is there anything we really lose by leaving assertions out of the language? At first I was a bit miffed, because I’m used to the array of assertions available as member functions of unittest.TestCase in Python, but I got over it quickly! Keystrokes saved while writing tests aren’t worth it if it means people get lazy with error handling.

Convention Over Configuration

The common theme that I keep coming back to in my head here is “what can we automate by relying on a convention and what should we let developers control by way of configuration?”.

Go’s tooling stuff all depends on a strict directory structure and syntax (which probably required a lot of thinking on Google’s part). Python’s package distribution relies on the maintainer filling out setup.py. Both languages also have conventions regarding writing test cases to facilitate test discovery.

It makes me think a lot about the upfront investment in terms of reading docs. When you’re starting out learning programming, documentation can be daunting; there’s a hell of a lot of it! I’m not ready to say which language has the greater learning curve for developers starting out, but it’s a very important consideration.

More Rambling?

I realise that this post has been far from a comprehensive comparison and that there are many important things I left out, e.g., goroutines vs asyncio, support for paradigms like OO and functional programming, third party libraries, etc.

There’s way too much to cover in a single blog post. I hope to write more as I get familiar with the language and maybe I’ll ramble about other technologies too!

If you have any comments, suggestions, or questions feel free to get in contact with me! I’d love to hear from you.


I’m back!

I know that it’s been a while since I wrote a post! I haven’t been keeping up with my own goal to write something new at least every week.

At first this was because I was on holidays and internet access was spotty at best. More recently I’ve been very busy at work and not had as much time for reflection. I wanted to call this out here to keep myself honest and to make a fresh commitment!

Taking the time to write this blog is in my own interest because it pushes me to develop certain aspects of myself (writing skills, communication, confidence). I love that it gives me an excuse to explore technical subjects in greater depth and to showcase what I find out. Lastly (and I know I’ve said this before) I hope that it serves as a record and guide for others who are starting out in the software space.

I would urge anyone with an interest in tech to start a blog of their own. It’s a great way to become a more rounded person and, with enough time and effort, will have a massive effect on your career! Reach out to me on twitter if you need some advice about this.

I’ll be back with some more weekly updates soon!


Weekly Notes – 26/7/15

Setup.py and rpm.spec:

(Image: https://rkuska.fedorapeople.org/pyp2rpm.png)

So recently I needed to package a tool we wrote at work. The tool was written in Python (big surprise) and it was designed to be run as a daemon, so it had an init-script.

Most Python tools are packaged by writing a `setup.py` script. There are actually a number of tools you can use (I’ve used distutils and setuptools in the past) and there’s a bit of a confusing history regarding which tool is recommended. I mostly use setuptools.

Most (all?) of the software we use in work is packaged as RPMs, which are built from an `rpm.spec`. I don’t want to get bogged down in the advantages/disadvantages of this approach here, so I’ll keep this part short: it’s more powerful than setup.py scripts but not as well documented. It makes it simple to install files to specific places that setuptools/distutils don’t make easy (like an init.d script) or to execute code during build/install. On the other hand, you really need to understand the ins and outs of your application when writing an rpm.spec. There are some subtle things that happen when you’re building RPMs, and they can cause a lot of confusion when you’re starting out.

Now, you can use setuptools to create an RPM package, which is handy. It bypasses the weirdness of rpm.specs, and setuptools nicely wraps up some of the messy details of where you should install packages so that they’re available on the python path. Unfortunately, this approach limits you to the features available in setuptools and you lose out on all of rpm.spec’s flexibility! Just not good enough for my needs.

What I really wanted to do was write both and mix them together: an rpm.spec that delegates some of the hard work to setup.py and looks after the detailed work itself. Luckily this is possible!

Here’s what you need inside your rpm.spec to get setup.py to do the heavy lifting.

The build phase is pretty self-explanatory:

%build
python2.7 setup.py build

Then during install you need to point it to install inside the RPM build directory and to save a list of files to INSTALLED_FILES:

%install
python2.7 setup.py install --root=$RPM_BUILD_ROOT --record=INSTALLED_FILES

Then that file list can be passed into the `%files` directive by using `-f`. This means that you won’t need to maintain a list of files that setup.py will be looking after.

%files -f INSTALLED_FILES

You can still list any other files that are taken care of by the rpm.spec after the `%files` directive.
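Pulling those pieces together, a skeleton spec might look something like this (the name, version and init-script path are placeholders, and I’m glossing over the Source0/%prep details):

Name:           mytool
Version:        1.0
Release:        1%{?dist}
Summary:        A Python daemon packaged via setup.py
License:        MIT
Source0:        %{name}-%{version}.tar.gz

%description
Example of delegating the heavy lifting to setup.py.

%prep
%setup -q

%build
python2.7 setup.py build

%install
python2.7 setup.py install --root=$RPM_BUILD_ROOT --record=INSTALLED_FILES
# The init-script is the spec's responsibility, not setup.py's
install -D -m 0755 init/mytool $RPM_BUILD_ROOT/etc/init.d/mytool

%files -f INSTALLED_FILES
/etc/init.d/mytool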

If you try this out yourself, you can check that it all worked by listing the files inside the resulting rpm:

rpm -qpl path/to.rpm

Weekly Notes – 18/7/15

This is the first post in a series that I’ll (hopefully) send out every week. It’ll cover the problems I run into during the week as well as how I overcome them.

Hopefully this will be of some use to people out there; leave me a comment if it is!

Docker machine + docker compose

So I’ve been playing around with docker-machine while working on a pet project of mine: daftpunk.

For those of you who aren’t familiar: docker-machine is a tool to manage hosts running the docker-engine (the server part of docker that actually manages containers). You can use it to create VMs locally using VirtualBox or VMware that run boot2docker. It can also create instances on various cloud providers and have them all set up and ready to go in minutes. It’s been really handy; I’ve used it to create an EC2 instance to host all the components of my project as containers.
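The local workflow looks roughly like this (a sketch from memory, assuming you have the virtualbox driver available):

# Create a boot2docker VM called "dev" and point the docker CLI at it
docker-machine create -d virtualbox dev
eval $(docker-machine env dev)
docker ps   # now talking to the engine inside the VM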

Because daftpunk sports a web front end (written using flask), I thought it would be cool to mount all the flask-related code as a shared volume. That way I could make changes locally and they would be synced to the container hosting the flask app in AWS. Since changes to a flask app reload the debug server automatically, it seemed like a pretty neat way of testing out changes; at least, in theory. Here’s a mock-up of the docker-compose config I’d need to get this to work:

web:
  build: frontend
  volumes:
    - frontend:/opt/frontend
  ports:
    - 5000:5000

It worked just fine when I ran it against my local docker-engine, but I spent some time wrestling with the setup when I switched to AWS. The flask server wouldn’t start, and when I investigated it seemed that the mount point in the container was empty. Eventually it dawned on me: you just can’t mount local files/folders into a remote container.

This makes perfect sense once I stopped to think about it for a minute. What would happen if I turned off my local box? Let’s say for the sake of argument that the remote container would keep the files it had; those files might be important to its operation after all. But things would get complicated if I tried to reconnect to the remote engine. Should it try to find the same stuff on my host and sync with it? This raises questions about merging file changes, and that’s really the realm of DVCSs like git. Too complicated. Much simpler to just leave that functionality out when working with remote docker-engines.

So what’s the best way to reload the web front end? Well, first I needed to add the front end code when building the image instead of mounting it when running the container. In the docker-compose.yaml file I just remove the volumes section and add this to frontend/Dockerfile:

ADD . /opt/frontend
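For context, the whole frontend/Dockerfile might end up looking something like this (only the ADD line is from the post; the base image, app filename and port are guesses):

FROM python:2.7
# Bake the front end code into the image instead of mounting it
ADD . /opt/frontend
WORKDIR /opt/frontend
RUN pip install flask
EXPOSE 5000
CMD ["python", "app.py"]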

Then any time I want to test new changes I can use docker-compose to rebuild and redeploy only the front end container, like so:

docker-compose build web
docker-compose up web

This takes a little bit longer but I don’t have to worry about syncing files to remote hosts.


Hello world!

Why I started this blog

I’m not very good at writing about my experiences; I prefer to just talk. In fact I could ramble on about my own thing all night, constantly meandering and exploring tangents as I go.

I’ve decided that I should put more of an effort into collecting my thoughts in blog form so that I can focus them. At the very least I hope this will make me a better developer but maybe I’ll help others who run into the same stumbling blocks on their own journey.

What I intend to write about

To start off, I’m hoping to write a weekly summary of interesting things I’ve learned, whether it’s something from my professional work or a pet project.

In the future, I may post updates about some of my pet projects or events I attend but no promises just yet.
