Combining Shell and Lisp in Eshell

11 September 2021 11:03 AM (emacs | eshell | perforce | vc-p4)

The code in this post is entirely useless since Perforce already provides this feature out of the box; I just didn't know about it at the time. Still, I wanted to post something, and this seemed as fun as anything else.

I have been working on vc-p4 off-and-on for a while to make working with Perforce more enjoyable in Emacs. I have some plans for that package. One of the bigger things that I have done so far was add the option to specify the client in the .dir-locals.el.

So in my .dir-locals.el I would have something along the lines of the following:

((nil . ((vc-p4-client . "SOME-CLIENT-NAME"))))

This would let Emacs switch automatically between the different clients.

I wanted to use the p4 command line more, rather than P4V, but one thing that bugged me was that this didn't let me pick the client automatically, and I didn't like having to type p4 -c SOME-CLIENT-NAME ... all the time. I felt that with Emacs, Eshell, and the feature I added to vc-p4, surely I could do something about this.

The first problem I ran into is that Eshell doesn't load the directory-local variables when I change into a directory. So I wrote something to do that:

(defun oni-eshell-set-local-variables ()
  "Reset old directory-local variables, then load the current ones."
  (dolist (elt file-local-variables-alist)
    (set (make-local-variable (car elt)) (default-value (car elt))))
  (setq file-local-variables-alist nil)
  (hack-dir-local-variables-non-file-buffer))

First it goes through all of the local variables that were set before and resets them to their default values, so that locally-set variables don't linger after you leave the directory. It then sets the list of currently set file-local variables to nil so that Emacs doesn't consider them cached and skip giving them new values. Finally it calls hack-dir-local-variables-non-file-buffer, a function that exists specifically to set directory-local values in a buffer that isn't associated with a file.

(add-hook 'eshell-directory-change-hook #'oni-eshell-set-local-variables)

The function needs to run every single time the current directory changes, which Eshell has a hook for.
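The same reset-then-reapply idea can be sketched in Python rather than Emacs Lisp (all names here are illustrative, not part of Emacs):

```python
# Default values for the variables we track; vc-p4-client defaults to
# nil (None) when no .dir-locals.el sets it.
DEFAULTS = {"vc-p4-client": None}

def change_directory(env, new_dir_locals):
    """Reset previously-set directory-local values to their defaults,
    then apply the new directory's values, mirroring what
    oni-eshell-set-local-variables does on each directory change."""
    for name in list(env):
        env[name] = DEFAULTS.get(name)
    env.update(new_dir_locals)
    return env

env = change_directory({}, {"vc-p4-client": "SOME-CLIENT-NAME"})
env = change_directory(env, {})  # a directory with no dir-locals
```

After the second call the client is back to its default, which is exactly the behavior the reset step buys us.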

This essentially lets me use a single variable to specify the current client. As long as it has a value, this would work:

p4 --client $vc-p4-client ...

That's definitely easier than having to remember exactly which client I was using, but it can still be better. I know that in Eshell, shell-like constructs can easily be combined with Lisp using either ${} or $(). So really I can use a single command that checks whether there is a value for the client and calls p4 accordingly:

p4 $(when vc-p4-client (list "--client" vc-p4-client)) ...

This is very wordy, though. Since this can be called at any time, it's nice to just make it an alias:

alias p4 'p4 $(when vc-p4-client (list "--client" vc-p4-client)) $*'

Now I can just call p4 and it'll specify the client for me automatically.

I haven't done a lot in Eshell, so I enjoyed being able to write a fun little alias that combined a shell command with some simple Lisp code. Of course, only after this did I discover (well, someone pointed out) that Perforce has the P4CONFIG environment variable, which names a file to look for up the directory tree from the current directory (much like .dir-locals.el works) and read settings from. So I ran p4 set P4CONFIG=.p4config, and now I specify the client in there.
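The upward search that P4CONFIG performs can be sketched like this; find_config is a hypothetical helper for illustration, not part of p4 itself:

```python
import os
import tempfile

def find_config(start_dir, name=".p4config"):
    """Walk up from start_dir until the named file is found, the way
    p4 resolves P4CONFIG (and Emacs resolves .dir-locals.el)."""
    directory = os.path.abspath(start_dir)
    while True:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate):
            return candidate
        parent = os.path.dirname(directory)
        if parent == directory:  # reached the filesystem root
            return None
        directory = parent

# Example: the config sits at the top of a nested working tree.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "depot", "project"))
with open(os.path.join(root, ".p4config"), "w") as f:
    f.write("P4CLIENT=SOME-CLIENT-NAME\n")
found = find_config(os.path.join(root, "depot", "project"))
```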


And now I don't have to go through any of this, I can remove the whole feature I added to vc-p4 too.

Loading the Emacs Info manuals in MSYS2

4 July 2021 3:58 AM (emacs | msys2 | windows)

I've been annoyed at MSYS2 for a while because Info manuals included with Emacs wouldn't show up when I opened info. The few manuals that were installed through ELPA packages showed up fine.

Some time ago I discovered this was because I had installed the mingw-w64-x86_64-emacs package from MSYS2, which installs all its Info manuals into /mingw64/share/info instead of /usr/share/info, and there was no dir file in there. I couldn't quite remember how this all worked, so I left it alone. At least I understood what was going on.

Recently I finally took the time to look at it again. I remembered that pacman has some capability for hooks; I had written a very simple one to keep my pacman mirror list updated automatically, but I couldn't remember where the default ones were located. pacman to the rescue: pacman -Ql pacman | less and a quick search for "hooks" revealed that they live in /usr/share/libalpm/hooks/. A quick look in there showed that MSYS2 distributes a couple of relevant hooks: texinfo-install.hook and texinfo-remove.hook. When a package gets installed, upgraded, or removed, the appropriate hook gets called.

Basically what the -install hook does is go through each file in the installed packages that is under /usr/share/info and call install-info on it. That's great, easy to reproduce on the command line:

find /mingw64/share/info -type f -name '*.info' -exec install-info '{}' /mingw64/share/info/dir \;

This sets things up the first time, since I already had Emacs installed and didn't want to reinstall it. For future installs and upgrades, I adapted the MSYS2 hook for the mingw64 prefix:

[Trigger]
Type = Path
Operation = Install
Operation = Upgrade
Target = mingw64/share/info/*

[Action]
Description = Updating the mingw64 info directory file...
When = PostTransaction
NeedsTargets
Exec = /usr/bin/sh -c 'while read -r f; do install-info "$f" /mingw64/share/info/dir 2> /dev/null; done'

Put this in /etc/pacman.d/hooks/texinfo-install-mingw64.hook (or C:/msys2/etc/pacman.d/hooks/texinfo-install-mingw64.hook if you're working from Emacs), and now every time a package gets installed or upgraded and it has any files in /mingw64/share/info/ it should automatically update the dir file and give you access to all those info manuals.
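The Target line is a glob that pacman matches against the root-relative paths of the files in the transaction; only when something matches does the Exec line run, with the matching paths fed to it on stdin. A rough sketch of that matching step, using a made-up file list:

```python
from fnmatch import fnmatch

def matching_targets(package_files, pattern="mingw64/share/info/*"):
    """Return the root-relative paths a Path-type hook with this
    Target pattern would be triggered for."""
    return [f for f in package_files if fnmatch(f, pattern)]

# Hypothetical file list from an Emacs package.
files = [
    "mingw64/bin/runemacs.exe",
    "mingw64/share/info/emacs.info.gz",
    "mingw64/share/info/eintr.info.gz",
    "usr/share/info/bash.info.gz",
]
targets = matching_targets(files)
```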

The remove hook is basically the same, except it passes in the --delete option to install-info to remove the entries from the dir file.

[Trigger]
Type = Path
Operation = Remove
Target = mingw64/share/info/*

[Action]
Description = Removing old entries from the mingw64 info directory file...
When = PreTransaction
NeedsTargets
Exec = /usr/bin/sh -c 'while read -r f; do install-info --delete "$f" /mingw64/share/info/dir 2> /dev/null; done'


4 June 2021 7:45 AM (emacs | org-mode)

I've been using olivetti-mode for a little while when I write notes in org-mode, and I must say I really enjoy it. It's a very simple package: it doesn't have many interactive functions or customizable options. Essentially it comes down to enabling it and picking the width of text you want. The initial 70 is a bit too small for me, but 80 or 85 is pretty comfy.

It's also possible to have it enable and disable visual-line-mode. Personally I always have this on in Org, ever since I started using org-indent-mode.

Hacking Coleslaw to show my custom front page

31 March 2021 9:00 AM (meta)

Recently I've been introduced to the idea of seeing my blog as a garden instead of a personal news site. "My blog is a digital garden, not a blog" and "How the Blog Broke the Web" have inspired me to look at my website as a garden, with a blog as just one part of it. I really don't like the idea of presenting my thoughts like a personal news site; I don't think you're that interested in me or my daily comings and goings.

What I do think you might be interested in is documents about how to do stuff that you might not know. What I'm interested in on this site is writing about things that I didn't know before.

One thing I wanted to do for this was to move away from having my latest posts as the landing page of my website. I want something hand-crafted now. It's going to be terrible, but I'm hoping it'll be fun to mess around with it, make small incremental changes over time.

coleslaw doesn't seem naturally set up for this at the moment. So after I'd set up my blog builds I wanted to extend them to include a custom front page. I looked for a way to have coleslaw do it, but it doesn't seem possible, either built-in or through a plug-in. So I hackily added a command to the build step in my .gitlab-ci.yml which just moves a named generated page into index.html.

# ...

  # ...
    - cd html && coleslaw
    - cd ..
    - cp img/* public/img/
    # A temporary measure to let me define my own front page.
    - mv public/front-page.html public/index.html
  # ...

This works because in coleslaw, luckily, the usual index.html is just a symlink to the first page of recent posts.

Maybe in the not-too-distant future I'll remember to try and find the time to see if I can make a plugin for this.

Writing a blog with Org-mode, coleslaw, and GitLab CI

24 January 2021 10:00 AM (meta | org-mode | coleslaw | ox-coleslaw | ci | gitlab)

My previous deployment process wasn't very well organized. Using gitolite, I had set up some post-receive hooks on my server that would run whatever version of coleslaw happened to be installed there (which had become quite ancient by now).

The new process is a bit more structured.

[Pipeline diagram: "Export org-mode files" and "Copy coleslaw files" both feed into "Generate HTML using coleslaw", which feeds into "Deploy".]

Preparing to generate

Org mode is the only markup format that I really like working in. But it's pretty strictly tied to Emacs. Luckily Org mode is very good at converting to other formats, and coleslaw accepts raw HTML as one of the formats.

Exporting org-mode to coleslaw

Exporting is pretty simple. Before doing the actual export I need to run cask so that any dependencies get installed. This installs org-plus-contrib, htmlize, and ox-coleslaw. In the future this might install more if I need to add more dependencies for exporting, such as language major modes for exporting with syntax highlighting and such things.

Once the dependencies have been installed, I call Emacs in batch mode, which does:

  • Initialize package.el by calling the package-initialize function.

  • Load the project.el file, which defines how Org mode should export the files.

  • Export everything defined in project.el by calling the org-publish-all function.

generate-posts:
  script:
    - cask
    - cask emacs -batch -f package-initialize -l project.el -f org-publish-all
  artifacts:
    paths:
      - html

The hard work is done by Org mode, which is converting everything to HTML. The project.el file defines how this works.

To differentiate between what should become .post files and .page files I decided to put them in separate directories and then call org-coleslaw-publish-to-post and org-coleslaw-publish-to-page respectively. They both publish their results to the html/ directory.

(setq org-publish-project-alist
      '(("posts"
         :base-directory "posts/"
         :publishing-directory "html/"
         :publishing-function org-coleslaw-publish-to-post)
        ("pages"
         :base-directory "pages/"
         :publishing-function org-coleslaw-publish-to-page
         :publishing-directory "html/")))

Once this is done, the .gitlab-ci.yml says to publish everything in the html/ directory as the artifacts for this step.

Copy .post and .page files

There are still a number of files that were written in Markdown before I made ox-coleslaw, and these just need to be copied into the html/ directory and published as artifacts for this step.

copy-rest:
  script:
    - mkdir html
    - cp -r .coleslawrc *.page *.post themes/ html/
  artifacts:
    paths:
      - html

This also copies the .coleslawrc file so that when we run coleslaw from the html/ directory it has the right settings.

Converting from coleslaw to HTML

Once everything's been prepared I just need to call coleslaw.

generate:
  image: registry.gitlab.com/ryuslash/blog
  needs:
    - job: generate-posts
      artifacts: true
    - job: copy-rest
      artifacts: true
  script:
    - cd html && coleslaw
  artifacts:
    paths:
      - public/

This specifies that it needs the artifacts from the previous two steps. Since they both published their artifacts into the html/ directory, this merges the results of both those steps into one directory.
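The merging behavior can be sketched as copying each job's artifact tree over the same workspace, which is effectively what GitLab does when several needed jobs publish overlapping paths (the directory names here are made up for illustration):

```python
import os
import shutil
import tempfile

def merge_artifacts(workspace, *artifact_trees):
    """Copy each job's artifact tree over the same workspace, the way
    GitLab CI merges artifacts from several needed jobs."""
    for tree in artifact_trees:
        for dirpath, _dirs, files in os.walk(tree):
            rel = os.path.relpath(dirpath, tree)
            dest = os.path.join(workspace, rel)
            os.makedirs(dest, exist_ok=True)
            for name in files:
                shutil.copy(os.path.join(dirpath, name),
                            os.path.join(dest, name))

# Two hypothetical artifact trees that both contain an html/ directory.
a, b, workspace = (tempfile.mkdtemp() for _ in range(3))
os.makedirs(os.path.join(a, "html"))
os.makedirs(os.path.join(b, "html"))
open(os.path.join(a, "html", "first.post.html"), "w").close()
open(os.path.join(b, "html", "about.page"), "w").close()
merge_artifacts(workspace, a, b)
```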

The coleslaw configuration specifies that it should generate the files into the public/ directory.

(;; Required information
 ;; ...
 :staging-dir "../public/")

This directory is then published as the step's artifact and used by the deploy step to actually upload to my server.

About the docker image

For this step I wrote a Docker image that installs Roswell and then uses that to install coleslaw.

RUN ros install coleslaw-org/coleslaw \
    && coleslaw --help 2>/dev/null \
    && chmod a+rx /usr/local/bin/coleslaw

I call coleslaw --help because Roswell doesn't seem to actually compile coleslaw until the first time you run it. And for some reason the coleslaw executable's permissions didn't get set up correctly.

I manually build and publish this docker image for the moment, but I intend to automate that at some point later.


The deploy step just ends up getting the public/ directory from the previous step and uses rsync to send it up to the server.


Now that I've got a bit more structure in the build process it should be easier to extend it.

  • For one, I want to change the way everything looks, and now I might be able to add steps like compiling Less code into CSS.

  • I've also been thinking about running some checks as I build, such as whether all the links still work.

  • Add caching of the org timestamps and Emacs dependencies.

Making docker-compose easier with wdocker

21 February 2016 9:00 AM (wdocker | docker | docker-compose)


wdocker is a little utility written by a friend and former colleague of mine. It allows you to define commands for it in a Dockerfile. He wrote it because he used a lot of composite commands when writing docker images like:

docker stop CONTAINER && docker rm CONTAINER && docker rmi IMAGE && \
    docker build -t IMAGE . && docker run --name CONTAINER IMAGE

By using wdocker to define a command he can greatly simplify his own workflow. Let's call it rebuild:

#wd# container = CONTAINER
#wd# image = IMAGE
#wd# stop = docker stop {container}
#wd# rm = docker rm {container}
#wd# rmi = docker rmi {image}
#wd# build = docker build -t {image} .
#wd# run = docker run --name {container} {image}

#wd# rebuild: {stop} && {rm} && {rmi} && {build} && {run}

FROM ubuntu

# ...

Now he can use the following command instead of the list presented before:

wdocker rebuild


wdocker has very simple syntax. You can define variables and commands:

#wd# variable = value
#wd# command: program

Variables can be used by putting them in braces, including in other variables, as you've seen in the first example.

#wd# variable = -l
#wd# list: ls {variable}

This would run ls -l when the command wdocker list is called.
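The substitution scheme is simple enough to sketch in a few lines of Python; this is an illustrative reimplementation of the idea, not wdocker's actual code:

```python
import re

def parse_wd(lines):
    """Parse #wd# comment lines into (variables, commands) dicts.
    Illustrative reimplementation, not wdocker's actual parser."""
    variables, commands = {}, {}
    for line in lines:
        if not line.startswith("#wd#"):
            continue  # ordinary Dockerfile content
        body = line[len("#wd#"):].strip()
        colon, equals = body.find(":"), body.find("=")
        if colon != -1 and (equals == -1 or colon < equals):
            name, _, rest = body.partition(":")   # a command definition
            commands[name.strip()] = rest.strip()
        else:
            name, _, rest = body.partition("=")   # a variable definition
            variables[name.strip()] = rest.strip()
    return variables, commands

def expand(text, variables):
    """Substitute {name} references until none remain, which is what
    lets variables nest inside other variables."""
    while "{" in text:
        text = re.sub(r"\{([\w-]+)\}", lambda m: variables[m.group(1)], text)
    return text
```

Expanding repeatedly rather than once is what makes nested variables (a variable whose value mentions another variable) come out fully resolved.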

As you can see you're not limited to using docker in your wdocker commands. This property is what allows me to use wdocker in my workflow.

Combining with docker-compose

I started using docker not too long ago at work to develop our projects in. This is nice because it allows me to completely isolate my development environments. Since we have a few processes running together a single docker image isn't a great option, so I use docker-compose to define and combine the containers I need.

As a side-effect this requires me to write long commands to do something like run rspec tests:

docker-compose run --rm -e RACK_ENV=test -e RAILS_ENV=test \
    container bundle exec rspec

The alternative is defining a specialized test container with a bogus entry command (such as true) and use that, which would still make the command:

docker-compose run --rm test-container bundle exec rspec

Instead I can define a wdocker command in the Dockerfile used to build the containers used:

#wd# rspec: docker-compose run --rm -e RACK_ENV=test -e RAILS_ENV=test container bundle exec rspec

FROM ruby


Now I can run the following, much shorter, command to run the rspec tests:

wdocker rspec

We also use cucumber for some other tests, which is even longer to type in, adding the cucumber command is easy:

#wd# rspec: docker-compose run --rm -e RACK_ENV=test -e RAILS_ENV=test container bundle exec rspec
#wd# cucumber: docker-compose run --rm -e RACK_ENV=test -e RAILS_ENV=test container bundle exec cucumber

FROM ruby

# ...

Now I can run wdocker cucumber as well.

The latest git version of wdocker passes any arguments after the command name directly to the command to be executed. So if I need to run tests in a single spec file I can just do:

wdocker rspec spec/models/mymodel_spec.rb

We now have two commands defined that are 90% the same. I always use the --rm switch to remove the started container after it's done; I don't want a lot of containers piling up. I also always have to use bundle exec to run commands, since the containers don't use rvm or add the script directories to $PATH. We can extract these into some variables:

#wd# run = docker-compose run --rm
#wd# exec = bundle exec
#wd# test = -e RACK_ENV=test -e RAILS_ENV=test

#wd# rspec: {run} {test} container {exec} rspec
#wd# cucumber: {run} {test} container {exec} cucumber

FROM ruby

# ...

Right now these commands always use the container service defined in docker-compose.yml. I could add it to the run variable, but I might need to run some commands on another container, so instead I define another variable:

#wd# run = docker-compose run --rm
#wd# test = -e RACK_ENV=test -e RAILS_ENV=test
#wd# run-test-container = {run} {test} container
#wd# exec = bundle exec

#wd# rspec: {run-test-container} {exec} rspec
#wd# cucumber: {run-test-container} {exec} cucumber

FROM ruby

# ...

Now you also see that variables can be nested in other variables.

If you ever forget what you defined or if the mix of commands and variables becomes too much for you, you can call the wdocker command without arguments to see the commands you defined and the shell commands they'll run.

Using DisPass to manage your passwords

14 February 2016 10:00 AM (dispass)

tl;dr: If you don't care about any of the back story and just want to know how to use DisPass to manage passwords, skip to Managing passwords for instant gratification.


DisPass is a project that was started, and is still maintained, by a friend and former colleague of mine. I've been using it for quite some time. It helps me feel safe online, knowing that all my accounts have different and strong passwords.

DisPass uses algorithms to make reproducible passphrases, making it a kind of functional password manager, just like Haskell is a functional programming language and Guix is a functional package manager. Given the same input, DisPass will always produce the same output. This means the generated passphrases are never stored anywhere and cannot be discovered by crackers[1] and the like.

The input for DisPass consists of a label, an algorithm, a length, possibly a sequence number (depending on the algorithm used), and finally a password. All but the label and password have default values, but they can also be specified through command-line switches.
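DisPass's real algorithms differ, but the functional idea (same inputs in, same passphrase out, nothing stored) can be sketched with an ordinary hash. dispass1_like here is a made-up stand-in for illustration, NOT the actual dispass1 algorithm:

```python
import base64
import hashlib

def dispass1_like(label, password, length=30, seqno=1):
    """Derive a reproducible passphrase from the inputs.
    A made-up stand-in for illustration, NOT DisPass's real dispass1."""
    material = "{0}{1}{2}".format(label, password, seqno).encode("utf-8")
    digest = hashlib.sha512(material).digest()
    # Encode to printable characters and truncate to the wanted length.
    return base64.b64encode(digest).decode("ascii")[:length]
```

Because the derivation is deterministic, changing any input (including the sequence number) yields a completely different passphrase, which is what makes the increment command useful later on.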

The Labelfile

Being a functional anything usually means that whatever you're using doesn't maintain any state. This can be true for DisPass, but isn't necessarily so: it can be a challenge to remember the size, algorithm, and sequence number for a large number of labels, so there is the labelfile.

The labelfile is normally located in either $XDG_CONFIG_HOME/dispass/labels or $HOME/.dispass/labels, but can also be specified on the command-line. It contains the metadata for the labels, and the labels themselves. This lets you run something like:

dispass generate foobar

And it'll know the size, algorithm and sequence number for the label "foobar", assuming you've saved it to the labelfile. The labelfile is unencrypted, but this information is useless as long as nobody knows the password(s) you use to generate the passphrases.

Setting up

DisPass is easy to install if you have either Archlinux or pip. Windows is a bit more problematic, and I personally don't even know how to get started on a Mac, but there is no reason it can't work. It doesn't have many dependencies, so you don't need to install anything else first.

The latest release is quite old, but a new one should be coming soon. There haven't been many developments since version 0.3.0-dev because it basically does what it needs to do, and the user base is currently very small, so bugs might not be encountered quickly. Don't think it's an abandoned project: if you look at its GitHub page you'll see that it's seen a bit of development again as of late.

In the case of Archlinux I've provided packages in the AUR for both python2-dispass version 0.2.0 and python2-dispass-git. Installing either of these like any regular old AUR package will get you set up. Incidentally, if you're using Archlinux on x86_64 and have the testing package repository enabled, you could also use my package repository, though I give no guarantees that it'll ever work.

For a general pip installation it should be as easy as running:

sudo pip install dispass


Seeing as my friend would like it to be generally useful, and he's a Vim user, there are both a GUI and a CLI interface. Since I'm an Emacs user, I've created an Emacs and a Conkeror interface for it as well.


The CLI is what gets the most attention and gets developed the most. I will be working with this in the Managing passwords section.


There is a basic GUI included with DisPass; it can be started with either the gdispass or the dispass gui command. It requires tkinter to be installed. It doesn't do everything the CLI does, but there are plans to improve it and to use a different GUI library (such as Qt). In some situations it can copy the generated passphrases directly to the clipboard, but this only works on GNU/Linux, not on Windows.


I wrote an Emacs interface when I started using DisPass. It tries to copy the generated passwords directly to the clipboard, instead of needing the user to copy it manually as the CLI does. It can also insert generated passphrases into a buffer, such as the minibuffer.

It's available on github.


I also wrote a Conkeror interface some time later, because I didn't want to keep copying and pasting the passphrases through one of the other interfaces (usually Emacs). It inserts the generated passphrases into the focused input.

It's also available on github.


As I mentioned, the idea is to expand the GUI and use a different gui library for it, to make it look a little better. The functionality should also be extended to do everything the CLI does.

A Firefox extension is also still on the list of desirable interfaces. I'm not sure how feasible it is with the new WebExtensions API; I haven't looked into it yet. I don't think chrom(e|ium) allows extensions to call external programs, which is an obstacle, but I haven't looked at this either.

Managing passwords

Now for the real fun. Generating passphrases is simple. Use the generate command:

dispass generate foobar

If no entry exists in the labelfile for foobar, it uses the defaults, which at the time of writing are a length of 30 and the dispass1 algorithm. This algorithm doesn't use a sequence number. The generate command can also generate passphrases for more than one label at a time.

The generated passphrases are presented on an ncurses screen so they aren't kept in your terminal emulator's scrollback history, at least in some cases. You can use the -o switch to do away with the ncurses screen and just output a line for each generated passphrase. Together with something like awk, this can be used to send a command the passphrase it needs directly. For example, if the program foo needs a password on stdin, you could use:

dispass generate -o foobar | awk '{ print $2 }' | foo

You can specify a different length, algorithm, and sequence number using command-line switches. For example, I normally prefer the dispass2 algorithm since it adds a sequence number. For some crazy reason the place where I use this passphrase limits it to a length of 16 characters, and I've had to change my password twice, so I use a sequence number of 3. I could use:

dispass generate -l 16 -a dispass2 -s 3 foobar

It would be difficult to remember all this, so I personally would add it to the labelfile. To do this I can use the add command. Basically this is:

dispass add foobar

This creates an entry in the labelfile with the same default values as the generate command: a length of 30 and the dispass1 algorithm. To use the values we used before, we can instead do:

dispass add foobar:16:dispass2:3

This way we can add multiple entries with different values at once:

dispass add foo:16 bar::dispass2:2

This would add the label foo with a length of 16 using the default algorithm, and the label bar with the default length, the dispass2 algorithm, and sequence number 2. As you can see, you can omit trailing parameters, and leave parameters in between empty to use their default values.
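The label:length:algorithm:sequence syntax, with trailing parts omitted and empty fields falling back to defaults, parses along these lines (an illustrative parser, not DisPass's own code):

```python
def parse_labelspec(spec, default_length=30,
                    default_algorithm="dispass1", default_seqno=1):
    """Parse LABEL[:LENGTH[:ALGORITHM[:SEQNO]]]. Empty or missing
    fields fall back to the defaults. Illustrative, not DisPass code."""
    parts = spec.split(":")
    label = parts[0]
    length = int(parts[1]) if len(parts) > 1 and parts[1] else default_length
    algorithm = parts[2] if len(parts) > 2 and parts[2] else default_algorithm
    seqno = int(parts[3]) if len(parts) > 3 and parts[3] else default_seqno
    return label, length, algorithm, seqno
```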

If you added it before I showed you the extended add syntax you can use update to change an existing entry in the labelfile:

dispass update foobar 16:dispass2:3

Unlike the add command, the update command only updates one label at a time.

Now, suppose the place where I use my password was cracked by crackers[1] and my password was stolen. That's no biggie. I use the list command to check what my sequence number is:

dispass list

Then I can update my labelfile and use a new sequence number:

dispass update foobar ::4

I could also use the convenient increment command:

dispass increment foobar

Every time the sequence number is changed the input changes and so does the passphrase. So a simple call to the increment command will completely change your passphrase. This is nice, because otherwise I'd have to change either the label or the password used to generate the passphrase.

Actually, I just quit the job where I used my foobar label. I still use many other labels and don't want my list to get too big. I also don't want to delete the label in case I ever need to get back in there, so I just disable it:

dispass disable foobar

This keeps it in the labelfile, but commands such as list don't show it anymore. But then they really need me back, and since I'm now a freelance worker I can accommodate them, so I enable my label again:

dispass enable foobar

But now the place where I use the foobar label has gone out of business (I mean, come on, using a maximum password length of 16 and getting cracked by crackers all the time, are you really surprised?) and their site has been taken offline. Now I really have no reason to keep this label around, so I remove it:

dispass remove foobar


Yes, this is an excellent project, and I'm not just saying that because a friend of mine wrote it. Still, there are some things it just isn't suited for.

When sharing a single account with someone else (don't do this!), you can't expect the other party to use the same label and password to generate the passphrase, if they're even tech-savvy enough to use DisPass at all. It also increases the amount of information you need to remember to use DisPass. There are better programs for storing pre-generated passwords.

Due to the way the current algorithms are implemented there is a limit to the length of the passphrases and that limit isn't entirely consistent. This is only a problem when you need passphrases of more than 100 characters, and I haven't had that problem yet.



[1] I refuse to use the term hackers, because to me that means something completely different, and I hope to you as well.

Yoshi Theme 6

8 September 2015 9:00 AM (yoshi-theme | emacs | projects)

According to GitHub, I released version 6 of my Yoshi theme 8 days ago. Subsequently, I uploaded it to Marmalade. I felt I should mention that, and it also gives me an excuse to write something.

This new version brings a newly added CHANGELOG, inspired by Keep a CHANGELOG. It doesn't completely follow the format suggested in there because it's in Org mode format and because I felt I could style this format better than the one they propose, at least when using Org mode. I still need to export it to somewhere nice.

Since the last release, which I don't think anyone has ever seen, there have been some major changes in magit's faces, so those have been added. The original ones are also still there in the hopes that anyone using an older version will still have nice colors.

I've also removed some color variations because I felt they were impurities in the theme. I don't think they actually bring anything to the table and it looks cleaner with just the basic set of colors.

If you have any suggestions, wishes, questions or insults you want to throw my way, please do so in the issue tracker.

Introducing ox-coleslaw

5 August 2015 9:00 AM (meta | ox-coleslaw | emacs | cask | tekuti | projects | org-mode)

I have a big problem: I can't write a blog in anything other than Org mode. I have another problem: I haven't found a good way to write a blog only in Org mode. This keeps me going back and forth between blogging systems. I've used tekuti, WordPress, and a few others. Currently I'm using Coleslaw. I haven't written anything lately, though, because it only supports Markdown and HTML, and I was getting antsy for some Org mode again. So I've been on the lookout for something new.

Well… I've had enough. I'm not going away this time. I'm going to fix my problems and commit to this system. I picked Coleslaw because it's written in Common Lisp and has some interesting features. I'm going to write an exporter from Org to whatever Coleslaw needs!

I've known that it's pretty easy to write an exporter for Org mode for some time, but I've never actually tried to write one. I modified some bits and bobs on org-blog, but that didn't really work out. Today though, while reading an old(er) post on Endless Parentheses, I ran into ox-jekyll. Jekyll has a pretty similar page/post definition syntax to Coleslaw, so it seemed easy to read what they're doing and copy the relevant parts. It's a very small Emacs Lisp file, which made it very easy. So congrats to them and the people writing Org mode for making some very clear code.

So I wrote (or copied) ox-coleslaw based on ox-jekyll. It's slightly smaller than ox-jekyll because, frankly, it offers less. I just need a simple way to export a .org file to a .post file, nothing fancy.

To write posts I will use Org mode. Once ox-coleslaw is loaded I use the org export function to export it to an HTML file with the proper header. You can also do this non-interactively from, for example, a Makefile, but that is a story for another time.

This document is the first attempt at publishing a blog post using ox-coleslaw.

Installing HLA on Archlinux

28 December 2014 6:43 AM (hla | archlinux | vagrant)

I recently started reading The Art of Assembly Language, 2nd Edition. It uses High-Level Assembly language in its code examples and this requires a special compiler, or assembler, to turn your code into machine code.

Fixing the PKGBUILD

The compiler, hla, is available in the Archlinux User Repository. At the time of writing, though, that PKGBUILD doesn't work entirely: by default pacman removes all static libraries from the packages it creates, which took me a while to find out. Adding the following line to the PKGBUILD fixes it:

options=('staticlibs')
I also placed a comment on the AUR page, but there has been no sign of acknowledgment so far.

Running on x86_64

After installing the compiler I got a lot of errors compiling my very simple hello-world application, as typed over from the book. The gist of them was that it couldn't create 64-bit executables, which isn't very surprising, as HLA seems to target only the 32-bit x86 architecture. Another comment on the AUR page helped with that, though: one should add the -lmelf_i386 switch to the hla command line. So I put this in my ~/.zshrc:

alias hla="hla -lmelf_i386"

This discovery only came after a few other attempts to install HLA.

Alternative: Using Vagrant

Before I'd read about the -lmelf_i386 command-line switch, I was looking at ways to run a 32-bit operating system inside my Archlinux installation. There are a few options I'm familiar with: LXC, Docker, and Vagrant.

At first I tried to create a 32-bit Archlinux container, but the installation script failed, so I couldn't get that started. Then I went on to Vagrant, which worked pretty quickly.

I used the ubuntu/trusty32 box, which can be downloaded by calling:

vagrant box add ubuntu/trusty32

A very short Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty32"
  config.vm.provision :shell, path: "vagrant.sh"
end

and then the provision in vagrant.sh:

wget http://www.plantation-productions.com/Webster/HighLevelAsm/HLAv2.16/linux.hla.tar.gz
tar --directory / --extract --file linux.hla.tar.gz

cat > /etc/profile.d/hla.sh <<'EOF'
export hlalib=/usr/hla/hlalib
export hlainc=/usr/hla/include
export hlatemp=/tmp
export PATH="${PATH}:/usr/hla"
EOF

After that you can just call vagrant up, wait a while and then have fun playing around with HLA in an Ubuntu 14.04 environment.