Echo Area

TIL: How to Escape a Backtick in Markdown

6 February 2024 7:47 AM (til)

I needed to send a bit of inline code in some Markdown today and it needed to have a ` (backtick) in it. I realized I hadn't done that before. I write code with ` in it somewhat frequently because Lisp uses it for quasi-quoting, but I always write everything in org syntax, where you can use either = or ~ for inline code. In Markdown, if you need a ` inside your inline code, you need to wrap your inline code with a longer run of ` characters.

`This is some inline code`
``This is some inline code with a ` (backtick) in it``
```This is some inline code with `` (two backticks) in it```

It looks like the sky might be the limit.

TIL: I can use elfeed-search-face-alist to highlight certain headlines in Elfeed

7 December 2023 11:45 PM (emacs | elfeed | til)

I rediscovered that I can use elfeed-search-face-alist to customize how headlines are displayed in Elfeed. I had read it before on Chris Wellons' blog, but I didn't have a use for it then.

With elfeed-search-face-alist I can define a face to be used for an article with a specific tag. I added the Blabbermouth RSS feed to my feeds.

(setq elfeed-feeds '(("https://blabbermouth.net/feed" music)))

And created two taggers. One tags reviews, because those always have /reviews/ in the URL. The other matches a list of bands that I'm especially interested in.

(defvar oni-elfeed-blabbermouth-review-tagger
  (elfeed-make-tagger :feed-url (rx "blabbermouth.net")
                      :entry-link (rx "/reviews/")
                      :add 'review)
  "Tagger that marks any reviews from Blabbermouth.")

(defvar oni-elfeed-blabbermouth-favourite-tagger
  (elfeed-make-tagger :feed-url (rx "blabbermouth.net")
                      :entry-title (rx (or "SLIPKNOT"
                                           (seq "DREAM" whitespace "THEATER")
                                           ;; And so on...
                                           ))
                      :add 'favourite)
  "Tagger that highlights specific bands from Blabbermouth.")

(add-hook 'elfeed-new-entry-hook oni-elfeed-blabbermouth-favourite-tagger)
(add-hook 'elfeed-new-entry-hook oni-elfeed-blabbermouth-review-tagger)

And then I can just set up the faces:

(add-to-list 'elfeed-search-face-alist '(review :slant italic) t)
(add-to-list 'elfeed-search-face-alist '(favourite :foreground "#f17272") t)

As long as the face definitions don't conflict, a headline's face will be the combination of all the faces that apply. For example, by default unread headlines are bold, so an unread favourite headline would be bold and somewhat reddish.
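For reference, after those two add-to-list calls the whole list would look something like this (the unread entry being elfeed's default, if I remember it correctly):

((unread elfeed-search-unread-title-face)
 (review :slant italic)
 (favourite :foreground "#f17272"))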

Switch TODO state when clocking in

10 July 2023 7:56 AM (emacs | org-mode | config)

This Emacs configuration snippet for org-mode changes a task's state from whatever “head” state it's in into the next state in its sequence when you clock in to a task. I do this by setting the org-clock-in-switch-to-state variable.

Different sequences of TODO states

First, just to make sure this is explained here: org-mode makes it possible to specify multiple sequences of task states. For example, I had this in one of my org files:

#+SEQ_TODO: TODO WIP BLOCKED | DONE
#+SEQ_TODO: READ READING | FINISHED STOPPED
#+SEQ_TODO: WATCH WATCHING | WATCHED
#+SEQ_TODO: LISTEN LISTENING | DONE

This means that there are 4 sequences I've set up. A task can start as either TODO, READ, WATCH, or LISTEN, and then it'll move to a different next state[1] depending on which initial state was picked. WIP comes after TODO, WATCHING after WATCH, etc. The sequences generally don't cross, although org-mode will get confused as soon as I change any TODO or LISTEN task to DONE, since at that point it can't figure out which sequence to change back to if it turns out I wasn't done after all. Moving forward from DONE makes the task TODO in either case.

Here are the paths of each sequence:

TODO → WIP → BLOCKED → DONE → TODO
READ → READING → FINISHED → STOPPED → READ
WATCH → WATCHING → WATCHED → WATCH
LISTEN → LISTENING → DONE

None of this actually affects me, though, because I have org-use-fast-todo-selection set to t.
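For completeness, that's just:

(setq org-use-fast-todo-selection t)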

Making The Switch

Getting back to my snippet: org-clock-in-switch-to-state can be set to either a string, which will just always change it to that particular state when you clock in, or a function that takes a single parameter (the current state of the task you're clocking in to). For this case I want the function, because I won't know which state I want to change to until I know the current state, since TODO will change to WIP, READ to READING, etc. but also when a task is already in the state READING, for example, I don't want it to change at all.

(defun oni-org-maybe-change-todo-state (current-state)
  "Change the state of the current task to its next state.
Only do this outside of a capture buffer and when CURRENT-STATE
is the head state of whichever sequence of states applies to this
task."
  (if (and (not org-capture-mode)
           (member current-state org-todo-heads))
      (cadr (member current-state org-todo-keywords-1))
    current-state))

First I make sure that we're not in a capture buffer. Some of my capture templates state that they should clock in to whatever I'm capturing right away, and in that case the task I'm capturing might immediately change from TODO to WIP, for example.

Then I check to see if the current state is in the org-todo-heads variable, which contains only the first todo state of each possible sequence of states. Let's assume my todo states are:

#+SEQ_TODO: TODO WIP BLOCKED | DONE

Checking that current-state is in org-todo-heads basically means making sure that it's TODO and not any of the other states. I do this so that if I clock in to a WIP task, it doesn't automatically switch to BLOCKED.

If I'm not in a capture buffer, and the current state is one of the head ones, I look up the current state in org-todo-keywords-1, which is a simple flat list of all the possible todo states org-mode knows about. This is easier to work with than org-todo-keywords, which is an alist of (type . list-of-states) and carries a bunch of information I don't need. I return whatever comes right after the current state.
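Concretely, with the single sequence above, the lookup works like this:

;; org-todo-keywords-1 would be ("TODO" "WIP" "BLOCKED" "DONE")
(member "TODO" '("TODO" "WIP" "BLOCKED" "DONE"))        ; ⇒ ("TODO" "WIP" "BLOCKED" "DONE")
(cadr (member "TODO" '("TODO" "WIP" "BLOCKED" "DONE"))) ; ⇒ "WIP"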

Returning whatever state comes next in the list does mean that if the next state is DONE, clocking in will immediately mark the task as done. There's no real way to guard against that with the way I've done this; there is just the next state.

Finally you just set this function as the value of org-clock-in-switch-to-state and then you're good to go.

(setq org-clock-in-switch-to-state #'oni-org-maybe-change-todo-state)

Footnotes:

[1] By which I mean pressing C-c C-t when org-use-fast-todo-selection is nil, or pressing C-S-<right> on the headline.

IELM & Paredit

12 April 2023 9:02 AM (emacs)

For a while I've been bothered by being unable to use IELM comfortably to evaluate Emacs Lisp expressions. Then “Making IELM More Comfortable” pointed out that using paredit in IELM causes the RET and C-j keybindings to be bound to the wrong functions, and proposed a fix. The given fix, however, appears to change the keybindings for all buffers using paredit. I don't want that, and Emacs can do better!

A quick search yielded “Buffer-locally overriding minor-mode key bindings in Emacs”, which suggests a fix. My adaptation of that solution:

(defun oni-elisp-ielm-remove-paredit-newline-keys ()
  "Disable ‘C-j’ and ‘RET’ keybindings from ‘paredit-mode’."
  (let ((oldmap (map-elt minor-mode-map-alist 'paredit-mode))
        (newmap (make-sparse-keymap)))
    (set-keymap-parent newmap oldmap)
    (define-key newmap (kbd "RET") nil)
    (define-key newmap (kbd "C-j") nil)
    (make-local-variable 'minor-mode-overriding-map-alist)
    (push `(paredit-mode . ,newmap) minor-mode-overriding-map-alist)))

(add-hook 'ielm-mode-hook #'oni-elisp-ielm-remove-paredit-newline-keys)

Defining RET and C-j as nil means that those keys fall back to the major mode's keybindings.

Combining Shell and Lisp in Eshell

11 September 2021 11:03 AM (emacs | eshell | perforce | vc-p4)

The code in this post is entirely useless since Perforce already provides this feature out of the box, I just didn't know about it at the time. Still, I wanted to post something and this seemed as fun as anything else.

I have been working on vc-p4 off and on for a while to make working with Perforce more enjoyable in Emacs. I have some plans for that package. One of the bigger things I've done so far was add the option to specify the client in .dir-locals.el.

So in my .dir-locals.el I would have something along the lines of the following:

((nil . ((vc-p4-client . "SOME-CLIENT-NAME"))))

This would let Emacs switch automatically between the different clients.

I wanted to use the p4 command line more, rather than P4V, but one thing that bugged me was that this didn't automatically pick the client, and I didn't like having to type p4 -c SOME-CLIENT-NAME ... all the time. I felt that with Emacs, Eshell, and the feature I added to vc-p4, surely I should be able to do something about this.

The first problem I ran into is that Eshell doesn't load the directory-local variables when I change into a directory. So I wrote something to do that:

(defun oni-eshell-set-local-variables ()
  "Reset any file-local variables and apply the directory-local ones."
  (dolist (elt file-local-variables-alist)
    (set (make-local-variable (car elt)) (default-value (car elt))))
  (setq file-local-variables-alist nil)
  (hack-dir-local-variables-non-file-buffer))

First it goes through all of the local variables that have been set before and resets them to their default value. This is so that any variables that are set locally don't hang around when you leave the directory. It then sets the list of currently set file local variables to nil so that it doesn't consider them cached and skips over giving them new values. Finally it calls the hack-dir-local-variables-non-file-buffer function that specifically exists to set directory-local values for variables in a buffer that isn't associated with a file.

(add-hook 'eshell-directory-change-hook #'oni-eshell-set-local-variables)

The function needs to run every single time the current directory changes, which Eshell has a hook for.

This essentially lets me use a single variable to specify the current client. As long as the variable has a value, this works:

p4 --client $vc-p4-client ...

That's definitely easier than having to remember exactly which client I was using. It can still be better. I know that in Eshell shell-like constructs can be combined with Lisp easily by using either ${} or $(). So really I can just use a single command that checks whether there is a value for the client or not and calls p4 accordingly:

p4 $(when vc-p4-client (list "--client" vc-p4-client)) ...

This is very wordy though. Since this can be called any time it's nice to just make it an alias:

alias p4 'p4 $(when vc-p4-client (list "--client" vc-p4-client)) $*'

Now I can just call p4 and it'll specify the client for me automatically.
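For example, with vc-p4-client set to "SOME-CLIENT-NAME" by the .dir-locals.el from earlier, a hypothetical call like

p4 opened

effectively runs p4 --client SOME-CLIENT-NAME opened.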

I haven't done a lot of stuff in Eshell, so I liked being able to write a fun little alias that combines a shell command with some simple Lisp code. Of course, only after all this did I discover (well, someone pointed out) that Perforce has the P4CONFIG setting, which names a file to look for up the directory tree from the current directory (much like .dir-locals.el works) and to read settings from. So I set that with p4 set P4CONFIG=.p4config, and then specify the client in there:

P4CLIENT=SOME_CLIENT_NAME

And now I don't have to go through any of this; I can remove the whole feature I added to vc-p4, too.

Loading the Emacs Info manuals in MSYS2

4 July 2021 3:58 AM (emacs | msys2 | windows)

I've been annoyed at MSYS2 for a while because the Info manuals included with Emacs wouldn't show up when I opened Info. The few manuals that were installed through ELPA packages showed up fine.

Some time ago I discovered this was because I'd installed the mingw-w64-x86_64-emacs package from MSYS2, and this package installs all its info manuals into /mingw64/share/info instead of /usr/share/info, and there was no dir file in there. I couldn't quite remember how this worked, so I left it alone. At least I understood what was going on.

Recently I finally took the time to look at it again. I remembered that pacman has some capabilities for hooks; I'd written a very simple one for myself before, to keep my pacman mirror list updated automatically, but I couldn't remember where the default ones were located. pacman to the rescue: pacman -Ql pacman | less and a quick search for "hooks" showed me that these hooks live in /usr/share/libalpm/hooks/. A quick look in there showed that MSYS2 distributes a couple of relevant hooks: texinfo-install.hook and texinfo-remove.hook. When a package gets installed, upgraded, or removed, one of these hooks gets called.

Basically what the -install hook does is go through each file in the installed packages that's under /usr/share/info and call install-info on it. That's great, and easy to reproduce on the command line:

find /mingw64/share/info -type f -name '*.info' -exec install-info '{}' /mingw64/share/info/dir \;

This sets it up the first time, since I already have Emacs installed and didn't want to reinstall it. To keep the dir file up to date from now on, I adapted the texinfo-install hook for the mingw64 tree:

[Trigger]
Type = Path
Operation = Install
Operation = Upgrade
Target = mingw64/share/info/*

[Action]
Description = Updating the mingw64 info directory file...
When = PostTransaction
Exec = /usr/bin/sh -c 'while read -r f; do install-info "$f" /mingw64/share/info/dir 2> /dev/null; done'
NeedsTargets

Put this in /etc/pacman.d/hooks/texinfo-install-mingw64.hook (or C:/msys2/etc/pacman.d/hooks/texinfo-install-mingw64.hook if you're working from Emacs), and now every time a package gets installed or upgraded and it has any files in /mingw64/share/info/ it should automatically update the dir file and give you access to all those info manuals.

The remove hook is basically the same, except it passes in the --delete option to install-info to remove the entries from the dir file.

[Trigger]
Type = Path
Operation = Remove
Target = mingw64/share/info/*

[Action]
Description = Removing old entries from the mingw64 info directory file...
When = PreTransaction
Exec = /usr/bin/sh -c 'while read -r f; do install-info --delete "$f" /mingw64/share/info/dir 2> /dev/null; done'
NeedsTargets

olivetti-mode

4 June 2021 7:45 AM (emacs | org-mode)

I've been using olivetti-mode for a little while now when I write notes in org-mode, and I must say that I really enjoy it. It's a very simple package: it doesn't have many interactive functions or customizable options. Essentially it comes down to enabling it and picking the width of the text that you want. The initial 70 is a bit too small for me, but 80 or 85 is pretty comfy.

It can also enable and disable visual-line-mode for you. Personally, I always have that on in Org, ever since I started using org-indent-mode.
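A minimal sketch of the setup, assuming the standard olivetti-body-width option (the org-mode-hook wiring is just one way to turn it on where I want it):

;; The initial width of 70 is a bit too small for me.
(setq-default olivetti-body-width 85)
;; Enable olivetti-mode whenever I'm writing notes in org-mode.
(add-hook 'org-mode-hook #'olivetti-mode)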

Hacking Coleslaw to show my custom front page

31 March 2021 9:00 AM (meta)

Recently I've been introduced to the idea of seeing my blog as a garden instead of a personal news site. “My blog is a digital garden, not a blog” and “How the Blog Broke the Web” have inspired me to look at my website as a garden, with a blog as just a part of it. I really don't like the idea of presenting my thoughts like a personal news site; I don't think you're that interested in me or my daily comings and goings.

What I do think you might be interested in is documents about how to do stuff that you might not know. What I'm interested in on this site is writing about things that I didn't know before.

One thing I wanted to do for this was to move away from having my latest posts as the landing page of my website. I want something hand-crafted now. It's going to be terrible, but I'm hoping it'll be fun to mess around with it, make small incremental changes over time.

coleslaw doesn't seem naturally set up for this at the moment. So after I'd set up my blog builds, I wanted to extend them to include a custom front page. I looked for a way to have coleslaw do it, but it doesn't seem possible, either built-in or through a plug-in. So I hackily added a step to the build job in my .gitlab-ci.yml which just moves a specific generated page into index.html.

# ...

build:
  # ...
  script:
    - cd html && coleslaw
    - cd ..
    - cp img/* public/img/
    # A temporary measure to let me define my own front page.
    - mv public/front-page.html public/index.html
  # ...

This works because in coleslaw, luckily, the usual index.html is just a symlink to the first page of recent posts.

Maybe in the not-too-distant future I'll remember to try and find the time to see if I can make a plugin for this.

Writing a blog with Org-mode, coleslaw, and GitLab CI

24 January 2021 10:00 AM (meta | org-mode | coleslaw | ox-coleslaw | ci | gitlab)

My previous deployment process wasn't very well organized. Using gitolite, I had set up some post-receive hooks on my server that would run whatever version of coleslaw happened to be installed there (which had become quite ancient by now).

The new process is a bit more structured.

Export org-mode files → Generate HTML using coleslaw
Copy coleslaw files → Generate HTML using coleslaw
Generate HTML using coleslaw → Deploy

Preparing to generate

Org mode is the only markup format that I really like working in. But it's pretty strictly tied to Emacs. Luckily Org mode is very good at converting to other formats, and coleslaw accepts raw HTML as one of the formats.

Exporting org-mode to coleslaw

Exporting is pretty simple. Before doing the actual export I need to run cask so that any dependencies get installed. This installs org-plus-contrib, htmlize, and ox-coleslaw. In the future this might install more if I need to add more dependencies for exporting, such as language major modes for exporting with syntax highlighting.
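I haven't shown the Cask file here, but based on the dependencies just listed it would be a sketch along these lines (the package archives are my assumption of where these come from):

(source gnu)
(source org)
(source melpa)

(depends-on "org-plus-contrib")
(depends-on "htmlize")
(depends-on "ox-coleslaw")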

Once the dependencies have been installed, I call Emacs in batch mode, which does:

  • Initialize package.el by calling the package-initialize function.

  • Load the project.el file, which defines how Org mode should export the files.

  • Export everything defined in project.el by calling the org-publish-all function.

generate-posts:
  before_script:
    - cask
  script:
    - cask emacs -batch -f package-initialize -l project.el -f org-publish-all
  artifacts:
    paths:
      - html

The hard work is done by Org mode, which converts everything to HTML. The project.el file defines how this works.

To differentiate between what should become .post files and .page files I decided to put them in separate directories and then call org-coleslaw-publish-to-post and org-coleslaw-publish-to-page respectively. They both publish their results to the html/ directory.

(setq org-publish-project-alist
      '(("posts"
         :base-directory "posts/"
         :publishing-directory "html/"
         :publishing-function org-coleslaw-publish-to-post)
        ("pages"
         :base-directory "pages/"
         :publishing-function org-coleslaw-publish-to-page
         :publishing-directory "html/")))

Once this is done, the .gitlab-ci.yml says to publish everything in the html/ directory as the artifacts for this step.

Copy .post and .page files

There are still a number of files that were written in Markdown before I made ox-coleslaw; these just need to be copied into the html/ directory and published as the artifacts for this step.

copy-rest:
  script:
    - mkdir html
    - cp -r .coleslawrc *.page *.post themes/ html/
  artifacts:
    paths:
      - html

This also copies the .coleslawrc file so that when we run coleslaw from the html/ directory it has the right settings.

Converting from coleslaw to HTML

Once everything's been prepared I just need to call coleslaw.

build:
  image: registry.gitlab.com/ryuslash/blog
  needs:
    - job: generate-posts
      artifacts: true
    - job: copy-rest
      artifacts: true
  script:
    - cd html && coleslaw
  artifacts:
    paths:
      - public/

This specifies that it needs the artifacts from the previous two steps. Since they both published their artifacts into the html/ directory, this merges the results of both those steps into one directory.

The coleslaw configuration specifies that it should generate the files into the public/ directory.

(;; Required information
 ;; ...
 :staging-dir "../public/"
)

This directory is then published as the step's artifact and used by the deploy step to actually upload to my server.

About the docker image

For this step I wrote a Docker image that installs Roswell and then uses that to install coleslaw.

RUN ros install coleslaw-org/coleslaw \
    && coleslaw --help 2>/dev/null \
    && chmod a+rx /usr/local/bin/coleslaw

I call coleslaw --help because Roswell doesn't seem to actually compile coleslaw until the first time you run it, and for some reason the coleslaw executable's permissions didn't get set up correctly.

I manually build and publish this Docker image for the moment, but I intend to automate that at some point.
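For now that amounts to the usual two commands (a sketch, reusing the image name from the build job above):

docker build -t registry.gitlab.com/ryuslash/blog .
docker push registry.gitlab.com/ryuslash/blog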

Deploy

The deploy step just takes the public/ directory from the previous step and uses rsync to send it up to the server.
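I won't show my actual deploy job, but a sketch of it (the rsync flags and the destination here are placeholders, not my real setup) would look something like:

deploy:
  needs:
    - job: build
      artifacts: true
  script:
    - rsync -rtvz public/ deploy@example.com:/srv/www/blog/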

Next

Now that I've got a bit more structure in the build process it should be easier to extend it.

  • For one, I want to change the way everything looks. And now I might be able to add something like compiling some Less code into CSS.

  • I've also been thinking about running some checks as I build, such as whether all the links still work.

  • Add caching of the org timestamps and Emacs dependencies.

Making docker-compose easier with wdocker

21 February 2016 9:00 AM (wdocker | docker | docker-compose)

Introduction

wdocker is a little utility written by a friend and former colleague of mine. It allows you to define commands for it in a Dockerfile. He wrote it because he used a lot of composite commands when working on Docker images, like:

docker stop CONTAINER && docker rm CONTAINER && docker rmi IMAGE && \
    docker build -t IMAGE . && docker run --name CONTAINER IMAGE

By using wdocker to define a command, he can greatly simplify his workflow. Let's call it rebuild:

#wd# container = CONTAINER
#wd# image = IMAGE
#wd# stop = docker stop {container}
#wd# rm = docker rm {container}
#wd# rmi = docker rmi {image}
#wd# build = docker build -t {image} .
#wd# run = docker run --name {container} {image}

#wd# rebuild: {stop} && {rm} && {rmi} && {build} && {run}

FROM ubuntu

# ...

Now he can use the following command instead of the list presented before:

wdocker rebuild

Syntax

wdocker has very simple syntax. You can define variables and commands:

#wd# variable = value
#wd# command: program

Variables can be used by putting them in braces, including in other variables, as you've seen in the first example.

#wd# variable = -l
#wd# list: ls {variable}

This would run ls -l when the command wdocker list is called.

As you can see you're not limited to using docker in your wdocker commands. This property is what allows me to use wdocker in my workflow.

Combining with docker-compose

I started using Docker at work not too long ago to develop our projects in. This is nice because it allows me to completely isolate my development environments. Since we have a few processes running together, a single container isn't a great option, so I use docker-compose to define and combine the containers I need.

As a side-effect this requires me to write long commands to do something like run rspec tests:

docker-compose run --rm -e RACK_ENV=test -e RAILS_ENV=test \
    container bundle exec rspec

The alternative is defining a specialized test container with a bogus entry command (such as true) and using that, which would still make the command:

docker-compose run --rm test-container bundle exec rspec

Instead I can define a wdocker command in the Dockerfile used to build the containers used:

#wd# rspec: docker-compose run --rm -e RACK_ENV=test -e RAILS_ENV=test container bundle exec rspec

FROM ruby

#...

Now I can run the following, much shorter, command to run the rspec tests:

wdocker rspec

We also use cucumber for some other tests, which is even longer to type. Adding a cucumber command is easy:

#wd# rspec: docker-compose run --rm -e RACK_ENV=test -e RAILS_ENV=test container bundle exec rspec
#wd# cucumber: docker-compose run --rm -e RACK_ENV=test -e RAILS_ENV=test container bundle exec cucumber

FROM ruby

# ...

Now I can run wdocker cucumber as well.

The latest git version of wdocker passes any arguments after the command name directly to the command to be executed. So if I need to run the tests in a single spec file, I can just do:

wdocker rspec spec/models/mymodel_spec.rb

We now have two commands defined that are 90% the same. I always use the --rm switch to remove the started container after it's done, because I don't want a lot of containers piling up. I also always have to use bundle exec to run commands, since the containers don't use rvm or add the script directories to $PATH. We can extract these common parts into some variables:

#wd# run = docker-compose run --rm
#wd# exec = bundle exec
#wd# test = -e RACK_ENV=test -e RAILS_ENV=test

#wd# rspec: {run} {test} container {exec} rspec
#wd# cucumber: {run} {test} container {exec} cucumber

FROM ruby

# ...

Right now these commands always use the container service defined in docker-compose.yml. I could add it to the run variable, but I might need to run some commands on another container, so instead I define yet another variable:

#wd# run = docker-compose run --rm
#wd# test = -e RACK_ENV=test -e RAILS_ENV=test
#wd# run-test-container = {run} {test} container
#wd# exec = bundle exec

#wd# rspec: {run-test-container} {exec} rspec
#wd# cucumber: {run-test-container} {exec} cucumber

FROM ruby

# ...

Now you also see that variables can be nested in other variables.

If you ever forget what you've defined, or if the mix of commands and variables becomes too much for you, you can call wdocker without arguments to see the commands you've defined and the shell commands they'll run.