tag:blogger.com,1999:blog-85677316563829920632024-03-08T21:07:14.750+00:00trembitsBart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.comBlogger39125tag:blogger.com,1999:blog-8567731656382992063.post-63909838711099538532012-01-19T06:35:00.005+00:002012-01-19T06:48:53.712+00:00Extracting audio from CD images<p>I always wondered what the point was in cue/bin CD images compared to the ISO format, but never bothered to look it up.</p>
<p>A couple of days ago I was looking for music from an old Playstation game (which I own but couldn't be bothered to track down). I knew the music was on the disc as CD audio since I'd copied it to tape a dozen years or more ago, so I found and downloaded an image of the game from the web.</p>
<p>The thought then occurred -- how do I rip the music from this image? I usually use cdparanoia to get music off CDs, but could I point cdparanoia to a mounted CD image? The answer is no -- I would only be able to mount the data partition. You don't mount an audio CD before ripping music from it, cdparanoia just accesses the drive directly. Would I have to burn the image to a CD, then rip it back onto the machine? That's ridiculous.</p>
<p>Meanwhile the download finished and I saw that it was bin/cue rather than ISO. This is when I looked it up. It seems obvious now, but an ISO is just the ISO-9660 filesystem -- the data part. As such, an ISO file can't include CD audio. The bin/cue format is needed in order to include the data on the separate tracks of the CD. Converting the bin/cue to ISO would have lost me all the music.</p>
<p>So for future reference, extracting the audio (and the ISO at the same time) from bin/cue is as easy as this:</p>
<p><code>bchunk -w wild9.bin wild9.cue wild9</code></p>
<p>The <code>-w</code> there makes it write any audio as wave files so I can then convert to Flac. I assume it'd write them as raw PCM otherwise, but I've no desire to check.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com1tag:blogger.com,1999:blog-8567731656382992063.post-76200398974963306262012-01-11T00:05:00.004+00:002012-01-11T00:42:50.265+00:00Archiving mailing list messages but not replies to my own posts<p>Sometimes I sign up to a mailing list and want to see replies to my own threads but not have my inbox overtaken by everything else.</p>
<p>I can set up a filter in Gmail to label the mailing list posts and archive them, obviously, but then replies to my own threads (or threads I've participated in) are not obvious.</p>
<p>So let's make a more sophisticated filter, to match all messages with the relevant <code>List-ID</code> header except those which mention my messages in their <code>References</code> header.</p>
<p>After a few test queries it seems that a search query of the form <code>References:*@segnus</code> matches messages replying to things I sent from my laptop. The negative version <code>-References:*@segnus</code> appears to match the opposite. So it's not hard to add more negatives to avoid matching messages from my other machines and to build up a final query:</p>
<p><code>List:django-users.googlegroups.com -References:*@segnus -References:*@t900 -References:*@perihelion</code></p>
<p>This then goes in the "includes the words" box of the filter, and is set to apply a label and skip the inbox.</p>
<p>It's possible that other people send messages with <code>Message-ID</code> headers like mine and so I'd get some extra messages in my inbox, but I can live with that.</p>
<p>This doesn't seem to work quite correctly on existing messages. I'm guessing this is because Gmail sees the messages in the thread before any of them referenced me -- my original message or the existing conversation before I turned up -- and those messages match the filter, leading Gmail to lump the rest of the conversation in with that positive match and archive the lot. Fingers crossed it'll work with fresh incoming messages, though -- I'll update this post when I confirm.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-4101310646011791332011-12-27T21:21:00.007+00:002012-03-20T21:26:27.935+00:00Viewing HTML in mutt<p>I use mutt to read my email, and every now and then I get sent a message in HTML format with no plain text alternative. I don't like to load these files in a browser, since it'd go ahead and fetch any images, run scripts and so on with potential privacy risks. In other words, a message from a dubious source might phone home and confirm my email address or track me or whatever, just from opening their HTML in my browser.</p>
<p>So generally I just mind-parse the HTML. In more obfuscated cases (like the garbage output as newsletters by various websites) I manually pipe the message through lynx or similar.</p>
<p>Then when I reply to the message I have to pipe it through again if I want to quote something other than the HTML code.</p>
<p>I got fed up of this and looked for a solution. It consists of two parts -- changing up the entries in my mailcap file so that filtering the HTML to plain text is preferred to opening up a browser; and telling mutt to automatically filter text/html files using the rules it finds in the mailcap file. I've added some redundancy in to the mailcap entries so that it works both on my main machines (where I prefer pandoc since Markdown is nice to read, then I prefer lynx to either w3m or html2text, since lynx displays the links as references at the bottom) and on my phone (where only lynx is available).</p>
<p>In ~/.mailcap:</p>
<p><pre>text/html; pandoc -f html -t markdown; copiousoutput; description=HTML Text; test=type pandoc >/dev/null
text/html; lynx -stdin -dump -force_html -width 70; copiousoutput; description=HTML Text; test=type lynx >/dev/null
text/html; w3m -dump -T text/html -cols 70; copiousoutput; description=HTML Text; test=type w3m >/dev/null
text/html; html2text -width 70; copiousoutput; description=HTML Text; test=type html2text >/dev/null</pre></p>
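<p>The <code>test=</code> clauses are what make the fallback work: <code>type</code> exits zero only when the named command exists, so an entry is skipped unless its converter is installed. A quick sketch of the same check at the shell (the command names here are just stand-ins):</p>

```shell
# the same check the mailcap test= clauses rely on: 'type CMD' exits
# zero iff CMD exists, so an entry only matches when its converter is there
if type sh >/dev/null 2>&1; then
    echo "sh is available"
fi
if ! type no-such-converter >/dev/null 2>&1; then
    echo "no-such-converter is not"
fi
```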
<p>In ~/.mutt/muttrc:</p>
<p><pre>auto_view text/html</pre></p>
<p>Now HTML is automatically piped through one of those programs to turn it into plain text, and when I reply the quoted text is the plain text version rather than the raw HTML.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-72973876567633973502011-12-07T09:01:00.003+00:002012-03-20T21:30:40.369+00:00Different keys to push and pull to git repository<p>I was getting fed up of typing in the password for my home server when pulling changes from a git repository on my home server to the live server. But I'm not comfortable with putting my private key on the remote since other people have root. What to do?</p>
<p>I made two new SSH key pairs. One with a passphrase and one without. I told gitosis, which handles permissions for the git repositories on my home server, to accept my main passphraseless key with read/write access, the passphrased one also with read/write access, and the new passphraseless one with read-only access. I then uploaded the private keys for the passphrased key and the new passphraseless key to the remote host.</p>
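<p>For reference, a sketch of how the two extra key pairs might be generated. The filenames match the ones in the ~/.ssh/config later in this post; the directory and the command-line passphrase are illustrative only (in reality ssh-keygen prompts for the passphrase):</p>

```shell
# generate the two extra key pairs (a sketch -- paths and the inline
# passphrase are placeholders; normally you'd let ssh-keygen prompt)
keydir=$(mktemp -d)   # stand-in for ~/.ssh
ssh-keygen -q -t rsa -N '' -f "$keydir/id_rsa.ro"             # passphraseless: read-only access
ssh-keygen -q -t rsa -N 'placeholder' -f "$keydir/id_rsa.rw"  # passphrased: read/write access
ls "$keydir"   # lists the four generated files (private and public halves)
```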
<p>So even though they have one of my passphraseless private keys, it's only good for read access to the repositories -- data which they already have anyway.</p>
<p>To tell SSH it has multiple keys you edit the config file and add an <code>IdentityFile</code> line for each key. But when connecting to the remote SSH server only the first acceptable key is tried. So if the passphraseless key is first everything will be fine when doing a read operation but gitosis will give a no permission error message when doing a write operation, and the other key won't be tried. If the key with the passphrase is first, the passphrase is asked for no matter whether it's a read or write operation.</p>
<p>So here's the solution: use the <code>pushurl</code> option to pretend to git that we're pulling from and pushing to different hosts, set up SSH to use different keys for those two hosts, and then point them both at the same real host. Here's the configuration to illustrate.</p>
<p>Configuration in repository/.git/config:</p>
<p><pre>[remote "origin"]
    fetch = +refs/heads/*:refs/remotes/origin/*
    url = gitosis@example.com:repository.git
    pushurl = gitosis@rw.example.com:repository.git</pre></p>
<p>Configuration in ~/.ssh/config:</p>
<p><pre>Host example.com
    IdentityFile ~/.ssh/id_rsa.ro
Host rw.example.com
    IdentityFile ~/.ssh/id_rsa.rw
    HostName example.com</pre></p>
<p>So the dummy hostname <code>rw.example.com</code> triggers SSH to use the passphrased private key at the correct hostname. A passphrase prompt appears when pushing but not when pulling.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-57800660888109408432011-04-06T18:05:00.006+01:002011-04-06T18:15:19.970+01:00Automatically switch main group when logging in to a host<p>I have accounts on a couple of hosts where I don't have root powers, and on some of them I am working alongside others. We're in the same group, but our primary groups are named after ourselves. Sometimes we need to edit files the other person has made and it can be a pain if one of us forgets to set the permissions such that the other user can edit the file.</p>
<p>Instead of having to remember to run <code>chgrp semsorgrid newfile</code> and then <code>chmod g+w newfile</code> every time we make a new file, we can change our primary group after logging in with <code>newgrp semsorgrid</code> and set the file creation mask to give the group maximum privileges with <code>umask 002</code>. But we still have to remember to do that when logging in.</p>
<p>I tried putting those commands in our shell configuration files (my .zshrc and his .bashrc) but then logging in spawned shells recursively: <code>newgrp</code> starts a new shell, which reads the configuration file and runs <code>newgrp</code> again. The solution is to check which group we're in and conditionally run <code>newgrp</code> like this:</p>
<p><pre>umask 007
if [ $(groups | awk '{print $1}') != "semsorgrid" ]; then
    exec newgrp semsorgrid
fi</pre></p>
<p>This replaces the originally spawned shell with a new one whose primary group is properly set; when that new shell initializes it skips the <code>newgrp</code> command since the primary group is already "semsorgrid".</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-34496841048134241932011-03-25T14:41:00.008+00:002011-03-28T18:25:29.633+01:00Open online PHP reference from vim in a new split vim window<p>Following <a href="http://trembits.blogspot.com/2011/03/open-online-php-reference-from-vim.html">the last post</a> I went a bit further. I haven't decided which solution out of that and the following I like best yet.</p>
<p>This time I wanted the documentation to open in a new window within vim. That means grabbing the documentation HTML, cutting out what I don't want to see, rendering to plain text and putting the result in a new window.</p>
<p>Since the documentation is given as XHTML, as long as it's valid the safest way to chop out the bits I don't want is by parsing it as XML. So since PHP is my quick hacking language of choice I cooked up the following script and saved it as ~/bin/phpman-text.</p>
<p><pre>#!/usr/bin/env php
<?php
if (!isset($_SERVER["argv"][1])) {
    fwrite(STDERR, "No keyword given\n");
    exit(1);
}
$xmlstring = @file_get_contents("http://php.net/" . urlencode($_SERVER["argv"][1]));
if ($xmlstring === false) {
    fwrite(STDERR, "Failed to fetch doc page\n");
    exit(2);
}
// remove default namespace
$xmlstring = preg_replace('%\bxmlns="[^"]*"%', "", $xmlstring);
$xml = @simplexml_load_string($xmlstring);
if ($xml === false) {
    fwrite(STDERR, "Failed to parse doc page\n");
    exit(3);
}
// get content div
$content = array_pop($xml->xpath("//div[@id='content']"));
if (is_null($content)) {
    fwrite(STDERR, "Couldn't find div with ID 'content'\n");
    exit(4);
}
// remove nav bars
foreach ($content->xpath("./div") as $nav)
    if (in_array("manualnavbar", explode(" ", $nav["class"])))
        simplexml_remove_node($nav);
// get new XML
$newxml = $content->asXML();
// run lynx
$lynx = proc_open("lynx -dump -stdin", array(array("pipe", "r"), array("pipe", "w"), array("pipe", "w")), $pipes);
if (!is_resource($lynx)) {
    fwrite(STDERR, "Couldn't run lynx\n");
    exit(5);
}
// poke new XML to lynx's stdin
fwrite($pipes[0], $newxml);
fclose($pipes[0]);
// get lynx's stdout
echo stream_get_contents($pipes[1]);
fclose($pipes[1]);
$erroroutput = stream_get_contents($pipes[2]);
fclose($pipes[2]);
// close lynx
$returnval = proc_close($lynx);
// check return value
if ($returnval != 0) {
    fwrite(STDERR, "lynx exited with code $returnval -- error output follows.\n$erroroutput");
    exit(6);
}
exit(0);
function simplexml_remove_node($node) {
    $domnode = dom_import_simplexml($node);
    $domnode->parentNode->removeChild($domnode);
}
?></pre></p>
<p>Then, with help from the vim wiki, I came up with this, to be added to my .vimrc:</p>
<p><pre>function! OpenPhpFunction (keyword)
exe "12new"
exe "silent read !phpman-text ".substitute(a:keyword, "_", "-", "g")
exe "set buftype=nofile bufhidden=delete filetype=php readonly"
exe "1"
endfunction
autocmd FileType php map <buffer> K :call OpenPhpFunction('<C-r><C-w>')<CR></pre></p>
<p>This mostly works. It loads the PHP documentation into a temporary new buffer and sets the filetype to PHP so that bits of text in <code><?php ?></code> tags are highlighted as PHP code. There are a few bad things -- some of the blocks of PHP code in the manual don't have the end tag and so highlighting continues beyond the end of the code, some of the manual pages (probably due to dodgy comments) aren't valid XML and so won't go through the script, and lynx's output isn't completely ideal.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com1tag:blogger.com,1999:blog-8567731656382992063.post-15115268992852887822011-03-09T17:18:00.005+00:002011-03-09T17:23:17.568+00:00Open online PHP reference from vim<p>Pressing K when over a keyword in vim usually opens the keyword's manpage. In Python it invokes <code>pydoc</code> instead. I wanted it to open PHP's online documentation when I press it over a function name in PHP. Easy.</p>
<p>I wrote a tiny shell script in my ~/bin directory first, phpman:</p>
<p><code>#!/bin/sh<br>sensible-browser http://php.net/"$*"</code></p>
<p>I then added to my .vimrc the line</p>
<p><code>autocmd FileType php set keywordprg=phpman</code></p>
<p>Done. And using <code>sensible-browser</code> has the added bonus that it'll run whatever my preferred graphical browser is when I am in X or a textmode browser when I'm not.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-58678859272404977532010-10-10T01:31:00.004+01:002010-10-12T13:25:15.599+01:00Merging files with unison and vimdiff -- update<p>I've just updated my post from February about merging files with unison and vimdiff -- I figured out how to do so without launching a new terminal emulator. The solution uses screen. See <a href="http://trembits.blogspot.com/2010/02/merging-unison-conflict-with-vim.html">the updated post</a>.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-5767785344331908662010-08-30T15:28:00.003+01:002010-08-30T15:31:27.617+01:00Committing part of a file<p>A fantastic feature of git I only found a few days ago is committing just part of a file. Usually I have the foresight to use <code>git stash</code> but sometimes that doesn't occur or it's not practical for whatever reason, such as only realizing after the edits that the changes in a particular file should be two commits rather than one.</p>
<p>When this happens you can interactively choose which parts of the changes to stage for committing. To do that:</p>
<p><code>git add -p file.php</code></p>
<p>Git then shows each part of the patch in sequence, asking what you want to do with them. You can stage the change, not stage it, split it into smaller chunks, decide later or even edit the patch manually.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-40845773088697231572010-08-24T13:46:00.005+01:002010-08-24T18:01:19.595+01:00Options for lpr<p>I keep on forgetting these so it's about time I wrote down the ones I use most often.</p>
<p>You can list the current options with <code>lpoptions</code> (it definitely includes the options I've set as defaults; I'm not sure whether it also shows all other default options or just those which the CUPS GUI on my machine has set). I've set it to print full duplex by default in the past with <code>lpoptions -o sides=two-sided-long-edge</code>, so that shows up when I run <code>lpoptions</code> and everything's printed double sided by default.</p>
<p>Sometimes I don't want to print double sided and so I'll do <code>lpr flyer.pdf -o copies=8 -o sides=one-sided</code> to print off a bunch, one per sheet. (Could also do <code>-#8</code> as an option to lpr for eight copies but that requires escaping.)</p>
<p>I can also choose a printer other than the default one, such as the colour laser, with <code>lpr poster.pdf -P renoir</code>.</p>
<p>To do a quick printout of a PDF 2-up and reordered so it can just be folded up into a pamphlet fresh out of the printer I can do this:</p>
<p><code>pdftops somebook.pdf - | psbook | lpr -o number-up=2 -o sides=two-sided-short-edge</code></p>
<p>To print an A4 page on an A3 printer and fill the page I <i>should</i> (not tested) be able to do this:</p>
<p><code>lpr a4poster.pdf -o media=A3 -o fit-to-page</code></p>
<p>A <code>landscape</code> option might be necessary, I guess depending on the input. I'll come back and confirm what works at some point.</p>
<dl>
<dt><code>copies=<i>num</i></code></dt>
<dd>Number of copies. Also possible to use <code>-#<i>num</i></code> but that might need escaping.</dd>
<dt><code>landscape</code></dt>
<dd>Rotates 90 degrees but doesn't resize the page, so a rotated A4 portrait PDF won't fit on A4 media. It doesn't complain; it just cuts off whatever doesn't fit.</dd>
<dt><code>number-up=<i>num</i></code></dt>
<dd>The number of pages to print on each side.</dd>
<dt><code>number-up-layout=lrtb|lrbt|rltb|rlbt|tblr|tbrl|btlr|btrl</code></dt>
<dd>Choose how the pages are laid out when printing multiple pages on a side.</dd>
<dt><code>sides=one-sided|two-sided-long-edge|two-sided-short-edge</code></dt>
<dd>Choose how duplex works -- how the paper is turned over before the next page is printed. <code>two-sided-long-edge</code> is desirable for 1-up printing, <code>two-sided-short-edge</code> for 2-up.</dd>
<dt><code>media=A4|whatever</code></dt>
<dd>Choose the media size. If the printer only does as large as A4 and you tell it to use A3 and <code>fit-to-page</code> it'll print on A4, sideways and too big. Doesn't sound useful but it potentially is.</dd>
<dt><code>page-set=odd|even</code></dt>
<dd>Print only odd or even pages.</dd>
<dt><code>outputorder=normal|reverse</code></dt>
<dd>This takes effect after <code>number-up</code> so you still get 1234 on a page, not 4321.</dd>
<dt><code>cpi=<i>charsperinch</i></code></dt>
<dd>The number of characters per inch when printing plain text (default 10).</dd>
<dt><code>lpi=<i>linesperinch</i></code></dt>
<dd>The number of lines per inch when printing plain text (default 6).</dd>
<dt><code>columns=<i>num</i></code></dt>
<dd>The number of columns to format when printing plain text.</dd>
<dt><code>page-(left|right|top|bottom)=<i>margin</i></code></dt>
<dd>The margin sizes in points when printing plain text.</dd>
<dt><code>Collate=true|false</code></dt>
<dd>Whether or not to collate the output pages -- that is, print all pages of the first copy followed by all pages of the next. By default it won't -- it'll print all copies of page 1, then all copies of page 2 and so on.</dd>
<dt><code>page-ranges=<i>page</i>,<i>page</i>,<i>start</i>-<i>end</i></code></dt>
<dd>Choose which pages to print. This takes effect after <code>number-up</code> so the second page is 5678 when printing 4-up. They're always printed in ascending order, regardless of the order the ranges are given in.</dd>
</dl>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-55291563076152983562010-07-07T23:40:00.005+01:002010-07-07T23:51:00.883+01:00Nice bit of piping<p>I just made a nice little pipeline. I love the Linux shell.</p>
<p><code>cat file | tr A-Z a-z | tr -c a-z "\n" | sort | uniq -c | sort -nr</code></p>
<p>That gives a nice ordered count of the most used words in the file. It's pretty dumb -- it breaks words at anything which isn't an alphabetical character -- but it does the trick.</p>
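<p>A quick demonstration on a throwaway string, with printf standing in for the file:</p>

```shell
# same pipeline, fed a sample string instead of a file;
# "the" comes out on top with a count of 3
printf 'the cat sat on the mat. The END' \
    | tr A-Z a-z | tr -c a-z '\n' | sort | uniq -c | sort -nr | head -3
```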
<p>The first tr changes uppercase characters to lowercase, the second changes anything which isn't alphabetic to a newline. Then the words are sorted alphabetically, then uniq counts successive identical lines and outputs the count with the word, then that list is sorted numerically in descending order. Lovely.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-64170783242982445322010-06-23T16:42:00.002+01:002010-06-23T16:45:51.529+01:00Make a single-line XML stream pretty in vim<p>Usually I view XML in Firefox as that's the quickest way to get a nice tree I can explore. But when I might want to edit it until today I've been loading it into vim and then if necessary adding newlines manually or using a fairly dumb regex to do it for me. Today I made a slightly more sophisticated one.</p>
<p><code>:%s/>\s*</>\r</g</code> adds the newlines in decent places and then <code>gg=G</code> indents it all.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-68361763042591459942010-06-23T11:45:00.007+01:002010-06-23T13:26:19.190+01:00Moving the pointer to the top left corner of the screen<p>I wanted a keystroke to send the pointer to the top left of the screen. Usually I do this by flipping to the other screen and back but on my laptop (with only one screen) this doesn't work.</p>
<p>The solution was to use xmousepos to get the mouse position (we want to know which screen we're on) and xte to set it. Both are part of the xautomation package in Ubuntu/Debian.</p>
<p>But how big are the screens? I'm using xrandr and assuming that the resolutions listed are in order left to right and no screens are up or down or rotated or anything crazy like that.</p>
<p>Here's my script.</p>
<p><pre><code>#!/bin/sh
# get current horizontal mouse position
mousex=$(xmousepos | awk '{print $1}')
# walk through widths of screens (assuming they're in order left to right) until
# the mouse is on the current screen, building offset from left
offset=0
for width in $(xrandr | grep '*' | awk '{print $1}' | sed -r 's/x.*//'); do
    [ $mousex -lt $(expr $offset + $width) ] && break
    offset=$(expr $offset + $width)
done
# move the pointer to the top left of the screen
xte "mousemove $offset 0"</code></pre></p>
<p>xte can send other fake input to X too such as clicking, pressing and releasing keys and so on. Potentially very useful.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-78656675520459324612010-06-09T17:01:00.005+01:002010-06-23T13:26:48.372+01:00Giving standard input to a program run in a shell script<p>I modified my "lock screen" bash script just now to also work when X isn't running -- that is, I'm on a virtual console or shelled into my machine.</p>
<p>The script as it was did this: use purple-remote to set my status to "away" (if it was "available" -- otherwise don't touch it), then run i3lock to lock the X screen, then switch the screen off to save power, then when i3lock exits restore my IM status if it was changed.</p>
<p>I added an <code>if [ $DISPLAY ]</code> test to branch for X and non-X, putting vlock in the place of the i3lock and xset commands for the non-X branch. It didn't like this, though -- vlock complained that it wasn't a virtual console.</p>
<p>In order to get it working all I had to do was redirect the script's standard input to vlock like this: <code>vlock -a &lt;/dev/stdin</code>.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-61730575739526925182010-05-25T16:56:00.002+01:002010-06-01T10:46:01.701+01:00Printing from the command line<p>I just selected some instructions from a Pidgin window and then typed <code>xsel | lpr</code> in a terminal. Fantastic.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-57375389939034105952010-05-25T14:38:00.006+01:002010-05-25T14:44:31.766+01:00Debugging an already-running process<p>I was getting a segfault in <a href="http://pms.sourceforge.net/">pms</a>, an ncurses application I contribute to, so I restarted the program within gdb. But since the bug seems to be something to do with resizing the console window the segfault never appeared again -- gdb doesn't pass the resize signal (or perhaps the new dimensions) on to the program running within it.</p>
<p>I Googled around for a way to run gdb in a separate console window from the ncurses application. It's rather simple: change to the directory containing the binary so it can find the right one to debug against and run <code>gdb -p $(pidof pms)</code>.</p>
<p>That'll attach gdb to the already running process and pause it. Type <code>continue</code> in the gdb window and await the crash.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-90475220980270854262010-04-30T14:02:00.004+01:002010-04-30T17:13:41.951+01:00Ruby dependencies<p>After getting a message that a module which was definitely installed was not found, a colleague eventually had the idea of checking the require path by running <code>puts $:</code> in irb. Sure enough there was a different module with the same name earlier in the path than the one gem had installed. I uninstalled Ubuntu's own libxml-ruby and libxml-ruby1.8 and all was well. Interestingly, though, one of these (I forget which) was required to install the gem to begin with -- I guess it needed some header it provided. There's no separate -dev package (at least in this old release of Ubuntu).</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-20505085222243241032010-02-09T18:12:00.005+00:002010-02-09T18:18:42.062+00:00Which package did a particular file come from<p>I sometimes find myself trying to figure out which package a particular file came from. It's usually a binary I want available on a different Debian-based machine and an <code>aptitude search binaryname</code> isn't returning anything.</p>
<p>I think I've done this a few times before with dpkg-query commands of some kind but it was never simple enough to remember. I Googled the problem again today and came up with <a href="http://www.howtogeek.com/howto/ubuntu/using-ubuntu-what-package-did-this-file-come-from/">an article</a> introducing dlocate. It just has to be installed (its package is helpfully called "dlocate") and then to find out where pdfimages came from I just have to type <code>dlocate pdfimages</code>.</p>
<p>But that gave me three lines of output -- I have a python script called pdfimages.py which came from python-reportlab, a gzipped manpage which came with poppler-utils and the binary I was actually looking for, also from poppler-utils. So I could have made the query more specific by giving a full path to the file -- quickest way with a binary is of course using which: <code>dlocate $(which pdfimages)</code>.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-14281052843729720812010-02-04T13:30:00.008+00:002010-10-10T01:30:43.971+01:00Merging a unison conflict with vim<p>I use unison to synchronize various files. If I've edited the same file on both ends I can pick a version to keep or, until now, I aborted, manually merged them with vimdiff or some other tool and then started synchronizing again.</p>
<p>But unison is able to call an external tool to merge for you. I figured out how to get it to use vimdiff -- add the option <code>-merge "Name * -> urxvt -e vimdiff CURRENT1 CURRENT2"</code>.</p>
<p>I've used urxvt rather than my usual urxvtcd because urxvtcd immediately forks and unison continues as soon as the process it runs exits. So why did I run a terminal emulator at all rather than just running vimdiff in the existing terminal? It won't display. Not sure why -- probably to do with it not realizing it has a virtual terminal to output on or something.</p>
<p><strong>Update on 2010-10-10:</strong> I had another shot at figuring it out. I tried using screen -- "Must be connected to a terminal". Then I tried using <code>ssh -t</code> and screen -- same issue (but I think the error was from SSH this time rather than screen). Finally I found an option in screen's manual page which can start a screen session detached but yet not fork. That'll do. So now in default.prf (which my other unison config files include) I have these two lines:</p>
<pre><code>merge = Name * -> screen -DmS unisonmerge vimdiff CURRENT1 CURRENT2
merge = Name .* -> screen -DmS unisonmerge vimdiff CURRENT1 CURRENT2
</code></pre>
<p>The second line is there because I found just now that dotfiles weren't included when only the first line was present.</p>
<p>So here's what happens when a merge is now requested. A new screen session is started with the session name "unisonmerge". That's started detached but the process doesn't fork as it usually would and so unison stops, waiting. We get the shell to stop the process and give us a prompt by pressing ^Z, then attach the new screen session with <code>screen -RS unisonmerge</code>. The vimdiff instance is running in that -- we merge the files (modifying only the one we want to keep, which will end up on both machines) and then exit. The screen session ends and we're back to the prompt. Then we bring unison back to the foreground with <code>fg</code>. Unison continues.</p>
<p>So now I can run unison on a headless server over SSH, merging when necessary.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-81569127034941994342010-01-14T14:10:00.003+00:002010-01-15T15:22:42.870+00:00Postpone current command and run another first in zsh<p>Sometimes I am halfway through writing a command and realize I need to run something else first. Just now I was writing a git commit command and realized I should edit the TODO file first. So usually I either ^U to clear the command and type <code>vim TODO</code> before writing the commit command again or maybe open a new terminal or maybe even ^A to go to the start of the line and add <code>vim TODO;</code> to the beginning.</p>
<p>But zsh has a buffer stack which comes in useful here. The default key binding for <code>push-line</code> is ESC q. This pushes the current buffer (command in progress) onto the stack and clears the buffer, then pops from the stack next time the prompt is reached. So halfway through typing a commit command I do ESC q, type <code>vim TODO</code>, make whatever changes and exit and I'm back at a commandline with my half-typed commit command restored.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-16660213939522617172010-01-12T17:09:00.001+00:002010-01-12T17:10:51.974+00:00Transcription of Stream of Consciousness by Textures<p>Forgot to post that I've transcribed <a href="http://tremby.net/tabs/textures%20--%20stream%20of%20consciousness.tab">Stream of Consciousness</a>. It has some crazy fast bits.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com1tag:blogger.com,1999:blog-8567731656382992063.post-82029123888931413922010-01-12T17:01:00.002+00:002010-01-12T17:08:44.692+00:00git stash<p>I'm currently working on a project using git as my version control system.</p>
<p>Often I'm halfway through making some change or other and an unrelated bug becomes apparent. It's an easy fix and so I go ahead and fix it but then I can't make a clean commit. I found out today how to handle this.</p>
<ol><li>Before fixing the unrelated bug, save the work in progress by running <code>git stash</code>.</li>
<li>The source tree is now clean -- the unfinished changes have disappeared. Reload the file in vim and fix the small bug.</li>
<li>Commit the patch.</li>
<li>Restore (and merge if necessary) the stashed tree with <code>git stash pop</code>.</li>
</ol>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-71436624972547733382009-12-22T01:50:00.003+00:002009-12-22T01:53:59.358+00:00Transcription of Drive by Textures<p>I learnt Denying Gravity the other day (but didn't transcribe it). Today I thought since I know that and Regenesis maybe I'd learn the whole album. Transcribed Drive just now -- it's <a href="http://tremby.net/tabs/textures%20--%20drive.tab">up on my tabs page</a></p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-84267370950530113882009-12-16T15:34:00.005+00:002009-12-22T01:50:23.779+00:00Vim can read zip files<p>I accidentally opened a zip file with vim just now. It gives a directory listing just like when opening a directory. Fantastic.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0tag:blogger.com,1999:blog-8567731656382992063.post-14944440756791146952009-12-04T11:04:00.005+00:002009-12-16T15:12:23.517+00:00Transcription of Old Days Born Anew by Textures<p>I finished transcribing Old Days Born Anew by Textures to guitar tablature. It's up with my other plaintext tabs at <a href="http://tremby.net/tabs">my tabs page</a>.</p>Bart Nagelhttp://www.blogger.com/profile/09322287750886186240noreply@blogger.com0