Canon LBP2900 printer on Linux
Posted on: January 16th, 2014

Two options come up when searching for a Canon LBP2900 driver for Linux:
- Download the proprietary Canon driver. Installation is complicated: it involves installing two RPM/DEB packages, cndrvcups-common and cndrvcups-capt, adding the printer with the included tool ccpadmin, and then adding a CUPS printer using the ccp protocol (see Tutorial).
- Use the open-source foo2capt driver.
Both pieces of software are quite dated, which made installation difficult to begin with. I didn’t manage to use the printer with either of them.
A new open-source driver, captdriver, is being developed (mentioned here), and it actually worked for me. Follow these steps in a root shell:
apt-get install build-essential git autoconf libtool libcups2-dev libcupsimage2-dev # This is on Ubuntu, might be different on your system
git clone https://github.com/agalakhov/captdriver.git # Check out source code
cd captdriver
autoreconf -i
./configure
make
cp src/rastertocapt /usr/lib/cups/filter/ # On some systems this might be /usr/libexec/cups/filter?
cp Canon-LBP-2900.ppd /usr/share/ppd/custom/
Now go ahead and add your USB printer, selecting the driver “Canon LBP-2900 CAPT GDI printer, 0.1.0” from the list.
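If you prefer the command line over the CUPS web interface, the printer can also be added with lpadmin. This is only a sketch: the queue name is arbitrary, and the usb:// device URI is an assumption, so check the output of lpinfo -v for the real one.

lpinfo -v                                   # list available device URIs; look for the printer's usb:// entry
lpadmin -p LBP2900 -E -v "usb://Canon/LBP2900" -P /usr/share/ppd/custom/Canon-LBP-2900.ppd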
Note: Sometimes, printing gets stuck with the message “Rendering completed”. In that case, it helped to just turn the printer off and on again.
Importing a PKCS #12 certificate in gpgsm (or: How to use a CAcert certificate in KMail)
Posted on: June 9th, 2012

If you don’t want to read all this, jump down to the solution.
When you create a client certificate with CAcert, the workflow is as follows (if I understand it correctly):
- On the CAcert website, you click the button to generate a client certificate
- The browser (usually Firefox) generates a private key and sends a Certificate Signing Request for that key to the CAcert website
- CAcert signs the key and stores the signed public key on its website
- By clicking the link to the signed certificate, the browser imports the signature and associates it with the private key
As CAcert does not have access to your private key (only your browser has it), you cannot simply download your private key from the CAcert website and import it into your e-mail client. Instead, you have to export it from your browser and then import it into your e-mail program.
In my case, I used Firefox to generate the client certificate, and I want to use it in KMail. KMail uses Kleopatra as an encryption back-end, which in turn uses gpgsm, a tool to manage X.509 (SSL) certificates that is part of GnuPG. Exporting the certificate from Firefox is easy, but that produces a PKCS #12 file, which in my case was not that easy to import into gpgsm.
My attempts
The solution linked in the CAcert Wiki suggests importing the PKCS #12 file somewhere in the settings dialogue of Konqueror. In my case (KDE 4.8.3), that menu entry does not seem to exist in Konqueror anymore, so this solution does not work.
I then tried to import the certificate using the “Import” function of Kleopatra. That presented me with the following error message:
An error occurred while trying to import the certificate /tmp/cert.p12:
Decryption failed
So I tried to import the certificate directly using gpgsm --import cert.p12. Apparently, asking for the password that the PKCS #12 file is encrypted with does not work there; this is the output:

gpgsm: gpgsm: GPG_TTY has not been set - using maybe bogus default
gpgsm: gpg-protect-tool: error while asking for the passphrase: End of file
gpgsm: error running `/usr/libexec/gpg-protect-tool': exit status 2
gpgsm: total number processed: 0
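(As an aside, the first warning refers to the GPG_TTY environment variable, which the GnuPG documentation recommends setting so that passphrase prompts know which terminal to use. Whether that alone would have fixed this particular import is unclear, but it is cheap to try:)

export GPG_TTY=$(tty)   # tell GnuPG which terminal to use for passphrase prompts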
I then found out how to convert a PKCS #12 file into an X.509 private key file on this page: openssl pkcs12 -in cert.p12 -out cert.pem -nodes. That generates an X.509 file that contains the private and public key, as well as the public key of the CA (by adding the -nocerts option, only the private key is exported, but that doesn’t fix the following problem). On trying to import that file (gpgsm --import cert.pem), I received the following output:

gpgsm: no issuer found in certificate
gpgsm: basic certificate checks failed - not imported
gpgsm: no issuer found in certificate
gpgsm: basic certificate checks failed - not imported
gpgsm: ksba_cert_hash failed: No value
gpgsm: total number processed: 2
gpgsm: not imported: 2
Then I came across an ancient mailing-list post that suggests executing the following command: gpgsm --call-protect-tool --p12-import --store cert.p12. That seems to be pretty low-level stuff that calls the backend of gpgsm directly. A familiar error message was produced:

gpg-protect-tool: error while asking for the passphrase: End of file
I finally found a working solution by passing the password of the PKCS #12 file manually using the -P option, as described in the mailing-list post: gpgsm --call-protect-tool --p12-import --store -P password cert.p12. That did the trick. Note that this only imports the private key. In order to use the certificate for signing and encrypting e-mails, you also need the public key with the CA’s signature in your keyring; in my case that was imported when I tried to import the X.509 file above.
The solution
Execute the following two commands, where cert.p12 is the certificate file you exported from Firefox and password is the password it is encrypted with.
openssl pkcs12 -in cert.p12 | gpgsm --import
gpgsm --call-protect-tool --p12-import --store -P password cert.p12
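For convenience, the two steps can be wrapped in a small script (a hypothetical helper, not part of gpgsm). It prompts for the password so that it does not have to be typed on the command line; note that gpg-protect-tool still receives it as an argument:

#!/bin/sh
# Usage: import-p12.sh cert.p12
P12="$1"
# Import the certificate part; openssl prompts for the PKCS #12 password itself.
openssl pkcs12 -in "$P12" | gpgsm --import
# Import the private key; read the password here instead of typing it on the command line.
printf "PKCS #12 password (again): "
stty -echo; read PASSWORD; stty echo; echo
gpgsm --call-protect-tool --p12-import --store -P "$PASSWORD" "$P12"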
Why Apache Velocity sucks
Posted on: March 29th, 2011

I was just giving Apache Velocity a try because it seems to be the most popular Java template engine on the Internet. I don’t really understand why, as it seems to be completely immature and badly designed.
References to undefined variables
When you reference a variable $test in Velocity and this variable is not defined, the string $test is returned instead. To avoid this (for example in the case of optional parameters), you can use the Quiet Reference Notation and write $!test, in which case an empty string is returned. Stupidly though, this behaviour does not work consistently. When you use the variable as a parameter instead of printing it directly, for example, $esc.html($!test) does not output an empty string as expected but instead the string $esc.html($!test). You have to use the notation $!esc.html($test) instead. How stupid is that?
To avoid mistakes in your template, you can set the property runtime.references.strict to true, in which case undefined references aren’t replaced by their names; instead, an exception is thrown. In that case, however, $!test also throws an exception instead of returning an empty string!
Now, when the variable $test is defined and you actually want to output the string $test instead of its value, you do this by writing \$test instead. This only works when $test is defined, though; when it is not defined, \$test is output instead of $test. So depending on whether the variable is defined or not, you have to write either \$test or $test to get the string $test. Things get very confusing when you are dealing with optional parameters. The funny thing is that every undefined reference produces an error message in the log file, and because of that there is an official “better” way to do this: defining a variable of your own that contains the value “$”!
Another way is to set the configuration property runtime.references.strict.escape to true. In that case, a backslash is also interpreted as an escape character in front of a non-existent reference. Stupidly, this property is (like most other configuration properties) only documented in the manual of the most recent development version. Its name is also confusing, as it is only remotely related to runtime.references.strict and in no way a sub-property of it.
Output formatting
This code:
<ul>
  #foreach( $a in $b )
    #if( $a )
      <li>$a</li>
    #end
  #end
</ul>
Will produce the following output (assuming that $b is a list [1,2,3]):
<ul>
            <li>1</li>
              <li>2</li>
              <li>3</li>
      </ul>
Notice how messed up the indentation is? To be fair, Velocity, in contrast to JSP, is at least intelligent enough not to output the newlines of lines that contain only Velocity directives. But it keeps all the other whitespace from those lines?!
Documentation
As mentioned before, most configuration properties are only documented in the manual of the most recent development version. When you use the VelocityViewServlet from the VelocityTools, there is, in addition to the velocity.properties file, a settings.xml file where you can define global variables that can be referred to in templates. The following important things are missing from the documentation:
- It describes how to create string, number, boolean, list and object variables. However, Velocity also knows Map variables, and the documentation does not say how to define these in the settings.xml file; it is probably not possible. It also does not specify how to define list items that have the value false, n or similar (as those are converted to booleans) or that contain a comma (as that is the list separator). This is probably also not possible; at least it does not work using a backslash (or, as the documentation sometimes calls it, a “forward slash”).
- When defining objects in that file (that is, instances that are created from a given class), you can pass properties to those objects that are set either through setters or through a method called configure. The documentation does not mention that there are some predefined properties that you can use. Those are:
  - servletContext (javax.servlet.ServletContext)
  - request (javax.servlet.http.HttpServletRequest)
  - response (javax.servlet.http.HttpServletResponse)
  - log (org.apache.velocity.runtime.log.Log)
  - velocityContext (org.apache.velocity.tools.view.context.ChainedContext)
  - velocityEngine (org.apache.velocity.app.VelocityEngine)
  - session (probably javax.servlet.http.HttpSession)
  - key (java.lang.String, the key of the tool in settings.xml)
  - requestPath (java.lang.String)
  - scope (java.lang.String, the scope of the toolbox in settings.xml)
  - locale (java.util.Locale)
Properties and Methods
Properties in Velocity either refer to a value in a hashtable or to the return value of a getter. Suppose you are working with an object of this class, though:
public class SampleData {
    public final int value1;
    public final String value2;
}
Trying to access those properties using $sampleData.value1 will not work; instead, a getValue1() method has to be added to the class (see VELOCITY-12).
Also stupid is that Velocity’s naming conventions for properties and methods don’t match the naming conventions of Java, which allows a _ and a $ sign in every part of an identifier. This means that properties and methods that start with an underscore or that contain a dollar sign can’t be accessed from Velocity. For gettext, for example, I use a method with the simple name _ (which is quite a common way to use gettext). In order to use Velocity, I will now have to change this class.
Encoding videos for Samsung mobile phones
Posted on: July 6th, 2010

I have tried around a bit with encoding videos to play them on my Samsung B2100 mobile phone. I could not find any information about which formats it supports; no matter what video I tried to play, it said “Unsupported content”.
Then I tried recording a video on the phone and encoding a film with ffmpeg to match its container format, video and audio codecs, bitrates, resolution and framerate. The device still complained about “Unsupported content”.
So I looked for Samsung PC Studio (an official application by them to encode videos for Samsung mobile phones) and found it after hours of searching in some corner of their website. It takes a huge amount of hard disk space, requires Windows and crashes regularly. But at least it managed to convert some WMV and AVI films (it couldn’t read MKV and MOV, though). The resulting video was way too dark, but at least it was “supported content”.
“Supported content” has exactly these properties:
- Filename: not too long, ending in .mp4
- Container format: MP4
- Video format: mpeg4, 176×144, 95 kb/s, 15 fps
- Audio format: aac (the bitrate is variable; I use 64 kb/s, 128 kb/s also works)
The command to encode a video with ffmpeg is:

ffmpeg -i <Input file> -s "176x144" -r 15 -ab 64k -acodec aac -strict experimental -b 95k -vcodec mpeg4 <Output file>

(Note: I added the -strict experimental parameter later, as ffmpeg 0.6 refused to use aac without it.)
This actually resizes your video to 4:3, which might not be desirable. I have written a small shell script that performs the conversion and adds black padding to the sides to keep the correct aspect ratio. It needs bash, bc and ffmpeg. Usage:

- video2samsung <Input file> <Output directory> – The encoded file is placed inside the output directory, with .mp4 as its extension. If no output directory is specified, the directory of the input file is used.
- video2samsung <Input file> <Output file> – Specify the filename for the encoded video.
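The script itself is not reproduced here, but the idea behind it can be sketched. The following is a hypothetical reimplementation, not the original video2samsung; it assumes ffprobe is available to read the source resolution and a newer ffmpeg that has the scale and pad filters (older releases used -padtop/-padbottom options instead):

#!/bin/bash
# Hypothetical sketch of the aspect-ratio-preserving conversion, not the original script.
set -e
in="$1"
out="$2"

# Read the source resolution (assumes ffprobe from a reasonably recent ffmpeg).
IFS=, read w h < <(ffprobe -v error -select_streams v:0 -show_entries stream=width,height -of csv=p=0 "$in")

# Scale to fit into 176x144 while keeping the aspect ratio.
if [ "$(echo "$w * 144 >= $h * 176" | bc)" -eq 1 ]; then
    sw=176; sh=$(echo "176 * $h / $w" | bc)
else
    sh=144; sw=$(echo "144 * $w / $h" | bc)
fi
sw=$(( sw - sw % 2 )); sh=$(( sh - sh % 2 ))   # mpeg4 wants even dimensions

# Centre the scaled video on a black 176x144 canvas and encode with the parameters above.
ffmpeg -i "$in" -vf "scale=$sw:$sh,pad=176:144:$(( (176 - sw) / 2 )):$(( (144 - sh) / 2 ))" \
    -r 15 -ab 64k -acodec aac -strict experimental -b 95k -vcodec mpeg4 "$out"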
Removing useless newlines from JSP output
Posted on: April 27th, 2010

What has always bothered me the most about JSP is the hundreds of useless newlines in the output it produces. Take the following example code:
<html>
  <body>
    <ul>
<% for(int i=0; i<3; i++) { %>
      <li><%=i%></li>
<% } %>
    </ul>
  </body>
</html>
Now look at the output:
<html>
  <body>
    <ul>

      <li>0</li>

      <li>1</li>

      <li>2</li>

    </ul>
  </body>
</html>
The additional newlines are inserted because the line breaks after the %> are preserved. Not only does this look stupid, it can also create invalid XML if the newlines occur before the <?xml declaration.
One solution would be to omit the newlines after the %>, but obviously this would make correct indentation of the HTML code inside your JSP file impossible, as some lines would be preceded by two additional characters.
JSP 2.1 introduced a trimDirectiveWhitespaces parameter [source], which is set in the @page directive: <%@page trimDirectiveWhitespaces="true"%>. Look at what the HTML output looks like now:
<html>
  <body>
    <ul>
<li>0</li>
<li>1</li>
<li>2</li>
</ul>
  </body>
</html>
The trimDirectiveWhitespaces parameter not only trims the newlines after the %>, but it trims all whitespace, including the indentation of the next line! I cannot imagine how anyone could have such a stupid idea.
My solution is to replace the newlines automatically in the JSP files before deploying them to the web container. This way, both the JSP file I write and the HTML code that is created look proper. To perform this task, there is a Maven plugin called maven-replacer-plugin. This is the configuration I use in my pom.xml file:
<pluginRepositories>
    <pluginRepository>
        <id>maven-replacer-plugin repository</id>
        <url>http://maven-replacer-plugin.googlecode.com/svn/release-repo</url>
    </pluginRepository>
</pluginRepositories>

<build>
    <plugins>
        <plugin>
            <groupId>com.google.code.maven-replacer-plugin</groupId>
            <artifactId>maven-replacer-plugin</artifactId>
            <version>1.3.1</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>replace</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <includes>
                    <include>target/${project.build.finalName}/**/*.jsp</include>
                </includes>
                <basedir>${basedir}</basedir>
                <replacements>
                    <replacement>
                        <token>(--%>)(\n)</token>
                        <value>$2$1</value>
                    </replacement>
                    <replacement>
                        <token>(%>)(\n)</token>
                        <value>$2$1</value>
                    </replacement>
                </replacements>
                <regexFlags>
                    <regexFlag>MULTILINE</regexFlag>
                </regexFlags>
            </configuration>
        </plugin>
    </plugins>
</build>
This moves the newline that follows a %> to just before it. JSP comments (ending with --%>) are handled separately so that they don’t break. Because the newline is only moved, not removed, the line numbers in error messages remain correct. Look at the output now:
<html>
  <body>
    <ul>
      <li>0</li>
      <li>1</li>
      <li>2</li>
    </ul>
  </body>
</html>
Unfortunately, I have not yet found a way to use this together with maven-jspc-plugin, which accesses the JSP files from the source folder directly, so the replacement does not have any effect when you pre-compile your JSP files.
Update: Feel free to copy from my adventurous way of compiling the whitespace-fixed JSP files.
Making backups with Git
Posted on: November 19th, 2009

I am the maintainer of a website where I have very restricted access to the files on the webspace. The only way to access them is via sftp or scp, and the only commands I may execute when connecting to the server using ssh are some basic ones like ls, echo and rsync. So the only proper way to get a backup of the files is to copy them using rsync. I had been looking for a way to keep these regular backups in a revision control system for a long time, but with SVN it just looked too complicated to maintain automatically, as you have to check which directories and files have been added or removed and then svn add or svn rm them.
So until now I was using the rsync hardlink mechanism. When creating the daily backup copy, I passed the directory of the previous backup to rsync using the --link-dest parameter. rsync then did not create a new copy of files that were unchanged from the old backup, but instead hard-linked them, so they did not require any additional disk space.
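For illustration, such a daily run looks roughly like this (the host and the directory layout are made up for the example):

today=$(date +%F)
yesterday=$(date -d yesterday +%F)
# Unchanged files are hard-linked against yesterday's backup instead of being copied again.
rsync -a --delete --link-dest=../$yesterday user@host:/var/www/ /backup/$today/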
With Git, managing the backups becomes very easy. I just bring my Git working tree up to the current state of the webspace and then run git add ., which automatically stages all newly created files. As Git doesn’t track directories, there is no trouble with them. When I then run git commit -a, files that have been removed from the directory are automatically removed from Git as well! I don’t have to look for them separately, as I would have to with SVN.
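Put together, the whole backup run fits into a few lines of shell; the paths and the remote host below are placeholders, not my actual setup:

#!/bin/bash
set -e
cd /backup/website                                  # local Git working tree
rsync -a --delete user@host:/var/www/ ./www/        # mirror the current state of the webspace
git add .                                           # stage new and changed files
git commit -a -m "Backup $(date +%F)" || true       # -a also records deletions; don't fail if nothing changed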
With my old rsync system, the problem was that some large files changed slightly but very often, such as an SQLite database to which an entry was added. As the file had changed, a hardlink could not be created and the whole new file had to be stored, which used up a lot of disk space over time. I don’t know whether Git’s mechanism works better here by storing only the differences between versions of these files.
With the old rsync system it was very easy to remove old backups: I could just delete a backup directory, and since hardlinks were used instead of softlinks, removing the old files caused no problems. I don’t know what I’ll do when my Git repository gets too big; maybe there is a way to remove old commits. Suggestions are welcome.
Update: The files on the webspace total about 1.1 GiB. The Git repository (in .git) with one commit takes 965 MiB. The next day’s backup takes an additional 3.7 MiB on the file system with the rsync method; in Git, the new commit takes about 2 MiB. Looks good so far.
Update: I backed up another directory that contains lots of text files and some SQLite databases. The directory uses 50 MiB of disk space, the Git repository with one commit only 9.9 MiB. The second backup takes an additional 29 MiB with the rsync method, in the Git repository 7.1 MiB. Looks great so far.
Committing only parts of the changes in a file with Git
Posted on: November 19th, 2009

Just found this article that shows Git’s ability to commit only parts of the changes you made to a file. Just great.
Git submodules
Posted on: November 18th, 2009

The next example of why the current Git submodule support is complete shit: I have various PHP applications that use the same Java backend, calling it using exec(). Each of these PHP applications has its own Git repository, and the Java backend has one too. The PHP repositories include the Java repository as a submodule, so if someone clones one of these PHP repositories, they have to run git submodule update --init after git clone.
If I now make an update to my Java library and commit it to the public repository, the new version won’t automatically be used in the PHP applications. Instead, I have to run these commands in every PHP application:
cd java
git pull
cd ..
git commit -a
git push
After updating the working tree of the Java submodule (using git pull), it appears as a modified file in the PHP application repository, so I have to commit the change.
Users of the PHP applications now cannot just run git pull to get the new version of the application (including the new version of the Java submodule); instead, they have to run an additional git submodule update afterwards so that the working tree of the submodule gets updated too. So you have to tell your users that they can’t simply git pull changes, but have to run an additional command every time.
Now things get even funnier: the Java library requires an external library to work, so it includes a submodule itself. The thing is, when people download a PHP application and load the Java submodule using git submodule update --init, the submodules of the Java submodule won’t be initialised automatically. So users have to run the following commands to get a working copy of my PHP application after git clone:
git submodule update --init
cd java
git submodule update --init
Now imagine that the external library used by my Java library introduces a new feature that I begin to use in my Java library. I have to update the submodule of the external library in my Java library and commit that change. Then I have to update the Java library submodule in all my PHP applications and commit those changes. Imagine what a user of my PHP application has to run every time they want to update their working copy to a working new version:
git pull
git submodule update
cd java
git submodule update
My projects are rather small; imagine what you’d have to do to update a working copy of a huge project…
When I use a web application on my server (such as a webmail client or phpMyAdmin), I usually check out the stable branch of its SVN repository and run svn up every now and then to get updates. I don’t need to know anything about the repository structure of the project to do that. With Git, I would have to know in which directories I have to update the submodules manually; or, alternatively, there could be a shell script to update the working copy, which I would have to remember to call instead of git pull. This makes things unbelievably complicated. I hope Git will at least introduce a feature that automatically fetches the submodules on git clone or git pull.
Update: Man pages on the internet list a --recursive option for both git submodule and git clone that does exactly this. It is not yet supported by the Git version installed on any of my computers, so it must be a very new feature. I don’t know whether the option is also available for git pull or git checkout. I hope that it will become (or already is) the default behaviour of Git. I am still missing an option to always include the newest commit of a repository as a submodule instead of a specific one.
Update: Obviously, the --recursive option was added in Git 1.6.5, which is still marked unstable in Gentoo Portage.
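For reference, with a Git that already has the option (1.6.5 or later, going by the update above), the nested submodules can be fetched in one go; the repository URL is a placeholder:

git clone --recursive git://example.com/php-app.git
# or, in an existing clone:
git submodule update --init --recursive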
SVN pain
Posted on: November 18th, 2009

Ugliest SVN issue: when committing, the revision number of the working copy is not increased, even in cases where it would only have to be increased by 1. I tend to never run svn update on my working copies of projects that I develop all alone. My working copy is always up to date, but the revision number is not…
Recently I’ve been removing SVN directories quite often, but every time I want to commit the change, SVN tells me that the directories are out of date. And updating a directory that has been removed using svn rm does really strange things…
Git disadvantages
Posted on: November 16th, 2009

I have been considering migrating from Subversion to Git lately, and have finally managed to understand how Git works. A good introduction is “Understanding Git Conceptually”. Not a good introduction is the Wikipedia article, as it mainly explains what Git is trying to be, and not what it actually is.
The biggest misunderstanding is that Git is called a “distributed” or “decentralised” revision control system, as opposed to a “centralised” one. In fact, aside from the fact that you normally have a copy of the full repository in your working copy, Git isn’t that at all. When you hear about a “decentralised” revision control system, you expect it to work a bit like p2p file sharing: commits would be exchanged directly between developers, with a central server only mediating. This is not the case with Git; in practice you will always have a central repository that you commit to. If you try to develop without a central repository, you will end up in a mess.
The fact that Git tries to be decentralised without actually being so leads to confusion that could have been avoided by designing it as a centralised system. For example, when you create a public branch in the central repository, you cannot use that branch directly but instead have to create another, local branch that “tracks” the remote branch.
The main difference between Git and Subversion is often claimed to be that in Git you have the whole repository in your working copy, whereas in Subversion you only have the newest commit. This is a minor difference in my eyes; the main difference (and advantage) is that Git has native support for branches (whereas in Subversion you have to emulate branch behaviour by duplicating directories), and these branches can optionally exist only locally. It is very easy to create separate branches for different minor changes, and you can work on them even when you don’t have an internet connection.
The only disadvantage of Git compared to Subversion I have come across is a huge one, and you should really consider it before migrating to Git. Git comes with very useful functionality for participating in the development of a project. However, another very important use of Subversion is to get a copy of a project or a part of it (such as a library) and to easily keep up to date with changes without dropping your own local hacks (something like a “read-only working copy” where you never intend to commit any changes). Git is currently not made for this use at all, as you always have to download a lot more than you need. There is in fact an option to avoid downloading old commits that you don’t need (--depth=1), but there is no way to download only a specified sub-directory. Common practice in Git is to create a separate repository for every part of a project that one might want to check out on its own, and then to include these repositories as submodules (something similar to SVN externals). The problem with that is that it creates a lot of work in Git to make a change in one submodule, commit it and then pull the change into the other repositories that include it. And for many projects, it is simply impractical to split the code into multiple repositories. If I want to include a part of a large project (such as a part of a library) in my project, I have to include the whole project, which can take hours to download given today’s slow internet connections. There might be workarounds for this, but they certainly aren’t as simple as a single “svn up”.
So the major advantage of SVN over Git is that it is very easy and fast to get a complete and up-to-date “read-only” copy of a project just by using “svn co” or “svn up”. In Git, you have to clone the repository, and even then it might be incomplete because you additionally have to initialise the submodules. And downloading those might take hours.
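To make the comparison concrete, here is roughly what the two workflows look like (the URLs and directory names are placeholders):

# SVN: fetch (and later update) just the part you need
svn co http://example.com/svn/project/trunk/lib
svn up lib

# Git: fetch the whole repository (--depth=1 at least skips the old history),
# then initialise every submodule by hand
git clone --depth=1 git://example.com/project.git
cd project
git submodule update --init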
As long as this disadvantage persists in Git, many projects will keep using SVN. And as long as these projects keep using SVN, it will be difficult for other projects to migrate to Git, because they reference these projects using svn:externals.
I hope that these possibilities will be included in future versions of Git:
- Download a “read-only” copy of a Git repository or one of its sub-directories, with automatic initialisation of its sub-modules. (The copy should of course still be updatable using “git pull”.) Transmitting old commits and other useless bandwidth usage should be avoided.
- Something like svn:externals. Something that automatically pulls the newest commit of a repository or its sub-directory into a sub-directory of my working copy.
- SVN support for this svn:externals-like thing. It is already quite possible to clone a Subversion repository, so it shouldn’t be a problem to support importing one this way.