Limiting the bandwidth for large downloads on a slow internet connection
Posted on: March 23rd, 2009

On our super-fast 446 kbit/s internet connection, which above all seems to be misconfigured by our ISP, downloading a file at full speed results in ping times of around 5000 ms. This is of course unacceptable, as every single web page then takes several minutes to load. The problem can be fixed by manually limiting the download rate to 20 KB/s or so when you plan to download a large file, but that is not always easily possible, for example when watching a YouTube video. An additional problem is that two clients that each limit their download rate to 20 KB/s can clog up the internet again when they download simultaneously. Furthermore, it would be great if normal surfing and small downloads were not limited in bandwidth at all, to keep their speed at the possible maximum.
The best solution I could imagine was a transparent proxy that limited the download rate of large downloads automatically, letting small downloads pass through at full speed. Those large downloads should share a specified bandwidth, so multiple downloads would not make the internet slower than a single one.
So I wrote a simple TCP proxy that does exactly this. It listens on a port for incoming TCP connections. For each connection, it opens another TCP connection to a specified port on a specified server (so it is best used in combination with an HTTP proxy, or whatever protocol you want to use it for) and redirects the traffic between these two connections. As soon as a configured download traffic limit is reached, the transfer is switched to “low-priority mode”, where it has to share a specified bandwidth with all other downloads running at low priority.
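The low-priority mode is essentially shared rate limiting: all throttled connections draw the bytes they want to forward from one common budget. As a rough sketch of how such a shared limit can be implemented (this is an illustration with made-up names, not the actual bwproxy code): one token bucket refills at the configured shared rate, and every low-priority transfer must acquire tokens from it before forwarding a chunk.

```java
// Sketch of a shared token bucket for the "low-priority mode" described
// above. All class and method names are illustrative, not from bwproxy.
public class SharedTokenBucket {
    private final long capacity;      // burst size in bytes
    private final long ratePerSecond; // shared bandwidth in bytes per second
    private double tokens;
    private long lastRefillNanos;

    public SharedTokenBucket(long ratePerSecond, long capacity) {
        this.ratePerSecond = ratePerSecond;
        this.capacity = capacity;
        this.tokens = capacity;
        this.lastRefillNanos = System.nanoTime();
    }

    // Add tokens for the time elapsed since the last refill, capped at the
    // burst capacity, so an idle bucket cannot save up unlimited bandwidth.
    private void refill() {
        long now = System.nanoTime();
        tokens = Math.min(capacity,
                tokens + (now - lastRefillNanos) / 1e9 * ratePerSecond);
        lastRefillNanos = now;
    }

    // Block until `bytes` tokens are available, then consume them. Because
    // this is synchronized on one shared instance, all low-priority
    // transfers together never exceed ratePerSecond.
    public synchronized void acquire(int bytes) throws InterruptedException {
        refill();
        while (tokens < bytes) {
            long missing = (long) (bytes - tokens);
            Thread.sleep(Math.max(1, missing * 1000 / ratePerSecond));
            refill();
        }
        tokens -= bytes;
    }
}
```

Each proxied connection would call `acquire()` with the size of the buffer it is about to forward; full-speed (high-priority) transfers simply bypass the bucket.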
On my local server, an HTTP proxy is already running as a cache. If I simply connected my bandwidth-limiter TCP proxy to that HTTP proxy, the bandwidth limit would also be applied to transfers that are already cached, making the cache useless. So I have to start an additional transparent HTTP proxy that does nothing except redirect the incoming connections from my bandwidth limiter to the internet. That sounds like a lot of work and too many redirects, but it is the easiest solution I have found, as Squid does not seem to support the kind of bandwidth limit I need. So my configuration looks like this: my HTTP proxy cache (in my case Squid) is configured to use another proxy, which I set to localhost and the port my bandwidth limiter listens on. The bandwidth limiter connects to another HTTP proxy running on localhost, which is configured to do nothing but forward the request to the web.
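To make the chain concrete, the Squid side of it might look like the following sketch (the port number is an example, not my actual configuration):

```conf
# squid.conf of the caching proxy: forward every cache miss through the
# bandwidth limiter listening on localhost:8081, and never go direct,
# so all uncached traffic passes through the limiter.
cache_peer localhost parent 8081 0 no-query default
never_direct allow all
```

Cache hits are served directly by Squid and therefore bypass the limiter, which is exactly the point of this setup.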
My bandwidth limiter (called “bwproxy”) is written in Java, you can download it from https://github.com/cdauth/bwproxy. It is available under the terms of GPL-3, feel free to send modifications or bug reports to me.
Properly ripping last.fm streams
Posted on: March 4th, 2009

I’d been looking for a proper way to rip Last.fm streams for a very long time. All rippers I could find (like TheLastRipper) are only able to rip lastfm:// URLs, which might be useful to get a lot of songs for a specified tag and then pick out the good ones; but if you are looking for a specific song that is playable on Last.fm in full length, a lastfm:// URL does not exist, and the song can only be played via the Flash plug-in.
My solution to the problem is a combination of the Firefox plug-ins BetterCache (the unstable version 1.24 works on Linux) and CacheViewer. BetterCache allows you to override the HTTP caching instruction headers, and thus to force Firefox to cache files that, according to the HTTP standard, it should not (such as Last.fm MP3 files or YouTube Flash videos). CacheViewer provides a GUI for the about:cache list. By searching the cached files for the MIME type audio/mpeg and the host name ^s\d+\.last\.fm$ (that is, s*.last.fm with * standing for a number), you can easily copy the cached MP3 files to your hard drive.
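That host-name pattern is an ordinary regular expression; as a quick illustration of what it does and does not match (the class name here is hypothetical):

```java
import java.util.regex.Pattern;

public class LastFmHostMatcher {
    // The CacheViewer search pattern from above: an "s", one or more
    // digits, then ".last.fm", anchored at both ends.
    static final Pattern LASTFM_STREAM_HOST =
            Pattern.compile("^s\\d+\\.last\\.fm$");

    static boolean isStreamHost(String host) {
        return LASTFM_STREAM_HOST.matcher(host).matches();
    }
}
```

The anchors matter: without `^` and `$`, a host like s12.last.fm.example.com would match too.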
Be careful with the BetterCache add-on. In the default configuration, it makes Firefox use the cache for all files not transferred over SSL, which in most cases is not the behaviour you want. If you want to enable BetterCache only for specific MIME types, remove the wildcard entry from the “Always-cache list” and add it to the “Never-cache list”. Also make sure that “Never-cache list works as ignore list” is enabled.
At the moment, BetterCache does not seem to support wildcards in URLs. Personally, I added audio/mpeg for all URLs to the “Always-cache list”. As soon as wildcards are supported, you may want to enable it only for *.last.fm.
PulseAudio doing random things
Posted on: June 8th, 2008

I’ve tried to play around with the PulseAudio configuration, but random things seemed to happen. Sometimes I changed the configuration, started the server, changed it back again, and suddenly PulseAudio would behave completely differently. Most of the time, the current configuration was not applied immediately but only later, when I had already changed it to something different.
The solution to this problem is very strange: remove the “volume-restore.table” file from the home directory of the user PulseAudio runs as. In my case, with the daemon running as user “pulse” on Gentoo Linux, that was “/var/run/pulse”. The server recreates this file from time to time, so make sure it does not exist every time you change the configuration and restart PulseAudio.
SSL handshake failed on connecting to Google Talk
Posted on: May 29th, 2008

After spending about three hours searching for the reason why I could not connect to Google Talk with either tkabber or gajim (I want to move my contact list to another server, as the Google Talk integration in Google Mail seems to be completely unusable), I finally found this bug explaining a problem in OpenSSL. Upgrading to openssl-0.9.8g-r1 didn’t help, but downgrading to 0.9.8f immediately fixed the problem. Finally.
GnuPG not finding pinentry
Posted on: April 19th, 2008

This is the output of gpg-sign that I recently got:
You need a passphrase to unlock the secret key for
user: "Candid Dauth"
1024-bit DSA key, ID F76ADFE9, created 2005-04-10

gpg: problem with the agent: No pinentry
gpg: no default secret key: General error
gpg: [stdin]: clearsign failed: General error
Hit return to continue.
The pinentry setting in my ~/.gnupg/gpg-agent.conf looked like this:
pinentry-program /usr/bin/pinentry-qt -g
The fix is easy: just remove the -g flag, as GPG now seems to look for a file called “pinentry-qt -g”, which of course does not exist.
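With the flag removed, the line in ~/.gnupg/gpg-agent.conf simply becomes:

```conf
pinentry-program /usr/bin/pinentry-qt
```

If pinentry-qt needs extra options, they have to be passed some other way (e.g. via a wrapper script), since gpg-agent treats the whole value as the program path here.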
i810 and a second monitor
Posted on: April 5th, 2008

As always, I encountered lots of problems while trying to configure a second monitor on my Intel 855GM card. X just did not start; here is some of the output:
(WW) intel(0): Bad V_BIOS checksum
(II) intel(0): Primary V_BIOS segment is: 0xc000
(EE) intel(0): detecting sil164
(EE) intel(0): Unable to read from DVOI2C_E Slave 112.
(EE) intel(0): Unable to read from DVOI2C_E Slave 236.
(EE) intel(0): ivch: Unable to read register 0x00 from DVOI2C_B:04.
(EE) intel(0): Unable to read from DVOI2C_E Slave 112.
(EE) intel(0): tfp410 not detected got VID FFFFFFFF: from DVOI2C_E Slave 112.
(WW) intel(0): xf86AllocateGARTMemory: allocation of 10 pages failed (Cannot allocate memory)
During my research, I have found some interesting projects that could be useful to fix other problems with the Intel driver: i810switch, 915resolution and i855crt.
The common Intel driver fix was the simple solution: downgrade from the intel driver (in my case, both versions 2.1.1 and 2.2.1 did not work) to the old i810 driver (the most recent version seems to be 1.7.4). Don’t forget to change the driver in your xorg.conf; see mine for comparison.
One thing is buggy with the old driver, though: XRANDR does not work. As soon as any application (for me, most commonly the kdesktop kcontrol module and Wine) tries to use it, the second monitor goes black and is not recognised anymore. You can still move your mouse (and thus windows) there, but the whole thing does not work like it should. I disabled XRANDR by adding Option "RANDR" "disabled" to the Extensions section of my xorg.conf.
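For reference, the complete snippet in xorg.conf looks like this:

```conf
Section "Extensions"
    Option "RANDR" "disabled"
EndSection
```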
Proper If-Modified-Since handling that works with Firefox
Posted on: March 19th, 2008

Finally, I managed to write an If-Modified-Since handling that also works with Firefox.
The scenario I had to deal with looks like this: there is a web site whose content can be changed by a few users. Those users can log in to the page to edit it. Whether they are logged in or not is stored in a cookie (using PHP sessions); you cannot tell from a URL whether a user is logged in. Without the possibility to log in, the handling of If-Modified-Since headers is easy: you send a Last-Modified header (in my case, the modification time is easy to find out), and if an If-Modified-Since header exists that is greater than or equal to the modification time, you send the 304 Not Modified status without any content. To comply with the standard, we want to handle If-Unmodified-Since headers as well. For those, the opposite rule applies: if the modification time is greater than an existing If-Unmodified-Since header, the 412 Precondition Failed status is sent.
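These conditional-request rules can be sketched as a small pure function (illustrative only; times are Unix timestamps in seconds, and null means the client did not send that header):

```java
public class ConditionalRequest {
    // Decide the HTTP status for a GET of a page last modified at
    // `lastModified`, given the client's conditional headers.
    static int status(long lastModified, Long ifModifiedSince,
                      Long ifUnmodifiedSince) {
        if (ifUnmodifiedSince != null && lastModified > ifUnmodifiedSince) {
            return 412; // Precondition Failed: page changed after that date
        }
        if (ifModifiedSince != null && lastModified <= ifModifiedSince) {
            return 304; // Not Modified: the client's cached copy is current
        }
        return 200; // send the full page
    }
}
```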
Things look a bit more complicated when you want to implement the login mechanism that I described above. One problem, of course, is that the page itself has not been modified when the user logs in, so the browser will use the version from the cache and the user will not see that he is logged in, so he cannot edit the page. This is fixed by sending the current time as the Last-Modified header as long as the user is logged in, so the browser has to reload the page every time.
The second problem you will face is that Firefox usually uses the version from the cache without checking whether it has changed, even when you send Cache-Control: must-revalidate. (By the way, I also send Pragma: must-revalidate to overwrite any rubbish that PHP sends during session_start().) This is easily fixed by sending Expires: 0. Now Firefox reloads the page every time and only uses the version from the cache if it receives a 304.
Let’s assume that the user does not change anything and logs out. Now it seems to him that he is still logged in, because the browser sends the last Last-Modified time it received as If-Modified-Since. The page has not been modified since then, so the version from the cache, in which the user is logged in, will be used. At first, I tried to change this behaviour by sending a Cache-Control: no-cache header. You have to pay attention with this one, as it only tells the browser not to load a cached page; the browser still saves the response (at least Firefox does, I don’t know whether this is the correct behaviour). So the next time the browser requests the page, it will send an If-Modified-Since header based on this newer cached page. To avoid this, also send the no-store Cache-Control header. (I send Cache-Control: no-cache,no-store as well as Pragma: no-cache,no-store.)
So, as a summary: send Cache-Control: no-cache,no-store as well as the current time as the Last-Modified header as long as the user is logged in; send Cache-Control: must-revalidate and Expires: 0 as long as he isn’t.
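That summary fits in one small helper (a hypothetical sketch, not code from the actual site; `now` is an already-formatted HTTP date string):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CacheHeaders {
    // Pick the response headers depending on whether the user is logged in.
    static Map<String, String> headersFor(boolean loggedIn, String now) {
        Map<String, String> h = new LinkedHashMap<>();
        if (loggedIn) {
            // never cache, and claim the page was modified just now,
            // so the browser reloads it on every request
            h.put("Cache-Control", "no-cache,no-store");
            h.put("Pragma", "no-cache,no-store");
            h.put("Last-Modified", now);
        } else {
            // allow caching, but force revalidation on every request
            h.put("Cache-Control", "must-revalidate");
            h.put("Expires", "0");
        }
        return h;
    }
}
```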
Firefox caching behaviour
Posted on: March 9th, 2008

As I have posted before, I am trying to implement a proper If-Modified-Since handling for a web page. A problem I had to deal with was that you could log in on that page, which forced the user to reload the page in order to be logged in, as the browser correctly used the cached not-logged-in version. I wanted to solve this problem by always sending the current time stamp as the Last-Modified header, and thus never sending a 304 Not Modified (that logged-in users cannot use the cache then is acceptable, as the number of user accounts is quite limited). Firefox did not like that solution: it kept using the cached page without checking whether it had changed (whereas in Opera everything worked fine).
What I wanted Firefox to do was exactly this: only use the cache if it received a 304 Not Modified status. As I could not find anything detailed about this on the internet, I tried several combinations of HTTP headers, such as Cache-Control: must-revalidate, which did not have any effect. The working solution was simple but not easy to find: Expires: 0.
PHP sending strange Last-Modified headers
Posted on: March 8th, 2008

At the moment, I am trying to implement a proper handling of Last-Modified and If-Modified-Since headers (RFC 2616 is really useful, by the way) and have experienced very strange behaviour of PHP when sending Last-Modified headers.
One smaller thing I noticed was that session_start() sends Cache-Control, Pragma, and Expires headers. In PHP, you can simply remove them from the sent headers by calling, for example, header("Pragma: ", true);.
Well, I set the Last-Modified header to some date("r") value, and my browser always received the time of the request (in GMT instead of CET) instead of the time I sent. When I sent "Last-Modified: 0", this was converted to a correct HTTP GMT date string. Sending the same value as X-Last-Modified, for example, transmitted exactly what I set, though. (By the way, HTTP trace works better and faster than Firefox’s Web Developer Toolbar.) It seems that PHP tries to parse and correct the Last-Modified header before sending it, and this “feature” is a little buggy. I fixed the problem by simply using gmdate("r") instead. My PHP version is 5.2.5-pl1-gentoo, by the way.
Firefox not showing alert and confirm boxes
Posted on: February 29th, 2008

I recently had a problem with Firefox not showing any alert or confirm boxes. Even triggering them from the URL bar stopped working. I solved this problem by disabling the “iMacros for Firefox” extension, which seemed to cause various other problems too, throwing mysterious exceptions on different pages. The version I had installed was 6.0.2.2, with Firefox 2.0.0.12.