Saturday, 1 March 2014

Fixing vidwhacker xscreensaver to successfully grab video from webcam

(Fedora 20)

Firstly ensure that you have the following packages installed:

yum install flumotion netpbm-* xawtv words libjpeg-turbo-utils

Unfortunately, vidwhacker doesn't call xscreensaver-getimage-video correctly. Happily, vidwhacker is a simple perl script, so it can be edited with your favourite editor.

This can be fixed by making the following changes. At line 319, replace

$cmd = "xscreensaver-getimage-video $v --stdout";

with

$cmd = "xscreensaver-getimage-video";

and at line 356, immediately after

$ppm = `$cmd`;

add the line

$ppm = `cat $ppm`;

so that $ppm, which now holds the name of the grabbed file rather than the image data itself, is replaced by that file's contents.

I'm not saying this is particularly elegant as a fix, but it'll get it going.

If you would like vidwhacker only to grab images from the webcam (instead of sometimes searching your image directory as well), comment out line 303 and force the choice:

#$do_file_p = (int(rand(2)) == 0);
$do_file_p = 0;
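If you'd rather script the edits than open an editor, the same three changes can be expressed as sed substitutions. A minimal sketch, demonstrated here on sample lines copied from the script (vidwhacker.sample is just for the demo; run the same expressions with sed -i against your real vidwhacker, after backing it up, and note that its line numbers and path vary by distribution):

```shell
# Write the three original lines to a sample file for demonstration
printf '%s\n' \
  '  $cmd = "xscreensaver-getimage-video $v --stdout";' \
  '  $ppm = `$cmd`;' \
  '  $do_file_p = (int(rand(2)) == 0);' > vidwhacker.sample

# 1) drop the arguments from the grab command
sed -i 's/xscreensaver-getimage-video \$v --stdout/xscreensaver-getimage-video/' vidwhacker.sample
# 2) add a line after the backtick call, reading the grabbed file back in
sed -i '/\$ppm = `\$cmd`;/a\$ppm = `cat $ppm`;' vidwhacker.sample
# 3) always use the webcam, never the image directory
sed -i 's/\$do_file_p = (int(rand(2)) == 0);/$do_file_p = 0;/' vidwhacker.sample

cat vidwhacker.sample
```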

Monday, 22 October 2012

Sendmail without DNS

Sendmail configuration (email configuration in general, in fact) is certainly a minefield. It may look like an easy field to cross, but if you're unlucky enough to step off the path, things can get complicated pretty fast.

One area where things can get a little sticky is where you have no DNS in your environment, so your servers cannot look up where to send mail. This is actually very easy to work around. One server still needs to be able to send email - sensibly you'd place this beyond a firewall in a DMZ, if that's appropriate - and the other servers need to be configured to send their mail there. This can be done by locating the line beginning "DS" in sendmail.cf and adding the name of the mail hub after it - e.g. "DSmy.mail.hub". Reload sendmail and you're away. Obviously, with DNS not present in the environment, this stops your system from needing to do DNS lookups at all. However, there's another situation which is a little more complex.
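The DS change can be scripted across many servers too. A minimal sketch, demonstrated on a sample file (my.mail.hub is an example hub name; on a real box point sed at /etc/mail/sendmail.cf after taking a backup):

```shell
# Simulate the empty smart-host line as shipped in sendmail.cf
printf 'DS\n' > sendmail.cf.sample
# Append the hub name to the DS line (replacing anything already there)
sed -i 's/^DS.*/DSmy.mail.hub/' sendmail.cf.sample
cat sendmail.cf.sample    # -> DSmy.mail.hub
```

Remember to reload sendmail afterwards, as above.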

I was recently asked to look at a production environment where mail had suddenly stopped being sent. This environment had been configured as above, and previously was sending mail without problem. However, an internal DNS service had just been implemented. This DNS service had no knowledge of the outside world. This did not prevent sendmail from attempting to do DNS lookups of any mail recipient it was asked to send to. It didn't attempt to use the mail hub and simply queued the messages until such time as DNS would be able to tell it how to send the mail. (Probably there is a way to configure DNS to always return an MX record of the mail hub, but that's not what I'm going to do to fix this here)
The internet has quite a few answers for how to configure sendmail for this, but all are based on using m4 to compile a new sendmail.cf. If you're wary of using m4, the following can be dropped in place (the V configuration level at the top can be modified for your own vendor, and you may be able to reduce the version level from V10 down to V8.6 / V8.1 or even lower, but I can't promise it'll work).


# custom
# Philip Harries
# This will send all emails to the mail hub without a dns lookup

V10

# Defined macros
# {REMOTE} is the name used for the mail hub; the comment has to sit on
# its own line, or it would become part of the macro's value
D{REMOTE}<your mail hub here>

# Delivery agent definition to forward mail to hub
Mhub, P=[IPC], S=0, R=0, F=mDFMuXa, T=DNS/RFC822/SMTP, A=TCP $h

Mlocal, P=/usr/lib/mail.local, F=lsDFM5qPSXmnz9, S=0, R=0, T=DNS/RFC822/SMTP, A=mail.local -l

S0 # ruleset 0: resolve every address to the hub mailer
# (the fields in R lines must be separated by tabs, not spaces)
R$+	$#hub $@${REMOTE} $:$1	forward to hub

S1 # Generic sender rewrite (defined but unused)

S3 # preprocessing for all rule sets
R$* <> $* $n handle <> error address
R$* < $* < $* > $* > $* $2<$3>$4 de-nest brackets
R$* < $* > $* $2 basic RFC822 parsing

O QueueDirectory=/var/spool/mqueue

Finally, you should copy this as your as well (or link it):

ln -s

Friday, 23 September 2011

X11 tunneling, ssh and su

It's pretty well known that ssh can be used to tunnel X11 sessions, typically from a server to your local workstation.  This is extremely handy if you need to run an X program on a remote system on which you only have terminal/shell access.  If you're a Windows user, I recommend using PuTTY, which lets you enable X11 forwarding very simply.

From the initial PuTTY menu, note the options on the left, and expand the '+' next to 'SSH', under the 'Connection' category.

On the next page, you should tick the 'Enable X11 forwarding' tick box.

As soon as you log into the server, you should see a message saying that a new authority file has been created.

This has now set a magic cookie (no really) to authorise the server you have just logged into.  If you're wondering what you've just authorised - you've just authorised the server to send X windows back through the ssh session to display on your computer.  Naturally, you need an X server such as Cygwin/X running on your system.

If everything is working correctly, your session should have an environment variable (DISPLAY) set to something very similar to localhost:10.0, and if you run /usr/openwin/bin/xclock, it should appear magically on your desktop.

While all of this is really great and helpful in the extreme, it is covered in many places and isn't new.  What I recently worked out, and want to share, is the solution to the following two problems:

Having logged in to a system, and set up your X11 tunneling, you lose your X11 tunnel if you:

1) su to another user
2) ssh to a new system

Both of these are dead handy things to be able to do.  If you have to assume root privileges, for example, to run the command that generates an X session, you'll hit problem 1.  If you need to switch to a user on a different system, because you don't have a direct connection to that server, you'll hit problem 2.

1 - How to maintain your X11 tunnel while su-ing to a new user

a) log in to your system as described above
b) check your environment DISPLAY variable:

-bash-3.00$ echo $DISPLAY

c) Discover what your session magic cookie is:

-bash-3.00$ /usr/openwin/bin/xauth list
hostname/unix:10  MIT-MAGIC-COOKIE-1  fca50d4504788a86d4b680f3eda4628e

d) su to new user:

-bash-3.00$ su -

e) set your DISPLAY variable:

root@hostname # DISPLAY=localhost:10.0
root@hostname # export DISPLAY

f) set your magic cookie:
root@hostname # /usr/openwin/bin/xauth add hostname/unix:10  MIT-MAGIC-COOKIE-1  fca50d4504788a86d4b680f3eda4628e

Congratulations - you should now be authorised again to send X11 commands to your local machine.
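Steps b) to f) can also be semi-automated. A small sketch that prints the two commands ready to paste into the new user's shell after the su (display_handoff is a made-up helper name; the DISPLAY value and xauth list line are the examples from above):

```shell
# Print the DISPLAY export and xauth add commands for the new user
display_handoff() {
    # $1: the current DISPLAY value, $2: one line of `xauth list` output
    printf 'DISPLAY=%s; export DISPLAY\n' "$1"
    printf 'xauth add %s\n' "$2"
}

# Run this as the original user, before su-ing:
display_handoff 'localhost:10.0' \
    'hostname/unix:10  MIT-MAGIC-COOKIE-1  fca50d4504788a86d4b680f3eda4628e'
```

On a live system you would feed it the real values, e.g. display_handoff "$DISPLAY" "$(/usr/openwin/bin/xauth list "$DISPLAY" | head -1)".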

2 - How to maintain your X11 tunnel while ssh-ing to a new system

This is really easy - either of the following ssh commands works.

ssh -o "ForwardX11 yes" username@remotesystem


ssh -X -A username@remotesystem
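If you always forward X11 to a particular host, the same option can live in your ~/.ssh/config instead of on the command line (the host name here is an example):

```
Host remotesystem
    ForwardX11 yes
```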

Monday, 1 August 2011

Very slow login to Solaris server

Error keywords seen:  None - just exceptionally slow login

Operating System: Solaris variants

Software: default ssh daemon

Keywords:  solaris stop suppress reverse dns lookup ssh

I've googled many times to find the answer to this one, and read through many man pages.

It's very easy to discover that slow logins are almost always due to sshd trying to do a reverse DNS lookup on the connecting client.  This is a perfectly reasonable security measure, but if the lookup fails it can take a while to time out, giving the impression that the login is very slow.  In Linux, you change sshd_config and add 'UseDNS no' - this I found recommended many times.

In Solaris, the answer is just as easy, though harder to find:

Add the following line to /etc/ssh/sshd_config (Solaris ships Sun SSH, which uses this option where Linux's OpenSSH uses UseDNS):

LookupClientHostnames no

Restart sshd:

svcadm restart ssh

Check that ssh is running okay - do this before you log out:
svcs -l ssh

If sshd is in maintenance mode, revert your changes and restart ssh:

svcadm clear ssh

Check your change for a typo, and debug as usual.

You should now be able to log in much faster.
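The edit itself can be scripted idempotently too. A sketch on a sample file (the option is Sun SSH's LookupClientHostnames, its counterpart to Linux's UseDNS; on the real server point it at /etc/ssh/sshd_config):

```shell
# Append the option only if it isn't already set
f=sshd_config.sample
printf '# Sun SSH server configuration\n' > "$f"
grep -q '^LookupClientHostnames' "$f" || echo 'LookupClientHostnames no' >> "$f"
cat "$f"
```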

Sunday, 10 April 2011

Solaris/OpenIndiana NFS server to Ubuntu NFS host - 4294967294 problem

Error keywords seen:  User and Group set to 4294967294 instead of UID/GID expected

Operating System: OpenIndiana oi_148 (Solaris) and Ubuntu 11.04 (Natty Narwhal)

Software: NFS v4

When mounting an NFS-shared directory from an OpenIndiana home server, the directory ownership was set to 4294967294:4294967294, despite the ownership on the server being 1000:1000, and the equivalent UID/GID being set up on the client machine.

The solution is to edit the config file /etc/default/nfs-common - the two lines required are:

NEED_STATD=no
NEED_IDMAPD=yes
This is enough to change the reported ownership from 4294967294 to 'nobody:nogroup' - which is progress, of a sort.

Our next requirement is to make sure that the nfs client and the nfs server are both using the same domain name.  On the client, change /etc/idmapd.conf so that the domain parameter is correct - in my case, 'homenetwork'.

Domain = homenetwork
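The client-side change can be scripted as well. A minimal sketch on a sample file ('homenetwork' is the value from this post; the real file is /etc/idmapd.conf):

```shell
# Rewrite the Domain line so client and server agree
f=idmapd.conf.sample
printf '[General]\nDomain = localdomain\n' > "$f"
sed -i 's/^Domain = .*/Domain = homenetwork/' "$f"
cat "$f"
```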

Secondly, on the server, make sure that the domainname is correctly set.  As this is running a Solaris(-based) OS, it's very easy - just create or edit the contents of /etc/defaultdomain so that it contains (nothing more than) the correct domain:

homenetwork
And you're done - reboot both sides for luck, and everything should now appear as you expect.