...making Linux just a little more fun!

April 2009 (#161):


Mailbag

This month's answers created by:

[ Amit Kumar Saha, Ben Okopnik, Kapil Hari Paranjape, Karl-Heinz Herrmann, René Pfeiffer, Rick Moen, Robos, Thomas Adam ]
...and you, our readers!

Still Searching


adesklets

J. Bakshi [j.bakshi at unlimitedmail.org]


Wed, 18 Mar 2009 22:15:55 +0530

Dear all,

I have already moved to icewm, idesk, claws-mail, aterm, VLC, audacious, etc., along with some other light applications, to get more speed out of my Linux box.

CPU -- AMD Duron 1.5 GHz
RAM -- 128 MB
Swap -- 512 MB
Video Shared memory -- 8 MB

I'm happy to see the quick response time of my system (considerably faster than KDE). I'd also like to add some nice touches to my desktop - a calendar, clock, volume control, etc. Hence, I am thinking of using adesklets, as it is fast thanks to imlib2. I know some of you are using adesklets. What is your experience with it? Is it the right choice for a system like mine?

Please share your thoughts. Kindly CC to me.

Thanks


Doubt...

Deepti R [deepti.rajappan at gmail.com]


Tue, 24 Mar 2009 09:04:17 +0530

Hello,

I have read your article at http://linuxgazette.net/136/anonymous.html. I am trying to write a small driver program; I have written it, but a little redesigning is needed, so it would be great if any of you could help me. I am posting my question below.

I wrote a simple keyboard driver program which detects the Ctrl+K sequence [I have written the code to handle only Ctrl+K]. I also have a simple application program which performs a normal multiplication.

I need to invoke that application program from my driver when I press Ctrl+K on my keyboard [once I press Ctrl+K, the driver will send a SIGUSR1 signal to my application program, which will accept the signal and perform the multiplication]. I could do that if I hardcoded the pid of the application program [the pid of the a.out file] in the driver - i.e., inside the call kill_proc(5385, SIGUSR1, 0), the first parameter is the pid of the application program. But this is not a correct method: every time, I need to compile the application program, open the driver, add the pid to it, compile it using the Makefile, then insert the .ko file. It doesn't look good. I tried using -1 as the first parameter for kill_proc() [to send the signal to all processes that are listening]; since my application program is also listening for the signal, ideally it should catch SIGUSR1 and execute, but it's not working.

Can you suggest any method to send the SIGUSR1 signal from my driver [which is in kernel space] to the application program [which is in userspace]?

I could also achieve this by having the pid entered in /proc, but that is not a good design. :( Can it be implemented through ioctl?

I am pasting my codes below...

[ ... ]

[ Thread continues here (1 message/7.20kB) ]


Trying DTrace on Linux

Amit k. Saha [amitsaha.in at gmail.com]


Thu, 5 Mar 2009 10:47:14 +0530

Hello all,

The Linux port of DTrace has been moving along for some time now.

I just tried the latest bits from ftp://crisp.dynalias.com/pub/release/website/dtrace, and my initial impression is that we have some really cool stuff in the making here.

Besides GCC and the kernel headers, you will need the following to compile and load the DTrace kernel module:

   * libelf-dev: Working with 'elf' files
   * zlib libraries: working with the zlib files
   * bison, flex

Once you have got them, extract the sources and do:

  1. make all
  2. sudo make install
  3. sudo make load

If you do not see any error messages, then the DTrace kernel module 'dtracedrv' has been correctly inserted. 'dtrace -l' should display a long list of the currently available probes.
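
A quick smoke test, assuming the module loaded, the dtrace binary is in your PATH, and this port already implements the syscall provider (an assumption on my part):

sudo dtrace -l | head
# count system calls per program; stop with Ctrl-C
sudo dtrace -n 'syscall:::entry { @[execname] = count(); }'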

Read the rest at http://amitksaha.blogspot.com/2009/03/dtrace-on-linux.html.

(The code doesn't format properly here, hence the link to the blog)

Best,

Amit

-- 
Amit Kumar Saha
http://amitksaha.blogspot.com
http://amitsaha.in.googlepages.com/
*Bangalore Open Java Users Group*: http://www.bojug.in


Our Mailbag


siggen problem

Ben Okopnik [ben at linuxgazette.net]


Mon, 30 Mar 2009 00:25:41 -0400

[ Again, Arild - please remember to CC the list. ]

On Sun, Mar 29, 2009 at 08:16:45PM -0700, deloresh wrote:

> Ben is this what you mean?
> I am pretty sure I copied every  word correctly then sent it to a 
> friend's address.Then it got bounced to me in my windows computer.
>
> ----- Original Message ----- From: <2elnav@netbistro.com>
> To: <catluv@telus.net>
> Sent: Sunday, March 29, 2009 8:11 PM
> Subject: siggen problem
> arild@Arildlinux:~$
> arild@Arildlinux:~$ siggen
> siggen: Display signature function values.
> Tripwire(R) 2.3.1.2 built for
> Tripwire 2.3 Portions copyright 2000 Tripwire, Inc. Tripwire is a registered
> trademark of Tripwire, Inc. This software comes with ABSOLUTELY NO WARRANTY;
> for details use --version. This is free software which may be redistributed
> or modified only under certain conditions; see COPYING for details.
> All rights reserved.
> Use --help to get help.
> arild@Arildlinux:~$

Well done, Arild! Yep, exactly what I mean.

What this implies to me is that 1) you have "tripwire" installed (which you don't need), and 2) that you don't have the "siggen" package (as contrasted against the "siggen" program) installed. If you had both, the latter would normally get executed first - because it gets installed in a "higher priority" directory.

ben@Tyr:~$ apt-file search bin/siggen
siggen: /usr/bin/siggen
tripwire: /usr/sbin/siggen

In the default execution path, "/usr/bin" comes before "/usr/sbin" - so "/usr/bin/siggen" would get executed first. Here's what you need to do to fix it:

sudo dpkg -P tripwire
sudo apt-get install siggen

That should take care of it. After you've run the above two commands, you should be able to type "siggen" at the command line and see the sound generator application.
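
To confirm which 'siggen' the shell will now find, something like this should do (a quick sketch; the exact output will vary):

type -a siggen                 # list every 'siggen' in the current PATH, in order
dpkg -S "$(which siggen)"      # confirm which package owns the one that wins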

-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *

[ Thread continues here (4 messages/6.88kB) ]


need your suggestion to select linux tools and configure idesk

J.Bakshi [j.bakshi at icmail.net]


Wed, 11 Mar 2009 23:02:27 +0530

Dear list,

I am in the process of optimizing my Linux box with low-fat tools. I already have icewm running with the geany editor, the parcellite clipboard manager, audacious, aterm, and claws-mail. A few applications are still missing, and I am seeking your kind advice here.

1> What might be a small, fast sound mixer applet to use with icewm?

2> A screen capture tool that can fit into the icewm taskbar and has features like ksnapshot's.

I still have a problem with idesk. It can't display the wallpaper, even though the path to the Background file is correct. It always shows errors like

~~~~~~~~~~~~~~~~~~~~
[idesk] Background's file not found.
[idesk] Background's source not found.
~~~~~~~~~~~~~~~~~~~~~~

Please suggest,

Thanks

[ Thread continues here (12 messages/16.07kB) ]


Proxy + firewall configuration on linux

Deividson Okopnik [deivid.okop at gmail.com]


Wed, 4 Mar 2009 11:52:42 -0300

Hello everyone.

I need to configure a temporary Internet server here, and after reading a lot, I'm kinda confused :P

I need a non-transparent proxy (one that asks users for their username/password), with the ability to block access to certain pages and services (like www.orkut.com or MSN), plus the ability to generate usage reports.

I saw several programs that can do that, but each article I read uses a different combo - that's what confused me :)

So, the question is, what software would you use to create such a configuration?
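
One common combination for this is Squid (non-transparent, with basic authentication), a dstdomain ACL for the blocked sites, and a log analyzer such as SARG for the reports. A minimal squid.conf sketch - the helper path and file names here are assumptions and will differ between distributions:

# users are kept in /etc/squid/passwd, created with htpasswd
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
acl authenticated proxy_auth REQUIRED
acl blocked dstdomain .orkut.com .messenger.msn.com
http_access deny blocked
http_access allow authenticated
http_access deny all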

Thanks for the input

Deividson

[ Thread continues here (6 messages/6.61kB) ]


Slight tweak to your editorial note

Rick Moen [rick at linuxmafia.com]


Tue, 24 Mar 2009 15:51:55 -0700

Almost not worth mentioning, but I've made a substantive albeit de minimis addition to your editorial note, consisting of the words "non-root":

<p class="editorial">[ A common application of this would be to run a Web or FTP server chrooted in a directory like /home/www or /home/ftp; this provides an excellent layer of security, since even a malicious non-root user who manages to crack that server is stuck in a "filesystem" that contains few or no tools, no useful files other than the ones already available for viewing or downloading, and no way to get up "above" the top of that filesystem. This is referred to as a "chroot jail". -- Ben ]

Take my word for it, without that qualifier, you'd attract quibbles from people repeating the usual mantra: "chroot(8) is not root safe." (Ditto the chroot() system call.)

That is, the root user, and thus also any process that can escalate to UID 0 privilege, can trivially escape from any chroot jail:
http://kerneltrap.org/Linux/Abusing_chroot
http://unixwiz.net/techtips/chroot-practices.html
http://www.bpfh.net/simes/computing/chroot-break.html

(You'll note that the article lists other, indirect ways of escalating privilege, plus "Why would anyone put that in a chroot jail?" methods such as "Follow a pre-existing hard link to outside the jail.")

If you want to be almost safe against kibbitzers writing in to say "chroot is not a security tool!" (another common mantra), amend your footnote to say that the tool must be used with care as some known means exist to attack it, and that it's no substitute for eschewing dangerous software and configurations. And maybe link to one or more of those links.

[ Thread continues here (3 messages/6.04kB) ]


how to read /dev/pts/x?

Mulyadi Santosa [mulyadi.santosa at gmail.com]


Fri, 20 Mar 2009 16:49:50 +0700

Hi Gang...

here's the situation: Suppose I log on to server A twice using the same user ID (let's say johndoe). Technically, Linux on server A will create two /dev/pts entries for johndoe, likely /dev/pts/0 and /dev/pts/1.

Is there any way for johndoe on pts/0 to read what the other johndoe types on pts/1? Possibly in real time? Initially, I thought it could be done using the "history" command (in the bash shell), but that failed.
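
One low-tech sketch, assuming the util-linux 'script' utility is installed and both logins belong to you: wrap the pts/1 session in 'script', and follow its log from pts/0.

# on pts/1, record everything (the -f flag flushes after each write):
script -f /tmp/johndoe-session.log

# on pts/0, watch it as it happens:
tail -f /tmp/johndoe-session.log

A shared 'screen' session ("screen" in one terminal, "screen -x" in the other) achieves much the same thing in real time.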

Thanks in advance.

regards,

Mulyadi.

[ Thread continues here (4 messages/4.06kB) ]


MacroMedia Flash and accessibility

Jim Jackson [jj at franjam.org.uk]


Fri, 13 Mar 2009 20:59:25 +0000 (GMT)

I know this is not strictly "Linux", but I think it's ok...

What's the current thinking about Macromedia Flash and accessibility? I've done some googling, and many of the accessibility guides I've found are fairly old (> 5 years).

Reason I'm asking is that an organisation I help out has had someone volunteer to do them a new website. The initial new homepage is entirely macromedia flash.

In general, I'm severely prejudiced against Flash, but I want to give them considered, balanced advice that meets their requirements re:

  - accessibility

  - ability to make future changes/updates to info on the web pages

Any comments/advice welcome.

cheers Jim

[ Thread continues here (4 messages/3.36kB) ]


VMware installation problem

Adegbolagun Adeola [adecisco_associate at yahoo.com]


Sun, 1 Mar 2009 06:07:04 -0800 (PST)

Hello Deividson

Can you help me fix the problem below? I interrupted a VMware installation earlier; now, trying to install it again only gets me the output below.

I will appreciate your help.

root@adey-laptop:/home/adey/Documents/vmware-server-distrib#               
./vmware-install.pl                                                        
A previous installation of VMware Server has been detected.                
 
The previous installation was made by the tar installer (version 4).       
 
Keeping the tar4 installer database format.                                
 
You have a product that conflicts with VMware Server installed.            
Continuing                                                                 
this install will first uninstall this product.  Do you wish to continue?  
(yes/no) [yes] y                                                           
 
Error: Unable to execute                                                   
"/media/DATA_DRIVE/Virtual-Machine-File/vmware-uninstall.pl.               
 
Uninstall failed.  Please correct the failure and re run the install.      
 
Execution aborted.                                                         

Thanks

[ Thread continues here (3 messages/6.92kB) ]


Google Summer of Code

Jimmy O'Regan [joregan at gmail.com]


Thu, 19 Mar 2009 11:07:21 +0000

Google have announced the list of mentor organisations for this year's GSoC: http://socghop.appspot.com/program/accepted_orgs/google/gsoc2009

Apertium is on it this year :)


GUI for idesk ??

Thomas Adam [thomas.adam22 at gmail.com]


Mon, 2 Mar 2009 07:14:42 +0000

2009/3/2 Ben Okopnik <ben@linuxgazette.net>:

> So, if you're willing to wait a couple of days - our next issue comes
> out on the 1st - you'll have access to a very nice GUI for idesk. As far
> as I know, there isn't one available outside of that.

"idconf" or some such name is what I recall of there being an idesk GUI.

-- Thomas Adam

[ Thread continues here (7 messages/9.05kB) ]


Folder Sync on Linux

Deividson Okopnik [deivid.okop at gmail.com]


Mon, 16 Mar 2009 17:31:53 -0300

Hello TAG!

I'm doing some PHP coding on my machine, and I have Apache running on another machine. I set up a shared folder, and every time I want to test something, I put it in the shared folder, then change to the other machine (on a KVM switch), do "sudo cp -r * /var/www" and "sudo rm -r *" (in the shared folder, of course), then switch back to my machine, and so on.

Question is - is there any simple way of automating that? I don't want a complex system; I want something that detects when there is a file in the shared folder, then moves it to /var/www.

So, any of you got something that does that?
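
A minimal sketch of one way to do it, assuming the inotify-tools package is installed on the Apache machine and the share is mounted there at /mnt/share (both names are assumptions); run it as root, or via sudo, so it can write to /var/www:

#!/bin/sh
# wait for files to be written into the share, then move them into the web root
while inotifywait -r -e close_write -e moved_to /mnt/share; do
    rsync -av --remove-source-files /mnt/share/ /var/www/
done

A plain cron job running the same rsync line every minute would also do the trick, at the cost of a short delay.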

Thanks for the attention

Deividson

[ Thread continues here (4 messages/2.77kB) ]


Securing a Network - What's the most secure Network/Server OS? - Is ?there a secure way to use Shares?

Rick Moen [rick at linuxmafia.com]


Sun, 1 Mar 2009 10:43:15 -0800

A creative way to deal with "homework" questions.

----- Forwarded message from Wade Richards <wade@wabyn.net> -----

Date: Sun, 01 Mar 2009 09:13:40 -0800
From: Wade Richards <wade@wabyn.net>
To: Chip Panarchy <forumanarchy@gmail.com>
CC: debian-security@lists.debian.org
X-Mailing-List: <debian-security@lists.debian.org> archive/latest/23028
Subject: Re: Securing a Network - What's the most secure Network/Server OS?
- Is there a secure way to use Shares?

This sounds a lot like "I'm taking a course, and I'd like the Internet to do my homework for me." I'll give you generally correct advice, with enough lies in here to give you a failing grade if you don't verify my statements.

If I were setting up a system as you described, I'd focus on what the network clients are capable of, and what requires the least non-standard configuration on them (because misconfiguration of the client workstation is an easy way to introduce insecurity, and it's hard for you to enforce their config).

The Windows boxes want Windows networking, the Unix-like ones want Unix networking. A Unix server is most likely to give you both easily, although almost any server OS can.

So the servers should be running SAMBA for Windows logon and network shares, plus LDAP and NFS for Unix logon and sharing. SAMBA can be configured to authenticate against the local LDAP server, so it can become your single source of knowledge for user accounts. You can share the same directories on the server via SAMBA and NFS, so they become your centralized storage.

Encrypting network traffic is very much the least of your concern. So many people think security means "encrypt stuff!", when it is the high level protocols (logon, authorization) that matters. Nobody will bother with packet sniffing when they can just read the files directly from the file server. Besides, in a wired network, the switches will ensure packets only go to the machines where they are supposed to be, so sniffing is pointless. If you really want to waste your time, ipsec, or tunneling NFS through SSL will work (wireless should use WPA2 with as many bits as makes you happy.

To make the network fast, you should grease your network cables. Security can be improve by adding cable locks to all the computers, and putting in a steel door with a deadbolt, and bars on the windows.

[ ... ]

[ Thread continues here (1 message/6.95kB) ]


How do you format ftp or sftp for transfering files?...

Don Saklad [dsaklad at gnu.org]


Sun, 08 Mar 2009 16:23:26 -0400

How do you format ftp or sftp for transfering files?...

[ Thread continues here (10 messages/14.08kB) ]


[idesk] Background's file not found

J.Bakshi [j.bakshi at icmail.net]


Sun, 8 Mar 2009 15:56:55 +0530

Hello Ben and all,

I am now using icewm with idesk (Version: 0.7.5-4). My combination is still missing the wallpaper. Every time, idesk reports

~~~~~~~~~~~~~~~~~~~~~~~~~
[idesk] Background's file not found.
[idesk] Background's source not found.
~~~~~~~~~~~~~~~~~~~~~~~~~~

This happens even though I have the proper path set there:

~~~~~~~~~~~~~~~~~~~~~~~~~~~
  Background.Delay: 1
  Background.Source: /home/joy/pics/Father
  Background.File: /home/joy/pics/Father/love.jpg
  Background.Mode: Center
  Background.Color: #FFFFFF 
~~~~~~~~~~~~~~~~~~~~~~~~

I have even viewed that image in KDE, and used the image folder as the source of a slideshow in KDE; everything runs well there. I don't know why idesk gives the error. The background color I get is blue, which is provided by icewm!

Any clue ?
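
A couple of quick sanity checks, in case they help (the paths are the ones from the config above):

ls -l /home/joy/pics/Father/love.jpg    # does the file exist, and is it readable by your user?
file /home/joy/pics/Father/love.jpg     # is it really a JPEG?

idesk draws the background through imlib2, so if the file checks out, it's worth confirming that imlib2's JPEG support is installed - on Debian-ish systems that should come with the libimlib2 package (the package name is an assumption for your distribution).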

Thanks

[ Thread continues here (2 messages/3.54kB) ]


how to convert from kmail to sylpheed-claws?

J.Bakshi [j.bakshi at icmail.net]


Sun, 8 Mar 2009 15:59:19 +0530

Dear list,

Is there any way to convert the emails (in maildir form) and account information (pop3, smtps) stored in kmail to sylpheed-claws?

Thanks

[ Thread continues here (7 messages/7.95kB) ]


linking my sound card to xoscpe

Arild Jensen [2elnav at netbistro.com]


Fri, 27 Mar 2009 18:11:01 -0700

Ben Okopnik recommended a software program called "xoscope", and gave a link to a website showing how to build an input circuit or use a sound card.

I now have the sound card's microphone input working and the xoscope display up, but the sound from the mic goes to the speakers, not to the xoscope display. What do I do to link the two?

I'm a newcomer to Linux, so please go easy on the jargon. I installed Ubuntu only a month ago, and am still learning how to use it.
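
The usual first step is to make sure the mic is enabled as a capture (recording) source, not just routed to playback. A minimal ALSA sketch - the control names ('Mic', 'Capture') vary from card to card, so check "amixer scontrols" first:

amixer scontrols                    # list the mixer controls your card actually has
amixer sset 'Mic' 70% unmute cap    # enable the microphone as a capture source
amixer sset 'Capture' 70% cap       # raise the overall capture level

The graphical equivalent is to run a mixer (e.g. alsamixer in a terminal), press Tab to switch to the capture view, and enable the Mic there.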

regards

Arild Jensen (user name elnav)

[ Thread continues here (24 messages/46.16kB) ]


Squid problem (TCP_MISS 504)

Deividson Okopnik [deivid.okop at gmail.com]


Thu, 5 Mar 2009 16:38:38 -0300

Hello everyone.

I just finished installing/configuring Squid on an Ubuntu 8.10 server, and I'm having the following problem:

Clients time out when trying to access any webpage - access.log gives me:

179383 192.168.0.1 TCP_MISS/504 2898 GET http://www.google.com/ -
DIRECT/209.85.193.104 text/html

After reading about it, I thought adding "no_cache allow localnet" to my squid.conf file would fix the problem, but it doesn't (I already have an ACL defining localnet as 192.168.0.0/255.255.255.0, and an "http_access allow localnet" line in the same config file).

Anyone know what might be the problem?
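
For what it's worth, TCP_MISS/504 means Squid accepted the request but timed out trying to fetch the page from the origin server, so the client-side ACLs are probably not the culprit. A quick sketch of things to check from the proxy box itself:

host www.google.com                          # does DNS resolve on the server?
wget -O /dev/null http://www.google.com/     # can the server fetch pages directly?

If either of those fails, the problem is the server's own DNS or outbound routing/firewalling rather than squid.conf.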

Thanks, Deividson

[ Thread continues here (5 messages/4.71kB) ]


Cut and paste between two computers. Was linking my sound card to xoscpe

[2elnav at netbistro.com]


Mon, 30 Mar 2009 00:06:41 -0700

[[[ Oh dear...somehow, the quote attribution got lost, but Arild is responding to a previous comment of Ben's. -- Kat ]]]

> Well done, Arild! Yep, exactly what I mean.

REPLY

This cutting and pasting between various screens on two different computers is a real PITA.

As you can see from the forwarding, I also had to invoke the use of a third computer at a different address. I have no way to link the two computers directly - at least none that I know of. And having to copy down, letter by letter, what I see on one machine to get it into the other is error-prone. Is there a quick way to network a Windows and a Linux machine together so the two can see each other and copy each other's files?
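
One simple sketch, assuming the Linux box is the Ubuntu machine mentioned elsewhere in this thread and both machines are on the same LAN: put an SSH server on the Linux side and use free Windows SSH clients to reach it.

sudo apt-get install openssh-server    # on the Linux machine
ifconfig                               # note the Linux machine's IP address (e.g. 192.168.x.x)

From the Windows machine, WinSCP (for copying files) or PuTTY (for a terminal window you can copy and paste from) can then connect to that IP address. Sharing a folder over Samba is the other common route, but SSH needs the least setup.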

[ Thread continues here (5 messages/4.49kB) ]



Talkback: Discuss this article with The Answer Gang

Published in Issue 161 of Linux Gazette, April 2009

Gems from the Mailbag

This month's answers created by:

[ Ben Okopnik, Kapil Hari Paranjape, Rick Moen ]
...and you, our readers!

Editor's Note

Kapil's brilliant analogy on approaching Linux (and other unfamiliar technologies) drew instant kudos from The Answer Gang this month. It further inspired the creation of a new section of the Mail Bag, "LG Gems", as a showcase, to highlight this sort of thing, and in hopes of finding more "gems" - especially those explaining Linux and open source culture.

I expect this to be an occasional feature in LG, and will be basing it, as with the inaugural piece, on peer acclaim. Do look around in the Mailbag archives to see if there's hidden treasure back there, and let me know!

Kat Tanaka Okopnik
Mailbag Editor


Bus and Taxi analogy (Was Cut and paste between two computers. Was linking my sound card to xoscpe)

Kapil Hari Paranjape [kapil at imsc.res.in]


Tue, 31 Mar 2009 11:05:26 +0530

Hello,

On Mon, 30 Mar 2009, 2elnav@netbistro.com wrote:

> Don't know how to do that in Linux.  Can't seem to figure it out.
> 
> Judging by how my query on siggen is being handled I despair of ever
> figuring  out these other issues.  Maybe I should stick to Windows.
> I am just getting more confused by all the jargon.

Here is an analogy that may help you understand the distinction.

A man who has only ever ridden in a taxi decides to take a bus one day. The bus stops and he gets in along with the other people waiting at the stop.

He starts to occupy a seat when the driver (or, in India, the conductor) asks him to buy a ticket before getting in. He is annoyed: "I have to pay before I get to my destination?" However, he agrees and tries to pay with $50 -- which is refused. Other passengers try to help him and suggest that he use a smaller amount. Somehow he manages to find some small change and pay his fare.

He then tells the bus driver he wants to go to the airport. The people in the bus tell him he got into the wrong bus and that he should get off this bus at the next stop and get into a different one. Now he is quite annoyed and has started yelling at people saying that they are making him really confused. He tells them that he has taken Shuttle services (which are a shared transport service) and even there he has never been made to do things so differently from a simple taxi ride.

...

There are many ways this story could end:

1. One kind old lady says she is going to somewhere near the airport and will get off with him at the next stop and get him on the right bus. The guy calms down and agrees.

2. The guy finds a booklet in the bus that explains the way the bus system operates. He is fascinated and reads it all the way through. Of course, as he is engrossed in his reading he then reaches the last stop of the bus he is on but by then he knows the system well enough that he can get to the airport from there by bus.

3. The guy yells and screams at everybody and gets off at the next stop and vows to always only travel by taxi ever again.

Regards,

Kapil. --

[ Thread continues here (6 messages/5.94kB) ]



Talkback: Discuss this article with The Answer Gang

Published in Issue 161 of Linux Gazette, April 2009

Talkback

Talkback:140/kapil.html

[ In reference to "Setting up an Encrypted Debian System" in LG#140 ]

Marius Pana [marius.pana at gmail.com]


Sat, 28 Mar 2009 11:12:10 +0200

There seem to be issues with the cpio (copy command), as it will copy /proc over! For example, /proc has 0 disk space used in my / (root) filesystem; in /tmp/target, it now has 4.8GB?! - and the cpio operation fails with a "no space on device" error. I am about to try changing the options to cpio/find and see if I can't get it to work.
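
A sketch of the variation I'd try, assuming the new root is mounted at /tmp/target as in the article: keep find on one filesystem so it never descends into /proc or /sys.

cd / && find . -xdev -print | cpio -pdum /tmp/target
# -xdev stops find at mount-point boundaries, so /proc and /sys are skipped;
# note that any other separately-mounted partitions (e.g. /home) are skipped
# too, and would have to be copied in a second pass.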

Regards,

Marius

[ Thread continues here (3 messages/3.29kB) ]


Talkback:135/knaggs.html

[ In reference to "Nomachine NX server" in LG#135 ]

Dave Kennedy [davek1802 at gmail.com]


Sun, 1 Mar 2009 15:17:10 -0800

Hi, Good article. I have a problem which I hope you can help me with.

   Env:
   Nomachine Nxclient for Windows 3.3.0-6
   CENTOS 4.7 i686 on standard
   nx-3.2.0-8.el4.centos.i386.rpm
   freenx-0.7.3-1.el4.centos.i386.rpm

If I log in remotely as root, the GNOME desktop is displayed OK, but if I log in as another user, the !M splash screen is displayed and then closes with no GNOME desktop.

How can I verify that gnome is 'enabled' for the user?

Thanks

[ Thread continues here (2 messages/2.73kB) ]


Talkback:160/lg_bytes.html

[ In reference to "News Bytes" in LG#160 ]

Deividson Okopnik [deivid.okop at gmail.com]


Thu, 5 Mar 2009 00:33:03 -0300

[ Wait, wait... this is like a repeat nightmare. Isn't there a standard story about how the Chevrolet Nova didn't sell well in Mexico because 'no va' in Spanish means 'no go'??? Only this time, it's not clueless American GM executives deciding on the name... -- Ben ]

"Nova" also means "new" in several Romance languages (including Portuguese).

[ Thread continues here (3 messages/2.55kB) ]


Talkback:160/okopnik.html

[ In reference to "The Unbearable Lightness of Desktops: IceWM and idesk" in LG#160 ]

Ben Okopnik [ben at linuxgazette.net]


Thu, 5 Mar 2009 10:09:47 -0500

I just realized that I forgot one either minor or major thing in this article, depending on how you look at it: how to actually auto-run 'idesk' under IceWM.

Since Ubuntu does its own thing with startup files, adding things to ~/.xinitrc or ~/.xsession won't do anything useful. However, IceWM itself supports an init file mechanism of its own: if you place a file called 'startup' into your ~/.icewm directory and make it executable, it will be run when you start IceWM. Mine consists of nothing more than

/usr/bin/idesk &
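
For anyone following along, a minimal sketch of creating that file from scratch (adjust the idesk path if "which idesk" says otherwise):

mkdir -p ~/.icewm
printf '#!/bin/sh\n/usr/bin/idesk &\n' > ~/.icewm/startup
chmod +x ~/.icewm/startup
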
-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *

[ Thread continues here (10 messages/11.24kB) ]



Talkback: Discuss this article with The Answer Gang

Published in Issue 161 of Linux Gazette, April 2009

2-Cent Tips

2-cent Tip: Screenshots without X

Kapil Hari Paranjape [kapil at imsc.res.in]


Sat, 21 Mar 2009 07:50:04 +0530

Hello,

I had to do this to debug a program so I thought I'd share it.

X window dump without X

How does one take a screenshot without X? (For example, from the text console)

Use Xvfb (the X server that runs on a virtual frame buffer).

Steps:

  1. Run Xvfb
       $ Xvfb :99 &
     This starts an X server on display :99; point your clients at it:
       $ DISPLAY=:99 ; export DISPLAY
  2. Run your application in the appropriate state.
       $ firefox http://www.linuxgazette.net &
  3. Find out which window id corresponds to your application
       $ xwininfo -name 'firefox-bin' | grep id
     Or
       $ xlsclients
     Use the hex string that you get as window id in the commands
     below
  4. Dump the screen shot of that window
       $ xwd -id 'hexid' > firefox.xwd
  5. If you want to, then kill these applications along with the 
     X server
       $ killall Xvfb

'firefox.xwd' is the screenshot you wanted. Use 'convert' or one of the netpbm tools to convert the 'xwd' format to 'png' or whatever.
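
For example, assuming ImageMagick is installed:

  $ convert firefox.xwd firefox.png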

Additional Notes:

A. You can use a different screenshot program.

B. If you need to manipulate the window from the command line, then programmes like 'xautomation' and/or 'xwit' are your friends. Alternatively, use a WM like "fvwm" or "xmonad":

  DISPLAY=:99 xmonad &
This will allow you to manipulate windows from the command line if you know some Haskell!

Regards,

Kapil. --

[ Thread continues here (3 messages/3.04kB) ]


2-cent Tip: Lists of files by extension

Ben Okopnik [ben at linuxgazette.net]


Sat, 21 Mar 2009 15:36:49 -0400

Recently, I decided to sort, organize, and generally clean up my rather extensive music collection, and as a part of this, I decided to "flatten" the number of file types that were represented in it. Over the years, just about every type of audio file had made its way into it: FLAC, M4A, WMA, WAV, MID, APE, and so on, and so on. In fact, the first step would be to classify all these various types, get a list of each, and decide how to convert them to MP3s (see my next tip, which describes a generalized script to do just that.)

The process of collecting this kind of info wasn't unfamiliar to me; in fact, I'd previously done this, or something like it, with the "find" command when I was trying to establish what kind of files I'd want to index in a search database. This time, however, I took a bit of extra care to deal with names containing spaces, non-English characters, and files with no extensions. I also defined a list of extensions that I wanted to ignore (see the "User-modified vars" section of the script), and provided the option of specifying the directory to index (the current one by default) and the directory in which to create the 'ext' files (/tmp/files<random_string> by default; the script notifies you of the name).

This isn't something that comes up often, but it can be very useful in certain situations.

#!/bin/bash
# Created by Ben Okopnik on Thu Mar 12 11:54:02 EDT 2009
# Creates a list of files named after all found extensions and containing the associated filenames
 
[ "$1" = "-h" -o "$1" = "--help" ] && { echo "${0##*/} [dir_to_read] [output_dir]"; exit 0; }
[ -n "$1" -a ! -d "$1" ] && { echo "'$1' is not a valid input directory"; exit 1; }
[ -n "$2" -a ! -d "$2" ] && { echo "'$2' is not a valid output directory"; exit 1; }
 
################ User-modified vars ########################
dir_root="/tmp/files"
ignore_exts="m3u bak"
################ User-modified vars ########################
snap=`pwd`
[ -n "$1" ] && snap="$1"
[ -n "$2" ] && dir_root="$2"
out_dir=`mktemp -d "${dir_root}XXX"`
echo "The output will be written to the '$out_dir' directory"
cd /
 
old=$IFS
IFS='
'
[ -n "`/bin/ls $out_dir`" ] && /bin/rm $out_dir/*
for n in `/usr/bin/find "$snap" -type f`
do
    ext="`echo ${n/*.}|tr 'A-Z' 'a-z'`"
    # Ignore all specified extensions
    [ -n "`echo $ignore_exts|/bin/grep -i \"\\<$ext\\>\"`" ] && continue
    # No extension means the substitution won't work; no substitution means
    # we get the entire path and filename. So, no ext gets spun off to 'none'.
    [ -n "`echo $ext|grep '/'`" ] && ext=none
    echo $n >> $out_dir/$ext
done
 
echo "Done."
-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *


2-cent Tip: Converting from $FOO to MP3

Ben Okopnik [ben at linuxgazette.net]


Wed, 25 Mar 2009 10:03:21 -0400

Recently, while organizing my (very large) music library, I analyzed the whole thing and found out that I had almost 30 (!) different file types. Much of this was a variety of info files that came with the music (text, PDF, MS-docs, etc.) as well as image files in every conceivable format (which I ended up "flattening" to JPG) - but a large number of these were music formats of every kind, a sort of a living museum of "Music Formats Throughout the Ages." I decided to "flatten" all of that as well by converting all the odd formats to MP3.

Fortunately, there's a wonderful Linux app that will handle pretty much every kind of audio - "mplayer" (http://www.mplayerhq.hu/DOCS/codecs-status.html#ac). It can also dump that audio to a single, easily-convertible format (WAV). As a result, I created a script called "2mp3" that uses "mplayer" and "lame" to process a directory of music files.

It was surprisingly difficult to get everything to work together as it should, with some odd challenges along the way; for example, redirecting error output for either of the above programs was rather tricky. The script processes each file, creates an MP3, and appends to a log called '2mp3.LOG' in the current directory. It does not delete the original files - that part is up to you. Enjoy!

#!/bin/bash
# Created by Ben Okopnik on Mon Jul  2 01:16:32 EDT 2007
# Convert various audio files to MP3 format
#
# Copyright (C) 2007 Ben Okopnik <ben@okopnik.com>
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
 
########## User-modifiable variables ###########################
set="*{ape,flac,m4a,wma,qt,ra,pcm,dv,aac,mlp,ac3,mpc,ogg}"
########## User-modifiable variables ###########################
 
# Need to have Bash expand the construct
set=`eval "ls -1 $set" 2>/dev/null`
# Set the IFS to a newline (i.e., ignore spaces and tabs in filenames)
IFS='
'
# Turn off the 'fake filenames' for failed matches
shopt -s nullglob
 
# Figure out if any of these files are present. 'ls' doesn't work (reports
# '.' for the match when no matching files are present) and neither does
# 'echo [pattern]|wc -w' (fails on filenames with spaces); this strange
# method seems to do just fine. 
for f in $set; do ((count++)); done
[ -z "$count" ] && { echo "None of '$set' found; exiting."; exit 1; }
 
# Blow away the previous log, if any

[ ... ]

[ Thread continues here (1 message/4.19kB) ]



Talkback: Discuss this article with The Answer Gang

Published in Issue 161 of Linux Gazette, April 2009

News Bytes

By Deividson Luiz Okopnik and Howard Dyckoff

News Bytes

Contents:

Selected and Edited by Deividson Okopnik

Please submit your News Bytes items in plain text; other formats may be rejected without reading. [You have been warned!] A one- or two-paragraph summary plus a URL has a much higher chance of being published than an entire press release. Submit items to bytes@linuxgazette.net.


News in General

lightning boltIDC: IT Turning to Linux in Economic Downturn

A recent market survey conducted by IDC (and sponsored by Novell) reveals a surge in the acquisition of Linux as the worldwide recession deepens. More than half of the IT executives surveyed will accelerate Linux adoption in 2009. Specifically, more than 72 percent of respondents say they are either actively evaluating or have already decided to increase their adoption of Linux on the server in 2009, with more than 68 percent making the same claim for the desktop.

The study surveyed more than 300 senior IT executives spanning manufacturing, financial services, and retail industries across the globe, as well as government agencies. The number one motivation executives gave for migrating to Linux was economic and related to lowering ongoing support costs.

"The feedback gleaned from this market survey confirms our belief that, as organizations fight to cut costs and find value in this tough economic climate, Linux adoption will accelerate," said Markus Rex, general manager and senior vice president for Open Platform Solutions at Novell. "Companies also told us that strengthening Linux application support, interoperability, virtualization capabilities and technical support will all fuel adoption even more."

Additional key survey findings include:

The research conducted in February 2009 showed 55 percent of respondents had Linux server operating systems in use, 39 percent had Unix server operating systems in use, and 97 percent had Windows server operating systems in use. Respondents were pre-screened via demographics screeners and completed the survey online. Novell was not involved in recruiting, and respondents did not need to be Novell customers.

An IDC white paper summarizing the survey findings can be found at http://www.novell.com/idc.

lightning boltSun Microsystems Unveils Open Cloud Platform

At its CommunityOne developer event in mid-March, Sun Microsystems showcased its Sun Open Cloud Platform. Sun previewed plans to launch the Sun Cloud, its first public cloud service targeted at developers and startups, and to provide public APIs.

Sun is opening its cloud APIs for public review and comment, so that others building public and private clouds can easily design them for compatibility with the Sun Cloud. Sun's Cloud API specifications are published under the Creative Commons license, which essentially allows anyone to use them in any way. Developers will be able to deploy applications to the Sun Cloud immediately, by leveraging pre-packaged VMIs (virtual machine images) of Sun's open source software, eliminating the need to download, install and configure infrastructure software. To participate in the discussion and development of Sun's Cloud APIs, go to http://sun.com/cloud.

At the core of the Sun Cloud will be the first two services - Sun Cloud Storage Service and Sun Cloud Compute Service - which will be available this summer. Customers will be able to take advantage of the combined benefits of open source and cloud computing via the Sun Cloud to accelerate the delivery of new applications, reducing overall risk and quickly scaling compute and storage capacity up and down to meet demand. Sun is leveraging its extensive portfolio of products, unparalleled world-class professional services and extensive expertise in building open communities and partner ecosystems to deliver the Sun Cloud. Sun will also take the technologies and architectural blueprints developed for the Sun Cloud and make them available to customers building their own clouds, ensuring interoperability among clouds.

Sun is leveraging several technologies to make its Sun Cloud incredibly easy to use - from deploying applications to provisioning resources. At the core of the Sun Cloud Compute Service are the Virtual Data Center (VDC) capabilities acquired in Sun's purchase of Q-layer in January 2009, which provide everything an individual or team of developers needs to build and operate a datacenter in the cloud. The VDC provides a unified, integrated interface to stage an application running on any operating system within a cloud, including OpenSolaris, Linux or Windows. It features a drag-and-drop method, in addition to APIs and a command line interface for provisioning compute, storage and networking resources via any Web browser. The Sun Cloud Storage Service supports WebDAV protocols for easy file access and object store APIs that are compatible with Amazon's S3 APIs.

Sun announced that leading partners and key advocates for cloud standards are supporting its goal to deliver an open cloud platform. Cloud Foundry, RightScale and Zmanda are three of the many cloud application providers, cloud management solution providers, Service Providers and cloud consulting companies partnering with Sun. Eucalyptus, an open source infrastructure for implementing cloud computing, is also supporting Sun's approach to drive standards-based, open source cloud platforms and applications, enabling users to integrate with other platforms and services.

To view the CommunityOne event webcast live at 9 am ET, go to http://sun.com/communityone. It will also be available on-demand.

To register for the Sun Cloud Early Access program, go to http://sun.com/cloud

lightning boltLPIC-3 Enterprise-level "Security" Exam

The Linux Professional Institute (LPI) has launched its new "Security" exam elective for the LPIC-3 certification program, effective March 1, 2009. The LPI-303 "Security" exam is the second elective available in the organization's enterprise-level LPIC-3 certification program for Linux professionals.

The LPIC-3 certification program consists of a single "Core" exam (LPI 301) which focuses on skills in authentication, troubleshooting, network integration and capacity planning. This "Core" certification can be supplemented by existing speciality electives in "Mixed Environments" (LPI-302) and "Security" (LPI-303). Additional speciality electives are planned for release in "High Availability and Virtualization", "Web and Intranet", and "Mail and Messaging". Detailed information on the LPIC-3 program, exam objectives, tasks and sample questions can be found at http://www.lpi.org/lpic-3.

lightning boltEclipse Announces First Release of Swordfish, a Next Generation ESB

At the EclipseCon conference in March, the Eclipse Foundation announced the first release of Swordfish, a next-generation enterprise service bus (ESB) that provides the flexibility and extensibility required by enterprises to successfully deploy a service-oriented architecture (SOA) strategy. Swordfish is based on the OSGi standard and builds upon successful open source projects, including Eclipse Equinox and Apache ServiceMix.

Swordfish provides the features and extensible framework required by enterprises and system integrators to customize their ESB to meet the specific needs of an enterprise. These features include:

"We are developing Swordfish to meet the requirements we experienced deploying large scale SOA applications at Deutsche Post and other large enterprises," explained Ricco Deutscher, CTO of Sopera and a member of the Eclipse Runtime Project Management Committee. "Using Equinox and OSGi, we are able to provide the flexible and extensible architecture required for SOA deployments to be successful."

"Last year we announced a strategy to provide open source runtime technology based on Equinox and OSGi," remarked Mike Milinkovich, Executive Director of the Eclipse Foundation. "The first release of Swordfish is a great example of the progress that is being made to develop our runtime technology portfolio. Over the next year I expect we will see more interesting runtime technology built at Eclipse."

The first release of Swordfish 0.8 will be available for download the first week of April from http://www.eclipse.org/swordfish/.

lightning boltPulsar Mobile Group formed by Eclipse

In March, the Eclipse Foundation announced Pulsar, a new industry initiative to define and create a standard mobile application development tools platform. The initiative is led by Motorola, Nokia and Genuitec. Other participating members include IBM, RIM and Sony Ericsson.

Pulsar will support major mobile development environments such as JavaME, mobile Web technologies, and native mobile platforms.

Instead of requiring mobile developers to use a variety of software development kits (SDKs) to develop their applications for different handset manufacturers, Pulsar will define a common set of Eclipse-based tools in a packaged distribution that will inter-operate with the various handset SDKs. This will enable developers to stay within one familiar development environment while creating mobile applications that target multiple device families.

The Pulsar initiative will focus on four areas:

The first release of Pulsar Platform is expected to be available at the end of June 2009 and will be part of the Eclipse Galileo annual release.

lightning boltOpen Source AWS Toolkit for Eclipse available now

The AWS Toolkit for Eclipse was announced at a keynote presentation at EclipseCon 2009. This is a new plugin for Eclipse, targeted for Tomcat or other application servers running in the Amazon cloud. Support for Glassfish, JBoss, WebSphere, and WebLogic will be coming.

The AWS Toolkit for Eclipse, based on the Eclipse Web Tools Platform, guides Java developers through common workflows and automates tool configuration, such as setting up remote debugger connections and managing Tomcat containers. The steps to configure Tomcat servers, run applications on Amazon EC2, and debug the software remotely are now done seamlessly through the Eclipse IDE.

The new plugin requires Java 1.5 or higher. The Eclipse IDE for Java Developers 3.4 is recommended. Find more info here: http://aws.amazon.com/eclipse/


Conferences and Events

TechTarget Advanced Virtualization Roadshow
March - December 2009, various cities
http://go.techtarget.com/r/5861576/5098473
ESC Silicon Valley 2009 / Embedded Systems
March 30 - April 3, San Jose, CA
http://esc-sv09.techinsightsevents.com/
USENIX HotPar '09 Workshop on Hot Topics in Parallelism
March 30 - 31, Claremont Resort, Berkeley, CA
http://usenix.org/events/hotpar09/
Web 2.0 Expo San Francisco
Co-presented by O'Reilly Media and TechWeb
March 31 - April 3, San Francisco, CA
STPCon Spring
March 31 - April 2, San Mateo, CA
Linux Collaboration Summit 2009
April 8 - 10, San Francisco, CA
http://events.linuxfoundation.org/events/collaboration-summit
Black Hat Europe 2009
April 14 - 17, Moevenpick City Center, Amsterdam, NL
http://www.blackhat.com/html/bh-europe-09/bh-eu-09-main.html
MySQL Conference & Expo
April 20 - 23, Santa Clara, CA
http://www.mysqlconf.com/
RSAConference 2009
April 20-24, San Francisco, CA
http://www.rsaconference.com/2009/US/Home.aspx
USENIX/ACM LEET '09 & NSDI '09

The 6th USENIX Symposium on Networked Systems Design & Implementation (USENIX NSDI '09) will take place April 22–24, 2009, in Boston, MA.

Please join us at The Boston Park Plaza Hotel & Towers for this symposium covering the most innovative networked systems research, including 32 high-quality papers in areas including trust and privacy, storage, and content distribution; and a poster session. Don't miss the opportunity to gather with researchers from across the networking and systems community to foster cross-disciplinary approaches and address shared research challenges.

http://www.usenix.org/nsdi09/lg

IDC Virtualization Forum
April 23, Four Seasons Hotel, San Francisco, CA
http://www.idc.com/virtualization-west09
SOA Summit 2009
May 4 - 5, Scottsdale, AZ
http://www.soasummit2009.com/
RailsConf 2009
May 4 - 7, Las Vegas, NV
STAREAST - Software Testing, Analysis & Review
May 4 - 8, Rosen Hotel, Orlando, FL
http://www.sqe.com/go?SE09home
EMC World 2009
May 18, Orlando, FL
http://www.emcworld.com/
Interop Las Vegas 2009
May 19 - 21, Las Vegas, NV
http://www.interop.com/lasvegas/
SouthEast LinuxFest

The SouthEast LinuxFest will hold its first annual conference at Clemson University on June 13, 2009.

The SouthEast LinuxFest is a community event for anyone who wants to learn more about Linux and Free & Open Source software. It is part educational conference, and part social gathering. Like Linux itself, it is shared with attendees of all skill levels to communicate tips, ideas, and to benefit all who use Linux/Free and Open Source Software. LinuxFest is the place to learn, to make new friends, to network with new business partners, and most importantly, to have fun! It is FREE to attend. Please see our website for details and speakers.

http://southeastlinuxfest.org/

Semantic Technology Conference
June 14 - 18, Fairmont Hotel, San Jose, CA
HP Tech Forum 2009
June 15 - 18, Las Vegas, NV
http://www.hptechnologyforum.com/
Velocity Conference 2009
June 22 - 24, San Jose, CA
http://conferences.oreillynet.com
SharePoint TechCon Boston
June 22 - 24, Cambridge, MA
Gartner IT Security Summit 2009
June 28 - July 1, Washington, DC
http://www.gartner.com/it/page.jsp?id=749433
Cisco Live/Networkers 2009
June 28 - July 2, San Francisco, CA
http://www.cisco-live.com/
OSCON 2009
July 20 - 24, San Jose, CA
http://en.oreilly.com/oscon2009


Distro News

lightning boltRed Hat Enterprise Linux 4.8 Beta

Red Hat is now testing the beta release of RHEL 4.8 (kernel-2.6.9-82.EL) for the Red Hat Enterprise Linux 4 family of products.

Red Hat Enterprise Linux 4.8 is in development and the implemented features and supported configurations are subject to change before the release of the final product. The beta CD and DVD images are intended for testing purposes only. Benchmark and performance results cannot be published based on this beta release without explicit approval from Red Hat.

While the 'anaconda' upgrade option supports upgrading from Red Hat Enterprise Linux 4.7 to the Red Hat Enterprise Linux 4.8 beta, there is no guarantee that the upgrade will preserve all of a system's settings, services, and custom configurations. For this reason, Red Hat recommends a fresh installation rather than an upgrade. Also note that upgrading from the beta release to the GA product is not supported.

Red Hat is moving GCC4 from Tech Preview to supported but notes that GCC4 in Enterprise Linux 4 is not fully ABI compatible with Red Hat Enterprise Linux 5. Applications compiled on the older version, Enterprise Linux 4, are expected to continue to work on the newer version 5 as long as they use libraries that are also supported on Enterprise Linux 5 (either directly or via compatibility libraries).

RHEL 4.8 is available to existing Red Hat Enterprise Linux subscribers via RHN. Installable binary and source ISO images are available via Red Hat Network at: https://rhn.redhat.com/network/software/download_isos_full.pxt.

lightning boltPre-Orders OpenBSD 4.5 Available Now

The OpenBSD project's upcoming release, version 4.5, is now available as a pre-order ($50.00 + shipping). Scheduled for May 2009, OpenBSD 4.5 will ship with a large number of new features and broad hardware support, including x86, Sparc, ARM and PowerPC CPUs.

Among the software inclusions are:

For more information please see the OpenBSD 4.5 features page, here: http://openbsd.org/45.html.


Software and Product News

lightning boltOracle Middleware Pack for Eclipse Now Available

During EclipseCon, Oracle announced it is providing Java developers with new tools, including the Oracle Enterprise Pack for Eclipse, a free component of Oracle Fusion Middleware. Included in the Enterprise Pack are an Oracle WebLogic Server Plug-in, Object-Relational Mapping (ORM) tools, and Spring and Web Service tools to reduce development complexity for Java and database applications.

In addition to the WebLogic Server Plug-in, Oracle Enterprise Pack for Eclipse Release 11g adds new features, including:

An Eclipse Foundation Board Member, Oracle has a long history of participation in the Eclipse community. Oracle currently leads several Eclipse-based projects including JavaServer Faces (JSF) Tools, Dali JPA Tooling, Eclipse Data Tools Platform, and EclipseLink (derived from Oracle TopLink).

"It is great to see Oracle expanding on its Eclipse tools strategy and further contributing to the community," said Mike Milinkovich, executive director of the Eclipse foundation. "The Oracle Enterprise Pack for Eclipse 11g release provides a nice complement to the work they are doing in the Web Tools Platform Projects, which includes the Dali project, the JSF tools project, the Java EE tools project, and the EclipseLink project."

For more info, visit:
* http://blogs.oracle.com/devtools/
* http://java-persistence.blogspot.com/
* http://blogs.oracle.com/gstachni/

lightning boltLinpus Shows Instant-on Netbook At CeBIT

Linpus Technologies, a leader in the field of Linux solutions for low cost notebooks, netbooks and nettops, announced its entry into the fast boot product market with a sneak preview of the new version of its flagship product, Linpus Linux Lite.

To achieve fast boot-up and launch of applications for Linpus QuickOS, Linpus engineers set out to leverage their expertise in fine-tuning and maximizing software performance for less powerful hardware platforms in the netbook market. With QuickOS, they redefined functionality for the product and stripped away unnecessary libraries.

Also included is a customized virtual engine to read, edit and save Windows files and also run popular multimedia, productivity, and gaming software while running Linpus Linux Lite.

"The netbook market requires operating system solutions that are rich, powerful, yet lightweight and fast," said Warren Coles, the marketing director for Linpus."Our work on the Acer Aspire One and since with the Moblin project taught us that much could be done at the software level to decrease boot-up time."

lightning boltItemis Extends Eclipse EMF Modeling Capability

Itemis is releasing the new development of TMF-Xtext for inclusion in the next version of Eclipse in June 2009, and piloting the new EMF-Index project. Both of the projects were discussed in sessions at EclipseCon 2009: "Next generation textual DSLs with Xtext" and "Managing Big Ecore Models with EMF Index".

TMF-Xtext will be released with the next version of Eclipse in June 2009. With Xtext, very simple so-called domain-specific languages (DSLs) can be created. This open-source framework is part of the Eclipse Modeling Project, and is being further developed by itemis employees within the Textual Modeling Framework (TMF).

Itemis is also working on a new Eclipse project, EMF-Index, for scalable modeling. EMF-Index is a key element for the use of a large number of models in a working environment, and enables a quick search for model elements.

For more info, go to: http://www.itemis.com

lightning boltInstantiations Releases WindowBuilder Pro v7.0 for Eclipse

Instantiations has released version 7.0 of its market-leading WindowBuilder Pro Java graphical user-interface (GUI) builder. WindowBuilder Pro includes powerful functionality for creating user interfaces based on the popular Swing, SWT (Standard Widget Toolkit), and GWT (Google Web Toolkit) UI frameworks. This product won an Eclipse Technology Award at EclipseCon in March for Best Commercial Eclipse-Based Developer Tool.

WindowBuilder Pro is a bi-directional Eclipse GUI builder with drag-and-drop functionality and automatic Java code generation. The product includes a visual design editor, wizards, intelligent layout assistants, localization and more. WindowBuilder Pro component products include Swing Designer, SWT Designer, and GWT Designer.

"It has been impressive to see the continued growth and popularity of WindowBuilder Pro," said Mike Milinkovich, executive director of the Eclipse Foundation. "Instantiations continues to deliver high quality, innovative tools for the Eclipse platform that help developers utilize Eclipse more effectively, and we're pleased with their continued support of Eclipse."

Updates in v7.0 include: UI Factories, a convenient way to create customized, reusable versions of common components; improved parsing using binary execution flow; a new customization API for third-party extensibility; Eclipse Nebula widgets integration (SWT); Swing Data Binding, JSR 295 (Swing); and full support for GWT-Ext widgets and layouts (GWT).

WindowBuilder Pro v7.0 is available for $329 USD with a traditional software license that includes 90 days of upgrades, maintenance and technical support. Product upgrades are available at no cost to customers with current support agreements. Download full-feature trial evaluation software from http://www.instantiations.com/prods/docs/download.html.

Instantiations is a founding member of the Eclipse Foundation and the Smalltalk Industry Council. The company is also a major contributor in the Smalltalk language market with its VA Smalltalk.



Talkback: Discuss this article with The Answer Gang


[BIO]

Deividson was born in União da Vitória, PR, Brazil, on 14/04/1984. He became interested in computing when he was still a kid, and started to code when he was 12 years old. He is a graduate in Information Systems and is finishing his specialization in Networks and Web Development. He codes in several languages, including C/C++/C#, PHP, Visual Basic, Object Pascal and others.

Deividson works in Porto União's Town Hall as a Computer Technician, and specializes in Web and Desktop system development, and Database/Network Maintenance.



Bio picture

Howard Dyckoff is a long term IT professional with primary experience at Fortune 100 and 200 firms. Before his IT career, he worked for Aviation Week and Space Technology magazine and before that used to edit SkyCom, a newsletter for astronomers and rocketeers. He hails from the Republic of Brooklyn [and Polytechnic Institute] and now, after several trips to Himalayan mountain tops, resides in the SF Bay Area with a large book collection and several pet rocks.

Howard maintains the Technology-Events blog at blogspot.com from which he contributes the Events listing for Linux Gazette. Visit the blog to preview some of the next month's NewsBytes Events.


Copyright © 2009, Deividson Luiz Okopnik and Howard Dyckoff. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 161 of Linux Gazette, April 2009

Upgrading your Slug

By Silas Brown

If you installed Debian on an NSLU2 device ("Slug") following Kapil Hari Paranjape's instructions in LG #138, then you might now wish to upgrade from etch (old stable) to lenny (current stable). Debian itself contains instructions for doing this, which you can follow if you like (but see the note below about the locales package causing a crash). If you use the NSLU2's watchdog driver, then I recommend first booting without it; otherwise, the general unresponsiveness caused by the upgrade can cause the watchdog to reboot the system while it is in an unbootable mid-upgrade state, and you'll have to restore the filesystem from backup. However, following the standard Debian dist-upgrade will still leave you running lenny on the arm architecture. Debian's arm port is now considered deprecated, and in future releases will be replaced by the armel port, which has (among other things) significant speed improvements in floating-point emulation, achieved just by changing the handling of the ARM's registers, stack frames, etc.

If you want to move to armel at the same time as you're upgrading to lenny, this normally requires re-installing from scratch (since ArchTakeover is not yet implemented). However, there is also a way to do it incrementally. Martin Michlmayr has produced an unpacked version of the lenny armel install, which can be downloaded and untarred into a subdirectory of your etch system. Etch will not be able to chroot into this, but at least it lets you compare key configuration files and make the needed changes from the comfort of a working system.
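
A minimal sketch of that download-and-untar step (the URL and tarball name below are placeholders - substitute whatever Martin's site actually provides):

mkdir /armel                                    # a subdirectory of the running etch system
cd /armel
wget http://example.org/lenny-armel-base.tgz    # hypothetical URL
tar xzf lenny-armel-base.tgz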

Moving Files and Setting up Configuration

The first thing you need to do is copy fstab from the old /etc to the new one; you'll also need resolv.conf, hostname, hosts, mailname, timezone, and adjtime. You might also like to copy apt/sources.list (change it to "lenny" or "stable", if it says "etch"), and the following files if you have customised them: inittab, inetd.conf, logrotate.conf. For the password files (i.e., passwd, shadow, group, and gshadow - you don't have to worry about any backup versions ending in -), it is best not to simply copy them from etch, because that can delete the new accounts that various lenny base packages use. Instead, you can use diff -u etc/passwd /etc/passwd | grep \+, pick out the real user accounts, and merge them in (and do the same with group, shadow, gshadow). To review all changes that the new distribution will make, do something like diff -ur /etc etc|less and read through it, looking for what you want to restore. (However, note that some packages won't be there yet; try searching the list for "Only in /etc" to see which new config files you might want to copy across in preparation for them.) Note that lenny uses rsyslog.conf instead of syslog.conf.
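
Here is a sketch of those steps, run from the top of the untarred lenny tree (review each file rather than copying blindly):

for f in fstab resolv.conf hostname hosts mailname timezone adjtime; do
    cp /etc/$f etc/
done
cp /etc/apt/sources.list etc/apt/              # then change "etch" to "lenny", if present
diff -u etc/passwd /etc/passwd | grep '^+'     # the old system's extra entries; merge the real users by hand
diff -ur /etc etc | less                       # full review of what the new /etc will change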

After you are happy with /etc, remember to copy across the crontabs in /var/spool/cron. (I've lost count of the number of times I, as a user, have had to reinstate my crontab because some admin forgot to copy the crontabs during an upgrade.) Also, take a list of the useful packages you've installed that you want to re-install on the new system. Finally, remove (rm -r) the following top-level directories from the new system (make sure you're in the new system, not in /!): media, home, root, tmp, lost+found, proc, mnt, sys, srv. Removing these means they will not overwrite the corresponding top-level directories from the old system during the next step.
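
For example (a sketch; /new-system stands for wherever you untarred the lenny tree):

dpkg --get-selections > /root/etch-packages.txt              # note what was installed, for later reference
mkdir -p /new-system/var/spool/cron
cp -a /var/spool/cron/crontabs /new-system/var/spool/cron/   # keep the users' crontabs
cd /new-system
rm -r media home root tmp lost+found proc mnt sys srv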

The new system now needs to be copied onto the old one (with the old directories being kept as backups), and the NSLU2's firmware needs updating to the lenny version (also downloadable from the above-mentioned site) using upslug2 from a desktop, as documented. The directories are best copied over from another system: halt the NSLU2, mount its disk in another system, and do

cd /new-system
for D in * ; do
  mv /old-system/$D /old-system/$D.old && mv $D /old-system
done

substituting /new-system and /old-system appropriately. (This assumes you have room to keep the *.old top-level directories on the same partition; modify it, if not.)

Partitioning Complications

Because Martin Michlmayr's downloadable firmware image was (at the time of writing) generated from a system that assumes /dev/sda2 is the root device, you should make sure that your root filesystem is on the disk's second partition. If it is on the first, then you can move and/or shrink it slightly with gparted, create a small additional partition before it, and use fdisk to correct the order if necessary. (The fdisk commands you need are x, f, r, w, and q.) Then, put only that disk back into the NSLU2, and switch on. If all goes well, you should now boot into the new distribution. Then, you can start installing packages and re-compiling local programs, and do apt-get update, apt-get upgrade and apt-get dist-upgrade. You might still need to run dpkg-reconfigure tzdata, even though it should show your correct timezone as the default choice.

Of course, if you have 2 disks connected at boot time, then there's only a 50/50 chance it will choose the correct one to boot from. (If it doesn't, disconnect and reconnect the power and try again, or boot with only one disk connected.) If you have set up your /etc/fstab to boot from UUIDs, then this will take effect when you install your own kernel (which should happen automatically as you upgrade the lenny packages). You can get a partition's UUID using dumpe2fs /dev/sda2 | grep UUID, and use UUID=(this number) in place of /dev/sda2 or whatever in fstab, as long as it's not a swap partition. One trick for getting swap to work is to ensure that it is on a partition number that is valid on only one disk, and then list all the disks having swap partitions with this number. The correct disk will be used, and the others will cause harmless errors during boot.
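
For example (the UUID shown is only a placeholder - use the one that dumpe2fs actually reports):

dumpe2fs /dev/sda2 | grep UUID
# then, in /etc/fstab, something like:
# UUID=01234567-89ab-cdef-0123-456789abcdef  /  ext3  defaults,errors=remount-ro  0  1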

You may experience further complications on account of the differences between ext2 and ext3 filesystems. In Debian etch, if you wanted to reduce the wear on a flash disk, you could tell /etc/fstab to mount the partition as ext2 even though the installer formatted it as ext3; mounting as ext2 simply skips writing the ext3 crash-recovery journal. Apparently, however, the newer kernel in lenny cannot really mount an ext3 partition as ext2 (it tells you it's doing so, but it doesn't - see Ubuntu bug 251999). Moreover, if your fstab says ext2, update-initramfs will leave the ext3 module out of the initramfs, and the system will become unbootable when you try to upgrade to lenny's latest kernel (although you can still boot Martin's kernel, which expects ext3). Conversely, if your filesystem really is ext2, you won't be able to boot Martin's kernel. Therefore, you may need to convert the filesystem one way or the other, depending on which kernel you want to boot.

To convert an ext3 partition back to ext2, connect the disk to a separate computer and, if for example the partition is sdb2 on that computer, make sure it is unmounted and do

e2fsck -fy /dev/sdb2
tune2fs -O ^has_journal /dev/sdb2
e2fsck -fy /dev/sdb2

and to convert it back to ext3,

tune2fs -O has_journal /dev/sdb2

Martin has filed Debian bug #519800 to suggest that initramfs support both versions of extfs no matter what fstab says, which should mean (when fixed) you don't have to run tune2fs just to get a bootable system. You might still want to do it anyway to work around the other bug (kernel updating the journal even when ext2 is requested).

Locales Package

When doing an apt-get upgrade or dist-upgrade, make sure the locales package is not installed, or at least that you are not generating any locales with it. That package's new version requires too much RAM to generate the locales; the 32MB NSLU2 cannot cope, and may crash. If you need any locales other than C and POSIX, then you can get them from another Linux system by copying the appropriate subdirectories of /usr/lib/locale (and possibly /usr/share/i18n if you want locale -m to work, too).
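
For instance, to borrow a single locale from another Debian or Ubuntu machine (the hostname and locale name below are only examples):

scp -r otherhost:/usr/lib/locale/en_GB.utf8 /usr/lib/locale/
scp -r otherhost:/usr/share/i18n /usr/share/     # only needed if you want 'locale -m' to work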

Sound

If you have fitted a "3D Sound" USB dongle, you might find that, in the new distribution, the audio becomes choppy and/or echoey. This seems to be on account of an inappropriate default choice of algorithms in the ALSA system, and it can be fixed by creating an /etc/asound.conf with the following contents:

pcm.converter {
  type plug
  slave {
    pcm "hw:0,0"
    rate 48000
    channels 2
    format S16_LE
  }
}
pcm.!default converter

Note that this configuration deliberately bypasses the mixer, so only one sound can play at once. Mixing sounds in real time on an embedded system like this is likely to be more trouble than it's worth.

Unfortunately, it no longer seems possible to drive the soundcard itself at lower sample rates or with fewer channels. That is a pity, because having to up-convert lower-samplerate audio (such as the mono 22.05kHz audio generated by eSpeak) not only wastes bandwidth on the USB bus but also seems to reduce the sound quality slightly, although the difference is not immense.

If you are playing MP3s, then you also have the option of getting madplay (rather than the ALSA system) to do the resampling, and this could theoretically be better because madplay is aware of the original MP3 stream, but I for one can't hear the difference.

madplay file.mp3  -A -9  -R 48000 -S  -o wav:-|aplay -q -D hw:0,0

On lenny (unlike on etch), recording works, too, and it can be done with arecord -D hw:0,0 -f S16_LE -r 24000 test.wav, but the quality is not likely to be good. (Mine had a whine in the background.)

Everything Finished

If you have done this, then you should, with luck, have an NSLU2 running lenny on the armel architecture, which has significantly faster floating-point emulation (although it's still not as fast as a real floating-point processor) and, perhaps more importantly, has better long-term support. (You won't be stuck when arm is dropped in the release after lenny.)

The *.old top-level directories created above can be removed when you are sure you no longer need to retrieve anything from them, or you can rename them to "old/bin", "old/usr", etc., and have a chroot environment in old/. (The new kernel can run an old system in chroot, but not vice-versa.)


Talkback: Discuss this article with The Answer Gang


[BIO]

Silas Brown is a legally blind computer scientist based in Cambridge UK. He has been using heavily-customised versions of Debian Linux since 1999.


Copyright © 2009, Silas Brown. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 161 of Linux Gazette, April 2009

Away Mission: 2008 in Review - part 3

By Howard Dyckoff

April is another Mad Month with competing tech events. Besides the events reviewed here, there are Black Hat Europe 2009, April 14-17, in Amsterdam, and the USENIX LEET (Large-Scale Exploits and Emergent Threats) conference in Boston, April 21-24.

This year will feature a new major event - the Linux Collaboration Summit, organized by the Linux Foundation. The 3rd Annual Collaboration Summit will be co-located with the CELF Embedded Linux Conference and the Linux Storage and Filesystem Workshop. It occurs April 8-10 in San Francisco. More information is here: http://events.linuxfoundation.org/events/collaboration-summit/

Web 2.0 Expo and Velocity

Over the years, the Web 2.0 event has split into the Expo, for Web production people; and the Web 2.0 Summit, for the leaders (which operates by invitation only). More recently, the Web 2.0 Expo has become a forum for social networking and Web designers.

The better bet for sysadmins and Linux hackers is the O'Reilly Velocity Conference in June, which was spun off from the Web 2.0 Expo last year by presenters from the tech tracks who wanted a dedicated event. It was very successful, even though the tech community was given barely 3 months' notice. If you have to choose only one or two conferences this year, and you build or maintain data centers, Velocity should definitely be on your short list.

For starters, all the good tech presentations were repeated at Velocity. Many of those were expanded, and the networking opportunities are different. Web 2.0 Expo is about the nexus of art, Web tech, and - to some extent - marketing. Velocity is about getting things to work at Web scale and Web velocity. It's fundamentally nerdier.

Since social networking was a big piece of the show, there was a social networking site for Web 2.0 Expo San Francisco, used to find and connect with people at the conference and for general opinion mongering. See what people said: http://webexsf2008.crowdvine.com/

Another social networking site connected to the Web 2.0 Expo is http://socialtext.net/web20sf.

Praise: there was breakfast every morning - fruit, bagels, cream cheese, juice... - which stayed out through the mid-morning break.

No praise: the coffee and tea disappeared after the morning break, and returned only for the short mid-afternoon break. The moral? Get your caffeine early.

Conference attendees get a box lunch, each day. There were special requests, but a lot of those got lost the first day. They also tried to limit the number of vegetarian lunches, saying they were out temporarily. A long line waited against the wall on Wednesday, and seemed rather unhappy. (That seems to have been fixed on Thursday with a full table, one of four. That's only fair, as carnivores can eat vegetarian lunches, but vegetarians can't eat carnivore lunches.)

The sense I got from the panoply of 50-minute breakout sessions is that more people are using open source, and using it in more sophisticated ways for cutting-edge Web sites. There was some buzz around OpenID and OAuth, and also around open platforms such as the Google APIs and Google App Engine, the new Yahoo Social APIs, and lots of wiki and community hosting sites like Vox and Movable Type.

Although mashups and social networking are "so 2007", there are also lots of new platforms and frameworks to make it easier to roll your own site and (attempt to) bring these technologies into the business enterprise. Another emerging trend is the increasing interest corporate IT is showing in these technologies.

Some session presenters, mostly from platform companies or custom software houses, reported several large IT organizations experimenting with Web 2.0 and letting the business execs bring in (with some control and reservations) SaaS versions of the tools they want to use. The demand is getting to be too big to wait another year. However, those IT shops are trying to identify the data that needs serious protection, and trying to quarantine just that, as enterprises become porous and more vulnerable to penetration.

Everyone is talking about making the huge amount of data from social networks both easier to leverage and more protected for user privacy. That means increased use of Open Social and similar APIs and identity federation management frameworks. The work never seems to end.

There aren't really tracks for presentations at Web 2.0, but three rooms are set aside for "sponsored presentations". That often means a bit of marchitecture in those presentations, but most were hardly any different from the main presentations, except that the presenters were from bigger companies.

The sponsor session with Adobe's Duane Nuetall was actually a very technical discussion on folksonomies and ontologies, and did not mention any Adobe products because they aren't in the ontology business. Rather, much like the rest of us, they are interested in using semantic technology as it matures. The Microsoft-sponsored presentation, however, was on their new Mesh product, and it was heavy on the marketing side.

Other details, not all of them praiseworthy: the keynote and breakout rooms had no power taps or extension cords for users, except those built into the fixed walls (and in very, very few rooms). I have been bringing a 15-foot extension cord of my own, to allow sitting at some distance from the outlet if necessary and sharing one plug with up to 3 users. I don't know if I've inspired folks or if it's just the zeitgeist, but I am seeing others now emulating my actions, sometimes with only a 3-way tap. On Thursday, I saw someone with a full 6-tap power strip! Bless that person.

On the last day, there was a block of sysadmin-oriented sessions, two of them in the same room. (There it is again: another room-scheduling issue.) As it turns out, all were to be technical presenters at the new and upcoming O'Reilly Velocity conference in June. That conference and these sessions drew a bead on performance and capacity issues for the operations crowd. It seems the O'Reilly folks understand that the interactive Web requires more than design artists and AJAX.

Steve Souders' presentation at Web 2.0 Expo 2007 was rated #2, which isn't that surprising, since he also worked on the YSlow extension for Firefox. Formerly the head performance guy at Yahoo, and now doing a similar job at Google, he is focused on the client side, or front end, of Web transactions. He already has a recent O'Reilly book out on Web site performance that provided part of his talk at the 2007 Web 2.0 conference, and is now preparing a second book, which covers the user side of the equation.

His "Even Faster Web Sites" presentation was a gem, as was its follow-up at Velocity, a few months later. I'll distill a little of it here. Souders' research shows that most major sites spend 80-95% of their net time on front-end processing and browser issues. It's gotten to this state due to the ubiquity of JavaScript and the proliferation of scattered, individual scripts.

Even a 50% improvement on back-end performance yields only 5-10% gain for the user. However, simple changes in a few lines of source code can work wonders. According to Souders, you can easily get a 25% advantage in page load times by applying 14 performance rules. Just pick a few appropriate ones to get a fair gain. Here are a few:

  1. Make fewer HTTP requests, or bundle them up for less network delay.
  2. Use a CDN (content distribution network - use edge delivery as Akamai does).
  3. Add an Expires header, especially with a CDN.
  4. gzip components.
  5. Use YSlow and Firebug to test the load times and performance of individual items.
  6. Since scripts block other content downloading - move 'em to the end of the page code.
  7. Use multiple content domains for even faster downloads.
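
Several of these rules are easy to spot-check from the command line. For instance, here is a rough way to see whether a page honours rules 3 and 4 (the URL is just an example):

curl -sI -H 'Accept-Encoding: gzip' http://www.example.com/ | egrep -i 'expires|cache-control|content-encoding'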

The last three points were the subject of the second half of the presentation. There are multiple JavaScript-Fu techniques to break up monolithic script payloads. One of these calls for making scripts individual elements in the DOM (used at MSN.com). Others use separate script payloads in different Iframes, or XHR injection (which may be best for same-domain scripts), with no ordering of the scripts. Another option is to move some scripts to an external script, which can allow parallel downloads. Souders also suggests ordering the scripts so they can be executed in the order they're received.

See slide 31 from his presentation, showing the effect of script loading and execution on a Wikipedia page here: misc/dyckoff/Script-Load-Wikipedia.otp

For the full description of these techniques with code samples and a decision tree for selection of the most suitable in your environment, check out his presentation here: http://assets.en.oreilly.com/1/event/3/Even%20Faster%20Web%20Sites%20Presentation.pdf

Jesse Robbins and Artur Bergman of O'Reilly Radar presented an entertaining and informative review of major failures, disasters, and painful lessons learned in the past year. Check it out here: Failure Happens: What Broke Since Last Year (and What We Learned from It)

The conference party was an outside event... literally. It was a pub crawl through the restaurants and bars of the artsy, techy San Francisco South Park area - a brilliant stroke for pleasing the 30-something and 20-something crowd. The crawl had some sprawl and also was over 5 blocks from the Convention Center on a cold, foggy night. With a bag on each shoulder - a laptop bag and a tote with mags and swag - I opted for an early night and the faster subway ride home. That may have been the better choice, since many pub crawlers missed the earlier Friday AM sessions.

If Web design or social networking sites are your bag, then this is a must-attend event. However, if you are working the infrastructure and biting the scaling bullet, you might take a shine to the Velocity conference. Some people attend both.

RSA for Security Trends

Still one of the first-tier security events, RSA 2009 returns to San Francisco the same week as the MySQL conference in Santa Clara. Both are excellent, and have different audiences.

Alert: Last year (and 2009 as well), both TCG (Trusted Computing Group) and several identity communities under the auspices of the Liberty Alliance and Concordia Project held separate semi-public sessions on the first day of the RSA conference. The identity event includes representatives from major initiatives in the global identity sector, and is focused on how the identity industry can deliver new benefits to users of enterprise and Web 2.0 identity-enabled applications and services.

These sessions are open to all registrants, which should include expo pass holders. Since that day was (and continues to be) committed to tutorials, this is effectively a free extension to the conference. However, pre-registration is a requirement.

For 2008, the identity management workshop was titled, "Identity Federation & Web Services: Happening Today - Enabling Tomorrow". Materials from that event are here: http://projectconcordia.org/index.php/Concordia_workshop_RSA_2008_notes and the actual slide deck is here: http://projectconcordia.org/images/7/76/Concordia-Apr2008-wiki.pdf

For 2009, the event is longer - 8 am to 5 pm - and is titled, "Harnessing the Power of Digital Identity: 2009 and the Promising Road Ahead". It is supposed to be open to the public. A detailed workshop agenda and registration information is available at http://projectconcordia.org/index.php/April_20_pre-conference_workshop

At the separate TCG session, the room was broken up into a main area and four mini-classrooms where network and data security presentations could be given hourly. I believe they also provided box lunches and a USB drive with some of the presentation materials. The slide deck for the TCG 2008 presentation is here: https://www.trustedcomputinggroup.org/news/events/rsa_2008/.

I heard that the attendance was up for RSA 2008, after 2 or 3 years of modest decline. Pre-conference, the number was projected to be 17,000 - quite respectable. I hope they don't suffer a significant decline in this extreme recession.

I found the conference very well organized, with things to do for both full attendees and expo-only types. There was a hacker smack-down contest set up in the main corridor, and adjacent to it was a Jeopardy-like contest during day hours.

There was also the "crypto commons" lounge with plenty of space to sit down and charge up that laptop between conference sessions. Rows of tables allowed more focused work, and there were Ethernet drops too.

Most of the tracks were exclusively in single rooms, which minimized travel for attendees focused on a single area. Besides the two concurrent Hacker Tracks, there was an ID Management track, a very popular developer security track, a business track, a sponsor track for items that didn't rate a keynote, and also a new legal track. Of course, similar content might appear in different tracks, like presentations on XACML by Oasis Members.

One highlight of RSA 2008 was the Cyber Security Town Hall meeting, open to expo attendees as well. For 2008, this featured a presentation by Greg Garcia, the Department of Homeland Security Assistant Secretary for Cybersecurity and Telecommunications (who also spoke in 2007). Garcia spoke on the then recent Cyberstorm II exercise results. Unlike Cyberstorm I, which was more like a board game, this was a real-time cyber-attack scenario. The exercise planning began in 2007 and culminated in March of 2008, involving 40 companies, 9 states, and 5 countries (Canada, Australia, New Zealand, the US, and the UK). One thing DHS learned from the effort, Garcia said, was just how important critical vendors and support staff are in an international emergency. This sentiment was echoed by reps from EMC and Microsoft at a participant panel after Garcia's talk. Collect business cards from your peers at events like RSA, and be prepared for cyber-disruption, they advised.

Presentations and other conference materials for RSA 2008 are locked up, but many presenters post their own presentations on-line. So, looking up the presenter and the presentation title may turn up a presentation you want. This link offers several presentations from RSA conferences here and in Europe, including "Darwin and Security: What Evolution Tells Us About the Past and Future of Security": http://www.cryptography.com/research/presentations.html

The RSA conference archives have articles and podcasts that are public. See them here: https://365.rsaconference.com/community/rsaconference_archives

Also see this link: Podcast Series: RSA Conference 2008 https://365.rsaconference.com/blogs/podcast_series_rsa_conference_2008

I'd also recommend Bryan Sullivan's highly rated presentation from RSA Conference 2008 on "AJAX Security". This is an update of an earlier talk called "AJAX Applications: A Blueprint for Disaster" - a title earned by AJAX's greatly expanded attack surface.

I do have some quibbles about the once spectacular "Cryptographers Bash", the night before the last day. I don't know if the ballrooms at the Marriott Hotel were collectively smaller than the Treasure Island venue, but the crowd seemed much smaller and the food stations had only a few variations repeated in all the ballrooms. It seemed like a step or two down from previous bashes.

While the variety of food and entertainment was already much more limited, some crazy person thought it best to hold back the desserts until after 9 pm, long after a sizable chunk of party-goers had departed to sleep off alcoholic and carnivorous excesses. Perhaps this reflected newly imposed economy measures, but for the folks who had eaten beforehand, it was an excessive wait. Many of us just left before they rolled out the sweets.

Final-day end-game: three track sessions without a break in the morning, and two keynotes in the afternoon. The first keynote featured Hugh Thompson in his techno-celebrity incarnation. He had also closed out RSA 2007.

The real closing honor went to Al Gore and his Green Energy message. Unfortunately, Gore's keynote was contractually a non-Press event. That meant all bloggers, tech writers, and local news hacks were escorted out before he spoke - by security staff. And that included this lowly Linux Gazette reporter. Of course, some press people had obtained separate expo passes, and snuck in anyway.

That event was not recorded or posted publicly, but Gore also spoke at the Web 2.0 Summit last November, and that video is in the conference archive. Check it out here: http://www.web2summit.com/web2008/public/schedule/detail/5068

MySQL - No Disappointments

The MySQL user conference never disappoints, and is usually tightly scheduled and well-organized. It features keynotes by technologists and researchers, and presentations by the MySQL development team and key partners. With nearly 2,000 attendees, this is probably the world's largest community event for open source database developers and users.

Last year's event followed soon after Sun's purchase of MySQL, but the conference was substantially unchanged. Former CEO Marten Mickos addressed concerns and anxieties during and after his keynote, noting that Sun provided the resources that MySQL needed at that stage of its growth.

Mickos had set off concern in the blogosphere and on Slashdot that MySQL was moving in a proprietary direction, by mentioning that "commercial extensions" planned for 6.0 would be available only to subscribers to the enterprise edition of MySQL 6. However, these management additions are really an outgrowth of the MySQL Network subscription for enterprise users, and have little impact on the user community.

Rick Falkvinge, of the Swedish Pirate Party, gave a challenging keynote on "Copyright Regime vs. Civil Liberties" on the second day. He and his party consider modern copyrights and their legal regime a threat to civil liberties, viewed from a very long historical perspective.

Recounting the battles between the medieval church and the printing press, and later the exclusive charter granted to the London printing guild by Henry VIII, Falkvinge described our rules of intellectual property as protections almost entirely for the publishers, not the creators. So, it seems not that much has changed in over 300 years.

Originally, copyrights were about public use and public performances of copyrighted materials. Now, IP owners like record companies are arguing against messenger immunity, an idea going back to the Roman Empire. They are also arguing for the right to inspect private e-mail, and to pierce postal secrets and common-carrier privacy. He argued that this undermines whistle-blowers and freedom of the press, both of which need privacy to protect their "private" communications.

Some presentations at the user conference dealt with performance improvements and tuning in the then-new 5.1 release of MySQL, and other sessions discussed planning for a future 6.0 release, probably after 2010. That will probably be a major discussion point at the 2009 MySQL user conference. Here is the schedule for the upcoming 2009 conference: http://en.oreilly.com/mysql2009/public/schedule/grid

One potentially interesting session for 2009 is on "Drizzle", a fork of the MySQL server targeted at Web development and cloud computing. Monty Taylor, a very senior MySQL/Sun engineer, is working on it full time. Drizzle is also discussed in a panel session on the MySQL roadmap. Check out the keynotes and presentations at the O'Reilly archives. (See below.)

For on-line O'Reilly Conference archives, visit this link: http://conferences.oreillynet.com/archive.csp

I do have to give a nod to O'Reilly on this: They put up event archives quickly, and these are publicly accessible.


Talkback: Discuss this article with The Answer Gang


Bio picture

Howard Dyckoff is a long term IT professional with primary experience at Fortune 100 and 200 firms. Before his IT career, he worked for Aviation Week and Space Technology magazine and before that used to edit SkyCom, a newsletter for astronomers and rocketeers. He hails from the Republic of Brooklyn [and Polytechnic Institute] and now, after several trips to Himalayan mountain tops, resides in the SF Bay Area with a large book collection and several pet rocks.

Howard maintains the Technology-Events blog at blogspot.com from which he contributes the Events listing for Linux Gazette. Visit the blog to preview some of the next month's NewsBytes Events.


Copyright © 2009, Howard Dyckoff. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 161 of Linux Gazette, April 2009

Playing with Chroot

By Oscar Laycock

It took me a while to realise what chroot does. As I found out, it runs a command with the root directory for file name translation changed to the specified directory. Usually, only root can do this. [1]

Here is a quick example:

First, I run ldd /bin/bash to print the shared libraries needed by my bash:

    libtermcap.so.2 => /lib/libtermcap.so.2
    libdl.so.2 => /lib/libdl.so.2
    libc.so.6 => /lib/libc.so.6
    /lib/ld-linux.so.2 => /lib/ld-linux.so.2

Then, I create a directory and copy in the files:

    myroot/bin:
        ls bash
    myroot/lib:
        ld-linux.so.2 libc.so.6 libtermcap.so.2 libdl.so.2

then I just:

    chroot myroot /bin/bash
    cd /
    ls

Note: the bash prompt will very likely say "I have no name!", as there is no /etc/passwd file in the chrooted structure.
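
Putting those steps together as a small script (the library names are the ones my ldd reported above - substitute whatever yours lists):

mkdir -p myroot/bin myroot/lib
cp /bin/bash /bin/ls myroot/bin/
cp /lib/libtermcap.so.2 /lib/libdl.so.2 /lib/libc.so.6 /lib/ld-linux.so.2 myroot/lib/
chroot myroot /bin/bash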

In the Kernel

The chroot program is part of the GNU shell utilities package. It is tiny, merely calling the C library function chroot() and then executing its second argument (or the default /bin/sh) with the C function execvp(). Here, it uses the shell PATH, or "/bin:/usr/bin" if it is not set. The chroot library function has its definition in unistd.h:

/* Make PATH be the root directory (the starting point for absolute paths).
      This call is restricted to the super-user.  */
extern int chroot (__const char *__path);

Inside the kernel is the function "sys_chroot". It checks for the CAP_SYS_CHROOT capability, then simply changes the "rootmnt" and "root" fields of the current task's "current->fs" structure to point at the given path's mount and "dentry". Other code then uses these fields to determine the root directory. Have a look in the kernel sources in fs/open.c and fs/namespace.c (the function name is 'set_fs_root') for more information.

Chroot in Linux from Scratch

Chroot is a key part of the Linux from Scratch (LFS) project, which allows you to build a handmade Linux system. The actual chroot command there is a bit more complex:

chroot "$LFS" /tools/bin/env -i \ 
    HOME=/root TERM="$TERM" PS1='\u:\w\$ ' \ 
    PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin \ 
    /tools/bin/bash --login +h 

The -i option gives an empty environment. Bash hashing is switched off, as we will be changing the location of the tools.

You can see how chroot fits in the whole LFS project. Once we have the above set up, we take the following steps:

  1. Create a new partition and base directories (/lib, /bin, /usr, etc.)
  2. Build a new "toolchain" in the partition, comprising binutils (the assembler and linker), the gcc compiler, and the large glibc (C library).
  3. Rebuild gcc, using configure options to use the new glibc and changing the gcc specs to use the new glibc's dynamic linker. (You usually "configure", "make", and "make install" when building a program from source code. Try running "gcc -dumpspecs" to see the mysterious compiler specs.)
  4. Rebuild binutils using the '--prefix' option of "configure" to use the new glibc.
  5. Build lots of tools such as bash, core/file utils, make, perl, and so on.
  6. CHROOT INTO THE NEW PARTITION'S DIRECTORIES!
  7. Rebuild glibc.
  8. Rebuild binutils and gcc, changing the directories to be relative to the chroot top directory. Build all the tools again.
  9. Build a kernel.
  10. Add the new partition and kernel to the bootloader.

As you can see, you end up building the basic tools three times! Luckily, there is another LFS project that automates this process with scripts. Better yet, the "Beyond Linux from Scratch" project shows you how to add much more, such as Web servers and the GNOME and KDE desktop environments.

A Quick Compiler

I am currently building an LFS system on an old laptop a friend gave me. I started with a kernel, and some small tools (fdisk, ls, cp, etc.), statically built and squeezed onto a floppy. I then copied across Damn Small Linux (DSL), floppy by floppy, before setting up a ppp link with a serial cable. DSL does not have a compiler by default, and I wanted to get one going quickly. The compiler seemed to conflict with the DSL system (a smaller old 2.4 kernel with no "thread local storage" for the C library to use), so I created a chroot directory with just enough to build a simple "hello world" program. I added the following files. (I believe "crt" stands for "C run-time", and "begin" files are code added at the start of the program(?). A prefix or suffix of "s" usually means using shared libraries as normal.)

myroot/usr
|
+---include:
|       a.out.h ... xlocale.h
|
+---lib:
|       Mcrt1.o Scrt1.o crt1.o crti.o crtn.o gcrt1.o
|
+---local
    |
    +---bin:
    |       gcc
    |          
    +---i686-pc-linux-gnu
    |   |
    |   +---bin: 
    |   |       as ld
    |   |
    |   +---lib
    |       +---ldscripts:
    |               elf_i386.x ...
    |    
    +---lib:
    |   |   libgcc_s.so libgcc_s.so.1 libgmp.so.3 libmpfr.so.1
    |   |
    |   |---gcc
    |       +---i686-pc-linux-gnu
    |           +---4.3.2:
    |                   crtbegin.o crtbeginS.o ...
    |                   libgcc.a ... 
    |    
    +---libexec
        +---gcc
            +---i686-pc-linux-gnu
                +---4.3.2:
                        cc1 cc1plus collect2
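
As a rough test of such a tree (a sketch only: it assumes bash and its libraries from the earlier example, plus the C library's link-time files, are also present under myroot, and that a hello.c has been copied in):

chroot myroot /bin/bash          # get a shell inside the mini-system
gcc hello.c -o hello             # uses the copied toolchain; /usr/local/bin must be on the PATH
./hello
exit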


[1]
A common application of the chroot call would be to run a Web or FTP server chrooted in a directory like /home/www or /home/ftp; this provides an excellent layer of security, since even a malicious non-root user who manages to crack that server is stuck in a "filesystem" that contains few or no tools, no useful files other than the ones already available for viewing or downloading, and no way to get up "above" the top of that filesystem. This is referred to as a "chroot jail".
Do note, however, that allowing a user to log in as root into your chroot account is not safe: root can break out of a chroot jail with trivial ease. Please see the following links for more information:
http://kerneltrap.org/Linux/Abusing_chroot
http://unixwiz.net/techtips/chroot-practices.html
http://www.bpfh.net/simes/computing/chroot-break.html
-- Ben Okopnik

Talkback: Discuss this article with The Answer Gang


[BIO]

I live by the River Thames in the suburbs of London, England. I play with Linux in my spare time on a ten year old PC. I was a C and Oracle programmer when I was younger.


Copyright © 2009, Oscar Laycock. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 161 of Linux Gazette, April 2009

Bash configuration under Ubuntu

By Ben Okopnik

[ From a moldering fragment of ancient writings discovered among the dustbunnies in an abandoned computer room ]

"...And when Ubuntu first came into the land, there was much rejoicing at the nice interface, the ever-reliable "dpkg" package system, the user-friendly community, and the rest - and all was good. But lo, there came the darker days of further discovery: those who had, for ages untold, set up their environment variables and other configuration bits in their ~/.bash_profile suddenly discovered that this was no longer processed. Furthermore, they found that seeking advice in the wonderful Ubuntu user forums availed them not. And there arose a cry in those latter days of 'Dude - WHAT HAPPENED TO MY RESOURCE FILES?'"

With luck, this article will answer that question - and maybe even tell you what you can do about it.

The Way Things Were, and The Way They Are

Once upon a time, life with Bash under X was easy and predictable: when you booted your system, the final runlevel either A) handed you a login console, which started your login shell and read all its init files, at which point you could start X, or B) ran a graphical display manager that would start X, fire off a login shell (which read its init files), and hand control over to your ~/.xinitrc or ~/.xsession, where you could run up whatever X configuration, programs, and desktop manager you wanted. Lots of flexibility, plenty of choices - although that latter could be somewhat confusing to Linux newcomers - and all was well.

Ubuntu, however, did something different: the runlevel passes control to the GNOME display manager (GDM), which runs your desktop manager (GNOME) and... that's pretty much it. Sure, it's easier for newcomers - but there's no such thing as control over the shell behavior anymore; in fact, there's no login shell, which means that the per-user configuration files are no longer sourced at login time. There's also no standard way to fire up any X startup-time configuration. What to do?

When I switched to Ubuntu, I found the situation unpleasant but dealt with it in various ways (mostly hacks involving becoming root and messing about with Deep GDM Magick - not something I'd recommend for a new user, since it's a good way to quickly make your system unbootable). Recently, though, I decided to see if it could be fixed within the limits of what the average user could do.

Following the Wily X Beast

First, I traced the execution of the X startup scripts in /etc/X11 and /etc/gdm; this mostly involved chasing the path through the Xsession file, which sets up variables and loads the external files, then hands control off to the display manager defined in /etc/X11/default-display-manager (gdm). GDM, in turn, runs its own version of Xsession (/etc/gdm/Xsession) which goes back and reads a series of scripts in /etc/X11/Xsession.d/, and so on. In the process, I noticed that one of the resources read by /etc/gdm/Xsession was a file called "$HOME/.xprofile". Bingo - a user-controllable resource! There was one catch, however: since the shebang line at the top of /etc/gdm/Xsession consisted of "#!/bin/sh", this meant that .xprofile would be read by that shell - not by Bash - which meant that I had to avoid any "Bashisms" (i.e., structures or commands specific to bash as contrasted against ones executable by a plain Bourne shell.) The positive side to this was that Bash would inherit any of the variables set by the Bourne shell (I guess some kids do listen to their parents...) Overall, this didn't look like much of a hardship: it just required a little extra caution. Previously, I would have just edited the shebang in /etc/gdm/Xsession - but I was determined to do this from the non-root perspective, so that option was out.

Since the default shell under Ubuntu is Bash, I knew that every invocation of the shell would read the ~/.bashrc file. The traditional use of the two bash resource files has always been to place the "run once" stuff like PATH, functions, "mesg n", etc. in ~/.bash_profile, and "run for every shell" stuff like aliases in ~/.bashrc. The latter was to be kept as small and simple as possible, since it was run for every shell invocation. Given this new system, though, that would have to change a bit:

  1. I decided to leave my ~/.bash_profile alone. As a result of Ubuntu's hackery, it's not being sourced now, but that doesn't mean that it won't be at some point in the future - and if I move to some other distro, it's still as valid as it always was.
  2. ~/.xprofile would now take over - more or less - the function of ~/.bash_profile, but would have to follow Bourne syntax. This means that functions can no longer be exported (as Bourne does not support the "export -f" convention), but must be moved into ~/.bashrc. Furthermore, since all of it is being read before a console is spawned, any tty-specific functions (e.g., "mesg") also need to be moved there. I also made sure not to include the lines from ~/.bash_profile that sourced ~/.bashrc; that would be sourced every time I spawned a shell, but was not wanted when X was sourcing ~/.xprofile.
  3. ~/.bashrc would now carry a bit more of a load: all the functions would now be defined there (meaning they don't have to be exported any longer - every shell gets the whole set as it starts up), and all the console configuration would be there as well. The alias definitions, which were in there previously, would stay just as they were.

In essence, what I ended up doing is combining ~/.bash_profile and ~/.bashrc and splitting them back out into ~/.xprofile and ~/.bashrc, according to the new "rules" that I set up above.

The Details

Be aware that you'll be "judged harshly" if you make a mistake: any error in ~/.xprofile will crash your /etc/gdm/Xsession and cause GDM to show you an error message - something like "Your session lasted for less than 10 seconds. Failed to start the X server (your graphical interface). It is likely that it is not set up correctly. [...]" If this happens, go to 'Options/Select session' in GDM and choose 'failsafe', check out your ~/.xsession_errors to find out why it crashed and fix that problem, then try again.
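
One way to avoid being "judged" at all is to let the shell parse the file before you log out - for example:

sh -n ~/.xprofile && echo "syntax OK"    # parses without executing; only catches syntax-level mistakes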

Just below, I'll give (somewhat reduced) examples of my ~/.bash_profile, ~/.bashrc, and ~/.xprofile. The important thing to note is what got moved out of the former and where it went, or if it went anywhere at all. I'll highlight the ~/.xprofile lines in blue and the ~/.bashrc lines in green; anything in bold black got left out because it was no longer applicable.

~/.bash_profile

# ~/.bash_profile: executed by bash during startup.

if [ -f ~/.bashrc ]; then
  . ~/.bashrc
fi

eval $(lesspipe)
stty stop ''
mesg n

# Note: these lines would normally need to be revised for Bourne syntax,
# since the original Bourne shell did not accept exporting and declaration
# in one statement; however, '/bin/sh' in Debian/Ubuntu does accept it, so
# it's not a concern.

export EDITOR=/usr/bin/vi
export ENV=~/.shrc
export LESSCHARSET=utf-8
export LIBGL_DRIVERS_PATH=/usr/lib/dri
export LYNX_CFG=${HOME}/.lynxrc
export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/games:/usr/local/games:/var/svn/linuxgazette.net/bin
export PERLDOC="-otext"
export PI=`perl -we 'printf "%.48f\n", atan2(0,-1)'`
export RSYNC_RSH=/usr/bin/ssh
export SVN_SSH=/usr/bin/ssh
export WWW_HOME=file://${HOME}/lynx_bookmarks.html
export XTIDE_DEFAULT_LOCATION='St. Augustine, city dock, Florida'

# Sites
export LG="linuxgazette.net"
export NHC="www.nhc.noaa.gov"
export WWW="okopnik.com"

TTY=`/usr/bin/tty 2>/dev/null`
[ ${TTY:5:3} == "tty" ] && {		     # If not a console, bail!
	color=(foo blue green magenta)       # tty's start at 1, arrays at 0...
	setterm -foreground ${color[${TTY#*y}]} -store
}

~/.xprofile

# ~/.xprofile: executed by X during startup (modified version of
# .bash_profile, must be executable under /bin/sh)

export EDITOR=/usr/bin/vi
export LESSCHARSET=utf-8
export LIBGL_DRIVERS_PATH=/usr/lib/dri
export LYNX_CFG=${HOME}/.lynxrc
export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/games:/usr/local/games:/var/svn/linuxgazette.net/bin
export PERLDOC="-otext"
export PI=`perl -we 'printf "%.48f\n", atan2(0,-1)'`
export RSYNC_RSH=/usr/bin/ssh
export SVN_SSH=/usr/bin/ssh
export WWW_HOME=file://${HOME}/lynx_bookmarks.html

# Sites
export LG="linuxgazette.net"
export NHC="www.nhc.noaa.gov"
export WWW="okopnik.com"

~/.bashrc

# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files for examples

# If running interactively, then:
if [ "$PS1" ]; then
	mesg n
	eval $(lesspipe)
	# Load aliases initially; part of the 'realias' hack
	source ~/.aliases
	# Set up the LG build vars
	source $HOME/.lgrc
	# Update LINES and COLUMNS
	shopt -s checkwinsize
	# Set the xterm title
	case $TERM in
		 gnome|nxterm|xterm*|rxvt*)
				 PROMPT_COMMAND='echo -ne "\033]0;$USER@`hostname`: ${PWD}\007"' ;;
	esac
fi

####### Temp proxy settings ################
[ -f ~/ENABLE_PROXY ] && {
	export HTTP_PROXY=`cat ~/ENABLE_PROXY`
	export http_proxy=$HTTP_PROXY
	export FTP_PROXY=$HTTP_PROXY
	export ftp_proxy=$HTTP_PROXY
	export no_proxy=localhost
	export NO_PROXY=localhost

    # Automate w3m proxying
	export W3M_OPTIONS='-o use_proxy=1 -o http_proxy='$HTTP_PROXY' -o ftp_proxy='$FTP_PROXY' -o no_proxy=localhost'
	alias w3m="$W3M_OPTIONS "
}
####### Temp proxy settings ################

############ Functions #####################
calc() { perl -wle'print eval join "", @ARGV' $@; }
cdlg() { cd $LG_ARTICLES/`sed -n 's/currentIssue.*= *//;T;p' $LG_LIBPYTHON/lgconfig.py`; }
h() { history|grep "^ *[0-9]* *$1"; }
searchmail() { less -P "'n' to see the next match, 'q' to quit"  -p "$1" ~/Mail/Sent_mail; }
shake() { zless -p "$1" $HOME/Books/Other/The\ Complete\ Shakespeare.gz; }
ip() { ifconfig "${1:-eth0}"|sed -n '2s/.* inet addr:\([0-9.]*\) .*/\1/p'; }
pod() { cd /usr/share/perl/`perl -e'printf "%vd", $^V'`/pod; egrep "$1" *|less; }
export -f calc cdlg h searchmail shake ip pod
############ Functions #####################

Wrap-up

In practice, the only concern that I had - i.e., that each shell invocation would now load more slowly due to a larger ~/.bashrc - did not prove to be a problem; testing it with 'time bash -c exit' showed a load+exit time of 0.004 seconds. For the moment, I'm willing to consider this problem solved to my satisfaction.


Talkback: Discuss this article with The Answer Gang


picture

Ben is the Editor-in-Chief for Linux Gazette and a member of The Answer Gang.

Ben was born in Moscow, Russia in 1962. He became interested in electricity at the tender age of six, promptly demonstrated it by sticking a fork into a socket and starting a fire, and has been falling down technological mineshafts ever since. He has been working with computers since the Elder Days, when they had to be built by soldering parts onto printed circuit boards and programs had to fit into 4k of memory (the recurring nightmares have almost faded, actually.)

His subsequent experiences include creating software in more than two dozen languages, network and database maintenance during the approach of a hurricane, writing articles for publications ranging from sailing magazines to technological journals, and teaching on a variety of topics ranging from Soviet weaponry and IBM hardware repair to Solaris and Linux administration, engineering, and programming. He also has the distinction of setting up the first Linux-based public access network in St. Georges, Bermuda as well as one of the first large-scale Linux-based mail servers in St. Thomas, USVI.

After a seven-year Atlantic/Caribbean cruise under sail and passages up and down the East coast of the US, he is currently anchored in northern Florida. His consulting business presents him with a variety of challenges such as teaching professional advancement courses for Sun Microsystems and providing Open Source solutions for local companies.

His current set of hobbies includes flying, yoga, martial arts, motorcycles, writing, Roman history, and mangling playing with his Ubuntu-based home network, in which he is ably assisted by his wife and son; his Palm Pilot is crammed full of alarms, many of which contain exclamation points.

He has been working with Linux since 1997, and credits it with his complete loss of interest in waging nuclear warfare on parts of the Pacific Northwest.


Copyright © 2009, Ben Okopnik. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 161 of Linux Gazette, April 2009

Joey's Notes: Using Squid Web proxy to control Web access

By Joey Prestia

Joey's Notes image

This month's article covers configuration of the Squid proxy server on RHEL 5.x. Squid is best known for its Web proxy caching functionality; it's deployed in a vast number of installations in that role, and can drastically reduce server load by reusing commonly requested Web pages. It is also very handy as an access control mechanism for managing an internal network. Squid is very effective - but it takes proper configuration to make it do exactly what you want. This article is intended as a guide for achieving that configuration.

The Squid configuration file that comes with version 2.6 for RHEL 5.x has some 4,325 lines in it. That's a big file, and it's easy to lose track of what's been done in it. In my opinion, it is best to use external files to deal with frequent changes; by doing things in this modular fashion, changes can be made quickly and safely. So, after making a backup of the original, we'll get started.

Access control lists

Access control lists work very simply in Squid. These definitions come directly from the Squid site http://www.squid-cache.org/Doc/config/acl/, where you will find a multitude of ACL guidelines and samples. Here are the basics, to get you up and running:

Defining an Access List

	Every access list definition must begin with an aclname and acltype, 
	followed by either type-specific arguments or a quoted filename that
	they are read from.

	   acl aclname acltype argument ...
	   acl aclname acltype "file" ...

	When using "file", the file should contain one item per line.

	By default, regular expressions are CASE-SENSITIVE.  To make
	them case-insensitive, use the -i option.

Some examples:

acl aclname acltype (ip-address/netmask or .domain.com)

acl   aclname   src         ip-address/netmask 		       # clients IP address
acl   aclname   src         addr1-addr2/netmask 	       # range of addresses
acl   aclname   dst         ip-address/netmask  	       # URL host's IP address
acl   aclname   myip        ip-address/netmask 		       # local socket IP address
acl   aclname   srcdomain   .foo.com       	               # reverse lookup, from client IP
acl   aclname   dstdomain   .foo.com            	       # Destination server from URL
acl   aclname   dstdomain   "/etc/squid/allow/safe-sites"  # file must exist
acl   aclname   srcdom_regex [-i] \.foo\.com ...	       # regex matching client name
acl   aclname   dstdom_regex [-i] \.foo\.com ...	       # regex matching server

http_access allow aclname  # allow access 
http_access deny aclname   # deny access

http_access allow localhost  # allow localhost
http_access deny all         # deny access not specifically allowed

Caching Server

The image below is an example of how our classrooms are set up here at my college. The proxy has multiple network cards in it and acts as a simple caching proxy.

Proxy Sample Image

Getting Squid up and running as a simple caching proxy web server is very easy, and can save on bandwidth. We can just change the default listening port, define a source network address, set that up with an http_access allow aclname, and be done with it. The lines we will need to search for and modify in the squid.conf file are shown below, along with the changes I made.

Example of a Basic Proxy Server Configuration
# Squid normally listens on port 3128
#http_port 3128

# Joey 1-12-09 changed http_port to use port 80
http_port 80


#acl our_networks src 192.168.1.0/24 192.168.2.0/24
#http_access allow our_networks

# Joey 1-12-09 changed source network for caching 
acl our_networks src 192.168.7.0/24
http_access allow our_networks

For a caching server, you would merely have to modify the lines as shown above, adjust the network source address(es) to accommodate your situation, save the changes, and restart the server. Then, point your internal machines to this server's IP address and port as their proxy. Don't forget to check your firewall to make sure connections are permitted.
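
On a RHEL 5.x box, that might look something like this (the proxy address 192.168.7.1 is just an example):

# on the proxy: let the client network reach the listening port, then restart Squid
iptables -I INPUT -p tcp -s 192.168.7.0/24 --dport 80 -j ACCEPT
service iptables save
service squid restart

# quick test from a client machine on 192.168.7.0/24
http_proxy=http://192.168.7.1:80 wget -O /dev/null http://www.example.com/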

Configuration for Restricting Sites

Let's say our employer wants to prevent all employees from accessing Web sites that are detrimental to productivity. That is an ideal job for Squid.

As always, you should have a good concept of the big picture as pertains to your company, so that you can design your implementation well. Important factors include the company's projected growth and overall business plan: don't build a non-scalable network, for instance. You most certainly do not want to spend a lot of time fixing problems caused by unexpected company growth.

The following configuration samples may be used individually or in combination. When making changes, it's best to do one at a time, reload Squid, and test your results before going on.
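
A convenient habit for that change-and-test cycle is to syntax-check and then reload, rather than restarting:

vi /etc/squid/squid.conf     # make one change
squid -k parse               # syntax-check the configuration before reloading
service squid reload         # or, equivalently: squid -k reconfigure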

Setting Default Port

To get this up and running, there are some things we might want to modify. For example, the default http_port is 3128: most admins will want to change that.

# Squid normally listens on port 3128
http_port 3128

Visible Hostname

The next thing we want to set is the visible_hostname directive. This will make it easier to find the appropriate server if needed, and to make changes if an issue arises. You can specify whatever name you want, or let it default to the return value of gethostname(), as stated in the comments; setting explicit names is mainly useful for managing clusters.

#  TAG: visible_hostname
#       If you want to present a special hostname in error messages, etc,
#       define this.  Otherwise, the return value of gethostname()
#       will be used. If you have multiple caches in a cluster and
#       get errors about IP-forwarding you must set them to have individual
#       names with this setting.
#
visible_hostname restrictor1.example.com

Cache Manager

You will probably want to have the e-mail address of the cache administrator displayed. This would, for example, allow a junior member to receive requests for access, and update the access files as needed.

# ADMINISTRATIVE PARAMETERS
# -----------------------------------------------------------------------------

#  TAG: cache_mgr
#       Email-address of local cache manager who will receive
#       mail if the cache dies. The default is "root".
#
#Default:
# cache_mgr root
# Joey 1-12-09 changed cache_mgr to Orion. He has permissions to 
# authorize and allow new sites and reload Squid. 
cache_mgr orion@example.com

The following configuration examples are various ACL rules that you may want to change.

Unrestricted Access for a Subnet

This one will allow unrestricted access for a subnet, if the server is on several networks. Define the source (src) network in an acl statement, and allow unrestricted access for administrators (or others).

# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
# Example rule allowing access from your local networks. Adapt
# to list your (internal) IP networks from where browsing should
# be allowed
#acl our_networks src 192.168.1.0/24 192.168.2.0/24
#http_access allow our_networks

# Joey 1-12-09 allow unrestricted access for admin staff on subnet
acl admin src 192.168.5.0/24
http_access allow admin

Using Reference Files to Control Access

Here, you need to create a directory and put the files you reference in it. The files should contain the domains you will allow or deny access to.

# Joey 1-12-09 otherguys are all other employees and have restrictions
# Edit the referenced file - not this one - to make a change!!!
acl otherguys dstdomain "/etc/squid/approved-sites/safe-sites-gov"
acl otherguys dstdomain "/etc/squid/approved-sites/safe-sites-com"
acl otherguys dstdomain "/etc/squid/approved-sites/safe-sites-net"
acl otherguys dstdomain "/etc/squid/approved-sites/safe-sites-edu"
acl otherguys dstdomain "/etc/squid/approved-sites/safe-sites-org"
acl otherguys dstdomain "/etc/squid/approved-sites/safe-sites-non-us"

http_access allow otherguys

# And finally, deny all other access to this proxy

http_access allow localhost
http_access deny all

This is what our safe-sites-org reference file could contain. Note that the file should contain one item per line. The comments are strictly for future reference.

.pbs.org                     # PBS - News
.publicagenda.org            # Public Agenda - News
.ortl.org                    # Oregon Right to Life - Research
.acponline.org               # American College of Physicians - Research
.afsp.org                    # American Foundation Suicide Prevention - Research
.dioceseofnewark.org         # Episcopal Diocese - Research
.internationaltaskforce.org  # Task Force on Euthanasia - Research
.policyalmanac.org           # Almanac of Policy Issues - Research
.content.nejm.org            # The New England Journal of Medicine - Research
.npr.org                     # NPR - News
.ncsl.org                    # National Conference of State Legislatures - Research
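Putting it together on the proxy host, the steps might look roughly like this (the paths match the ACL lines above; the added domain is just an illustrative example):

mkdir -p /etc/squid/approved-sites
echo ".gutenberg.org    # Project Gutenberg - Research" >> /etc/squid/approved-sites/safe-sites-org
squid -k parse          # verify the configuration still parses cleanly
squid -k reconfigure    # make the running Squid re-read the reference files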
ACL to Restrict Allowed User Agents

By now, you have probably heard that Internet Explorer has a lot of vulnerabilities. Why not just prevent it from being used altogether? This ACL does just that.

# Alternative: list the browsers you are willing to allow
#acl with_allowed_useragents browser (Firefox) 
# Match any User-Agent string containing "MSIE" and deny it
acl MSIE browser MSIE
http_access deny MSIE
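Going the other way (a sketch of the inverse policy, not part of the original configuration), you could permit only browsers you trust and deny everything else, building on the commented-out line above:

# Sketch: allow-list approach - only User-Agents matching "Firefox" get through
acl with_allowed_useragents browser Firefox
http_access deny !with_allowed_useragents

Keep in mind that the browser ACL matches a regular expression against the User-Agent header, which clients can forge, so treat this as a convenience measure rather than real security.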
Conclusion

Most new users trying out Squid are intimidated by the sheer number of comments in the file, and quickly get discouraged when trying to set it up. There are lots of configurable options in Squid, and it can take time to learn them and get the setup just right. The way to succeed is to change only one option at a time and make sure it works properly before moving on. Students frequently come into the Red Hat lab, try to configure multiple options at once, and break Squid because of it. It also pays to run tail -f /var/log/squid/access.log and watch the requests as they happen. Another good troubleshooting method is to run Squid in debug mode, with squid -NCd1. Squid has countless possibilities, and this article covers just a few of them. Be sure to read the manual to see more of what it can do.

Resources

Talkback: Discuss this article with The Answer Gang


[BIO]

Joey was born in Phoenix and started programming at the age of fourteen on a Timex Sinclair 1000. He was driven by hopes that he might be able to do something with this early model computer. He soon became proficient in the BASIC and Assembly programming languages. Joey became a programmer in 1990 and added COBOL, Fortran, and Pascal to his repertoire of programming languages. Since then, he has become obsessed with just about every aspect of computer science. He became enlightened and discovered RedHat Linux in 2002, when someone gave him RedHat version six. This started off a new passion centered around Linux. Currently, Joey is completing his degree in Linux Networking and working on campus for the college's RedHat Academy in Arizona. He is also on the staff of the Linux Gazette as the Mirror Coordinator.


Copyright © 2009, Joey Prestia. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 161 of Linux Gazette, April 2009

XKCD

By Randall Munroe

[cartoon]

[cartoon]

More XKCD cartoons can be found at http://xkcd.com/.

Talkback: Discuss this article with The Answer Gang


[BIO]

I'm just this guy, you know? I'm a CNU graduate with a degree in physics. Before starting xkcd, I worked on robots at NASA's Langley Research Center in Virginia. As of June 2007 I live in Massachusetts. In my spare time I climb things, open strange doors, and go to goth clubs dressed as a frat guy so I can stand around and look terribly uncomfortable. At frat parties I do the same thing, but the other way around.


Copyright © 2009, Randall Munroe. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 161 of Linux Gazette, April 2009

The Linux Launderette

Amazing IBM ad for Linux...

Ben Okopnik [ben at linuxgazette.net]


Sat, 7 Mar 2009 20:03:42 -0500

...that I just ran across. It's two years old, but still awesome: "Matrix"-like effects, music, everything. Huge fun.

http://www.youtube.com/watch?v=EwL0G9wK8j4

Latinist: "Res publica non dominetur." [1] Muhammad Ali: "Speak your mind. Don't back down."

[1] Free translation: "Do not let public property fall into the hands of the tyrants."

-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *


Talkback: Discuss this article with The Answer Gang

Published in Issue 161 of Linux Gazette, April 2009
