
Wednesday, March 14, 2012

Dream Studio 11.10: Upgrade or Hands Off?

Many Linux distributions specialized for multimedia production have come and gone. Some were pretty good, but Dream Studio has outshone them all. Musician and maintainer Dick Macinnis has just released Dream Studio 11.10, based on Ubuntu Oneiric Ocelot. Dream Studio 11.04 is a tough act to follow – is it worth upgrading to 11.10?



Chasing Ubuntu

Basing a custom distribution on Ubuntu has a lot of advantages, but it also means chasing a fast-moving target. There are ways to minimize the pain, as Macinnis explains. "The decision to create different versions of Dream Studio is one I had made quite a while ago, and is one of the reasons I decided to get all my packages into PPAs rather than on a personally hosted repo."


PPAs are Personal Package Archives hosted on Canonical's Launchpad. This is a slick way to make third-party package repositories available in a central location. Ever wonder what goes into making your own Linux distribution? Even when you base it on another distro like Ubuntu it's still work.


"When I build the Dream Studio each release cycle, I basically install Ubuntu on a VM, make sure all the packages will install properly, and run a script to add my personal optimizations and such. Then I use Ubuntu Customization Kit to unpack the stock Ubuntu liveCD, run the script I've made on it via chroot, and pack it up again," says Macinnis. "Dealing with changes in Ubuntu from one release to the next is the biggest issue and takes the most time, which is why I don't begin until Ubuntu has been released (as chasing a moving target was driving me nuts a couple releases ago). However, since almost all my packages are now desktop independent (except artwork), making derivatives with different DEs is quite easy."


Sure, it's easy when you know how. Vote for your favorite desktop environment in Macinnis' poll, and be sure to vote for LXDE because that is my favorite. Or E17, which is beautiful and kind to system resources. Or maybe Xfce.

System Requirements

Dream Studio 11.10 is a 2GB ISO that expands to 5.6GB after installation. You can run it from a live CD or USB stick, but given the higher performance requirements of audio and video production you really want to run it from a hard disk. While we're on the subject of hard drives, don't get excited over 6Gb/s SATA hard disk drives. They're not much faster than old-fashioned 3Gb/s or 1.5Gb/s SATA HDDs, and you need a compatible motherboard or PCI-e controller. Put your extra money into a good CPU instead. Audio and video production, and photo and image editing, are all CPU-intensive. Bales of RAM never hurt, and a discrete video card, even a lower-end one, gives better performance than cheapo onboard video that uses shared system memory.


My studio PC is powered by a three-core AMD CPU, 4GB RAM, an Nvidia GPU, and a couple of 2TB SATA 3Gb/s hard drives. It's plenty good enough, though someday I'm sure I'm going to muscle it up more. Why? Why not?

Figure 1: Dream Studio 11.10 with TuxGuitar on the Unity desktop


You want your hardware running your applications and not getting weighed down driving your operating system. Dream Studio ships with GNOME 2, GNOME 2 with no effects, Unity, and Unity 2D. Just for giggles I compared how each one looked in top, freshly booted with no applications running:


 

GNOME 2: 440,204k memory, 6% CPU
GNOME 2, no effects: 453,640k memory, 3.7% CPU
Unity: 592,432k memory, 4.8% CPU
Unity 2D: 569,936k memory, 1.0% CPU

 


It's not a big difference; measuring memory usage in Linux is imprecise anyway, and your mileage may vary, so use what makes you happy.
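If you want to run the same comparison on your own machine, a quick read of /proc/meminfo gives a rough equivalent of what top reports. This is a sketch: MemFree ignores page cache, so the number overstates "real" usage somewhat.

```shell
# Rough memory-in-use snapshot, comparable across desktop sessions.
# MemFree excludes caches, so treat the result as an upper bound.
total=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
free=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
echo "in use: $((total - free)) kB of ${total} kB"
```

Log the number right after booting into each session for an apples-to-apples comparison.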

What's Inside

Dream Studio installs with a vast array of audio, movie, photography, and graphics applications. It's a great showcase for the richness of multimedia production software on Linux. Audio is probably the biggest pain in the behind, as the Linux audio subsystem can be a real joy* to get sorted out. One of the best things Dream Studio does is box it all up sanely, and on most systems you just fire up your audio apps and get to work. It comes with a low-latency kernel, the JACK (JACK Audio Connection Kit) low-latency sound server and device router, and pulseaudio-module-jack for integrating PulseAudio with JACK. If you have a single sound card this doesn't give you anything extra, so you're probably better off disabling PulseAudio while JACK is running. This is easy: in QjackCtl go to Setup -> Options and un-check "Execute script after startup: pulsejack" and "Execute script after shutdown: pulsejackdisconnect". Leave "Execute script on startup: pausepulse" and "Execute script after shutdown: killall jackd" checked.


If you have more than one audio interface PulseAudio gives you some extra device routing options that you don't have with JACK alone. Once upon a time PulseAudio was buggy and annoying because it was new, and it introduced latency. It's stable and reliable now, but it still introduces some latency which is not good for audio production. But when you're capturing and recording audio streams, as long as everything is in sync then latency doesn't matter. Try it for yourself; it is easy and fun.


The creative applications are nicely organized in both Unity and GNOME 2. Some notable audio apps are Audacity, Ardour, the Hydrogen drum machine, DJ tools, TuxGuitar, batches of special effects, and the excellent Linux Multimedia Studio (LMMS). On the graphics and video side you get FontForge, Luminance HDR, Scribus, Hugin, Stopmotion, OpenShot, Blender, Agave, and a whole lot more.


There are a few of the usual productivity apps, like Firefox, LibreOffice, Empathy, and Gwibber. And of course you may install anything in Linux-land that your heart desires.

Upgrade or No?

The problems I've run into are mostly Ubuntu glitches. During installation, the partitioning tool gives only a teeny tiny bit of room to show your existing partitions, and it does not resize, so you can't see all of your partitions without figuring out how to make it scroll. (Click on any visible partition and navigate with the arrow keys.) Ubuntu wants you to play audio CDs with Banshee; it wants this so badly it does not have a "play CD with" option. But Banshee doesn't work — it doesn't see the CD. My cure for this was to install VLC. There were some other nits I forget, so they couldn't have been all that serious.


The one significant issue I ran into was masses of xruns in JACK. An xrun is a buffer underrun: an interruption or dropout in throughput. This can cause audible dropouts in your sound recordings. xruns should not be a problem on a system as powerful as mine, and they never have been. Until now. It could be a kernel problem or a bug in JACK; it's hard to say. So before you upgrade a good working system, test this new release well first.


*If you define joy as head-banging aggravation.



Random Linux Tips: Making KDE4 Behave, Thwacking Those Weirdo U3 Partitions on USB Sticks

Sometimes, we have little tips and tricks that make life easier – but don't quite take up a full article. So today I've bundled a few practices that many Linux.com readers might find helpful. You'll learn how to control window behavior in KDE4, make Nepomuk and Strigi useful, and remove those silly proprietary U3 partitions from USB sticks.


 




 

Making KDE4 Behave

KDE4 comes with eleventy-eight hundred and fifteen fancy special effects, and the ones that are turned on by default seem rather random. With a little digging you can figure out how to control them. The one that drives me around the bend is when dragging a window by its title bar to the top of the screen maximizes it. This is entertaining when all you want to do is move it. How many ways do we need to maximize windows? Apparently a whole lot of them. But KDE4 has controls for everything, so you can turn this particular feature on or off in System Settings > Workspace Behavior. Go to the Screen Edges screen, and uncheck "Maximize windows by dragging them to the top of the screen." Click the Apply button and you're done.


Underneath that is "Tile windows by dragging them to the side of the screen." What I really want is a one-click tile-all-windows option, like in Windows 3.1 and some Linux desktop environments. But this feature is fairly useful once you figure it out: when you drag a window to the top right or left edge it shrinks to one-quarter of the screen. Drag it to the bottom left or right and it shrinks to half-screen size. Drag it away to restore its previous size.


Do you get tired of swatting KDE's oversize information pop-ups out of the way? Go to the Workspace screen and un-check "Show informational tips" and then click Apply.


The fine KDE4 folks have invested a lot of energy into the semantic desktop and semantic search. You've probably heard some of the wailing against Nepomuk and Strigi because they crash or bog down the whole system. Together they index files, file metadata, comments, tags, ratings, and file contents.


Nepomuk and Strigi do bog down your system on their first run. If you can leave your computer running until they complete that initial indexing, then after that you won't even notice them. For example, on my system with a nearly full 2TB hard drive, the first run took about two days, and it used a fair whack of CPU cycles. Nepomuk and Strigi indexed over 80,000 files, and the Nepomuk database is now over 700MB. Since that first run I leave them on all the time, and they barely use 1% of memory and CPU.


You can configure Nepomuk and Strigi in System Settings > Desktop Search. On the Basic Settings tab you can turn them on and off, and the Details link tells you the Nepomuk data store statistics. Use the Desktop Query tab to choose which files and directories are indexed by Strigi. By default temporary files, core dumps, and other non-data files are not indexed. The Advanced Settings tab lets you control how much memory Nepomuk uses.


Once you've let Nepomuk and Strigi search and index your files, then what? Here is one example. Figure 1 shows the results of a search for "Dick Macinnis" (author of the Dream Studio distribution) from inside the Dolphin file manager. It found emails we had exchanged, and articles I had written that mentioned him.


You can fine-tune your search in a number of ways: Filename, Content, From Here (the current directory), Everywhere, and on the bottom right you can select Documents, Audio, Video, and other file types. Then right-click on any item found in the search to choose which application to open it with, open it in a new window with the full file path displayed, and several other useful actions. So even though this semantic stuff is still a baby, it's already useful.

Figure 1: A Strigi search dives deeply into your system.

Getting Rid of Weirdo U3 Partitions on USB Sticks

I have an 8GB Sandisk Cruzer that comes with a special partition loaded with the U3 Launchpad, which is a portable environment for running applications from the USB stick on Windows. There is an actual U3 specification, and compliant applications can write to the Windows registry and load files on a Windows PC. Then when the device is removed the files and registry entries are also removed, and any application settings are stored on the USB stick. This all sounds cool, except it has a hidden partition that you cannot remove by ordinary means and can't use for data storage, and it won't let you create a nice bootable Linux USB stick.


The hidden partition presents itself as a USB hub with a CD drive attached. When you plug one of these into a Linux PC it looks like a normal partitioned block device to fdisk and Gparted, but in Figure 2 we see how it looks to the KDE4 Device Notifier: it sees the U3 partition as an optical device, and can mount it and read the files.


On my system it is mounted as /media/U3 System/. The space in the name trips up shell commands like ls, but not to worry, because we can still list the files with the help of some quotation marks:

$ ls "/media/U3 System/"
autorun.inf  LaunchPad.zip  LaunchU3.exe

You can't delete these files the usual way. One way is to search your vendor's site for a delete utility, which most likely will run only on Windows. We hardy Linux users have our own special tool, and that is u3-tool. If it's not in your distro's repo, fetch it from SourceForge. Run it with no options to see an option summary:

$ u3-tool
Not enough arguments
u3-tool 0.3 - U3 USB stick manager
Usage: u3-tool [options] <device name>
Options:
 -c  Change password
 -d  Disable device security
 -D  Dump all raw info (for debug)
 -e  Enable device security
 -h  Print this help message
 -i  Display device info
 -l  Load CD image into device
 -p  Repartition device
 -R  Reset device security, destroying private data
 -u  Unlock device
 -v  Use verbose output
 -V  Print version information
For the device name use: '/dev/sda0', '/dev/sg3'

Try running sudo u3-tool -i /dev/sdc, using your own device name of course, to see the device's partitions:


 

$ sudo u3-tool -i /dev/sdc
Total device size: 7.51 GB (8065646592 bytes)
CD size: 7.69 MB (8060928 bytes)
Data partition size: 7.50 GB (8057520128 bytes)

 


Use the -D option to dump complete device information. This reveals a number of interesting things, such as "Max. pass. try: 5". If it is password-protected that means you get five attempts to enter the correct password. If you fail then the device locks and the only way to get back in is to reset the password, which deletes all the data. You'll also see the serial number, manufacturer, and exact partition sizes.

Figure 2: KDE4's Device Notifier sees the special U3 partition.
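Incidentally, the sizes u3-tool prints are binary (GiB-style) units even though it says "GB": 8065646592 bytes is about 8.07 decimal gigabytes but 7.51 binary gigabytes. A quick awk check, using the byte count reported above, shows which unit the tool means:

```shell
# Convert the reported byte count both ways to see which unit u3-tool uses.
awk 'BEGIN {
  bytes = 8065646592
  printf "decimal GB: %.2f\n", bytes / 1e9        # 8.07
  printf "binary GiB: %.2f\n", bytes / (1024^3)   # 7.51 -- matches u3-tool
}'
```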


Sometimes, though not always, you can read these devices on Linux even when they are password-protected. They'll mount like a normal USB stick with no password. Sometimes Linux won't even be able to create the block device and you'll see a ton of these messages in dmesg:


 

[ 5773.262417] sd 8:0:0:0: [sdc] No Caching mode page present
[ 5773.262422] sd 8:0:0:0: [sdc] Assuming drive cache: write through

 


If that's the case all you can do is open it on a Windows machine and reset the password. This wipes all the data, but at least you can still use the device.


Now we come to the fun part: eliminating the U3 partition entirely. Stuff it into your Linux box and run this command, using your own device name:


 

$ sudo u3-tool -p 0 /dev/sdc
WARNING: Loading a new cd image causes the whole device to be whiped. This INCLUDES the data partition.
I repeat: ANY EXCISTING DATA WILL BE LOST!
Are you sure you want to continue? [yn] y

 


And you now have a nice normal universal USB stick without any tricksy Windows-only guff. Enjoy!



Brewtarget: Hop into Beer Brewing with Open Source

If you've always wanted to brew your own beer, you'll be glad to know there's an app for that. Created by Philip Lee, Brewtarget is an open source application that helps home brewers create and manage beer recipes. We talk to Lee about Brewtarget's history, its features, and its future.




Since my college days at the University of Texas, I've wanted to start home brewing my own beer. One of my college roommates dabbled in home brewing for a while, and now I'm friends with several members of the Lawrence Brewer's Guild. Maybe by the time my daughter goes to college, I will have hopped into home brewing, and when I do, I'll have a free, open source program to help me master my mixes of fermentables.


Philip Lee is an Electrical Engineering and Computer Science doctoral candidate who brews his own beer. In fact, he put his computer science chops to use in his home brews by developing Brewtarget, a free, open source application that helps brewers create and manage beer recipes. In this interview, Lee explains what happens when you mix open source programming with a passion for brewing beer.


Linux.com: What inspired you to write Brewtarget?


Philip Lee: Right after I got into homebrewing in 2008, I was looking for open source beer tools for Linux, and I found QBrew, but after looking at its implementation and contemplating whether to extend it or start from scratch, I decided I could do better by starting from scratch. I made some simple attempts early in 2008, but didn't get very far, and resorted to calculating recipes by hand. I'm actually glad I did this, because after doing this for about a year, I learned all the math I would need to make a piece of software, plus some extra. The serious work started in December 2008, when I was sitting at home over the holidays – I was, and still am, a grad student – and had some free time to kill.


Linux.com: Which Linux and open source tools did you use to create Brewtarget?


Philip Lee: At the time, I had a Sony Vaio laptop loaded with Debian "Sarge" and KDE. After writing most of the underlying code in C++, I was starting to look for GUI libraries. I had never really used any GUI library in C++ before, so I tried out a number of them like GTK, FLTK, and finally Qt. I ended up with Qt, because out of all of them, it was by far the best documented and had the best tools as far as I could tell.


What I really love about Qt itself is its "meta object" system. It effectively extends C++ to include class property information that you can retrieve at run-time, like an object's class name, what it inherits, accessor functions and so on, much like more modern languages. This allows you to do stuff like set an object's properties and fields by name, rather than having to know the actual accessor function at compile time. Qt has meta functions as well, which is really nice as Brewtarget becomes more complex and abstract, allowing us to execute a generic function on a generic object without having to write object-specific code.


Qt Designer is a great little drag-n-drop graphical layout tool that we use heavily to design most of our widgets and windows. It also allows you to connect GUI events, modify the basic properties, and so on in order to keep the codebase shorter and cleaner.


One of Qt's other tools I like a lot is Qt Linguist for translation and localization. Basically, when you wrap a string literal in the code with tr(), Qt will automatically keep a list of those translatable strings and let you export them for giving to a translator. Linguist makes it pretty simple for the translator, even if s/he doesn't know anything about coding, to open those files and send us back all the translations. I'm positive that the translations we get are really what sets Brewtarget apart from other beer software. Beer is international.


CMake is by far my favorite tool that we use, though. I have to say, qmake, which is Qt's default project build tool, is really quite painful to work with, but I didn't know any better for a while; however, after asking one too many questions on the KDE IRC channels, someone told me to just switch to CMake, since that's what they use instead of qmake. Life has never been the same. Other than the fact that you can make UNIX makefiles, Visual Studio projects, and other toolchain-specific build files without any effort, it allows you to even package the output up into an NSIS installer for Windows, or a Debian or Red Hat package, or a Mac disk image, or whatever with the CPack module. Coupled with the fact that it has built-in robust tools for finding all the Qt libraries on your system, it becomes a real trifecta of a build tool.


Linux.com: I see that Brewtarget is still actively developed. Who helps you with it?


Philip Lee: We have quite a few contributors. Most of them just come to fix a particular bug or implement a specific feature and then go on to other projects. I would consider my good buddy Mik to be the "second in command," so to speak. He is a long-term developer who always helps me out when he has time. He's really good to bounce ideas off of, and is the one who suggested that we move off of our old in-memory database to SQLite, and we are working on finalizing that transition now for the next release.


Linux.com: Do you have any idea how many people actually use Brewtarget?


Philip Lee: I only have a vague guess at the number of users. We have about 51k downloads from the SourceForge site, and get about 60 new downloads per day, but now as the package is accepted into Debian, that won't be an accurate number going forward.


Linux.com: If I'm interested in starting to brew my own beer at home, what will I need to know before I start using Brewtarget?


Philip Lee: First, that it's very rewarding. It's very much like going from eating fast food to cooking better, tastier meals at home. It's a hobby that can be as simple or complex as you want, really. My favorite introduction is a book by John Palmer, available for free at howtobrew.com. The basic things that matter in a beer are how much sugar is in it before and after fermentation, which determines the sweetness and amount of alcohol; how bitter it is, to balance the sweetness; and what color it is. These are things that you can calculate from Palmer's book, but that Brewtarget will do automatically for you.
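That before-and-after sugar content is usually measured as original and final specific gravity (OG and FG), and the common homebrewing rule of thumb for alcohol by volume is ABV = (OG - FG) * 131.25. To be clear, this is the standard approximation, not a formula quoted from Lee, and the gravity readings below are made up for illustration:

```shell
# Estimate ABV from hydrometer readings using the (OG - FG) * 131.25 rule.
awk 'BEGIN {
  og = 1.050   # original gravity, before fermentation
  fg = 1.010   # final gravity, after fermentation
  printf "ABV: %.2f%%\n", (og - fg) * 131.25
}'
```

With those readings the estimate works out to 5.25% ABV, which Brewtarget (and Palmer's book) would give you along with bitterness and color calculations.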


Linux.com: What's next for Brewtarget?


 


Philip Lee: This next release is mostly an internal cleanup. The way the database was designed previously really hadn't been changed since my first code in 2008, and we were running into a brick wall with some of the features we wanted. After we move to SQLite, there will be quite a lot of new features, like being able to search through the ingredients in the database and stuff like that. I also plan to add some water chemistry tools for people who like to alter the ions and salts to fit a particular profile.


Linux.com: Anything else you'd like to add?


Philip Lee: Maybe I'll take this time to answer typical first questions about home brewing. Will I get Jake leg from it? No way. Can you make Bud? Yes, but why would you want to when you can make a vanilla cream ale, or a black IPA, or a cranberry wheat ale? Is it cheap? It typically costs about 60 cents for a bottle. How long does it take? About a month to go from grain to glass. Does it taste good? Extremely. Where can I ask questions? homebrewtalk.com. Where can I get supplies? midwestsupplies.com.


Beer and coding. Love it.


Linux.com: Thanks to Philip for taking time out of his busy brewing schedule for this interview.


Brewtarget is available for Linux, Mac OS X, Windows and some other UNIX-like platforms. See a video demo of Brewtarget in action on the project's SourceForge page.



Sunday, March 11, 2012

Linux Training Opportunities at Linux Foundation Collaboration Summit

The Linux Foundation's Collaboration Summit is a great time to, well, collaborate. But it's also a really good opportunity to learn.

We're offering three courses at this year's Collaboration Summit, each in a different area, to help build skills while rubbing elbows with other top kernel developers.

Advanced Linux Performance Tuning is a deep dive into proven tools and methods used to identify and resolve performance problems, resulting in a system that is better optimized for specific workloads. It is particularly useful for those who write or use applications with unusual characteristics that behave differently than kernel performance heuristics anticipate. It is a hands-on course that assumes some familiarity with basic performance tools. This course is offered on Monday, April 2nd.

Overview of Open Source Compliance End-to-End Process
is for any company that is redistributing Linux or other open source code.  It provides a thorough discussion of the processes that should be in place to ensure that all open source code is being tracked and that licensing obligations are being met.  This is a very practical course designed to give your company the ability to design your own internal process.  This course is offered on Sunday, April 1st.

Practical Guide to Open Source Development is not a course on coding. Rather, it is about maximizing the effectiveness of your contributions. It is structured to give you a thorough understanding of the characteristics that make the open source model work well for corporate development organizations, and covers best practices when joining an external open source project, when launching your own, and when open sourcing proprietary code. This course is offered on Monday, April 2nd.

All of these courses are available for registered invitees now.  If you've already registered for Collaboration Summit, you can modify your conference registration and add these courses.

See you there!



5 Tips and Tricks for Using Yum

If you're using one of the Fedora/Red Hat derived Linux distributions, odds are you spend some time working with Yum. You probably already know the basics, like searching packages and how to install or remove them. But if that's all you know, you're missing out on a lot of features that make Yum interesting. Let's take a look at a few of the less commonly used Yum features.

 


Yum comes with a lot of different distributions, but I'm going to focus on Fedora here. Mainly because that's what I'm running while I'm writing this piece. I believe most, if not all, of this should apply to CentOS, Red Hat Enterprise Linux, etc., but if not, you may need to check your man pages or system documentation.

Working with Groups

If you use the PackageKit GUI, you can view and manage packages by groups. This is pretty convenient if you want to install everything for a MySQL database or all packages you should need for RPM development.

But, what if you (like me) prefer to use the command line? Then you've got the group commands.

To search all groups, use yum group list. This will produce a full list of available groups to install or remove. (Yum lists the groups by installed groups, installed language groups, and then available groups and language groups.)

To install a group, use yum group install "group name". Note that you may need quotes because most group names are two words. If you need to remove the group, just use yum group remove "group name".

Want to learn more about a group? Just use yum group info with the group name.

Love the Yum Shell

If you're going to be doing a lot of package management, you want to get to know the Yum shell.

Just run yum shell and you'll be dumped in the Yum shell. (Which makes sense. It'd be weird if you got a DOS prompt or something...) Now you can run whatever Yum commands you need to until you're ready to exit the Yum shell.

For instance, want to search packages? Just type search packagename.

Here's the primary difference: when running things like install or remove, Yum will not complete the transaction immediately. You need to issue the run command to tell Yum to do it. This gives you the advantage of being able to tell Yum to do several things, and then actually run the transactions.
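Because nothing happens until you issue run, you can also script a whole batch non-interactively: yum shell accepts a filename of commands. Here's a sketch; the package names are just examples, and the final step naturally needs a Fedora or RHEL box:

```shell
# Build a command file for 'yum shell': several operations, one transaction.
cat > yum-batch.txt <<'EOF'
install vim-enhanced
remove nano
run
exit
EOF

# On a Fedora/RHEL system you would then execute the batch with:
#   sudo yum shell yum-batch.txt
```

Everything before run is queued, so the installs and removals above resolve as a single transaction.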

The Yum shell does have a few commands that aren't available at the command line. For instance, you can use config to set configuration options, and ts will show you the transaction set or reset it. The repo command will let you list, enable, and disable repos.

If you're not sure what commands the shell has, run help and check the yum-shell man page.

You can exit the Yum shell with exit or quit.

Use Yum Plugins

Yum isn't a one-size-fits-all tool. It's actually extensible, and has a plugin system that allows developers to create added functionality that doesn't have to be added to the core of Yum.

This helps contribute to Yum's performance by not including all functionality by default. If the user doesn't need the plugin's functionality, why bog Yum down with it?

Different distributions have different plugins available, but the fastest way to see which Yum plugins are available is to run yum search yum-plugin or yum search yum | grep plugin. (Note that a few plugins might not turn up with the first search, like yum-presto or yum-langpacks.)

Most likely, plugins are enabled by default. To be sure, though, open /etc/yum.conf and check that you have this line:

plugins=1

If it says plugins=0 you'll need to change it.
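You can do that check from a script, too. This is a sketch against a self-contained sample file; on a real system you would point the grep at /etc/yum.conf instead:

```shell
# Verify the global plugin switch in a yum.conf-style file.
# A sample config is created here so the check is self-contained.
cat > sample-yum.conf <<'EOF'
[main]
gpgcheck=1
plugins=1
EOF

if grep -q '^plugins=1' sample-yum.conf; then
  echo "plugins enabled"
else
  echo "plugins disabled -- edit the file and set plugins=1"
fi
```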

View Changelogs

One of the plugins I'm fond of is the changelog plugin. If you have this installed you can view the changelogs for packages, even if they're not installed.

To view a changelog just run yum changelog packagename or just changelog packagename if you're in the Yum shell.

Yum Downgrade

Sometimes upgrades aren't all they're cracked up to be. If an upgrade has you down (sorry), you might want to try downgrading to the previous version.

To do this, just use yum downgrade name where "name" is either the package, group, or other target that Yum will work with. (See the man page for the full list.)

The caveat here is that it doesn't work with some packages, like kernel packages. But if you're in a pinch, give it a shot.

More to Come...

That's all, for now. But there's plenty more fun where that came from. Next time around, we'll take a tour of some of the most interesting Yum plugins and how you can use them to make managing your system even easier.


Making Short Work of Image Conversions with Converseen

Thanks to my photography background, I tend to equate "image manipulation" with "photo adjustment and retouching" – but of course that is just one task. For a lot of jobs, image manipulation means dull work like repeatedly converting, resizing, and compressing images for output – fitting them to the proper size for a post on the Web site, converting to the right format for print, and many other similar, repetitive tasks. You can use tools like GIMP and Krita for this class of work, but you would usually be better off firing up a dedicated batch conversion tool – saving yourself considerable time and mental energy. A comparatively young tool called Converseen is a good place to start.

Getting Started with Converseen

Converseen is a Qt-based GUI front-end to a collection of ImageMagick conversion tools. ImageMagick is itself a suite of multiple command-line image manipulation utilities with different aims, and Converseen does not implement all of them. Instead, it focuses on image resizing and format conversion, and builds a queue-like graphical interface around it. In practice, you load a series of files into the Converseen queue, adjust the output settings for each one, then punch the "Convert" button.

The project makes binaries available for Ubuntu, Fedora, and openSUSE, as well as Windows; source code archives are also available. The latest release is 0.4.9, from January 2012.

Converseen delivers time savings in two ways. First, ImageMagick itself is ridiculously fast, so Converseen gains speed right off the bat. There is no comparison between Converseen's ImageMagick-driven setup – where you configure the output settings first, then apply them en masse – and the time required to open each individual picture in a raster graphics editing application.

But second, using Converseen is also faster than using ImageMagick itself at the command line. The list-based interface allows you to tab through the list of images, and it allows you to select multiple files at once (even an entire folder-ful) then apply the same set of settings to the whole batch. The more images you need to process, the better that option becomes. You can set the desired size constraints and compression level once, and add literally as many files to the queue as you want.
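For comparison, here is what a typical batch job looks like with ImageMagick alone at the command line – a sketch that assumes a folder of TIFF originals, using mogrify's -path option so the source files are left untouched:

```shell
# Write 800x600-bounded, 85%-quality JPEG copies of every TIFF into ./web/
mkdir -p web
mogrify -path web -format jpg -resize 800x600 -quality 85 *.tif
```

Converseen essentially saves you from having to remember (or look up) those flags every time the task comes around.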

Yes, it also helps that ImageMagick can convert between more than 100 file types – including potentially tricky offerings like PDF and PostScript that may not be supported in your raster editor of choice. I have found that to be a handy option when exchanging PDF documents with users on other OSes, who for one reason or another don't seem to like Evince's version of saved form data.

Using the application is straightforward: you select files to add to the queue on the right-hand side of the window. When you highlight any image in the queue, you can change its output settings: the output format is selectable in the drop-down box beneath the queue, and the other options are located in the preview pane on the left. This release lets you change only the dimensions, output resolution, and format, but you can also set renaming options, including a per-image output directory and a regular-expression-based rename.

The Competition

The best-known alternative in this niche is probably Phatch, another batch photo processor, but I have always found Phatch's interface a tiny bit confusing. With Phatch, you first construct a conversion "pipeline" by stacking together operations for each picture. That is certainly important if you want to take advantage of the app's extensive transformation tools, such as distorting an image, adding a reflection beneath it, and rendering it on top of a black background. But it is overkill for repetitive conversion and resizing tasks, which are exactly where some automation is most appreciated.

In practice, I find Converseen to be the best option for generating screen-friendly sizes of images I need to upload to a Web service like Flickr, publish in a news story, or otherwise move around for output. You get better results if you edit your photos at full resolution, then export to a high-quality archival format like TIFF – but that isn't the format you would want to use in a slide show or a presentation. Although a lot of image editors allow you to choose your output format or queue up two export versions of each picture, it is a time-consuming endeavor. Far easier is turning Converseen loose on the folder once and being done with it.

I'm hopeful that future releases will expand on Converseen's functionality. As I mentioned earlier, ImageMagick offers a wide assortment of manipulation options, but I have never successfully memorized them. Consequently, every time I have to use ImageMagick to composite or stitch two files together, I end up spending several minutes either paging through the man entries or searching the lengthy ImageMagick online documentation. And that is exactly the kind of job that cries out for a little automation.


Weekend Project: Take a Look at Cron Replacement Whenjobs

The cron scheduler has been a useful tool for Linux and Unix admins for decades, but it just might be time to retire cron in favor of a more modern design. One replacement, still in heavy development, is whenjobs. This weekend, let's take a look at whenjobs and see what the future of scheduling on Linux might look like.

The default cron version for most Linux distributions these days is derived from Paul Vixie's cron. On Ubuntu, you'll find the Vixie/ISC version of cron. On Fedora and Red Hat releases you'll find cronie, which was forked from Vixie cron in 2007.

You can also find variants like fcron, but none of the variants around today have really advanced cron very much. You can't tell cron "when this happens, run this job." For instance, you might want to be notified when you start to run low on disk space, or when the load on a machine rises above a certain threshold. You could write scripts that check those things and run them frequently, but it would be better if you could just count on the scheduler to do that for you.

There's also cron's less-than-friendly time format. When I worked in hosting, I found plenty of cron jobs that were either running too often because the owner mis-formatted the time fields, or that didn't run as often as expected for the same reason. Backup jobs that should have been running daily were running once a month. Oops.
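To see how easy that mistake is to make, compare two crontab lines that differ by a single character (the script path here is purely illustrative):

```
# Runs every day at 2:00 a.m. (fields: minute hour day-of-month month day-of-week)
0 2 * * * /usr/local/bin/backup.sh

# One extra character in the day-of-month field, and it runs only on the 1st of each month
0 2 1 * * /usr/local/bin/backup.sh
```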

Getting whenjobs

Red Hat's Richard W.M. Jones has been working on whenjobs as "a powerful but simple cron replacement." The most recent source tarball is 0.5, so we expect that it's starting to be usable but still has some bugs. (And it does, more on that in a moment.)

A slight word of warning: you're probably going to need a lot of dependencies to build whenjobs on your own. It doesn't look to be packaged upstream by any of the distros, including Fedora. It also requires a few newer packages that you're not going to find in Fedora 16. I ended up trying it out on Fedora 17, and installing a slew of OCaml packages.

To get whenjobs, you can either grab the most recent tarball or go for the most recent code out of the git repository. You can get the latest by using git clone git://git.annexia.org/git/whenjobs.git but I'd recommend going with the tarball.

Jones recommends just building whenjobs with rpmbuild -ta whenjobs-*.tar.gz. (Replace the * with the version you have, of course.) If you have all the dependencies needed, you should wind up with an RPM that you can install after it's done compiling.

Using whenjobs

To use whenjobs, you'll need to start the daemon. Note that you don't start this as root, you want to run it as your normal user.

According to the documentation, you want to start the daemon with whenjobs --daemon, but Jones tells me this isn't implemented just yet. Instead, you'll want to run /usr/sbin/whenjobsd to start the daemon. You can verify that it's running by using pgrep whenjobsd. (Eventually you'll be able to use whenjobs --daemon-status.)

To start adding jobs to your queue, use whenjobs -e. This should drop you into your jobs script and let you start adding jobs. The format is markedly different from cron's, so let's look at what we've got from the sample whenjobs scripts.

 

every 10 minutes :
<<
  # Get free blocks in /home
  free=`stat -f -c %b /home`

  # Set the variable 'free_space'
  whenjobs --type int --set free_space $free
>>

when changes free_space && free_space < 100000 :
<<
  mail -s "ALERT: only $free_space blocks left on /home" $LOGNAME
>>

 

Jobs start with a periodic statement or a when statement. If you want a job to run at selected intervals no matter what, use the every period statement. The period can be something like every day or every 2 weeks or even every 2 millenia. I think Jones is being a wee bit optimistic with that one, but on the plus side – if whenjobs is still in use 1,000 years from now and it doesn't run your job, Jones probably won't have to deal with the bug report...

Otherwise, you can use the when statement to evaluate an expression, and then perform a job if the statement is true. See the man page online for all of the possible when statements that are supported.
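As a sketch, a load-average alert might look like this. Note that load_x100 is a variable name of my own invention, and as with the disk-space example, a periodic job has to set the variable before the when clause can test it:

```
every minute :
<<
  # Store the one-minute load average, scaled up to an integer
  whenjobs --type int --set load_x100 `awk '{printf "%d", $1 * 100}' /proc/loadavg`
>>

when load_x100 >= 600 :
<<
  mail -s "ALERT: load average above 6.0" $LOGNAME
>>
```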

As you can see, you can also set variables for whenjobs. It accepts several types of variables, such as ints, strings, booleans, and so forth. You can see and set variables using whenjobs --variables or whenjobs --set variable.

The script for the job is placed between brackets (<< >>). Here you can use normal shell scripts. The scripts are evaluated with the $SHELL variable or /bin/sh if that's not set. Presumably you could use tcsh or zsh instead if you prefer those.

If you want to see the jobs script without having to pop it open for editing, use whenjobs -l. You can use whenjobs --jobs to see jobs that are actually running. If you need to cancel one, use whenjobs --cancel serial where the serial number is the one given by the whenjobs --jobs command.

A cron Replacement?

I don't think whenjobs is quite ready to be rolled out as a cron replacement just yet, but it's definitely worth taking a look at if you've ever felt frustrated with cron's limitations. It will probably be a while before whenjobs starts making its way into the major distros, but if you're feeling a little adventurous, why not start working with it this weekend?
