Tuesday, March 27, 2012

Disorganized? Get Tracks on Linux


Feeling a bit disorganized? Looking to take control of your projects? Take a look at Tracks, an open source Web-based application built with Ruby on Rails. Tracks can help you get organized and Get Things Done (GTD) in no time.


Getting Tracks


You can get Tracks in a couple of ways. Some distributions may offer Tracks packages, and the Tracks project has downloads you can use to set it up.


The easiest way to get Tracks, though, is the BitNami Tracks stack which includes all the dependencies and just takes a few minutes to download and set up. No need to try to set up a database, Ruby on Rails, or anything. (If you're already using Ruby on Rails for something else, you might want to go for the project downloads, of course.)
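The BitNami stack is a single self-extracting installer, so setup is a short sequence of commands. A minimal sketch, with a hypothetical version number — check the BitNami site for the current release and download URL:

```shell
# Version and filename are illustrative, not the real current release.
TRACKS_VERSION="1.7-0"
INSTALLER="bitnami-tracks-${TRACKS_VERSION}-linux-installer.bin"

# Fetch the installer, mark it executable, and run it; the installer
# walks you through choosing an install directory and admin password.
# (Download and run are commented out to keep this sketch side-effect free.)
# wget "http://bitnami.org/files/stacks/tracks/${INSTALLER}"
# chmod +x "$INSTALLER"
# ./"$INSTALLER"
echo "installer: $INSTALLER"
```

Once the installer finishes, Tracks runs on its own bundled Apache, MySQL, and Rails, so nothing touches any Ruby setup you already have.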


I'm running the BitNami stack on one of my Linux systems on my home intranet. BitNami provides installers for Linux, Windows, and Mac OS X. They also provide pre-built VMs with Tracks pre-configured, so you could fire it up in VirtualBox or VMware if you prefer.


What Tracks Does


Tracks is good for single-user project management. It's based around the GTD methodology, and helps users organize their to-do list using GTD.


You can use Tracks without employing GTD, but Tracks is an "opinionated" tool. That is, it's structured around GTD, so it has some artifacts that might not fit into other methodologies quite as well.


Like any other to-do list or organizer, Tracks lets you set up actions. These have a description, notes, tags, due dates, and dependencies.


Dependencies are actions that have to be done before you can complete the current action. So, for instance, if you have a to-do item to deploy Tracks, it might be dependent on installing Ruby on Rails first.


Tracks also has contexts and projects. Projects are just what they sound like. For example, I use separate projects for each site that I write for (Linux.com, ReadWriteWeb, etc.), and also for home, hobbies, etc.


Contexts, on the other hand, are sort of nebulous but can be best defined as groups of actions that can be performed at the same time. You might have a "phone" context for all of the phone calls you need to make, regardless of whether you're going to be making calls for work or personal reasons. So if you're already making a few phone calls for one project, you might decide to just get all of your phone calls out of the way rather than switching context and being less effective.


Likewise, you might have a "system administration" context or "errands" context. If you're already logged into a client's server, you might want to take care of all the system administration tasks in one sitting rather than switching back and forth between phone calls and system administration. If you're going to run an errand to buy a new backup drive for a desktop system, you might also go ahead and do grocery shopping while you're out instead of making two trips.


There's also a special tickler context that allows you to throw in action items that don't really relate to other contexts. This is good for items that don't have specific due dates, or might be "things I kind of want to do, but have no specific plans for yet." I use the "tickler" context for article/post ideas that I don't have scheduled or assigned anywhere, as well as for things I want to do someday but aren't on the immediate horizon. Tickler items can eventually be put into other contexts as they become more important, or you might (yeah, right) finish all your other work and decide to tackle tickler items.


The Tracks Web-based interface is pretty straightforward. You have a Home page with each context and its action items. On the right-hand side you have a form for adding actions.


One thing you'll like right away is that Tracks will auto-complete actions, contexts and so forth that you already have in the system. If you have a "writing" context, for example, you just have to type a few letters and it'll offer to auto-complete it for you.


It also will create contexts and projects for you automatically the first time you use them, rather than requiring you to create them first and then use them.


Tracks is easy to get started with and use. It's not entirely perfect, but it's darn good if you want a single-user organization system. Give it a shot and see if it works for you!



Bodhi Linux, the Beautiful Configurable Lightweight Linux


Bodhi Linux is gorgeous, functional, and very customizable. It just so happens that's what grumpy old Linux nerds like me think Linux is always supposed to be. Let's take this Linux newcomer for a spin and learn what sets it apart from the zillions of other Linux distributions.


Like Sand Through the Hourglass


Bodhi Linux is a fast-growing newcomer to the Linux distro scene. The first release was at the end of 2010, and it has attracted users and contributors at a fast pace. Do we need yet another Linux distro? Yes we do. KDE and GNOME both leaped off the deep end and left users in the lurch. KDE4 has matured and is all full of functionality and prettiness, but it's a heavyweight, and GNOME 3 is a radical change from GNOME 2, though considerably easier on system resources than KDE4. And there are all the other good choices for graphical environments such as Xfce, LXDE, Fluxbox (to me, KDE4 is Fluxbox with bales of special effects, as they share the same basic concepts for organizing workflow and desktop functionality), IceWM, Rox, AfterStep, Ratpoison, FVWM, and many more. So what value does Bodhi add? Three things: Enlightenment, minimalism, and user choice.


Enlightenment


The first release of the Enlightenment window manager was way back in the last millennium, in 1997. The current version, E17, has been in development since 2000, which has to be a record. I predict there will never be a final release because that would spoil its legendary status as the oldest beta.


I've always thought of Enlightenment as a flexible, beautiful, lightweight window manager for developers, because my best experiences with it were when it came all nicely set up in a distro like Elive, PCLinuxOS, Yellow Dog, or MoonOS. When I tried installing it myself I got lost, which is probably some deficiency on my part. Enlightenment is a wonderful window manager that can run under big desktops like KDE, and it can also run standalone. It runs on multiple operating systems and multiple hardware platforms, and it supports fancy special effects on low-powered systems. Bodhi Linux makes good use of Enlightenment's many excellent abilities.


Bodhi


Bodhi Linux is the creation of Jeff Hoogland, and is now supported by a team of 35+ people. System requirements are absurdly low: a 300MHz i386 CPU, 128MB RAM, and 1.5GB of hard drive space. The minimalist approach extends to installing with a small complement of applications. I suppose some folks might prefer having six of everything to play with, but I've always liked installing what I want, instead of removing a bunch of apps I don't want. There are maybe a dozen applications I use on a regular basis, and a set of perhaps 20-some that I use less often. I don't need a giant heavyweight environment just to launch the same old programs every day. So Bodhi's minimalist approach is appealing.


Bodhi is based on the Ubuntu long-term support releases, and is on a rolling release schedule in between major releases. When the next major release comes out users will probably have to reinstall from scratch, but the goal is for Bodhi to become a true rolling-release distribution that never needs reinstallation.


Killer Feature: Profiles


One particular feature I find brilliant in Bodhi is profiles. Profiles are an Enlightenment feature, and the Bodhi team created their own custom set. First you choose from the prefab profiles: bare, compositing, desktop, fancy, laptop/netbook, tablet, and tiling. The laptop/netbook profile looks great on my Thinkpad SL410.


The fancy profile greets you with a dozen virtual desktops and a shower of penguins, some of whom sadly meet their demise (Figure 1). Then you can customize any of the profiles any way you like with different themes, Gadgets, whatever you want, and quickly switch between them without logging out. So you could have work and home profiles, single- and multi-monitor profiles, or a travel profile.


No Disruptions


Given all the uproar over KDE4 and GNOME, I asked Jeff Hoogland about the future of Bodhi. He explained that disruptive change is not in the Bodhi roadmap:


"The reason for this is the way in which we utilize E17's "profiles". We recognise that a singular desktop setup is not going to satisfy all users and is far from being suitable for all types of devices. In other words if the Bodhi team sees the need to develop an alternative desktop setup for some reason, it would simply be offered in addition to our current profile selections – not replacing them. User choice is one of our mottoes."


Like KDE4 and Fluxbox, Bodhi supports desktop Gadgets (widgets in KDE4) for displaying things like weather forecast, clock, desktop pager, hardware and system monitors, and various controls. It has its own compositing manager, Ecomorph, which is a port of Compiz. This is an installable option and not included by default because it has problems on some hardware. But if it works on your system it's nice because it doesn't need a mega-super-duper CPU to support a trainload of special effects.


Bodhi comes with the Ubuntu software repositories enabled by default, plus the Bodhi repos. You can manage software with apt-get, Synaptic, or the Bodhi Linux AppCenter for installing apps from a Web page. This requires either the default Midori Web browser or Firefox, because they support the apt:url protocol. The AppCenter has package groups like the Nikhila Application Set, which has one of everything: word processor, audio player, movie editor, and several more. You can get the Bodhi Audio Pack, the Bodhi Image Pack, Bodhi Scientific Publishing, and several more. You're not stuck with the packs, but can install any of the individual applications. Applications are also sorted by category, such as Image Editing, Office Suite, Communication, and such.
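Because the Ubuntu and Bodhi repositories are already enabled, the command-line route is ordinary apt-get. The pack names below are illustrative, echoing the packs described above — verify the real package names in the AppCenter before using them:

```shell
# Pack names here are examples modeled on the AppCenter packs; browse
# the Bodhi repos for the real ones. The commands are printed rather
# than executed, so the sketch runs without root or network access.
CMDS=""
for pack in bodhi-audio-pack bodhi-image-pack; do
    CMDS="${CMDS}sudo apt-get install ${pack}
"
done
printf "%s" "$CMDS"
```

Individual applications install the same way, of course — `sudo apt-get install gimp` works exactly as it would on stock Ubuntu.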


Enlightenment has a bit of a learning curve, but the Bodhi folks have written a good Enlightenment Guide, and a lot of other useful documentation. I always like to ask distro maintainers why they lost their minds and decided to create their own Linux distributions. They're not the result of magic, but forethought and planning:


"While the idea to start up a minimalistic, Enlightenment based Ubuntu distro was originally my own I recruited team members before we even released our first disc. We started off with myself, Jason Peel, and Ken LaBuda. Today we have nearly forty people who contribute code and/or documentation to Bodhi, not to mention the countless people who have donated to keep our servers running! I have a rough release schedule posted here — and while that outline is flexible, I would bet it will be fairly close to reality."


Mobile Bodhi


Enlightenment seems like a natural fit for mobile devices, and Mr. Hoogland has plans in that direction as well:


"I would love to get Bodhi working on mobile devices eventually. We have a functional ARM branch that I have successfully booted on the Genesi Smartbook, HP Touchpad, Nokia N900 and ArchOS Gen8 devices to name a few. Sadly though, other than the Genesi, all these devices lack some functionality due to closed source hardware. We are simply ready and waiting to get our ARM branch with our tablet profile working on a truly open mobile device (if there are any companies out there interested in producing such a device they shouldn't hesitate to contact me)!"


Visit Bodhilinux.com for good documentation, forums, and downloads.



Scientific Linux, the Great Distro With the Wrong Name


Scientific Linux is an unknown gem, one of the best Red Hat Enterprise Linux clones. The name works against it because it's not for scientists; rather it's maintained by science organizations. Let's kick the tires on the latest release and see what makes it special.


Red Hat


Red Hat is one of the oldest Linux distributions, and has long been a fundamental contributor by funding development and making deep inroads into the enterprise. Red Hat is a billion-dollar company (they're expected to make an official announcement at the end of this month) and they did it without tricky licensing that locks up their best products behind proprietary licenses. This graph on Wikipedia shows the reach and influence of Red Hat Linux. (Your browser should have a zoom control to enlarge the image.)


And they did it while competing with high-quality free-of-cost distros, and giving away their own products. Anyone can have Red Hat Enterprise Linux for free by downloading and compiling the source RPMs. Which is harder than downloading an ISO image and burning a CD, but not a whole lot harder. My fellow geezers might remember the wails of protest when Red Hat discontinued the free ISOs, all those non-paying customers who vowed to never be non-paying Red Hat customers again.
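For the curious, "downloading and compiling the source RPMs" boils down to the standard rpmbuild cycle, repeated for every package in the distribution. A sketch for a single package — the package name and version here are illustrative:

```shell
# Rebuilding one package from a source RPM. Reproducing the whole
# distro means repeating this for every SRPM in the source tree.
# Build commands are commented out; they need the rpm-build toolchain.
SRPM="bash-4.1.2-9.el6.src.rpm"
# rpm -i "$SRPM"                          # unpacks into ~/rpmbuild/{SPECS,SOURCES}
# rpmbuild -ba ~/rpmbuild/SPECS/bash.spec # builds binary RPMs into ~/rpmbuild/RPMS
# sudo rpm -Uvh ~/rpmbuild/RPMS/x86_64/bash-*.rpm
echo "would rebuild: $SRPM"
```

Tedious for hundreds of packages, which is exactly the gap the clone distros fill by doing the rebuilds for everyone.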


Clone Stampede


But these things have a way of working out, and it didn't take long for free RHEL ISOs to appear again. Only they weren't released and supported by Red Hat, but clone distros like CentOS, White Box, Pie Box, CAOS, Yellow Dog, and ClearOS. Clones come, and clones go, and CentOS has been the most popular RHEL clone for several years. I think Scientific Linux would be as popular as CentOS if it had a different name, because most people assume it is aimed at scientists, and is all full of scientific applications. But it's not; it is called Scientific Linux because it is maintained by a cooperative of science labs and universities. Fermi National Accelerator Laboratory and the European Organization for Nuclear Research (CERN) are its primary sponsors. (CERN also funded Tim Berners-Lee when he invented the World Wide Web.)


Scientific Linux calls Red Hat The Upstream Vendor, or TUV. Red Hat told the clones that they could not use any of Red Hat's trademarked materials such as their name, logo, and other artwork. So many of the clones use cutesy nicknames (like TUV) instead.


Developed by Scientists


Scientific Linux was born in 2004, when Connie Sieh released the first prototype. It was based on Fermi Linux, one of the earliest RHEL clones, and it was called HEPL, High Energy Physics Linux because Ms. Sieh and the other original developers worked in physics labs. They shopped it around and solicited feedback and help, and it was renamed Scientific Linux to reflect that the community around it was not just physicists, but also scientists in other fields.


What Does it Do?


So what is Scientific Linux good for? Desktop? Server? Laptop? High-demand server? Yes to all. It is a faithful copy of RHEL, plus some useful additions of its own.


Scientific Linux offers several different download images, from a 161MB boot image for network installations, to live CD and DVD images, to the entire 4.5GB distro on two DVDs. I tried the LiveMiniCD first, which at 482MB isn't all that mini. This is the IceWM version. IceWM is a good lightweight desktop environment, but despite expanding to 1.6GB when installed, the resulting system looks barebones. The only visible graphical applications are Firefox and Thunderbird, and Nautilus is installed but it is not in the system menu. There is a big batch of servers and clients, storage clients, and networking and system administration tools. IceWM is a nice graphical environment for a server – I know, the old rule is No X Windows On A Server! In real life admins should use whatever makes them the most efficient and gets the job done.


Installing the Desktop group gives you a stripped-down Gnome desktop for the price of a 32MB download, and of course there are many more Gnome packages to choose from, including Compiz for fancy desktop effects. Good old Gnome 2, of course, so Gnome 2 fans, here is your chance to get a good solid Gnome 2 system with long-term support. RHEL has ten-year support cycles, plus extensions. The Scientific Linux team expect to support 6.x until 2017, though this could change.
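Going from the minimal install to that Gnome desktop is a single package-group transaction. The group names below are typical for RHEL-family 6.x systems, but verify them with `yum grouplist` on your own install:

```shell
# Package-group install on an RHEL-family system. The yum commands
# need root and network access, so they are commented out here.
WANTED_GROUPS='"Desktop" "X Window System"'
# yum grouplist                                    # see what groups exist
# sudo yum groupinstall "Desktop" "X Window System"
echo "would install groups: $WANTED_GROUPS"
```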


Additions Not In TUV


Scientific Linux has bundled a nice toolset for making custom spins the easy way. The package group is SL Spin Creation, and you get nice tools like LiveUSB Creator and Revisor (Figure 2). In fact, SL was designed with custom spins in mind; spins have their own unique features, but all of them are compatible. There is even a naming convention: Scientific Linux followed by the name of the institution. For example if I made one I would call it Scientific Linux LCR, for Little Critter Ranch.


You could install yum-autoupdate for scheduled hands-off updates, some extra repositories, and the OpenAFS distributed filesystem. OpenAFS is used a lot in research and education.


The SL_desktop_tweaks package adds a terminal icon to the Gnome kicker, and an "add/remove programs" menu item to KDE. SL_enable_serialconsole is a nice script that configures the console to send output to both the serial port and the screen, and it provides a login prompt. Probably the most useful tweak is SL_password_for_singleuser, which requires a root password for single user mode. That's right, TUV still leaves this unprotected, so anyone with physical access to the machine can boot to single user mode for password-less access. Of course, as the old saying goes "Anyone with physical access to the machine owns it."
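All of these add-ons ship as ordinary packages, so pulling them in is one yum transaction. The package names follow the descriptions above, but double-check them against the SL repositories before relying on them:

```shell
# SL add-on packages described above. The install needs root, so it
# is commented out and the sketch just lists what it would do.
PKGS="yum-autoupdate openafs SL_desktop_tweaks SL_enable_serialconsole SL_password_for_singleuser"
# sudo yum install $PKGS
for p in $PKGS; do
    echo "would install: $p"
done
```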


Enterprise Features


Red Hat made more than 600 changes and fixes to their version of the Linux kernel, and most of these are for high-demand high-performance workloads. You get all these in Scientific Linux too, along with all the virtualization, high-availability, multiple network storage protocols, clustering, device mapper multipathing, load balancing, and other enterprise goodies.


Support is always a question for the free clones; even when you're an ace admin and know the system inside out, you're still dependent on the vendor for timely security patches and releases. Scientific Linux is consistent about getting its releases out the door about two months after Red Hat, and has a good record with security updates.


So the short story is use Scientific Linux with confidence. The community support is good and mostly free of irritating people, and it's well-maintained and rock-solid.



The Compiler That Changed the World Turns 25

Last year, Linux celebrated its 20th anniversary. The kernel that Linus Torvalds started as a hobby project helped the Internet bloom, challenged proprietary operating system dominance, and powers hundreds of millions of devices. From hacker toys like the dirt-cheap Raspberry Pi to most of the Top 500 Supercomputers, Linux dominates the computing industry. But it wouldn't have been possible without GCC, which turns 25 today.

Before Torvalds started hacking away on Linux, Richard Stallman had started the GNU (GNU's Not UNIX) project, and part of that was the GNU C Compiler (GCC). Eventually that became the GNU Compiler Collection (also GCC), but we're getting a little ahead of the story.

A Short History of GCC and Linux

GCC actually grew out of a free compiler project that Stallman began in 1984, based on a multi-platform compiler developed at Lawrence Livermore Lab. Says Stallman, "It supported, and was written in, an extended version of Pascal, designed to be a system-programming language. I added a C front end, and began porting it to the Motorola 68000 computer. But I had to give that up when I discovered that the compiler needed many megabytes of stack space, and the available 68000 Unix system would only allow 64k.

"I then realized that the Pastel compiler functioned by parsing the entire input file into a syntax tree, converting the whole syntax tree into a chain of "instructions", and then generating the whole output file, without ever freeing any storage. At this point, I concluded I would have to write a new compiler from scratch. That new compiler is now known as GCC; none of the Pastel compiler is used in it, but I managed to adapt and use the C front end that I had written."

For many Linux enthusiasts, GCC has always just been. It's part of the landscape, and kind of taken for granted. But it wasn't always this way. Michael Tiemann, now vice president of open source affairs at Red Hat and the founder of Cygnus Solutions, called GCC's introduction in 1987 a "bombshell."

"I downloaded it immediately, and I used all the tricks I'd read about in the Emacs and GDB manuals to quickly learn its 110,000 lines of code. Stallman's compiler supported two platforms in its first release: the venerable VAX and the new Sun3 workstation. It handily generated better code on these platforms than the respective vendors' compilers could muster... Compilers, Debuggers, and Editors are the Big 3 tools that programmers use on a day-to-day basis. GCC, GDB, and Emacs were so profoundly better than the proprietary alternatives, I could not help but think about how much money (not to mention economic benefit) there would be in replacing proprietary technology with technology that was not only better, but also getting better faster."

By the time Torvalds started working on Linux, GCC had quite a history of releases. The first release was 0.9, the first beta release, on March 22, 1987. It had 48 releases between the 0.9 release and 1.40, which came out on June 1, 1991. Torvalds announced that he was working on Linux in July 1991, using Minix and GCC 1.40.

The EGCS Years

As Tiemann notes later, GCC development started falling behind a bit. Some folks got a bit impatient with the GCC development process and speed of getting features to users. GCC had a number of forks in the wild with patches that hadn't made it into GCC. That led to the EGCS project, a fork of a GCC development snapshot from August 1997.

Eventually, in April 1999, EGCS became the official GCC, and GCC took on the "GNU Compiler Collection" moniker.

New in GCC 4.7

I won't try to summarize all of the development and features that have been part of GCC from 1999 to now. Suffice it to say, a lot has happened and GCC has been at the heart of more than Linux.

GCC has also been used by FreeBSD, NetBSD, and OpenBSD as well as part of Apple's XCode tools until very recently. (That's another story for another day...) If it's hard to imagine a world without Linux, it's even harder to imagine a world without the compiler that's been used to build Linux for more than 20 years.

So it's appropriate that the 25th anniversary release of GCC brings quite a few goodies for developers. According to Richard Guenther, 4.7 is a "major release, containing substantial new functionality not available in GCC 4.6.x or previous GCC releases."

Says Guenther, GCC 4.7 picks up support for newer standards for C++, C, and Fortran. "The C++ compiler supports a bigger subset of the new ISO C++11 standard such as support for atomics and the C++11 memory model, non-static data member initializers, user-defined literals, alias-declarations, delegating constructors, explicit override and extended friend syntax. The C compiler adds support for more features from the new ISO C11 standard. GCC now supports version 3.1 of the OpenMP specification for C, C++ and Fortran."

GCC also expands its hardware support, with new support for Intel's Haswell and AMD's Piledriver (both x86) architectures. It also adds support for the Cortex-A7 (ARM) line of processors, and for a number of others. Check out the full list of changes for 4.7 if you're interested.

For fun and added points, check out this video on YouTube that visualizes 20 years of GCC development. It uses Gource to visualize the history of GCC development.

A big tip of the hat to the GCC team, present, past, and future. Without it, where would we be now?



Wednesday, March 21, 2012

The Ever-Changing Linux Skillset

Just because you had what it takes for a good Linux-related job a decade ago, it doesn't mean that you have what it takes today. The Linux landscape has changed a lot, and the only thing that's really stayed constant is that a love of learning is a requirement.

What employers want from Linux job seekers is a topic I've spent a lot of time thinking about, but this post by Dustin Kirkland got me to thinking about just how drastically things have changed in a very short time. The skills that were adequate for a good Linux gig in 2002 may not be enough to scrape by today.

This isn't only true for Linux administrators, of course. If you're in marketing or PR, for example, you probably should know a great deal about social networks that didn't even exist in 2002. Journalists that used to write for print publications are learning to deal with Web-based publications, which often includes expanding to video and audio production. (Not to mention an ever-shrinking number of newspapers to work for...) Very few skilled jobs have the same requirements today as they did 10 years ago.

But if you look back at the skills needed for Linux admins and developers about ten years ago, and now, it's amazing just how much has changed. Kirkland, who's the chief architect at Gazzang, says that he's hired more than a few Linux folks in that time. Kirkland worked at IBM and Canonical before Gazzang, and says that he's interviewed "hundreds" of candidates for dozens of developer, engineer and intern jobs.

Over the years, Kirkland says that the "poking and prodding of a given candidate's Linux skills have changed a bit." What he's mostly looking for is the "candidate's inquisitive nature" but the actual skills he touches on give some insight as well.

Ten years ago, Kirkland says, he'd want to see candidates who were familiar with the LAMP stack. Nine years ago, he'd look for candidates who "regularly compiled their own upstream kernel, maybe tweaked a few configuration options on or off just for fun." (If you've been using Linux this long, odds are you have compiled your own kernels quite a bit.)
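That kernel-compiling ritual looked roughly like this — the version number is illustrative, but the make cycle is the classic one:

```shell
# Classic roll-your-own-kernel workflow. The build steps are commented
# out since they need a full toolchain and a lot of time.
KVER="2.6.32"
# wget "https://www.kernel.org/pub/linux/kernel/v2.6/linux-${KVER}.tar.bz2"
# tar xjf "linux-${KVER}.tar.bz2" && cd "linux-${KVER}"
# make menuconfig                     # toggle a few options on or off, just for fun
# make -j4                            # build the kernel image and modules
# sudo make modules_install install   # install under /lib/modules and /boot
echo "would build linux-${KVER}"
```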

Basically, a decade ago, you were looking at folks working with individual machines. In a lot of environments, you could apply what you knew from working with a few machines at home to a work environment. That didn't last.

Six years ago, Kirkland says that he was looking "for someone who had built their own Beowulf cluster, for fun, over the weekend. If not Beowulf, then some sort of cluster computing. Maybe Condor, or MPICH."

Not long after that, Kirkland says that he was looking for experience with open source virtualization – KVM, Xen, QEMU, etc. Three years ago, he wanted developers with Launchpad or GitHub accounts. Note that this would have required being an early GitHub adopter, since the site only launched in 2008. Git itself was first released in 2005, but it took a few years before really catching on.

Two years ago? Clustering and social development alone weren't enough. Kirkland says that he was looking for folks using cloud technologies like Eucalyptus. (He also mentions OpenStack, but two years ago the number of people who'd actually been using OpenStack would have been fairly negligible, since it was only announced in the summer of 2010.)

Finally, the most recent addition to the list is "cloud-ready service orchestration," which translates to tools like Puppet, Chef, or Juju.

Even that's not enough. What's next? Kirkland says that he's looking for folks who've rooted their phones, tried out big data and thrown together "a map-reduce Hadoop job or two, just for grins."

Naturally, this is just a snapshot of what one interviewer considers important. Kirkland's list of topics may or may not mirror what you'd get in an interview with any other company, but the odds are that you'll see something similar. His post illustrates just how much the landscape has changed in a short time, and the importance of keeping up with the latest technology.

If you're a job seeker, it means a lot of studying. If you're already employed, it means that you should be keeping up with these trends even if you're not using them in your work environment. If you're an employer, it means you should be investing heavily in Linux training and/or finding ways to help your staff stay current.

Even if you're a big data-crunching, cloud computing, GitHub-using candidate, the odds are that next year you'll need to be looking at even newer technology. From the LAMP stack to OpenStack, and beyond, things are not standing still. The one job skill you'll always need is a love of learning.



As Data Grows, So Grows Linux


IDC recently announced its numbers for 2011 Q4 server sales: overall server revenues were up 5.8 percent for the year, and shipments were up 4.2 percent. As The Reg reports, these shipment numbers are back to pre-recession levels.


What’s more interesting, though, are the trends that emerge from the very latest reporting quarter, Q4. Linux was the only operating system that saw a server revenue increase in Q4, with a 2.2 percent rise. Windows lost 1.5 percent and Unix 10.7 percent.


IDC attributes some of that Linux success to its role in what the analyst firm calls “density-optimized” machines, which are really just white box servers, and are responsible for a lot of the growth in the server market. These machines have gained popularity in a space still squeezed on budget and that continues to be commoditized. But there are other factors at play for Linux’s success over its rivals.


Coming out of the recession, Linux is in a very different position than it was 10 years ago when we emerged from the last bubble. Today it's mature, tried, tested and supported by a global community that makes up the largest collaborative development project in the history of computing.


Our latest survey of the world’s largest enterprise Linux users found that Total Cost of Ownership, technical superiority and security were the top three drivers for Linux adoption. These points support Linux’s maturity and recent success. Everyone is running their data centers with Linux. Stock exchanges, supercomputers, transportation systems and much more are using Linux for mission-critical workloads.


Also helping Linux’s success here is the accelerated pace by which companies are migrating to the cloud. Long a buzzword, the cloud is getting real, right now. While there is still work to do for Linux and the cloud, there is no denying its dominant role in today’s biggest cloud companies: Amazon and Google to name just two.


The mass migration to cloud computing has been quickened due, in part, to the rising level of data: both the amount of data enterprises are dealing with and how fast that data is growing. IDC this week predicted that the “Big Data” business will be worth $16.9B in three years. There is a huge opportunity here for Linux vendors. Our Linux Adoption Trends report shows that 72 percent of the world’s largest Linux users are planning to add more Linux servers in the next 12 months to support the rising level of data in the enterprise. Only 36 percent said they would be adding more Windows servers to support this trend.


The enterprise server market is a strong area for Linux, but it’s an incredibly competitive market. Together we’ll continue to advance Linux to win here. In fact, we’ll be meeting at the NYSE offices in April at our Annual Linux Foundation Enterprise End User Summit where some of the world’s largest companies will talk in depth about exactly the things I’ve touched on here.


Yet again we are seeing that market winners are born from collaboration. And we have the numbers to back it up.



Dream Studio 11.10: Upgrade or Hands Off?


Many Linux distributions specialized for multimedia production have come and gone. Some were pretty good, but Dream Studio has outshone them all. Musician and maintainer Dick Macinnis has just released Dream Studio 11.10, based on Ubuntu Oneiric Ocelot. Dream Studio 11.04 is a tough act to follow – is it worth upgrading to 11.10?


Chasing Ubuntu


Basing a custom distribution on Ubuntu has a lot of advantages, but it also means chasing a fast-moving target. There are ways to minimize the pain, as Macinnis explains. "The decision to create different versions of Dream Studio is one I had made quite a while ago, and is one of the reasons I decided to get all my packages into PPAs rather than on a personally hosted repo."


PPAs are Personal Package Archives hosted on Canonical's Launchpad. This is a slick way to make third-party package repositories available in a central location. Ever wonder what goes into making your own Linux distribution? Even when you base it on another distro like Ubuntu it's still work.
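If you haven't used PPAs before, subscribing to one takes only a couple of commands. This is a generic sketch – the archive name below is a placeholder, not Dream Studio's actual PPA:

```shell
# Hypothetical example: subscribe to a Launchpad PPA and pull its packages.
# Replace "user/example-ppa" and "some-package" with real names.
sudo add-apt-repository ppa:user/example-ppa
sudo apt-get update
sudo apt-get install some-package
```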


"When I build the Dream Studio each release cycle, I basically install Ubuntu on a VM, make sure all the packages will install properly, and run a script to add my personal optimizations and such. Then I use Ubuntu Customization Kit to unpack the stock Ubuntu liveCD, run the script I've made on it via chroot, and pack it up again," says Macinnis. "Dealing with changes in Ubuntu from one release to the next is the biggest issue and takes the most time, which is why I don't begin until Ubuntu has been released (as chasing a moving target was driving me nuts a couple releases ago). However, since almost all my packages are now desktop independent (except artwork), making derivatives with different DEs is quite easy."


Sure, it's easy when you know how. Vote for your favorite desktop environment in Macinnis' poll, and be sure to vote for LXDE because that is my favorite. Or E17, which is beautiful and kind to system resources. Or maybe Xfce.


System Requirements


Dream Studio 11.10 is a 2GB ISO that expands to 5.6GB after installation. You can run it from a live CD or USB stick, but given the higher performance demands of audio and video production you really want to run it from a hard disk. While we're on the subject of hard drives, don't get excited over 6Gb/s SATA hard disk drives. They're not much faster than old-fashioned 3Gb/s or 1.5Gb/s SATA HDDs, and you need a compatible motherboard or PCI-e controller anyway. Put your extra money into a good CPU instead: audio and video production, along with photo and image editing, are CPU-intensive. Bales of RAM never hurt, and a discrete video card, even a lower-end one, gives better performance than cheapo onboard video that uses shared system memory.


My studio PC is powered by a three-core AMD CPU, 4GB RAM, an Nvidia GPU, and a couple of 2TB SATA 3Gb/s hard drives. It's plenty good enough, though someday I'm sure I'm going to muscle it up more. Why? Why not?


You want your hardware running your applications and not getting weighed down driving your operating system. Dream Studio ships with GNOME 2, GNOME 2 with no effects, Unity, and Unity 2D. Just for giggles I compared how each one looked in top, freshly booted and no applications running:

GNOME 2: 440,204k memory, 6% CPU
GNOME 2, no effects: 453,640k memory, 3.7% CPU
Unity: 592,432k memory, 4.8% CPU
Unity 2D: 569,936k memory, 1.0% CPU

It's not a big difference, measuring memory usage is not precise in Linux, and your mileage may vary, so use what makes you happy.
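If you want to run a similar comparison on your own system, a rough snapshot straight from /proc/meminfo works. This is just one way to estimate it; the numbers won't match top's accounting exactly, which is part of why memory measurement on Linux is imprecise:

```shell
#!/bin/sh
# Rough memory-in-use estimate from /proc/meminfo (values in kB).
# Approximates "used" as MemTotal - MemFree - Buffers - Cached,
# which will differ somewhat from top's per-process numbers.
awk '
  /^MemTotal:/ { total = $2 }
  /^MemFree:/  { free  = $2 }
  /^Buffers:/  { buf   = $2 }
  /^Cached:/   { cache = $2 }
  END { printf "used: %dk of %dk\n", total - free - buf - cache, total }
' /proc/meminfo
```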


What's Inside


Dream Studio installs with a vast array of audio, movie, photography, and graphics applications. It's a great showcase for the richness of multimedia production software on Linux. Audio is probably the biggest pain in the behind, as the Linux audio subsystem can be a real joy* to get sorted out. One of the best things Dream Studio does is box it all up sanely, and on most systems you just fire up your audio apps and get to work. It comes with a low-latency kernel, the JACK (JACK Audio Connection Kit) low-latency sound server and device router, and the pulseaudio-module-jack for integrating PulseAudio with JACK. If you have a single sound card this doesn't give you anything extra, so you're probably better off disabling PulseAudio while JACK is running. This is easy: in QjackCtl go to Setup -> Options and un-check "Execute script after startup: pulsejack" and "Execute script after shutdown: pulsejackdisconnect". Leave "Execute script on startup: pausepulse" and "Execute script after shutdown: killall jackd" checked.
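If you'd rather not rely on QjackCtl's startup scripts at all, PulseAudio ships a pasuspender utility that suspends Pulse for as long as a wrapped command runs. This is a generic sketch, not Dream Studio's stock configuration:

```shell
# Suspend PulseAudio while the wrapped command runs; Pulse resumes
# automatically when the command exits. Launching QjackCtl this way
# keeps PulseAudio out of JACK's way for the whole session.
pasuspender -- qjackctl
```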


If you have more than one audio interface PulseAudio gives you some extra device routing options that you don't have with JACK alone. Once upon a time PulseAudio was buggy and annoying because it was new, and it introduced latency. It's stable and reliable now, but it still introduces some latency which is not good for audio production. But when you're capturing and recording audio streams, as long as everything is in sync then latency doesn't matter. Try it for yourself; it is easy and fun.


The creative applications are nicely-organized in both Unity and GNOME 2. Some notable audio apps are Audacity, Ardour, Hydrogen drum kit, DJ Tools, Tuxguitar, batches of special effects, and the excellent Linux Multimedia Studio (LMMS). On the graphics and video side you get FontForge, Luminance HDR, Scribus, Hugin, Stopmotion, Openshot, Blender, Agave, and a whole lot more.


There are a few of the usual productivity apps like Firefox, Libreoffice, Empathy, and Gwibber. And of course you may install anything in Linux-land that your heart desires.


Upgrade or No?


The problems I've run into are mostly Ubuntu glitches. During installation, the partitioning tool gives only a tiny bit of room to show your existing partitions, and it does not resize, so you can't see all of your partitions without figuring out how to make it scroll. (Click on any visible partition and navigate with the arrow keys.) Ubuntu wants you to play audio CDs with Banshee; it wants this so badly it does not offer a "play CD with" option. But Banshee doesn't work: it doesn't see the CD. My cure for this was to install VLC. There were some other nits I forget, so they couldn't have been all that serious.


The one significant issue I ran into was masses of xruns in JACK. An xrun is a buffer underrun or overrun: an interruption or dropout in throughput. This can cause noticeable dropouts in your sound recordings. xruns should not be a problem on a system as powerful as mine, and they never have been. Until now. It could be a kernel problem, or a bug in JACK; it's hard to say. So before you upgrade a good working system, test this new release thoroughly first.


*If you define joy as head-banging aggravation.



Can Linux Win in Cloud Computing?


Gerrit Huizenga is Cloud Architect at IBM (and fellow Portland-er) and will be speaking at the upcoming Linux Foundation Collaboration Summit in a keynote session titled "The Clouds Are Coming: Are We Ready?" Linux is often heralded as the platform for the cloud, but Huizenga warns that while it is in the best technical position to warrant this title, there is work to do to make this a reality.


Huizenga took a few moments earlier this week to chat with us as he prepares for his controversial presentation at the Summit.


You will be speaking at The Linux Foundation Collaboration Summit about Linux and the cloud. Can you give us a teaser on what we can expect from your talk?


Huizenga: Clouds are at the top of every IT department's list of new and key technologies to invest in. Obviously high on those lists are things like VMware and Amazon EC2. But where is the open source community in terms of comparable solutions which can be easily set up and deployed? Is it possible to build a cloud with just open source technologies? Would that cloud be a "meets min" sort of cloud, or can you build a full-fledged, enterprise-grade cloud with open source today? What about using a hybrid of open source and proprietary solutions? Is that possible, or are we locked in to purely proprietary solutions today? Will Open Standards help us? What are some recommendations today for building clouds?


Linux is often applauded as the "platform for the cloud." Do you think this is accurate? If not, what still needs to be done? If so, what is it about Linux that gives it this reputation?


Huizenga: Linux definitely has the potential to be a key platform for the cloud. However, it isn't there yet. There are a few technology inhibitors with respect to Linux as the primary cloud platform, as well as a number of market place challenges. Those challenges can be addressed but there is definitely some work to do in that space.


What are the advantages of Linux for both public and private clouds?


Huizenga: It depends a bit on whether you consider Linux as a guest or virtual server in a cloud, or whether it is the hosting platform of the cloud. The more we enable Linux as a guest within the various hypervisors, and enable Linux to be managed within the cloud, the greater the chance of standardizing on Linux as the "packaging format" for applications.


This increases the overall presence of Linux in the market place and in some ways simplifies ISVs' lives in porting applications to clouds. As a hosting platform, one of the biggest advantages for cloud operators is the potential cost/pricing model for Linux and the overall impact on the cost of operating a cloud. And the level of openness that Linux provides should simplify the ability to support the cloud infrastructure and over time increase the number of services that can be provided by a cloud. But we still have quite a bit of work to do to make Linux a ubiquitous cloud platform.


What is happening at the Linux development level to support the rapidly maturing cloud opportunity? What does the community need from other Linux users and developers to help accelerate its development and address these challenges?


Huizenga: I'll talk about some of the KVM technologies that we need to continue to develop to enable cloud, as well as some of the work on virtual server building & packaging, DevOps, Deployment, and Management. There are plenty of places for the open source community to contribute and several talks at the Collaboration Summit should dive further into the details as well.


What do you make of Microsoft running Linux on Azure?


Huizenga: Anything that lets us run Linux in more places must be good!


More information about Huizenga's talk can be found on The Linux Foundation Collaboration Summit schedule. If you're interested in joining us, you can also request an invitation to attend.



A Peek Behind the Curtain at Puppet Labs


In this interview, Luke Kanies, CEO and founder of Puppet Labs, explains why the Puppet configuration management tool is a huge hit with sys admins, and tells us what to expect next from the popular open source project.


Linux.com: Puppet was picked by Linux Questions members as the 2011 Configuration Management Tool of the Year. What do you think it is about Puppet that helped make it the winner?


Luke Kanies: Primarily, it's that people who use Puppet love Puppet. The 4,000 people on our mailing list, 500 IRC members, and hundreds of customers don't just use the software to make their lives better; they actually like using it. I've been consistently surprised by how many people seem to be emotionally fond of Puppet, and I get thanked all the time by people for just helping them by having created Puppet.


Linux.com: What makes Puppet different than other configuration management tools?


Luke Kanies: The biggest difference between Puppet and other tools is our focus on great design and user experience. It's obviously not perfect, and there's still a lot of work to do before it's got as good a design as we want, but relative to other configuration management tools, it's easy to set up, easy to use, and easy to train other people on.


Other tools are happy to sacrifice usability for more power, more configurability, or easy access to global unstructured data. These all help make the tools more powerful, but they also make them harder to understand, harder to learn, and harder to spread through your whole organization.


Note, though, that Puppet has not by any means been dumbed down – we just resist adding features until we are confident that the features will actually make the product more useful, not just more powerful.


Linux.com: Are there any new features or improvements in the works for Puppet right now?


Luke Kanies: Yes, quite a lot is being worked on. We're always working on making our open source projects faster, more capable, and easier to use, so you'll see a lot of work from us on that in the near future. For instance, our central databases will be heavily upgraded, supporting far better scalability and really exposing all of the great data Puppet has about your infrastructure. Puppet has seen orders of magnitude increases in performance in the last few years to support the dramatic scaling needs of our customers, like Zynga, who are using the public cloud, and that trend will continue. We're also building more of our infrastructure around MCollective, which provides real-time access to your infrastructure and is becoming our infrastructure middleware for orchestration and communication.


We're also working on a lot of great partnerships with our new investors – VMware, Cisco, and Google Ventures – along with some other interesting projects, like OpenStack.


In our commercial product, Puppet Enterprise is going to take great advantage of that new central database with enhanced reporting, and we're building workflows around it that make the lives of our users a lot easier.


Linux.com: There are several Puppet Camps scheduled for this year. What are those like? Who should attend, sponsor, or host these camps?


Luke Kanies: Puppet Camps are by our community, for our community. We only run them where there is enough of a presence on the ground to really have a great conference, and our regional Puppet Camps are seeing larger attendance than our competitors' worldwide conferences.


You should attend Puppet Camp if you want to learn about configuration management in general or Puppet in particular, work with other Puppet users, and meet other great sys admins in your region. We always provide training at the camps, in addition to all the great talks and open sessions, and opportunities to just hang out with Puppet people. They're also great events if you're a company looking to reach the best sys admins and the most advanced devops practitioners out there.


Anyone interested in sponsoring or attending a Puppet Camp can find more information on our Community page.


Linux.com: PuppetConf 2011 was held in September. Has planning started for PuppetConf 2012?


Luke Kanies: Yes, PuppetConf 2012 is set for September 27th and 28th in San Francisco, California. A call for participation, location information, and ticket sales will start around the end of March. 2011 had over 1,200 participants in person and streaming. We're aiming to triple that number this year. In the works are the first administered Puppet Certification tests and The Puppet Labs, which are large-scale demo environments for attendees to play with. You'll also see a new product release, a great speaker and attendee list, and much, much more.


Linux.com: Anything else we should know about Puppet or Puppet Labs?


Luke Kanies: Puppet Labs is fundamentally about enhancing sys admin productivity, as a means of enhancing organizational agility, increasing adoption of technology, and converting operations from a cost center to a competitive advantage. Our software is by sys admins, for sys admins.


Linux.com: Thanks to Luke Kanies for taking the time to update us on Puppet.



Patents, Legal Collaboration and our Legal Summit


Unfortunately, legal issues, especially patent lawsuits, are much in the news. From Yahoo suing Facebook to the ongoing battles surrounding Apple and other mobile device providers, my RSS and social media feeds seem to have more and more articles about legal issues every day.


Wired published a great article today from an ex-Yahoo developer on how his work was weaponized for a patent war.


He writes: "I thought I was giving them a shield, but turns out I gave them a missile with my name permanently engraved on it." This case, among other similar ones, points out the urgent need for reform of our software patent system. When companies struggle, especially large ones, it's often easier to litigate than innovate.


But amid the patent wars there has been some good news. OIN last week announced it is expanding its patent pool to cover other important projects such as KVM, Git, and others.


As OIN writes: "Patents owned by Open Invention Network are available royalty-free to any company, institution or individual that agrees not to assert its patents against the OIN’s broad Linux Definitions." Keith Bergelt of OIN will be speaking at our upcoming Collaboration Summit on this Linux definition. Keith will take people through the changes in the definitions, as well as the updating of the thousands of packages already included in their coverage. This is important stuff, and I'm very happy to feature Keith as a speaker.


We are continuing our active role in the legal landscape by marshaling the power of collaboration with our members. Before next month's Linux Foundation Collaboration Summit, we will be holding our Linux Foundation Legal Summit, where counsels and attorneys from our members come together with our legal experts and others from around the industry to plot the best defense for Linux and free software. There is power in collaboration; certainly with software but also with legal issues. It's a core part of our mission to enable this legal collaboration and spearhead programs, like our Open Compliance program, that simplify and improve legal matters in our community. And as mentioned above, we also have a track on legal and compliance issues at the Collaboration Summit. This year Bradley Kuhn was kind enough to assist me in creating the track and I'm happy to say we have a who's who of leaders in the open source legal industry.


We are featuring
-- Aaron Williamson of the SFLC on the Evolving Form of Free Software Organization
-- Bradley from the Software Freedom Conservancy on GPL Compliance
-- Richard Fontana from Red Hat will talk about the decline of the GPL and what to do about it
-- Karen Sandler from the GNOME Foundation will talk about real world trademark management for free software projects


And on day one of Collaboration Summit we will have a keynote on the SPDX project, one of the best examples of collaboration on legal issues. You can read full details about the schedule of Collab Summit.



I hope to see many of you there.



Weekend Project: Take a Look at Cron Replacement Whenjobs


The cron scheduler has been a useful tool for Linux and Unix admins for decades, but it just might be time to retire cron in favor of a more modern design. One replacement, still in heavy development, is whenjobs. This weekend, let's take a look at whenjobs and see what the future of scheduling on Linux might look like.


The default cron version for most Linux distributions these days is derived from Paul Vixie's cron. On Ubuntu, you'll find the Vixie/ISC version of cron. On Fedora and Red Hat releases you'll find cronie, which was forked from Vixie cron in 2007.


You can also find variants like fcron, but none of the variants around today have really advanced cron very much. You can't tell cron "when this happens, run this job": for instance, to be notified when you start to run low on disk space, or when the load on a machine rises above a certain threshold. You could write scripts that check those things and run them frequently, but it'd be better if you could just count on the scheduler to do that for you.
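The polling workaround looks something like this in practice: a small script run from cron every few minutes. The threshold and filesystem are illustrative, and a real setup would pipe the message to mail(1) rather than printing it:

```shell
#!/bin/sh
# check_space.sh: alert when free blocks on a filesystem fall below a
# threshold. Prints the alert so the sketch stays self-contained;
# in practice you'd pipe it to mail(1) from a cron entry.
fs=${1:-/}          # filesystem to check (default: /)
threshold=100000    # minimum free blocks before alerting
free=$(stat -f -c %b "$fs")
if [ "$free" -lt "$threshold" ]; then
    echo "ALERT: only $free blocks left on $fs"
fi
```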


There's also cron's less-than-friendly time format. When I worked in hosting, I found plenty of cron jobs that were either running too often because the owner mis-formatted the time fields, or not running as often as expected for the same reason. Backup jobs that should have been running daily were running once a month. Oops.
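For the record, cron's five time fields are minute, hour, day-of-month, month, and day-of-week, and the daily-versus-monthly mix-up above is a one-character mistake. The script path here is a made-up example:

```shell
# crontab fields: minute hour day-of-month month day-of-week
0 3 * * *  /usr/local/bin/backup.sh   # 3:00 AM every day
0 3 1 * *  /usr/local/bin/backup.sh   # 3:00 AM on the 1st of each month -- oops
```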


Getting whenjobs


Red Hat's Richard W.M. Jones has been working on whenjobs as "a powerful but simple cron replacement." The most recent source tarball is 0.5, so we expect that it's starting to be usable but still has some bugs. (And it does, more on that in a moment.)


A slight word of warning: you're probably going to need a lot of dependencies to build whenjobs on your own. It doesn't look to be packaged by any of the distros yet, including Fedora. It also requires a few newer packages that you're not going to find in Fedora 16. I ended up trying it out on Fedora 17 and installing a slew of OCaml packages.


To get whenjobs, you can either grab the most recent tarball or go for the most recent code out of the git repository. You can get the latest by using git clone git://git.annexia.org/git/whenjobs.git but I'd recommend going with the tarball.


Jones recommends just building whenjobs with rpmbuild -ta whenjobs-*.tar.gz. (Replace the * with the version you have, of course.) If you have all the dependencies needed, you should wind up with an RPM that you can install after it's done compiling.


Using whenjobs


To use whenjobs, you'll need to start the daemon. Note that you don't start this as root, you want to run it as your normal user.


According to the documentation, you want to start the daemon with whenjobs --daemon, but Jones tells me this isn't implemented just yet. Instead, you'll want to run /usr/sbin/whenjobsd to start the daemon. You can verify that it's running by using pgrep whenjobsd. (Eventually you'll be able to use whenjobs --daemon-status.)


To start adding jobs to your queue, use whenjobs -e. This should drop you into your jobs script and let you start adding jobs. The format is markedly different than cron's, so let's look at what we've got from the sample whenjobs scripts.

every 10 minutes :
<<
  # Get free blocks in /home
  free=`stat -f -c %b /home`
  # Set the variable 'free_space'
  whenjobs --type int --set free_space $free
>>

when changes free_space && free_space < 100000 :
<<
  mail -s "ALERT: only $free_space blocks left on /home" $LOGNAME
>>

Jobs start with a periodic statement or a when statement. If you want a job to run at selected intervals no matter what, use the every period statement. The period can be something like every day or every 2 weeks or even every 2 millennia. I think Jones is being a wee bit optimistic with that one, but on the plus side – if whenjobs is still in use 1,000 years from now and it doesn't run your job, Jones probably won't have to deal with the bug report...


Otherwise, you can use the when statement to evaluate an expression, and then perform a job if the statement is true. See the man page online for all of the possible when statements that are supported.


As you can see, you can also set variables for whenjobs. It accepts several types of variables, such as ints, strings, booleans, and so forth. You can see and set variables using whenjobs --variables or whenjobs --set variable.
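Using the same commands shown in the sample script, setting and then inspecting a variable from the shell looks like this (the variable name is just the one from the example above):

```shell
# Set an integer variable; the daemon re-evaluates 'when' expressions
# whenever a variable changes.
whenjobs --type int --set free_space 50000

# List all variables and their current values.
whenjobs --variables
```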


The script for the job is placed between brackets (<< >>). Here you can use normal shell scripts. The scripts are evaluated with the $SHELL variable or /bin/sh if that's not set. Presumably you could use tcsh or zsh instead if you prefer those.


If you want to see the jobs script without having to pop it open for editing, use whenjobs -l. You can use whenjobs --jobs to see jobs that are actually running. If you need to cancel one, use whenjobs --cancel serial where the serial number is the one given by the whenjobs --jobs command.


A cron Replacement?


I don't think whenjobs is quite ready to be rolled out as a cron replacement just yet, but it's definitely worth taking a look at if you've ever felt frustrated with cron's limitations. It will probably be a while before whenjobs starts making its way into the major distros, but if you're feeling a little adventurous, why not start working with it this weekend.



Weekend Project: Take a Look at Wine 1.4


The Wine project has released stable version 1.4 of its Windows-compatibility service for Linux (and other non-Microsoft OSes), the culmination of 20 months' worth of development. The new release adds a host of new features, including new graphics, video, and audio subsystems, tighter integration with existing Linux, and improvements to 3D, font support, and scripting languages.


In the old days, WINE was capitalized as an homage to its ultimate goal: Wine Is Not an Emulator. Rather, it is an application framework that provides the Windows APIs on other OSes — primarily Linux and Mac OS X, but others, too (including, if you so desire, other versions of Windows). The point is that you can run applications compiled for Windows directly in another OS. Unlike virtual machine solutions, you do not also have to purchase a copy of Windows to do so, and the lower overhead of a VM-free environment gives better performance, too.
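Running a Windows binary really is as direct as that description suggests; the executable name here is just a placeholder:

```shell
# Run a Windows executable directly under Wine.
wine setup.exe

# Wine keeps its virtual Windows drive under ~/.wine by default;
# winecfg opens the configuration dialog.
winecfg
```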


Version 1.4 was released on March 7, about 20 months after the previous major update, Wine 1.2. Microsoft did not roll out a new version of Windows in the interim, so understandably the Wine project has had time to undertake plenty of changes of its own.


New: Multimedia, Input, and System Integration


The biggest change is a true "device independent bitmaps" (DIB) graphics engine. This engine performs software graphics rendering for the Windows Graphics Device Interface (GDI) API, which Wine has never supported before. Note, however, that Windows offers lots of different graphics APIs. Wine already had support for the better-known Direct3D and DirectDraw APIs, which also received a speedup. Still, the result of the new engine is considerably faster rendering for a variety of applications. The DIB code was actually developed by Transgaming, one of the commercial vendors that makes use of Wine in its own product line.


Windows also offers multiple audio APIs, the newest of which is MMDevAPI, which was introduced in Windows Vista. Wine 1.4 sports a rewritten audio architecture that implements this new API, and supports several audio back-ends: ALSA, CoreAudio, and OSS 4. Support for older back-ends like OSS 3 was dropped, as was support for AudioIO and JACK. PulseAudio, which is now the default in many desktop Linux systems, is supported through its existing ALSA compatibility. The project would like to restore JACK functionality, but it needs developers to help do so; similarly there is a side project to write direct support for PulseAudio, but it has yet to make it into the mainline code base.


Wine also uses the GStreamer multimedia engine for audio and video decoding, which gives Wine apps automatic support for every file format known to GStreamer. There are X improvements in 1.4 as well, most importantly support for XInput 2. XI2 should fix cursor-movement problems with full-screen Windows programs, as well as support animated cursors and mapping joystick actions. Finally, the new release uses XRender to speed up rendering gradients.


The new release also better integrates the Windows "system tray" (which many applications running under Wine will expect to exist) with Linux system trays, and even supports pop-up notifications. Wine also has support for applications that use several additional programming languages (including VBScript and JavaScript bytecode), plus support for both reading and creating .CAB "cabinet" archives and .MSI installers. There is also a built-in Gecko-based HTML rendering engine (which should make HTML-and-CSS-based programs more reliable than the Internet Explorer-based renderer found in Windows...).


Enhanced: 3D, Text, Printing, and Installing


As mentioned above, Wine's support for Direct3D and DirectDraw both received enhancement for 1.4. OpenGL is used as the back-end for both; a more detailed look at the capabilities and benchmarks can be found on Phoronix.


The font subsystem received a substantial make-over for 1.4, too. The big news is full support for bi-directional text (including shifting the location of menus, dialog box contents, and other UI elements when using right-to-left text), and full support for vertical text (such as Japanese). Rotated text is also supported, and the rendering engine is improved.


The project claims full support for Unicode 6.0's writing systems, although that definitely does not mean that every language in Unicode can be used in Wine. However, there have been significant improvements to the localization effort — more locales are supported, and every UI resource (from strings to dialogs to menu entries) is now in a gettext .po file. So if your language is not supported yet, now is the time to get busy translating!


Printing improvements include directly accessing CUPS as the printing system (previous versions of Wine piped print jobs through lpr) and a revamped PostScript interpreter. The software installer has also been beefed up; it can now install patch-sets (which are very important for applying security updates) and roll-back previous installations. Considering that most Wine users deploy the framework for a smallish set of specific applications, this installer functionality is very handy.


Finally, the new release adds some "experimental" features that not everyone will find to be their cup of tea, but show promise in the long run. First, there is support for some new C++ classes from Microsoft, as well as expanded XML support, and Wine's first implementation of OpenCL (which allows developers to write code that runs on both CPUs and GPUs). Wine also compiles on the ARM architecture for the first time. Windows is not a major player on ARM, but the tablet version of Windows 8 is poised to make a push on ARM this year, so having Wine ready is an important step for the project.


Application Support, Caveats, and Where We Go Next


Naturally enough, Wine for ARM is not as thoroughly tested as Wine on x86. But the new release does still support many more applications than did Wine 1.2. You can find the complete list on the Wine app database — just be aware that individual user reports are the source of most of the compatibility information. It is not as simple as running a static test to see whether or not application X is supported. That said, one of the major bullet points of Wine 1.4 is robust support for Microsoft Office 2010, so if you run Wine to ensure document compatibility with Windows-based friends, you are in good hands.


There are two caveats to be noted with this release. First, although ALSA-over-PulseAudio is a supported audio back-end (and indeed is probably the most common configuration), you should check your version numbers. Some users have reported iffy sound behavior when running pre-1.0 PulseAudio and pre-1.0.25 alsa-plugins.
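If you are unsure whether your setup falls in the reported range, you can compare version strings from the shell. This is a generic sketch, not an official Wine check; the version numbers are the ones from the user reports above, and `ver_ge` relies on GNU `sort -V` for version-aware comparison:

```shell
# Succeeds (exit 0) when version $1 >= version $2, comparing with sort -V.
ver_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Warn about the reportedly problematic pre-1.0 PulseAudio range.
check_pulse() {
    if ver_ge "$1" "1.0"; then
        echo "PulseAudio $1: ok"
    else
        echo "PulseAudio $1: in the reported-iffy range, consider upgrading"
    fi
}

# Example with a literal version string:
check_pulse "0.9.23"
```

In practice you would feed it your real version, e.g. `check_pulse "$(pulseaudio --version | awk '{print $2}')"`, and do the same comparison against 1.0.25 for alsa-plugins.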


Second, although standard hardware devices like keyboards, mice, and external peripherals work fine, Wine still does not have support for installing and using Windows USB drivers. This only applies to unusual hardware that requires a separately-installed Windows driver, so it probably does not affect you, but be forewarned. On the other hand, if you have a weird barcode scanner or infrared USB laser that you must use with Windows drivers, there is an external patch set available.


On the same front, if the application you need Wine support for is causing problems, there are some external tools available that help you locate and install add-ons to round out the experience. Many of these add-on options are not free software, so they cannot be shipped with Wine itself, but if you are in dire straits, you might want to check out PlayOnLinux and WineTricks.


Interestingly enough, openSUSE community manager Jos Poortvliet just published a detailed report on using Wine for gaming, in which he discusses both add-on tools. Primarily, however, the emphasis is on how well Wine works on a modern Linux distribution, and the verdict is pretty good. Considering that 27 of the top 30 entries in the Wine compatibility matrix are games, providing a good gaming experience is critical, though plenty of people use Wine for other tasks too, if you know where to look.



Greg KH Readies for Collaboration Summit, Talks Raspberry Pi


Linux kernel maintainer and Linux Foundation Fellow Greg Kroah-Hartman will be moderating the highly-anticipated Linux kernel panel at the Collaboration Summit in a couple short weeks. He was generous enough to take a few moments recently to answer some questions about what we might hear from the Linux kernel panel, as well as some details on his recent work and projects. Oh, and we couldn't resist asking him about the new Raspberry Pi.


You will be moderating the Linux Kernel panel at the upcoming Linux Foundation Collaboration Summit. These are big attractions for attendees. What do you anticipate will be on the kernel panel's mind during that first week in April?


Kroah-Hartman: Odds are we will all be relaxing after the big merge window for the 3.4-rc1 kernel. Also, the Filesystem and Memory management meetings will have just happened, so lots of good ideas will have come out of that.


This panel moderation role comes after two Q&A-style keynote sessions with Linus last year to celebrate 20 years of Linux. How does moderating a panel of developers differ from interviewing Linus on stage?


Kroah-Hartman: I will need to bring more than just one bottle of whisky :)


Seriously, it's much the same, but instead of just one person answering questions, there are 3 different viewpoints being offered, which can result in the conversation leading places you never expect. An example of this would be the kernel panel that happened last year at LinuxCon Japan, where the developers on stage got into a big technical argument with the kernel developers in the audience, much to the amusement of the rest of the audience. If done well, it can show the range of ideas the kernel developer community has, and how, while we don't always agree with each other, we work together to create something that works well for everyone.


You recently released Linux kernel 2.6.32.58 but cautioned that you would no longer be maintaining version 2.6.32 and recommended folks switch to Linux 3.0. Is there anything else you'd like to say about people moving to Linux 3.0?


Kroah-Hartman: For a longer discussion of the history of the 2.6.32 kernel, please see the article I posted recently. Almost no end users build their own kernel, so they don't need to know the differences here; their distro handles this for them automatically. Technical users who do build their own kernels should have no problem at all moving to the 3.0 kernel release. If you do hit a problem, please report it to the kernel developers on the linux-kernel mailing list and we will be glad to work through it with you.


Can you give us some updates on the Device Driver Project and/or LTSI?


Kroah-Hartman: There's nothing new going on with the Device Driver project other than that we are continuing to create drivers for companies that ask for them. I know of at least two new drivers going into the 3.4 kernel release that came from this process, and if any company has a need for a Linux driver, they should contact us to make this happen.


LTSI is continuing forward as well. Our kernel tree is public, and starting to receive submissions for areas that users are asking for. I've been working with a number of different companies and groups after meeting with them at ELC 2012 to refine how LTSI can best work for their users. There will be a report at LinuxCon Japan 2012 in June about what is happening with LTSI since the last public report at ELC.


Have you seen the Raspberry Pi? Sold out in a day. Any chance you've gotten your hands on one? If so, what's your reaction?


Kroah-Hartman: I have not seen one in person, but will be trying to get one (I signed up for one as soon as it went on sale, but was too late.) It looks like a great project, much like the BeagleBone and Pandaboard, both of which I have here and use for kernel testing. Hopefully the Raspberry Pi developers can get their kernel patches into the mainline kernel.org release soon, so that it is easier for users to take advantage of their hardware.



How to Kickstart an Open Source Music Revolution with CASH Music

On February 10, 2012, CASH Music launched a Kickstarter campaign and raised more than 70% of their $30,000 goal in about 24 hours. What is CASH Music? And why does it already have vocal support from musicians, Firefox, and even Neil Gaiman? Jesse von Doom, Co-Executive Director of CASH Music, explains the inspiration behind the project and the big role Linux plays in it.


Linux.com: What inspired you to start CASH Music?

Jesse von Doom: The real inspiration is necessity. It started six years ago as a conversation about sustainability between musicians Kristin Hersh and Donita Sparks. After some exploration and thinking, we realized what the music world needed wasn't another tech startup, but an open foundation on which to build. In came additional code that had been written for (now co-Executive Director) Maggie Vail while she was VP at Kill Rock Stars, and we began the process of forming a proper nonprofit and adding to the code.

Since then, we've been building project after project to spec with artists, managers, publicists, and labels, learning what was needed and how to shape it. In time, we separated the platform itself from the functional elements built on top of it and a year ago we started putting our heads down in earnest. Now we've got a platform that tries to model the needs of an artist as they've described them to us, and allow those pieces to come together as easily embedded elements that can slot into pure PHP or any PHP-based CMS. The end goal is to make embedding rich functionality as easy as embedding a YouTube clip.

Linux.com: Your About page says "What CASH is ultimately trying to do is give musicians support in an environment that often feels alienating and hostile." How are you offering support for musicians?

Jesse von Doom: We're building a free and open-source platform for musicians to use in distributing, promoting, and selling their music. As the Internet becomes the backbone of the music economy, it's important that all artists have access to basic tools and resources, and that's what we're hoping to accomplish.

Our initial releases of an installable version of the platform contain things like email collection, downloads, fan club tools, and tour date display. We'll soon be releasing direct digital commerce, easy playlist embedding and management, and more social and media tools. In the future we also plan to spend significant time with education and outreach so we not only offer tools, but help using them.

Linux.com: How is CASH Music different than other music services or projects? Is there anything similar out there?

Jesse von Doom: The most obvious difference is our nonprofit/open-source structure. Too many musicians have been burned by Web sites shutting down or having their hard work trapped in a proprietary box. One of our board members, a musician named Jonathan Coulton, likes to put it in context of all the lost time, energy, and even money spent on MySpace pages that are now basically useless.

I also think that our slow-cooking approach is paying off. By getting the help of artists in figuring out the shape of this thing we're building, it's helped us make major improvements to the platform itself. We certainly didn't take a minimum viable product approach, and while that makes sense most of the time, I think going slow and communicating was vital to what we've done.

Really our biggest job is facilitating communication between musicians and developers, so building something both groups can understand gives common ground. My hope is that ground means more musicians learning about web making and more developers learning about how hard musicians need to work and where that work turns to income.

As a nonprofit, we can foster collaboration over competition, meaning artists don't have to choose our platform over another service; we can just integrate with whatever's best for artists.

Also we use more pictures of dogs and rainbows than any other service.

Linux.com: You raised $30,000 on Kickstarter within 72 hours. How will the funds be used to help the project?

Jesse von Doom: It's a pretty unexciting list. $30,000 is close to the total amount of funding we've seen in the last five years, but even if we raise five times that number, we're still working with a budget that's a fraction of most startups. All of the Kickstarter money will be going toward building the hosted version of the platform – 100% API and data compatible with the current distributed version, but without any of the setup. The goal is full data portability from hosted to distributed, so artists can start easily and move their data to their own sites, or even a different service built on top of our own hosted images.

We'll put some of the money toward outside help where we can't find volunteers, whether that's documentation or carving out a specific subset of development, and toward supporting focused hack events where developers and musicians work together. We're looking for a hosting partner willing to give us a nonprofit discount, but we definitely need to cover initial hosting costs and the other usual costs of running a site. Basically, this is giving us room to do a major push toward that free hosted service that we see as a huge part of the answer for musicians. Any additional funds will be put directly back into that effort, so we plan on pushing hard to get that Kickstarter number as high as we can.

Linux.com: What's your road map and timeline look like right now?

Jesse von Doom: We've already released an initial distributed version of the platform. It's an early version 1, but we're in a rapid release cycle and quickly smoothing out the rough edges. It's a challenge because we're using as close to bare bones PHP 5.2 as we can, relying only on PDO and mod_rewrite for some of the trickier things we're doing. We're not using any other external dependencies, which has forced structure and interesting solutions to challenges.

The idea was to support low-end hosts where most artists host their sites. We built a single-file installer that pulls down files from a GitHub repo. We install to a SQLite database then migrate that to MySQL when the user is ready. Basically, everything we can do to make the experience easier and less daunting we've done, and every release has to get easier or it's not ready to ship.
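The SQLite-to-MySQL migration step they describe is a common pattern. One rough way to do it — and this is an illustrative sketch, not CASH Music's actual code — is to rewrite a `sqlite3 .dump` stream into MySQL-compatible SQL before loading it:

```shell
# Rewrite a SQLite .dump stream so MySQL will accept it. A rough sketch:
# drops SQLite-only statements and translates the AUTOINCREMENT keyword.
sqlite_dump_to_mysql() {
    sed -e '/^PRAGMA/d' \
        -e '/^BEGIN TRANSACTION;/d' \
        -e '/^COMMIT;/d' \
        -e 's/AUTOINCREMENT/AUTO_INCREMENT/g'
}
```

You would then pipe the result into MySQL, e.g. `sqlite3 app.db .dump | sqlite_dump_to_mysql | mysql appdb`. A real migration needs more care with quoting, types, and indexes than this sketch shows.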

Currently we support email capture, email for download, tour date listings and archives, Tumblr and Twitter integrations, and some early fan club pieces. Right around the corner is digital commerce and playlists.

We're still shuffling the roadmap to get to hosted as quickly as possible, but that will be done soon and include things like private streaming, enhanced fan club elements, major improvements to the admin, more supported service integrations, and more.

Linux.com: What are the technical challenges going on under the hood of CASH Music?

Jesse von Doom: Building a distributed version of the platform first was a crazy-person idea, but very necessary to communicate the ideals of OSS to a non-technical user base. There's nothing like that first time someone changes a piece of code and suddenly feels ownership of it; we're trying to instill that same feeling of ownership in musicians.

Personally, I'm actually looking forward to the challenges of a large-scale, centrally hosted project... it means the environment will be a known commodity. Right now, we're trying to support just about any LAMP configuration under the sun. The host that still turns off fopen wrappers? Yup, we work around that. No access above the web root? Time zone not set? You get the picture.

So we've worked hard to set up patterns that won't interfere with other scripts as much as possible. We didn't build a traditional CMS, rather an engine for embedding pieces of content with a single line.

The minimalist approach to not using extensions certainly makes this a challenge, but Duke [Jonathan “Duke” Leto, Developer] set up a robust test suite that we've been filling with unit tests to up coverage, we've been running our installer and the platform in different environments and on different hosts, and testing in as many situations as you can imagine. It's the opposite of most devops challenges: We only know the code, not the environment it needs to run in.

Linux.com: What else can members of the open source community do to help you with the project?

Jesse von Doom: Like every project, we're looking for good developers. But we're also keenly aware of the non-development help we need. The better our docs are, the easier it will be for new developers to engage. And we're passionate about enabling even the least technical musicians to get comfortable on the Web, so that means walk-throughs, careful copywriting, and better support.

We set up these pages for specifics: how you can help development, and how you can help non-development.

Linux.com: Thanks much for your time, and good luck with the project! Note that if you'd like to support the project, the Kickstarter drive continues through March 9th. Kickstarter pledges start at $5, with tiers up to $5,000 or more.


Thunderbird 11 Released: Lots of Fixes, Frustratingly Few New Features

Though it gets much less attention than its browser sibling, Thunderbird is still plugging away. Since it's on the rapid release cycle, it's also pushing out releases pretty regularly, with fewer new features per release. With Thunderbird 11, it's a very short list of new goodies but a long list of fixes.

Check out the buglist for Thunderbird 11 and it's clear that the Thunderbird team has been busy. The list of new features in 11, though, is very short.

With 11, Thunderbird is using a new version of the Gecko layout engine. This should provide better and faster rendering of Web content.

The other big new feature is a change in the way tabs are displayed. In previous releases of Thunderbird, the menu toolbar with buttons for "Get Mail," "Write," and "Address Book" was displayed above the tabs. In Thunderbird 11, that's reversed: the tabs now sit on top, above the toolbar, much like Firefox.

This is a modest improvement over the old UI, but I'm still not convinced that Thunderbird has hit on the right design yet. Putting the "Reply," "Forward," and other buttons just above the message display rather than at the top of the Thunderbird UI has never really worked well for me. I wind up customizing the toolbar to add them up top, and removing them from the bottom toolbar.

If you've not been keeping up with Thunderbird, the 10.0 release, out on January 31st of this year, added the ability to search the Web from the toolbar and improved email search.

The Thunderbird 9.0 release, on December 20th, added Gecko 9, a feature for sending performance and usability data back to Mozilla, and some minor features related to Personas.

Thunderbird 8.0 was a similarly lackluster release with a handful of shortcuts added, some UI changes, and a switch to Gecko 8.

More Please

I'm glad that Mozilla is still working on Thunderbird, but it really doesn't seem to be getting very much love – especially relative to the work that's going on with Firefox.

The change to rapid releases is welcome, but it's been a very long time since Thunderbird really had any substantial improvements.

Thunderbird's IMAP performance, for example, could use a bit of improvement. I find that it's pretty laggy in accessing or working with IMAP folders of any size. I also get intermittent errors accessing my Gmail account, where Thunderbird reports a wrong password. Unfortunately I can't tell if this is an error with Thunderbird or Gmail. As soon as I can reproduce it again, though, I plan to file a bug report.

Sending mail is also a bit sluggish. This may or may not be Thunderbird's fault, but they could background the window when sending so the user isn't staring at a dialog for sending mail for so long.

Overall, the Thunderbird 11 release is OK, but lackluster. I keep hoping that Mozilla will put a little more oomph into Thunderbird development, but as things stand it doesn't look promising.



What's New in Linux 3.3?

Sunday, Linus Torvalds released the 3.3 Linux kernel. In the latest installment of the continuing saga of kernel development, we've got more progress towards Android in the kernel, EFI boot support, Open vSwitch, and improvements that should help with the problem of Bufferbloat.

Is it just me, or is it still a little weird to be talking about 3.x kernels? It's been about eight months since the official bump to 3.0, but that's compared to more than seven years with the 2.6.x series.

At any rate, here we are. Let's take a look at some of the changes in Linux 3.3!

Everybody was Bufferbloat Fighting!

The Android patches are likely to get the most attention in 3.3, but the thing that I'm most excited by? More work going on to solve the Bufferbloat problem.

In a nutshell, Bufferbloat is a symptom of a lot of small problems that together create "a huge drag on Internet performance," caused, ironically, by previous attempts to make it work better. Or, as the one-sentence summary puts it: "bloated buffers lead to network-crippling latency spikes."

It's not a problem that's going to be solved all in one go, or in one area. But the Linux kernel is one of the pieces that needs addressing. In the 3.3 release, we've got the ability to set byte queue limits in the kernel.
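Byte queue limits are exposed per TX queue through sysfs, so you can inspect them without any special tools. Here's a small sketch; on drivers without BQL support, the `byte_queue_limits` directories simply won't exist, so the loop just falls through:

```shell
# Print the current BQL limit for every TX queue that exposes one.
bql_scan() {
    for f in /sys/class/net/*/queues/tx-*/byte_queue_limits/limit; do
        [ -r "$f" ] || continue    # skip unmatched glob / unreadable entries
        printf '%s: %s bytes\n' "$f" "$(cat "$f")"
    done
    echo "BQL scan complete"
}

bql_scan
```

Writing a value to the neighboring `limit_max` file (as root) caps how many bytes the driver may queue on that TX ring, which is exactly the knob that fights Bufferbloat.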

Driver Goodies

Check out the list of drivers that have made it out of staging. Notably, the gma500 driver has graduated, which means the infamous Poulsbo chipset is finally supported in the mainline kernel.

This release also includes the NVM Express (NVMe) driver, which supports solid state disks attached to the PCI Express bus. Most SSDs today are SATA, Fibre Channel, or SAS drives. The work was done by Intel's Matthew Wilcox, which isn't surprising, since Intel is among the companies backing the NVM Express standard. Would love to get my hands on one of these drives to test the 3.3 kernel out...

Want to tether your Linux box to your brand new iPhone? The iPhone USB Ethernet driver (ipheth) module has been updated to add support for the iPhone 4S.

The 3.3 kernel also picks up some drivers for third generation Wacom Bamboo tablets and Cintiq 24HD, and initial driver support for the Intuos4.

Open vSwitch

Another biggie in 3.3? The Open vSwitch project is merging into the kernel tree. It's not new – it's been around for some time – but it's finally making its way into the mainline kernel. (This seems to be a frequent theme, doesn't it?)

Basically, Open vSwitch is a virtual switch for complex virtualized server deployments. Given the ever-growing popularity of virtualized servers and cloud deployments, this is something in high demand. As the Open vSwitch page says, "Open vSwitch can operate both as a soft switch running within the hypervisor, and as the control stack for switching silicon. It has been ported to multiple virtualization platforms and switching chipsets. It is the default switch in XenServer 6.0, the Xen Cloud Platform and also supports Xen, KVM, Proxmox VE and VirtualBox. It has also been integrated into many virtual management systems including OpenStack, openQRM, and OpenNebula."
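To give a flavor of what administering it looks like once the kernel side is in place, here is a hypothetical bridge setup using the standard `ovs-vsctl` tool. Treat this as a configuration sketch rather than a copy-paste recipe: it assumes the openvswitch userspace is installed, requires root, and the interface names are made up:

```shell
# Create a virtual switch and attach a physical NIC plus a guest's tap device.
ovs-vsctl add-br br0           # create a new bridge (the "switch")
ovs-vsctl add-port br0 eth0    # uplink to the physical network
ovs-vsctl add-port br0 tap0    # a virtual machine's interface
ovs-vsctl show                 # print the resulting configuration
```

The interesting part, compared to the classic Linux bridge, is everything you can layer on afterward: VLANs, flow rules, and remote management via OpenFlow.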

No doubt, you'll be reading more about Open vSwitch on Linux.com in the near future.

Android Comes Closer

Last, but not least, the 3.3 kernel includes nearly complete support for Android. This is good news all around, but isn't really a surprise. The kernel folks have been working on this for a long time.

Now the question is, will we start seeing Android apps on top of normal distributions? Will we start seeing standard Linux apps running on Android? Will mod communities, like CyanogenMod, start using the mainline kernel? Should be an interesting year. Then again, when isn't it an interesting year when Linux is involved?

As usual, the release includes lots more fixes, new drivers, and so forth. Check out the Kernel Newbies page for more. The merge window for 3.4 is now open, with the traditional two-week cutoff for pull requests. Looking forward to what 3.4 brings!

