
Tuesday, March 27, 2012

Disorganized? Get Tracks on Linux


Feeling a bit disorganized? Looking to take control of your projects? Take a look at Tracks, an open source Web-based application built with Ruby on Rails. Tracks can help you get organized and start Getting Things Done (GTD) in no time.


Getting Tracks


You can get Tracks in a couple of ways. Some distributions may offer Tracks packages, and the Tracks project has downloads you can use to set it up.


The easiest way to get Tracks, though, is the BitNami Tracks stack which includes all the dependencies and just takes a few minutes to download and set up. No need to try to set up a database, Ruby on Rails, or anything. (If you're already using Ruby on Rails for something else, you might want to go for the project downloads, of course.)


I'm running the BitNami stack on one of my Linux systems on my home intranet. BitNami provides installers for Linux, Windows, and Mac OS X. They also provide pre-built VMs with Tracks pre-configured, so you could fire it up in VirtualBox or VMware if you prefer.


What Tracks Does


Tracks is good for single-user project management. It's based around the GTD methodology, and helps users organize their to-do list using GTD.


You can use Tracks without employing GTD, but Tracks is an "opinionated" tool. That is, it's structured around GTD, so it has some artifacts that might not fit into other methodologies quite as well.


Like any other to-do list or organizer, Tracks lets you set up actions. These have a description, notes, tags, due dates, and dependencies.


Dependencies are actions that have to be done before you can complete the current action. So, for instance, if you have a to-do item to deploy Tracks, it might be dependent on installing Ruby on Rails first.
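Conceptually, resolving dependencies like this is a topological ordering over the action graph. Here is a minimal sketch of the idea in Python (the action names are invented for illustration; Tracks itself is a Rails application, and this is not its implementation):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical Tracks-style actions, each mapped to the actions it depends on.
actions = {
    "deploy Tracks": {"install Ruby on Rails"},
    "install Ruby on Rails": {"set up the server"},
    "set up the server": set(),
}

# static_order() yields the actions in an order that satisfies every
# dependency, so each action appears only after the things it depends on.
order = list(TopologicalSorter(actions).static_order())
print(order)
```

With the chain above, "set up the server" comes out first and "deploy Tracks" last, which is exactly the order you'd have to work them in.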


Tracks also has contexts and projects. Projects are just what they sound like. For example, I use separate projects for each site that I write for (Linux.com, ReadWriteWeb, etc.), and also for home, hobbies, etc.


Contexts, on the other hand, are sort of nebulous but can be best defined as groups of actions that can be performed at the same time. You might have a "phone" context for all of the phone calls you need to make, regardless of whether you're going to be making calls for work or personal reasons. So if you're already making a few phone calls for one project, you might decide to just get all of your phone calls out of the way rather than switching context and being less effective.


Likewise, you might have a "system administration" context or "errands" context. If you're already logged into a client's server, you might want to take care of all the system administration tasks in one sitting rather than switching back and forth between phone calls and system administration. If you're going to run an errand to buy a new backup drive for a desktop system, you might also go ahead and do grocery shopping while you're out instead of making two trips.


There's also a special tickler context that allows you to throw in action items that don't really relate to other contexts. This is good for items that don't have specific due dates, or might be "things I kind of want to do, but have no specific plans for yet." I use the "tickler" context for article/post ideas that I don't have scheduled or assigned anywhere, as well as for things I want to do someday but aren't on the immediate horizon. Tickler items can eventually be put into other contexts as they become more important, or you might (yeah, right) finish all your other work and decide to tackle tickler items.


The Tracks Web-based interface is pretty straightforward. You have a Home page with each context and its action items. On the right-hand side you have a form for adding actions.


One thing you'll like right away is that Tracks will auto-complete actions, contexts and so forth that you already have in the system. If you have a "writing" context, for example, you just have to type a few letters and it'll offer to auto-complete it for you.
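Behind the scenes, this kind of completion boils down to simple prefix matching against the names already in the system. A toy sketch in Python (the context names are made up, and this is only an illustration of the idea, not how Tracks implements it):

```python
def complete(prefix, names):
    """Return the stored names that start with the typed prefix (case-insensitive)."""
    p = prefix.lower()
    return [n for n in names if n.lower().startswith(p)]

# Hypothetical contexts already saved in the system.
contexts = ["writing", "phone", "errands", "system administration"]

print(complete("wr", contexts))  # matches "writing"
print(complete("s", contexts))   # matches "system administration"
```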


It also will create contexts and projects for you automatically the first time you use them, rather than requiring you to create them first and then use them.


Tracks is easy to get started with and use. It's not entirely perfect, but it's darn good if you want a single-user organization system. Give it a shot and see if it works for you!


This post was made using the Auto Blogging Software from WebMagnates.org This line will not appear when posts are made after activating the software to full version.

Bodhi Linux, the Beautiful Configurable Lightweight Linux


Bodhi Linux is gorgeous, functional, and very customizable. It just so happens that's what grumpy old Linux nerds like me think Linux is always supposed to be. Let's take this Linux newcomer for a spin and learn what sets it apart from the zillions of other Linux distributions.


Like Sand Through the Hourglass


Bodhi Linux is a fast-growing newcomer to the Linux distro scene. The first release was at the end of 2010, and it has attracted users and contributors at a fast pace. Do we need yet another Linux distro? Yes we do. KDE and GNOME both leaped off the deep end and left users in the lurch. KDE4 has matured and is all full of functionality and prettiness, but it's a heavyweight, and GNOME 3 is a radical change from GNOME 2, though considerably easier on system resources than KDE4. And there are all the other good choices for graphical environments such as Xfce, LXDE, Fluxbox (to me, KDE4 is Fluxbox with bales of special effects, as they share the same basic concepts for organizing workflow and desktop functionality), IceWM, Rox, AfterStep, Ratpoison, FVWM, and many more. So what value does Bodhi add? Four words: Enlightenment, minimalism, and user choice.


Enlightenment


The first release of the Enlightenment window manager was way back in the last millennium, in 1997. The current version, E17, has been in development since 2000, which has to be a record. I predict there will never be a final release because that would spoil its legendary status as the oldest beta.


I've always thought of Enlightenment as a flexible, beautiful, lightweight window manager for developers, because my best experiences with it were when it came all nicely set up in distros like Elive, PCLinuxOS, Yellow Dog, and MoonOS. When I tried installing it myself I got lost, which is probably some deficiency on my part. Enlightenment is a wonderful window manager that can run under big desktops like KDE, and it can run standalone. It runs on multiple operating systems and multiple hardware platforms, and it supports fancy special effects on low-powered systems. Bodhi Linux makes good use of Enlightenment's many excellent abilities.


Bodhi


Bodhi Linux is the creation of Jeff Hoogland, and is now supported by a team of 35+ people. System requirements are absurdly low: a 300MHz i386 CPU, 128MB RAM, and 1.5GB of hard drive space. The minimalist approach extends to installing with a small complement of applications. I suppose some folks might prefer having six of everything to play with, but I've always liked installing what I want, instead of removing a bunch of apps I don't want. There are maybe a dozen applications I use on a regular basis, and a set of perhaps 20-some that I use less often. I don't need a giant heavyweight environment just to launch the same old programs every day. So Bodhi's minimalist approach is appealing.


Bodhi is based on the Ubuntu long-term support releases, and is on a rolling release schedule in between major releases. When the next major release comes out users will probably have to reinstall from scratch, but the goal is for Bodhi to become a true rolling-release distribution that never needs reinstallation.


Killer Feature: Profiles


One particular feature I find brilliant in Bodhi is profiles. Profiles are an Enlightenment feature, and the Bodhi team created their own custom set. First you choose from the prefab profiles: bare, compositing, desktop, fancy, laptop/netbook, tablet, and tiling. The laptop/netbook profile looks great on my Thinkpad SL410.


The fancy profile greets you with a dozen virtual desktops and a shower of penguins, some of whom sadly meet their demise (Figure 1). Then you can customize any of the profiles any way you like with different themes, Gadgets, whatever you want, and quickly switch between them without logging out. So you could have work and home profiles, single- and multi-monitor profiles, or a travel profile.


No Disruptions


Given all the uproar over KDE4 and GNOME, I asked Jeff Hoogland about the future of Bodhi. He explained that disruptive change is not in the Bodhi roadmap:


"The reason for this is the way in which we utilize E17's "profiles". We recognise that a singular desktop setup is not going to satisfy all users and is far from being suitable for all types of devices. In other words if the Bodhi team sees the need to develop an alternative desktop setup for some reason, it would simply be offered in addition to our current profile selections – not replacing them. User choice is one of our mottoes."


Like KDE4 and Fluxbox, Bodhi supports desktop Gadgets (widgets in KDE4) for displaying things like weather forecast, clock, desktop pager, hardware and system monitors, and various controls. It has its own compositing manager, Ecomorph, which is a port of Compiz. This is an installable option and not included by default because it has problems on some hardware. But if it works on your system it's nice because it doesn't need a mega-super-duper CPU to support a trainload of special effects.


Bodhi comes with the Ubuntu software repositories enabled by default, plus the Bodhi repos. You can manage software with apt-get, Synaptic, or the Bodhi Linux AppCenter for installing apps from a Web page. This requires either the default Midori Web browser or Firefox, because they support the apt:url protocol. The AppCenter has package groups like the Nikhila Application Set, which has one of everything: word processor, audio player, movie editor, and several more. You can get the Bodhi Audio Pack, the Bodhi Image Pack, Bodhi Scientific Publishing, and several more. You're not stuck with the packs, but can install any of the individual applications. Applications are also sorted by category, such as Image Editing, Office Suite, Communication, and such.


Enlightenment has a bit of a learning curve, but the Bodhi folks have written a good Enlightenment Guide, and a lot of other useful documentation. I always like to ask distro maintainers why they lost their minds and decided to create their own Linux distributions. They're not the result of magic, but forethought and planning:


"While the idea to start up a minimalistic, Enlightenment based Ubuntu distro was originally my own I recruited team members before we even released our first disc. We started off with myself, Jason Peel, and Ken LaBuda. Today we have nearly forty people who contribute code and/or documentation to Bodhi, not to mention the countless people who have donated to keep our servers running! I have a rough release schedule posted here — and while that outline is flexible, I would bet it will be fairly close to reality."


Mobile Bodhi


Enlightenment seems like a natural fit for mobile devices, and Mr. Hoogland has plans in that direction as well:


"I would love to get Bodhi working on mobile devices eventually. We have a functional ARM branch that I have successfully booted on the Genesi Smartbook, HP Touchpad, Nokia N900 and ArchOS Gen8 devices to name a few. Sadly though other than the Genesi all these devices lack some functionality due to closed source hardware. We are simply ready and waiting to get our ARM branch with our tablet profile working on a truly open mobile device (if there are any companies out there interested in producing such a device they shouldn't hesitate to contact me)!"


Visit Bodhilinux.com for good documentation, forums, and downloads.



Scientific Linux, the Great Distro With the Wrong Name


Scientific Linux is an unknown gem, one of the best Red Hat Enterprise Linux clones. The name works against it because it's not for scientists; rather it's maintained by science organizations. Let's kick the tires on the latest release and see what makes it special.


Red Hat


Red Hat is one of the oldest Linux distributions, and has long been a fundamental contributor by funding development and making deep inroads into the enterprise. Red Hat is a billion-dollar company (they're expected to make an official announcement at the end of this month) and they did it without tricky licensing that locks up their best products behind proprietary licenses. This graph on Wikipedia shows the reach and influence of Red Hat Linux. (Your browser should have a zoom control to enlarge the image.)


And they did it while competing with high-quality free-of-cost distros, and giving away their own products. Anyone can have Red Hat Enterprise Linux for free by downloading and compiling the source RPMs. Which is harder than downloading an ISO image and burning a CD, but not a whole lot harder. My fellow geezers might remember the wails of protest when Red Hat discontinued the free ISOs, all those non-paying customers who vowed to never be non-paying Red Hat customers again.


Clone Stampede


But these things have a way of working out, and it didn't take long for free RHEL ISOs to appear again. Only they weren't released and supported by Red Hat, but by clone distros like CentOS, White Box, Pie Box, CAOS, Yellow Dog, and ClearOS. Clones come, and clones go, and CentOS has been the most popular RHEL clone for several years. I think Scientific Linux would be as popular as CentOS if it had a different name, because most people assume it is aimed at scientists, and is all full of scientific applications. But it's not; it is called Scientific Linux because it is maintained by a cooperative of science labs and universities. Fermi National Accelerator Laboratory and the European Organization for Nuclear Research (CERN) are its primary sponsors. (CERN also funded Tim Berners-Lee when he invented the World Wide Web.)


Scientific Linux calls Red Hat The Upstream Vendor, or TUV. Red Hat told the clones that they could not use any of Red Hat's trademarked materials such as their name, logo, and other artwork. So many of the clones use cutesy nicknames (like TUV) instead.


Developed by Scientists


Scientific Linux was born in 2004, when Connie Sieh released the first prototype. It was based on Fermi Linux, one of the earliest RHEL clones, and it was called HEPL, High Energy Physics Linux because Ms. Sieh and the other original developers worked in physics labs. They shopped it around and solicited feedback and help, and it was renamed Scientific Linux to reflect that the community around it was not just physicists, but also scientists in other fields.


What Does it Do?


So what is Scientific Linux good for? Desktop? Server? Laptop? High-demand server? Yes to all. It is a faithful copy of RHEL, plus some useful additions of its own.


Scientific Linux offers several different download images, from a 161MB boot image for network installations, to live CD and DVD images, to the entire 4.5GB distro on two DVDs. I tried the LiveMiniCD first, which at 482MB isn't all that mini. This is the IceWM version. IceWM is a good lightweight desktop environment, but despite expanding to 1.6GB at installation the LiveMiniCD installation looks barebones. The only visible graphical applications are Firefox and Thunderbird, and Nautilus is installed but it is not in the system menu. There is a big batch of servers and clients, storage clients, networking and system administration tools. IceWM is a nice graphical environment for a server – I know, the old rule is No X Windows On A Server! In real life admins should use whatever makes them the most efficient and gets the job done.


Installing the Desktop group gives you a stripped-down Gnome desktop for the price of a 32MB download, and of course there are many more Gnome packages to choose from, including Compiz for fancy desktop effects. Good old Gnome 2, of course, so Gnome 2 fans, here is your chance to get a good solid Gnome 2 system with long-term support. RHEL has ten-year support cycles, plus extensions. The Scientific Linux team expect to support 6.x until 2017, though this could change.


Additions Not In TUV


Scientific Linux has bundled a nice toolset for making custom spins the easy way. The package group is SL Spin Creation, and you get nice tools like LiveUSB Creator and Revisor (Figure 2). In fact, SL was designed with custom spins in mind; spins have their own unique features, but all of them are compatible. There is even a naming convention: Scientific Linux followed by the name of the institution. For example if I made one I would call it Scientific Linux LCR, for Little Critter Ranch.


You could install yum-autoupdate for scheduled hands-off updates, some extra repositories, and the OpenAFS distributed filesystem. OpenAFS is used a lot in research and education.


The SL_desktop_tweaks package adds a terminal icon to the Gnome kicker, and an "add/remove programs" menu item to KDE. SL_enable_serialconsole is a nice script that configures the console to send output to both the serial port and the screen, and it provides a login prompt. Probably the most useful tweak is SL_password_for_singleuser, which requires a root password for single user mode. That's right, TUV still leaves this unprotected, so anyone with physical access to the machine can boot to single user mode for password-less access. Of course, as the old saying goes "Anyone with physical access to the machine owns it."
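On RHEL 6-era systems, requiring a root password for single user mode typically comes down to one line in /etc/sysconfig/init. I haven't read the SL_password_for_singleuser script, so this is an assumption, but the tweak presumably amounts to something like:

```
# /etc/sysconfig/init
SINGLE=/sbin/sulogin    # prompt for the root password instead of dropping straight to a shell
```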


Enterprise Features


Red Hat made more than 600 changes and fixes to their version of the Linux kernel, and most of these are for high-demand high-performance workloads. You get all these in Scientific Linux too, along with all the virtualization, high-availability, multiple network storage protocols, clustering, device mapper multipathing, load balancing, and other enterprise goodies.


Support is always a question for the free clones; even when you're an ace admin and know the system inside out, you're still dependent on the vendor for timely security patches and releases. The Scientific Linux team is consistent about getting releases out the door about two months after Red Hat, and has a good record with security updates.


So the short story is use Scientific Linux with confidence. The community support is good and mostly free of irritating people, and it's well-maintained and rock-solid.



The Compiler That Changed the World Turns 25

Last year, Linux celebrated its 20th anniversary. The kernel that Linus Torvalds started as a hobby project helped the Internet bloom, challenged proprietary operating system dominance, and powers hundreds of millions of devices. From hacker toys like the dirt-cheap Raspberry Pi to most of the Top 500 Supercomputers, Linux dominates the computing industry. But it wouldn't have been possible without GCC, which turns 25 today.

Before Torvalds started hacking away on Linux, Richard Stallman started the GNU (GNU's Not UNIX) project, and part of that was the GNU C Compiler (GCC). Eventually that became the GNU Compiler Collection (also GCC), but we're getting a little ahead of the story.

A Short History of GCC and Linux

GCC actually originated in the free C compiler that Stallman started in 1984, based on a multi-platform compiler developed at Lawrence Livermore Lab. Says Stallman, "It supported, and was written in, an extended version of Pascal, designed to be a system-programming language. I added a C front end, and began porting it to the Motorola 68000 computer. But I had to give that up when I discovered that the compiler needed many megabytes of stack space, and the available 68000 Unix system would only allow 64k.

"I then realized that the Pastel compiler functioned by parsing the entire input file into a syntax tree, converting the whole syntax tree into a chain of "instructions", and then generating the whole output file, without ever freeing any storage. At this point, I concluded I would have to write a new compiler from scratch. That new compiler is now known as GCC; none of the Pastel compiler is used in it, but I managed to adapt and use the C front end that I had written."

For many Linux enthusiasts, GCC has always just been. It's part of the landscape, and kind of taken for granted. But it wasn't always this way. Michael Tiemann, now vice president of open source affairs at Red Hat and the founder of Cygnus Solutions, called GCC's intro in 1987 a "bombshell."

"I downloaded it immediately, and I used all the tricks I'd read about in the Emacs and GDB manuals to quickly learn its 110,000 lines of code. Stallman's compiler supported two platforms in its first release: the venerable VAX and the new Sun3 workstation. It handily generated better code on these platforms than the respective vendors' compilers could muster... Compilers, Debuggers, and Editors are the Big 3 tools that programmers use on a day-to-day basis. GCC, GDB, and Emacs were so profoundly better than the proprietary alternatives, I could not help but think about how much money (not to mention economic benefit) there would be in replacing proprietary technology with technology that was not only better, but also getting better faster."

By the time Torvalds started working on Linux, GCC had quite a history of releases. The first release was 0.9, the first beta release, on March 22, 1987. It had 48 releases between the 0.9 release and 1.40, which came out on June 1, 1991. Torvalds announced that he was working on Linux in July 1991, using Minix and GCC 1.40.

The EGCS Years

As Tiemann notes later, GCC development started falling behind a bit. Some folks got a bit impatient with the GCC development process and speed of getting features to users. GCC had a number of forks in the wild with patches that hadn't made it into GCC. That led to the EGCS project, a fork of a GCC development snapshot from August 1997.

Eventually, in April 1999, EGCS became the official GCC, and GCC took on the "GNU Compiler Collection" moniker.

New in GCC 4.7

I won't try to summarize all of the development and features that have been part of GCC from 1999 to now. Suffice it to say, a lot has happened and GCC has been at the heart of more than Linux.

GCC has also been used by FreeBSD, NetBSD, and OpenBSD, and was part of Apple's Xcode tools until very recently. (That's another story for another day...) If it's hard to imagine a world without Linux, it's even harder to imagine a world without the compiler that's been used to build Linux for more than 20 years.

So it's appropriate that the 25th anniversary release of GCC brings quite a few goodies for developers. According to Richard Guenther, 4.7 is a "major release, containing substantial new functionality not available in GCC 4.6.x or previous GCC releases."

Says Guenther, GCC 4.7 picks up support for newer standards for C++, C, and Fortran. "The C++ compiler supports a bigger subset of the new ISO C++11 standard such as support for atomics and the C++11 memory model, non-static data member initializers, user-defined literals, alias-declarations, delegating constructors, explicit override and extended friend syntax. The C compiler adds support for more features from the new ISO C11 standard. GCC now supports version 3.1 of the OpenMP specification for C, C++ and Fortran."

GCC also expands its hardware support, with new support for Intel's Haswell and AMD's Piledriver (both x86) architectures. It also adds support for the Cortex-A7 (ARM) line of processors, and for a number of others. Check out the full list of changes for 4.7 if you're interested.

For fun and added points, check out this video on YouTube that visualizes 20 years of GCC development. It uses Gource to visualize the history of GCC development.

A big tip of the hat to the GCC team, present, past, and future. Without it, where would we be now?



Wednesday, March 21, 2012

The Ever-Changing Linux Skillset

Just because you had what it takes for a good Linux-related job a decade ago, it doesn't mean that you have what it takes today. The Linux landscape has changed a lot, and the only thing that's really stayed constant is that a love of learning is a requirement.

What employers want from Linux job seekers is a topic I've spent a lot of time thinking about, but this post by Dustin Kirkland got me to thinking about just how drastically things have changed in a very short time. The skills that were adequate for a good Linux gig in 2002 may not be enough to scrape by today.

This isn't only true for Linux administrators, of course. If you're in marketing or PR, for example, you probably should know a great deal about social networks that didn't even exist in 2002. Journalists that used to write for print publications are learning to deal with Web-based publications, which often includes expanding to video and audio production. (Not to mention an ever-shrinking number of newspapers to work for...) Very few skilled jobs have the same requirements today as they did 10 years ago.

But if you look back at the skills needed for Linux admins and developers about ten years ago, and now, it's amazing just how much has changed. Kirkland, who's the chief architect at Gazzang, says that he's hired more than a few Linux folks in that time. Kirkland worked at IBM and Canonical before Gazzang, and says that he's interviewed "hundreds" of candidates for dozens of developer, engineer and intern jobs.

Over the years, Kirkland says that the "poking and prodding of a given candidate's Linux skills have changed a bit." What he's mostly looking for is the "candidate's inquisitive nature" but the actual skills he touches on give some insight as well.

Ten years ago, Kirkland says, he'd want to see candidates who were familiar with the LAMP stack. Nine years ago, he'd look for candidates who "regularly compiled their own upstream kernel, maybe tweaked a few configuration options on or off just for fun." (If you've been using Linux this long, odds are you have compiled your own kernels quite a bit.)

Basically, a decade ago, you were looking at folks working with individual machines. In a lot of environments, you could apply what you knew from working with a few machines at home to a work environment. That didn't last.

Six years ago, Kirkland says that he was looking "for someone who had built their own Beowulf cluster, for fun, over the weekend. If not Beowulf, then some sort of cluster computing. Maybe Condor, or MPICH."

Not long after that, Kirkland says that he was looking for experience with open source virtualization – KVM, Xen, QEMU, etc. Three years ago, Kirkland says that he wanted developers with Launchpad or GitHub accounts. Note that this would have required being an early adopter of GitHub, since the site had only launched in 2008. Git itself was first released in 2005, but it took a few years before really catching on.

Two years ago? Clustering and social development alone weren't enough. Kirkland says that he was looking for folks using cloud technologies like Eucalyptus. (He also mentions OpenStack, but two years ago the number of people who'd actually been using OpenStack would have been fairly negligible since it was only announced in the summer of 2010.)

Finally, the most recent addition to the list is "cloud-ready service orchestration," which translates to tools like Puppet, Chef, or Juju.

Even that's not enough. What's next? Kirkland says that he's looking for folks who've rooted their phones, tried out big data and thrown together "a map-reduce Hadoop job or two, just for grins."

Naturally, this is just a snapshot of what one interviewer considers important. Kirkland's list of topics may or may not mirror what you'd get in an interview with any other company, but the odds are that you'll see something similar. His post illustrates just how much the landscape has changed in a short time, and the importance of keeping up with the latest technology.

If you're a job seeker, it means a lot of studying. If you're already employed, it means that you should be keeping up with these trends even if you're not using them in your work environment. If you're an employer, it means you should be investing heavily in Linux training and/or finding ways to help your staff stay current.

Even if you're a big data-crunching, cloud computing, GitHub-using candidate, the odds are that next year you'll need to be looking at even newer technology. From the LAMP stack to OpenStack, and beyond, things are not standing still. The one job skill you'll always need is a love of learning.



As Data Grows, So Grows Linux


IDC recently announced its numbers for 2011 Q4 server sales: overall server revenues are up 5.8 percent for the year, and shipments are up 4.2 percent. As The Reg reports, these shipment numbers are back to pre-recession levels.


What’s more interesting, though, are the trends that emerge from the very latest reporting quarter, Q4. Linux was the only operating system that saw a server revenue increase in Q4, with a 2.2 percent rise. Windows lost 1.5 percent and Unix 10.7 percent.


IDC attributes some of that Linux success to its role in what the analyst firm calls “density-optimized” machines, which are really just white box servers, and are responsible for a lot of the growth in the server market. These machines have gained popularity in a space still squeezed on budget and that continues to be commoditized. But there are other factors at play for Linux’s success over its rivals.


Coming out of the recession, Linux is in a very different position than it was 10 years ago when we emerged from the last bubble. Today it's mature, tried, tested and supported by a global community that makes up the largest collaborative development project in the history of computing.


Our latest survey of the world’s largest enterprise Linux users found that Total Cost of Ownership, technical superiority and security were the top three drivers for Linux adoption. These points support Linux’s maturity and recent success. Everyone is running their data centers with Linux. Stock exchanges, supercomputers, transportation systems and much more are using Linux for mission-critical workloads.


Also helping Linux’s success here is the accelerated pace by which companies are migrating to the cloud. Long a buzzword, the cloud is getting real, right now. While there is still work to do for Linux and the cloud, there is no denying its dominant role in today’s biggest cloud companies: Amazon and Google to name just two.


The mass migration to cloud computing has been quickened due, in part, to the rising level of data: not only the amount of data enterprises are dealing with, but also how fast that data is growing. IDC this week predicted that the “Big Data” business will be worth $16.9B in three years. There is a huge opportunity here for Linux vendors. Our Linux Adoption Trends report shows that 72 percent of the world’s largest Linux users are planning to add more Linux servers in the next 12 months to support the rising level of data in the enterprise. Only 36 percent said they would be adding more Windows servers to support this trend.


The enterprise server market is a strong area for Linux, but it’s an incredibly competitive market. Together we’ll continue to advance Linux to win here. In fact, we’ll be meeting at the NYSE offices in April at our Annual Linux Foundation Enterprise End User Summit where some of the world’s largest companies will talk in depth about exactly the things I’ve touched on here.


Yet again, we are seeing that market winners are born from collaboration. And we have the numbers to back it up.



Dream Studio 11.10: Upgrade or Hands Off?


Many Linux distributions specialized for multimedia production have come and gone. Some were pretty good, but Dream Studio has outshone them all. Musician and maintainer Dick Macinnis has just released Dream Studio 11.10, based on Ubuntu Oneiric Ocelot. Dream Studio 11.04 is a tough act to follow – is it worth upgrading to 11.10?


Chasing Ubuntu


Basing a custom distribution on Ubuntu has a lot of advantages, but it also means chasing a fast-moving target. There are ways to minimize the pain, as Macinnis explains. "The decision to create different versions of Dream Studio is one I had made quite a while ago, and is one of the reasons I decided to get all my packages into PPAs rather than on a personally hosted repo."


PPAs are Personal Package Archives hosted on Canonical's Launchpad. This is a slick way to make third-party package repositories available in a central location. Ever wonder what goes into making your own Linux distribution? Even when you base it on another distro like Ubuntu it's still work.
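If you've never used one, subscribing to a PPA on Ubuntu takes only a couple of commands. The PPA and package names below are illustrative guesses, not necessarily the real Dream Studio ones – check Launchpad for the actual archive:

```shell
# Add a PPA and pull packages from it (names here are hypothetical).
sudo add-apt-repository ppa:dick-macinnis/dreamstudio   # hypothetical PPA name
sudo apt-get update                                     # refresh package lists
sudo apt-get install dreamstudio-meta                   # hypothetical metapackage
```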


"When I build the Dream Studio each release cycle, I basically install Ubuntu on a VM, make sure all the packages will install properly, and run a script to add my personal optimizations and such. Then I use Ubuntu Customization Kit to unpack the stock Ubuntu liveCD, run the script I've made on it via chroot, and pack it up again," says Macinnis. "Dealing with changes in Ubuntu from one release to the next is the biggest issue and takes the most time, which is why I don't begin until Ubuntu has been released (as chasing a moving target was driving me nuts a couple releases ago). However, since almost all my packages are now desktop independent (except artwork), making derivatives with different DEs is quite easy."


Sure, it's easy when you know how. Vote for your favorite desktop environment in Macinnis' poll, and be sure to vote for LXDE because that is my favorite. Or E17, which is beautiful and kind to system resources. Or maybe Xfce.


System Requirements


Dream Studio 11.10 is a 2GB ISO that expands to 5.6GB after installation. You can run it from a live CD or USB stick, but given the higher performance demands of audio and video production you really want to run it from a hard disk. While we're on the subject of hard drives, don't get excited over 6Gb/s SATA hard disk drives. They're not much faster than old-fashioned 3Gb/s or 1.5Gb/s SATA HDDs, and you need a compatible motherboard or PCI-e controller. Put your extra money into a good CPU instead; audio and video production, along with photo and image editing, are CPU-intensive. Bales of RAM never hurt, and a discrete video card, even a lower-end one, gives better performance than cheapo onboard video that uses shared system memory.


My studio PC is powered by a three-core AMD CPU, 4GB RAM, an Nvidia GPU, and a couple of 2TB SATA 3Gb/s hard drives. It's plenty good enough, though someday I'm sure I'm going to muscle it up more. Why? Why not?


You want your hardware running your applications and not getting weighed down driving your operating system. Dream Studio ships with GNOME 2, GNOME 2 with no effects, Unity, and Unity 2D. Just for giggles I compared how each one looked in top, freshly booted and no applications running:

GNOME 2: 440,204k memory, 6% CPU
GNOME 2, no effects: 453,640k memory, 3.7% CPU
Unity: 592,432k memory, 4.8% CPU
Unity 2D: 569,936k memory, 1.0% CPU

It's not a big difference; measuring memory usage in Linux is imprecise and your mileage may vary, so use what makes you happy.
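If you want to take the same kind of snapshot on your own machine, here's a minimal sketch that reads /proc/meminfo directly (the same source top draws on). Figures are in kB and will naturally differ from the numbers above:

```shell
#!/bin/sh
# Snapshot memory in use just after logging in to a desktop session.
# Subtracting buffers and cache gives the figure closest to "real" usage.
awk '/^MemTotal:/  {total=$2}
     /^MemFree:/   {free=$2}
     /^Buffers:/   {buf=$2}
     /^Cached:/    {cache=$2}
     END {print "in use (minus buffers/cache):", total-free-buf-cache, "kB"}' /proc/meminfo
```

Run it freshly booted under each desktop environment, with no applications open, to make the comparison fair.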


What's Inside


Dream Studio installs with a vast array of audio, movie, photography, and graphics applications. It's a great showcase for the richness of multimedia production software on Linux. Audio is probably the biggest pain in the behind, as the Linux audio subsystem can be a real joy* to get sorted out. One of the best things Dream Studio does is box it all up sanely; on most systems you just fire up your audio apps and get to work. It comes with a low-latency kernel, the JACK (JACK Audio Connection Kit) low-latency sound server and device router, and pulseaudio-module-jack for integrating PulseAudio with JACK. If you have a single sound card this doesn't give you anything extra, so you're probably better off disabling PulseAudio while JACK is running. This is easy: in QjackCtl go to Setup -> Options and un-check "Execute script after startup: pulsejack" and "Execute script after shutdown: pulsejackdisconnect". Leave "Execute script on startup: pausepulse" and "Execute script after shutdown: killall jackd" checked.
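Another way to keep PulseAudio out of the picture, if you prefer the command line over QjackCtl, is pasuspender, which ships with PulseAudio and suspends Pulse's hold on the sound card while the wrapped command runs. The device and buffer values here are examples; adjust them for your hardware:

```shell
# Start JACK on the first ALSA card with PulseAudio suspended for the duration.
# -r sample rate, -p frames per period, -n periods per buffer.
pasuspender -- jackd -d alsa -d hw:0 -r 44100 -p 256 -n 2
```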


If you have more than one audio interface, PulseAudio gives you some extra device-routing options that you don't have with JACK alone. Once upon a time PulseAudio was buggy and annoying because it was new, and it introduced latency. It's stable and reliable now, but it still introduces some latency, which is not good for audio production. When you're capturing and recording audio streams, though, latency doesn't matter as long as everything stays in sync. Try it for yourself; it's easy and fun.


The creative applications are nicely organized in both Unity and GNOME 2. Some notable audio apps are Audacity, Ardour, the Hydrogen drum machine, DJ tools, TuxGuitar, batches of special effects, and the excellent Linux Multimedia Studio (LMMS). On the graphics and video side you get FontForge, Luminance HDR, Scribus, Hugin, Stopmotion, OpenShot, Blender, Agave, and a whole lot more.


There are a few of the usual productivity apps like Firefox, LibreOffice, Empathy, and Gwibber. And of course you may install anything in Linux-land that your heart desires.


Upgrade or No?


The problems I've run into are mostly Ubuntu glitches. During installation, the partitioning tool only gives a teeny tiny bit of room to show your existing partitions, and it does not resize, so you can't see all of your partitions without figuring out how to make it scroll. (Click on any visible partition and navigate with the arrow keys.) Ubuntu wants you to play audio CDs with Banshee; it wants this so badly it doesn't even offer a "play CD with" option. But Banshee doesn't work; it doesn't see the CD. My cure for this was to install VLC. There were some other nits I've forgotten, so they couldn't have been all that serious.


The one significant issue I ran into was a mass of xruns in JACK. An xrun is a buffer underrun: an interruption or dropout in throughput, which can cause audible dropouts in your sound recordings. Xruns should not be a problem on a system as powerful as mine, and they never have been. Until now. It could be a kernel problem or a bug in JACK; it's hard to say. So before you upgrade a good working system, test this new release thoroughly first.
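While you chase down the cause, the usual first-aid for xruns is to trade latency for safety by enlarging JACK's buffer; total latency is period size times periods per buffer, divided by the sample rate. The values below are just a conservative starting point:

```shell
# Larger buffers give the kernel more headroom before an xrun occurs.
# 1024 frames x 3 periods at 44100 Hz is roughly 70 ms of latency --
# fine for playback and capture, too sluggish for live monitoring.
jackd -d alsa -d hw:0 -r 44100 -p 1024 -n 3
```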


*If you define joy as head-banging aggravation.

