
Tuesday, March 27, 2012

Disorganized? Get Tracks on Linux


Feeling a bit disorganized? Looking to take control of your projects? Take a look at Tracks, an open source Web-based application built with Ruby on Rails. Tracks can help you get organized and Getting Things Done (GTD) in no time.


Getting Tracks


You can get Tracks in a couple of ways: some distributions offer Tracks packages, and the Tracks project provides downloads you can use to set it up yourself.


The easiest way to get Tracks, though, is the BitNami Tracks stack, which includes all the dependencies and takes just a few minutes to download and set up. There's no need to set up a database, Ruby on Rails, or anything else. (If you're already using Ruby on Rails for something else, you might want to go for the project downloads, of course.)


I'm running the BitNami stack on one of my Linux systems on my home intranet. BitNami provides installers for Linux, Windows, and Mac OS X. They also provide pre-built VMs with Tracks pre-configured, so you could fire it up in VirtualBox or VMware if you prefer.


What Tracks Does


Tracks is good for single-user project management. It's based around the GTD methodology, and helps users organize their to-do list using GTD.


You can use Tracks without employing GTD, but Tracks is an "opinionated" tool. That is, it's structured around GTD, so it has some artifacts that might not fit into other methodologies quite as well.


Like any other to-do list or organizer, Tracks lets you set up actions. These have a description, notes, tags, due dates, and dependencies.


Dependencies are actions that have to be done before you can complete the current action. So, for instance, if you have a to-do item to deploy Tracks, it might be dependent on installing Ruby on Rails first.
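The dependency idea is easy to model. Here's a minimal sketch of how "next actions" fall out of dependencies; this is plain Python for illustration, not Tracks' actual data model:

```python
# Toy model of GTD-style actions with dependencies (not Tracks' real schema).
actions = {
    "deploy Tracks": {"done": False, "depends_on": ["install Ruby on Rails"]},
    "install Ruby on Rails": {"done": False, "depends_on": []},
}

def next_actions(actions):
    """Actions that aren't done yet and whose dependencies are all complete."""
    return [
        name
        for name, a in actions.items()
        if not a["done"] and all(actions[d]["done"] for d in a["depends_on"])
    ]

print(next_actions(actions))   # only the dependency is actionable so far
actions["install Ruby on Rails"]["done"] = True
print(next_actions(actions))   # completing it unblocks the dependent action
```

Until the dependency is finished, the dependent item simply doesn't show up as actionable, which is exactly how Tracks keeps your "next actions" list honest.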


Tracks also has contexts and projects. Projects are just what they sound like. For example, I use separate projects for each site that I write for (Linux.com, ReadWriteWeb, etc.), and also for home, hobbies, etc.


Contexts, on the other hand, are sort of nebulous but can be best defined as groups of actions that can be performed at the same time. You might have a "phone" context for all of the phone calls you need to make, regardless of whether you're going to be making calls for work or personal reasons. So if you're already making a few phone calls for one project, you might decide to just get all of your phone calls out of the way rather than switching context and being less effective.


Likewise, you might have a "system administration" context or "errands" context. If you're already logged into a client's server, you might want to take care of all the system administration tasks in one sitting rather than switching back and forth between phone calls and system administration. If you're going to run an errand to buy a new backup drive for a desktop system, you might also go ahead and do grocery shopping while you're out instead of making two trips.
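Under the hood, a context is really just a grouping key over your action list. A toy sketch (the task names here are made-up examples, and this is plain Python, not anything Tracks ships):

```python
from collections import defaultdict

# Hypothetical to-do items, each tagged with a GTD context.
todos = [
    ("call editor about deadline", "phone"),
    ("rotate backups on client server", "system administration"),
    ("call dentist", "phone"),
    ("buy backup drive", "errands"),
    ("grocery shopping", "errands"),
]

# Group tasks by context so each batch can be worked in one sitting.
by_context = defaultdict(list)
for task, context in todos:
    by_context[context].append(task)

for context, tasks in by_context.items():
    print(f"{context}: {tasks}")
```

Batching by context instead of by project is the whole point: you knock out all the phone calls, or all the server chores, in one go.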


There's also a special tickler context that allows you to throw in action items that don't really relate to other contexts. This is good for items that don't have specific due dates, or might be "things I kind of want to do, but have no specific plans for yet." I use the "tickler" context for article/post ideas that I don't have scheduled or assigned anywhere, as well as for things I want to do someday but aren't on the immediate horizon. Tickler items can eventually be put into other contexts as they become more important, or you might (yeah, right) finish all your other work and decide to tackle tickler items.


The Tracks Web-based interface is pretty straightforward. You have a Home page with each context and its action items. On the right-hand side you have a form for adding actions.


One thing you'll like right away is that Tracks will auto-complete actions, contexts and so forth that you already have in the system. If you have a "writing" context, for example, you just have to type a few letters and it'll offer to auto-complete it for you.


It also will create contexts and projects for you automatically the first time you use them, rather than requiring you to create them first and then use them.


Tracks is easy to get started with and use. It's not entirely perfect, but it's darn good if you want a single-user organization system. Give it a shot and see if it works for you!


(This post was made using the Auto Blogging Software from WebMagnates.org.)

Bodhi Linux, the Beautiful Configurable Lightweight Linux


Bodhi Linux is gorgeous, functional, and very customizable. It just so happens that's what grumpy old Linux nerds like me think Linux is always supposed to be. Let's take this Linux newcomer for a spin and learn what sets it apart from the zillions of other Linux distributions.


Like Sand Through the Hourglass


Bodhi Linux is a fast-growing newcomer to the Linux distro scene. The first release came at the end of 2010, and it has attracted users and contributors at a fast pace. Do we need yet another Linux distro? Yes we do. KDE and GNOME both leaped off the deep end and left users in the lurch. KDE4 has matured and is full of functionality and prettiness, but it's a heavyweight, and GNOME 3 is a radical change from GNOME 2, though considerably easier on system resources than KDE4. And there are all the other good choices for graphical environments such as Xfce, LXDE, Fluxbox (to me, KDE4 is Fluxbox with bales of special effects, as they share the same basic concepts for organizing workflow and desktop functionality), IceWM, Rox, AfterStep, Ratpoison, FVWM, and many more. So what value does Bodhi add? Four words: Enlightenment, minimalism, and user choice.


Enlightenment


The first release of the Enlightenment window manager was way back in the last millennium, in 1997. The current version, E17, has been in development since 2000, which has to be a record. I predict there will never be a final release because that would spoil its legendary status as the oldest beta.


I've always thought of Enlightenment as a flexible, beautiful, lightweight window manager for developers, because my best experiences with it were when it came nicely set up in distros like Elive, PCLinuxOS, Yellow Dog, and MoonOS. When I tried installing it myself I got lost, which is probably some deficiency on my part. Enlightenment is a wonderful window manager that can run under big desktops like KDE, and it can also run standalone. It runs on multiple operating systems and multiple hardware platforms, and it supports fancy special effects on low-powered systems. Bodhi Linux makes good use of Enlightenment's many excellent abilities.


Bodhi


Bodhi Linux is the creation of Jeff Hoogland, and is now supported by a team of 35+ people. System requirements are absurdly low: a 300MHz i386 CPU, 128MB RAM, and 1.5GB of hard drive space. The minimalist approach extends to installing with a small complement of applications. I suppose some folks might prefer having six of everything to play with, but I've always liked installing what I want, instead of removing a bunch of apps I don't want. There are maybe a dozen applications I use on a regular basis, and a set of perhaps 20-some that I use less often. I don't need a giant heavyweight environment just to launch the same old programs every day. So Bodhi's minimalist approach is appealing.


Bodhi is based on the Ubuntu long-term support releases, and is on a rolling release schedule in between major releases. When the next major release comes out users will probably have to reinstall from scratch, but the goal is for Bodhi to become a true rolling-release distribution that never needs reinstallation.


Killer Feature: Profiles


One particular feature I find brilliant in Bodhi is profiles. Profiles are an Enlightenment feature, and the Bodhi team created their own custom set. First you choose from the prefab profiles: bare, compositing, desktop, fancy, laptop/netbook, tablet, and tiling. The laptop/netbook profile looks great on my Thinkpad SL410.


The fancy profile greets you with a dozen virtual desktops and a shower of penguins, some of whom sadly meet their demise (Figure 1). Then you can customize any of the profiles any way you like with different themes, Gadgets, whatever you want, and quickly switch between them without logging out. So you could have work and home profiles, single- and multi-monitor profiles, or a travel profile.


No Disruptions


Given all the uproar over KDE4 and GNOME 3, I asked Jeff Hoogland about the future of Bodhi. He explained that disruptive change is not in the Bodhi roadmap:


"The reason for this is the way in which we utilize E17's "profiles". We recognise that a singular desktop setup is not going to satisfy all users and is far from being suitable for all types of devices. In other words if the Bodhi team sees the need to develop an alternative desktop setup for some reason, it would simply be offered in addition to our current profile selections – not replacing them. User choice is one of our mottoes."


Like KDE4 and Fluxbox, Bodhi supports desktop Gadgets (widgets in KDE4) for displaying things like weather forecast, clock, desktop pager, hardware and system monitors, and various controls. It has its own compositing manager, Ecomorph, which is a port of Compiz. This is an installable option and not included by default because it has problems on some hardware. But if it works on your system it's nice because it doesn't need a mega-super-duper CPU to support a trainload of special effects.


Bodhi comes with the Ubuntu software repositories enabled by default, plus the Bodhi repos. You can manage software with apt-get, Synaptic, or the Bodhi Linux AppCenter for installing apps from a Web page. This requires either the default Midori Web browser or Firefox, because they support the apt:url protocol. The AppCenter has package groups like the Nikhila Application Set, which has one of everything: word processor, audio player, movie editor, and several more. You can get the Bodhi Audio Pack, the Bodhi Image Pack, Bodhi Scientific Publishing, and several more. You're not stuck with the packs, but can install any of the individual applications. Applications are also sorted by category, such as Image Editing, Office Suite, Communication, and such.


Enlightenment has a bit of a learning curve, but the Bodhi folks have written a good Enlightenment Guide, and a lot of other useful documentation. I always like to ask distro maintainers why they lost their minds and decided to create their own Linux distributions. They're not the result of magic, but forethought and planning:


"While the idea to start up a minimalistic, Enlightenment based Ubuntu distro was originally my own I recruited team members before we even released our first disc. We started off with myself, Jason Peel, and Ken LaBuda. Today we have nearly forty people who contribute code and/or documentation to Bodhi, not to mention the countless people who have donated to keep our servers running! I have a rough release schedule posted here — and while that outline is flexible, I would bet it will be fairly close to reality."


Mobile Bodhi


Enlightenment seems like a natural fit for mobile devices, and Mr. Hoogland has plans in that direction as well:


"I would love to get Bodhi working on mobile devices eventually. We have a functional ARM branch that I have successfully booted on the Genesi Smartbook, HP Touchpad, Nokia N900 and ArchOS Gen8 devices to name a few. Sadly though other than then Genesi all these devices lack some functionality due to closed source hardware. We are simply ready and waiting to get our ARM branch with our tablet profile working on a truly open mobile device (if there are any companies out there interested in producing such a device they shouldn't hesitate to contact me)!"


Visit Bodhilinux.com for good documentation, forums, and downloads.



Scientific Linux, the Great Distro With the Wrong Name


Scientific Linux is an unknown gem, one of the best Red Hat Enterprise Linux clones. The name works against it because it's not for scientists; rather it's maintained by science organizations. Let's kick the tires on the latest release and see what makes it special.


Red Hat


Red Hat is one of the oldest Linux distributions, and has long been a fundamental contributor, funding development and making deep inroads into the enterprise. Red Hat is a billion-dollar company (they're expected to make an official announcement at the end of this month), and they did it without tricky licensing that locks up their best products behind proprietary licenses. A graph on Wikipedia shows the reach and influence of Red Hat Linux.


And they did it while competing with high-quality free-of-cost distros, and while giving away their own products. Anyone can have Red Hat Enterprise Linux for free by downloading and compiling the source RPMs. That's harder than downloading an ISO image and burning a CD, but not a whole lot harder. My fellow geezers might remember the wails of protest when Red Hat discontinued the free ISOs, and all those non-paying customers who vowed to never be non-paying Red Hat customers again.


Clone Stampede


But these things have a way of working out, and it didn't take long for free RHEL ISOs to appear again. Only they weren't released and supported by Red Hat, but by clone distros like CentOS, White Box, Pie Box, CAOS, Yellow Dog, and ClearOS. Clones come and clones go, and CentOS has been the most popular RHEL clone for several years. I think Scientific Linux would be as popular as CentOS if it had a different name, because most people assume it is aimed at scientists and full of scientific applications. But it's not; it is called Scientific Linux because it is maintained by a cooperative of science labs and universities. Fermi National Accelerator Laboratory and the European Organization for Nuclear Research (CERN) are its primary sponsors. (CERN also funded Tim Berners-Lee when he invented the World Wide Web.)


Scientific Linux calls Red Hat The Upstream Vendor, or TUV. Red Hat told the clones that they could not use any of Red Hat's trademarked materials such as their name, logo, and other artwork. So many of the clones use cutesy nicknames (like TUV) instead.


Developed by Scientists


Scientific Linux was born in 2004, when Connie Sieh released the first prototype. It was based on Fermi Linux, one of the earliest RHEL clones, and it was called HEPL (High Energy Physics Linux), because Ms. Sieh and the other original developers worked in physics labs. They shopped it around and solicited feedback and help, and it was renamed Scientific Linux to reflect that the community around it was not just physicists, but also scientists in other fields.


What Does it Do?


So what is Scientific Linux good for? Desktop? Server? Laptop? High-demand server? Yes to all. It is a faithful copy of RHEL, plus some useful additions of its own.


Scientific Linux offers several different download images, from a 161MB boot image for network installations, to live CD and DVD images, to the entire 4.5GB distro on two DVDs. I tried the LiveMiniCD first, which at 482MB isn't all that mini. This is the IceWM version. IceWM is a good lightweight window manager, but despite expanding to 1.6GB at installation, the LiveMiniCD installation looks barebones. The only visible graphical applications are Firefox and Thunderbird; Nautilus is installed but not in the system menu. There is a big batch of servers and clients, storage clients, and networking and system administration tools. IceWM is a nice graphical environment for a server – I know, the old rule is No X Windows On A Server! In real life, admins should use whatever makes them the most efficient and gets the job done.


Installing the Desktop group gives you a stripped-down Gnome desktop for the price of a 32MB download, and of course there are many more Gnome packages to choose from, including Compiz for fancy desktop effects. It's good old Gnome 2, so Gnome 2 fans, here is your chance to get a solid Gnome 2 system with long-term support. RHEL has ten-year support cycles, plus extensions, and the Scientific Linux team expects to support 6.x until 2017, though this could change.


Additions Not In TUV


Scientific Linux has bundled a nice toolset for making custom spins the easy way. The package group is SL Spin Creation, and you get nice tools like LiveUSB Creator and Revisor (Figure 2). In fact, SL was designed with custom spins in mind; spins have their own unique features, but all of them are compatible. There is even a naming convention: Scientific Linux followed by the name of the institution. For example, if I made one I would call it Scientific Linux LCR, for Little Critter Ranch.


You can also install yum-autoupdate for scheduled hands-off updates, some extra repositories, and the OpenAFS distributed filesystem. OpenAFS is used a lot in research and education.


The SL_desktop_tweaks package adds a terminal icon to the Gnome kicker, and an "add/remove programs" menu item to KDE. SL_enable_serialconsole is a nice script that configures the console to send output to both the serial port and the screen, and it provides a login prompt. Probably the most useful tweak is SL_password_for_singleuser, which requires a root password for single user mode. That's right, TUV still leaves this unprotected, so anyone with physical access to the machine can boot to single user mode for password-less access. Of course, as the old saying goes "Anyone with physical access to the machine owns it."
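For the curious, the classic way to protect single-user mode on a sysvinit-era system is a sulogin entry in /etc/inittab. SL_password_for_singleuser presumably does something along these lines; this is a sketch of the technique, not the package's verbatim contents:

```
# /etc/inittab (sketch): prompt for the root password in single-user mode
~~:S:wait:/sbin/sulogin
```

Without an entry like this, init drops straight to a root shell in single-user mode, which is the hole the tweak closes.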


Enterprise Features


Red Hat made more than 600 changes and fixes to their version of the Linux kernel, and most of these are for high-demand high-performance workloads. You get all these in Scientific Linux too, along with all the virtualization, high-availability, multiple network storage protocols, clustering, device mapper multipathing, load balancing, and other enterprise goodies.


Support is always a question for the free clones; even when you're an ace admin and know the system inside out, you're still dependent on the vendor for timely security patches and releases. Scientific Linux is consistent at getting its releases out the door about two months after Red Hat, and has a good record with security updates.


So the short story is use Scientific Linux with confidence. The community support is good and mostly free of irritating people, and it's well-maintained and rock-solid.



Wednesday, March 21, 2012

The Ever-Changing Linux Skillset

Just because you had what it takes for a good Linux-related job a decade ago, it doesn't mean that you have what it takes today. The Linux landscape has changed a lot, and the only thing that's really stayed constant is that a love of learning is a requirement.

What employers want from Linux job seekers is a topic I've spent a lot of time thinking about, but this post by Dustin Kirkland got me to thinking about just how drastically things have changed in a very short time. The skills that were adequate for a good Linux gig in 2002 may not be enough to scrape by today.

This isn't only true for Linux administrators, of course. If you're in marketing or PR, for example, you probably should know a great deal about social networks that didn't even exist in 2002. Journalists who used to write for print publications are learning to deal with Web-based publications, which often includes expanding to video and audio production. (Not to mention an ever-shrinking number of newspapers to work for...) Very few skilled jobs have the same requirements today as they did 10 years ago.

But if you look back at the skills needed for Linux admins and developers about ten years ago, and now, it's amazing just how much has changed. Kirkland, who's the chief architect at Gazzang, says that he's hired more than a few Linux folks in that time. Kirkland worked at IBM and Canonical before Gazzang, and says that he's interviewed "hundreds" of candidates for dozens of developer, engineer and intern jobs.

Over the years, Kirkland says that the "poking and prodding of a given candidate's Linux skills have changed a bit." What he's mostly looking for is the "candidate's inquisitive nature" but the actual skills he touches on give some insight as well.

Ten years ago, Kirkland says, he'd want to see candidates who were familiar with the LAMP stack. Nine years ago, he'd look for candidates who "regularly compiled their own upstream kernel, maybe tweaked a few configuration options on or off just for fun." (If you've been using Linux this long, odds are you have compiled your own kernels quite a bit.)

Basically, a decade ago, you were looking at folks working with individual machines. In a lot of environments, you could apply what you knew from working with a few machines at home to a work environment. That didn't last.

Six years ago, Kirkland says that he was looking "for someone who had built their own Beowulf cluster, for fun, over the weekend. If not Beowulf, then some sort of cluster computing. Maybe Condor, or MPICH."

Not long after that, Kirkland says that he was looking for experience with open source virtualization – KVM, Xen, QEMU, etc. Three years ago, he wanted developers with Launchpad or GitHub accounts. Note that this would have required being an early adopter of GitHub, since the site had only launched in 2008. Git itself was first released in 2005, but it took a few years before really catching on.

Two years ago? Clustering and social development alone weren't enough. Kirkland says that he was looking for folks using cloud technologies like Eucalyptus. (He also mentions OpenStack, but two years ago the number of people who'd actually been using OpenStack would have been fairly negligible, since it was only announced in the summer of 2010.)

Finally, the most recent addition to the list is "cloud-ready service orchestration," which translates to tools like Puppet, Chef, or Juju.

Even that's not enough. What's next? Kirkland says that he's looking for folks who've rooted their phones, tried out big data and thrown together "a map-reduce Hadoop job or two, just for grins."

Naturally, this is just a snapshot of what one interviewer considers important. Kirkland's list of topics may or may not mirror what you'd get in an interview with any other company, but the odds are that you'll see something similar. His post illustrates just how much the landscape has changed in a short time, and the importance of keeping up with the latest technology.

If you're a job seeker, it means a lot of studying. If you're already employed, it means that you should be keeping up with these trends even if you're not using them in your work environment. If you're an employer, it means you should be investing heavily in Linux training and/or finding ways to help your staff stay current.

Even if you're a big data-crunching, cloud computing, GitHub-using candidate, the odds are that next year you'll need to be looking at even newer technology. From the LAMP stack to OpenStack, and beyond, things are not standing still. The one job skill you'll always need is a love of learning.



As Data Grows, So Grows Linux


IDC recently announced its numbers for 2011 Q4 server sales: overall server revenues were up 5.8 percent for the year, and shipments were up 4.2 percent. As The Reg reports, these shipment numbers are back to pre-recession levels.


What’s more interesting, though, are the trends that emerge from the very latest reporting quarter, Q4. Linux was the only operating system that saw a server revenue increase in Q4, with a 2.2 percent rise. Windows revenue fell 1.5 percent, and Unix fell 10.7 percent.


IDC attributes some of that Linux success to its role in what the analyst firm calls “density-optimized” machines, which are really just white box servers, and are responsible for a lot of the growth in the server market. These machines have gained popularity in a space still squeezed on budget and that continues to be commoditized. But there are other factors at play for Linux’s success over its rivals.


Coming out of the recession, Linux is in a very different position than it was 10 years ago when we emerged from the last bubble. Today it's mature, tried, tested and supported by a global community that makes up the largest collaborative development project in the history of computing.


Our latest survey of the world’s largest enterprise Linux users found that Total Cost of Ownership, technical superiority and security were the top three drivers for Linux adoption. These points support Linux’s maturity and recent success. Everyone is running their data centers with Linux. Stock exchanges, supercomputers, transportation systems and much more are using Linux for mission-critical workloads.


Also helping Linux’s success here is the accelerated pace by which companies are migrating to the cloud. Long a buzzword, the cloud is getting real, right now. While there is still work to do for Linux and the cloud, there is no denying its dominant role in today’s biggest cloud companies: Amazon and Google to name just two.


The mass migration to cloud computing has been quickened due, in part, to the rising level of data: both the amount of data enterprises are dealing with and how fast that data is growing. IDC this week predicted that the “Big Data” business will be worth $16.9B in three years. There is a huge opportunity here for Linux vendors. Our Linux Adoption Trends report shows that 72 percent of the world’s largest Linux users are planning to add more Linux servers in the next 12 months to support the rising level of data in the enterprise. Only 36 percent said they would be adding more Windows servers to support this trend.


The enterprise server market is a strong area for Linux, but it’s an incredibly competitive market. Together we’ll continue to advance Linux to win here. In fact, we’ll be meeting at the NYSE offices in April at our Annual Linux Foundation Enterprise End User Summit where some of the world’s largest companies will talk in depth about exactly the things I’ve touched on here.


Yet again we are seeing that market winners are born from collaboration. And we have the numbers to back it up.



Can Linux Win in Cloud Computing?


Gerrit Huizenga is Cloud Architect at IBM (and a fellow Portlander) and will be speaking at the upcoming Linux Foundation Collaboration Summit in a keynote session titled "The Clouds Are Coming: Are We Ready?" Linux is often heralded as the platform for the cloud, but Huizenga warns that while it is in the best technical position to warrant this title, there is work to do to make this a reality.


Huizenga took a few moments earlier this week to chat with us as he prepares for his controversial presentation at the Summit.


You will be speaking at The Linux Foundation Collaboration Summit about Linux and the cloud. Can you give us a teaser on what we can expect from your talk?


Huizenga: Clouds are on the top of every IT department's list of new and key technologies to invest in. Obviously high on those lists are things like VMware and Amazon EC2. But where is the open source community in terms of comparable solutions which can be easily set up and deployed? Is it possible to build a cloud with just open source technologies? Would that cloud be a "meets min" sort of cloud, or can you build a full-fledged, enterprise-grade cloud with open source today? What about using a hybrid of open source and proprietary solutions? Is that possible, or are we locked in to purely proprietary solutions today? Will Open Standards help us? What are some recommendations today for building clouds?


Linux is often applauded as the "platform for the cloud." Do you think this is accurate? If not, what still needs to be done? If so, what is it about Linux that gives it this reputation?


Huizenga: Linux definitely has the potential to be a key platform for the cloud. However, it isn't there yet. There are a few technology inhibitors with respect to Linux as the primary cloud platform, as well as a number of market place challenges. Those challenges can be addressed but there is definitely some work to do in that space.


What are the advantages of Linux for both public and private clouds?


Huizenga: It depends a bit on whether you consider Linux as a guest or virtual server in a cloud, or whether it is the hosting platform of the cloud. The more we enable Linux as a guest within the various hypervisors, and enable Linux to be managed within the cloud, the greater the chance of standardizing on Linux as the "packaging format" for applications.


This increases the overall presence of Linux in the market place and in some ways simplifies ISV's lives in porting applications to clouds. As a hosting platform, one of the biggest advantages for cloud operators is the potential cost/pricing model for Linux and the overall impact on the cost of operating a cloud. And, the level of openness that Linux provides should simplify the ability to support the cloud infrastructure and over time increase the number of services that can be provided by a cloud. But we still have quite a bit of work to do to make Linux a ubiquitous cloud platform.


What is happening at the Linux development level to support the rapidly maturing cloud opportunity? What does the community need from other Linux users and developers to help accelerate its development and address these challenges?


Huizenga: I'll talk about some of the KVM technologies that we need to continue to develop to enable cloud, as well as some of the work on virtual server building & packaging, DevOps, Deployment, and Management. There are plenty of places for the open source community to contribute and several talks at the Collaboration Summit should dive further into the details as well.


What do you make of Microsoft running Linux on Azure?


Huizenga: Anything that lets us run Linux in more places must be good!


More information about Huizenga's talk can be found on The Linux Foundation Collaboration Summit schedule. If you're interested in joining us, you can also request an invitation to attend.



What's New in Linux 3.3?

Sunday, Linus Torvalds released the 3.3 Linux kernel. In the latest installment of the continuing saga of kernel development, we've got more progress towards Android in the kernel, EFI boot support, Open vSwitch, and improvements that should help with the problem of Bufferbloat.

Is it just me, or is it still a little weird to be talking about 3.x kernels? It's been about eight months since the official bump to 3.0, but that's compared to more than seven years with the 2.6.x series.

At any rate, here we are. Let's take a look at some of the changes in Linux 3.3!

Everybody was Bufferbloat Fighting!

The Android patches are likely to get the most attention in 3.3, but the thing that I'm most excited by? More work going on to solve the Bufferbloat problem.

In a nutshell, Bufferbloat is a symptom of a lot of small problems that together create "a huge drag on Internet performance, ironically, by previous attempts to make it work better." Or, the one-sentence summary: "bloated buffers lead to network-crippling latency spikes."

It's not a problem that's going to be solved all in one go, or in one area. But the Linux kernel is one of the pieces that needs addressing. In the 3.3 release, we've got the ability to set byte queue limits in the kernel.
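If you're curious what this looks like in practice, BQL state is exposed per transmit queue under sysfs. This is just a sketch: it assumes a 3.3+ kernel with a BQL-capable network driver, and eth0 is a placeholder for your interface name.

```shell
# Peek at BQL (byte queue limit) state for one transmit queue.
# Assumes kernel >= 3.3 with a BQL-aware driver; "eth0" is a placeholder.
bql=/sys/class/net/eth0/queues/tx-0/byte_queue_limits
for f in limit limit_max inflight; do
    if [ -r "$bql/$f" ]; then
        printf '%s: %s\n' "$f" "$(cat "$bql/$f")"
    fi
done
```

Writing a byte count to limit_max caps how much data the stack will queue to the NIC, which is the knob that helps keep latency down.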

Driver Goodies

Check out the list of drivers that have made it out of staging. Specifically, the gma500 driver is out of staging. This means the infamous Poulsbo chipset should be supported in the mainline kernel finally.

This release also includes the NVM Express driver (NVMe) which supports solid state disks attached to the PCI-Express bus. Most SSDs are SATA, Fibre Channel or SAS drives. The work was done by Intel's Matthew Wilcox, which isn't surprising since the NVM Express standard is also supported by Intel and a number of other companies. Would love to get my hands on one of these drives to test the 3.3 kernel out...

Want to tether your Linux box to your brand new iPhone? The iPhone USB Ethernet driver (ipheth) module has been updated to add support for the iPhone 4S.

The 3.3 kernel also picks up some drivers for third generation Wacom Bamboo tablets and Cintiq 24HD, and initial driver support for the Intuos4.

Open vSwitch

Another biggie in 3.3? The Open vSwitch project is merging into the kernel tree. It's not new – it's been around for some time – but it's finally making its way into the mainline kernel. (This seems to be a frequent theme, doesn't it?)

Basically, Open vSwitch is a virtual switch for complex virtualized server deployments. Given the ever-growing popularity of virtualized servers and cloud deployments, this is something in high demand. As the Open vSwitch page says, "Open vSwitch can operate both as a soft switch running within the hypervisor, and as the control stack for switching silicon. It has been ported to multiple virtualization platforms and switching chipsets. It is the default switch in XenServer 6.0, the Xen Cloud Platform and also supports Xen, KVM, Proxmox VE and VirtualBox. It has also been integrated into many virtual management systems including OpenStack, openQRM, and OpenNebula."

No doubt, you'll be reading more about Open vSwitch on Linux.com in the near future.

Android Comes Closer

Last, but not least, the 3.3 kernel includes nearly complete support for Android. This is good news all around, but isn't really a surprise. The kernel folks have been working on this for a long time.

Now the question is, will we start seeing Android apps on top of normal distributions? Will we start seeing standard Linux apps running on Android? Will mod communities, like CyanogenMod, start using the mainline kernel? Should be an interesting year. Then again, when isn't it an interesting year when Linux is involved?

As usual, the release includes lots more fixes, new drivers, and so forth. Check out the Kernel Newbies page for more. The merge window for 3.4 is now open, with the traditional two-week cutoff for pull requests. Looking forward to what 3.4 brings!



Wednesday, March 14, 2012

Random Linux Tips: Making KDE4 Behave, Thwacking Those Weirdo U3 Partitions on USB Sticks

Sometimes, we have little tips and tricks that make life easier – but don't quite take up a full article. So today I've bundled a few practices that many Linux.com readers might find helpful. You'll learn how to control window behavior in KDE4, make Nepomuk and Strigi useful, and remove those silly proprietary U3 partitions from USB sticks.

Making KDE4 Behave

KDE4 comes with eleventy-eight hundred and fifteen fancy special effects, and the ones that are turned on by default seem rather random. With a little digging you can figure out how to control them. The one that drives me around the bend is when dragging a window by its title bar maximizes it, which is not so entertaining when all you want to do is move it. How many ways do we need to maximize windows? Apparently a whole lot of them. But KDE4 has controls for everything, so you can turn this particular feature on or off in System Settings > Workspace Behavior. Go to the Screen Edges screen, and uncheck "Maximize windows by dragging them to the top of the screen." Click the Apply button and you're done.


Underneath that is "Tile windows by dragging them to the side of the screen." What I really want is one-click tiling of all windows, like in Windows 3.1 and some Linux desktop environments. But this feature is fairly useful once you figure it out: when you drag a window to the top right or left edge it shrinks to one-quarter of the screen, and when you drag it to the bottom left or right it shrinks to half-screen size. Drag it away to restore its previous size.


Do you get tired of swatting KDE's oversize information pop-ups out of the way? Go to the Workspace screen and un-check "Show informational tips" and then click Apply.


The fine KDE4 folks have invested a lot of energy into the semantic desktop and semantic search. You've probably heard some of the wailing against Nepomuk and Strigi because they crash or bog down the whole system. Together they index files, file metadata, comments, tags, ratings, and file contents.


Nepomuk and Strigi do bog down your system on their first run. If you can leave your computer running until they complete that initial indexing, then after that you won't even notice them. For example, on my system with a nearly full 2TB hard drive, the first run took about two days, and it used a fair whack of CPU cycles. Nepomuk and Strigi indexed over 80,000 files, and the Nepomuk database is now over 700MB. Since that first run I leave them on all the time, and they barely use 1% of memory and CPU.


You can configure Nepomuk and Strigi in System Settings > Desktop Search. On the Basic Settings tab you can turn them on and off, and the Details link tells you the Nepomuk data store statistics. Use the Desktop Query tab to choose which files and directories are indexed by Strigi. By default temporary files, core dumps, and other non-data files are not indexed. The Advanced Settings tab lets you control how much memory Nepomuk uses.


Once you've let Nepomuk and Strigi search and index your files, then what? Here is one example. Figure 1 shows the results of a search for "Dick Macinnis" (author of the Dream Studio distribution) from inside the Dolphin file manager. It found emails we had exchanged, and articles I had written that mentioned him.


You can fine-tune your search in a number of ways: Filename, Content, From Here (the current directory), Everywhere, and on the bottom right you can select Documents, Audio, Video, and other file types. Then right-click on any item found in the search to choose which application to open it with, open it in a new window with the full file path displayed, and perform several other useful actions. So even though this semantic stuff is still a baby, it's already useful.

Figure 1: A Strigi search dives deeply into your system.

Getting Rid of Weirdo U3 Partitions on USB Sticks

I have an 8GB Sandisk Cruzer that comes with a special partition loaded with the U3 Launchpad, which is a portable environment for running applications from the USB stick on Windows. There is an actual U3 specification, and compliant applications can write to the Windows registry and load files on a Windows PC. Then when the device is removed the files and registry entries are also removed, and any application settings are stored on the USB stick. This all sounds cool, except it has a hidden partition that you cannot remove by ordinary means and can't use for data storage, and it won't let you create a nice bootable Linux USB stick.


The hidden partition presents itself as a USB hub with a CD drive attached. When you plug one of these into a Linux PC it looks like a normal partitioned block device to fdisk and Gparted, but in Figure 2 we see how it looks to the KDE4 Device Notifier: it sees the U3 partition as an optical device, and can mount it and read the files.


On my system it is mounted as /media/U3 System/. The ls command, like so many old Unix-y utilities, doesn't like spaces in filenames. But not to worry, because we can still list the files with the help of some quotation marks:

$ ls "/media/U3 System/"
autorun.inf  LaunchPad.zip  LaunchU3.exe

You can't delete these files the usual way. One way is to search your vendor's site for a delete utility, which most likely will run only on Windows. We hardy Linux users have our own special tool, and that is u3-tool. If it's not in your distro repo fetch it from Sourceforge. Run it with no options to see an option summary:

$ u3-tool
Not enough arguments
u3-tool 0.3 - U3 USB stick manager
Usage: u3-tool [options] <device name>
Options:
  -c        Change password
  -d        Disable device security
  -D        Dump all raw info (for debug)
  -e        Enable device security
  -h        Print this help message
  -i        Display device info
  -l        Load CD image into device
  -p        Repartition device
  -R        Reset device security, destroying private data
  -u        Unlock device
  -v        Use verbose output
  -V        Print version information
For the device name use: '/dev/sda0', '/dev/sg3'

Try running sudo u3-tool -i /dev/sdc, using your own device name of course, to see the device's partitions:


 

$ sudo u3-tool -i /dev/sdc
Total device size: 7.51 GB (8065646592 bytes)
CD size: 7.69 MB (8060928 bytes)
Data partition size: 7.50 GB (8057520128 bytes)

 


Use the -D option to dump complete device information. This reveals a number of interesting things, such as "Max. pass. try: 5". If the device is password-protected, that means you get five attempts to enter the correct password. If you fail, the device locks, and the only way to get back in is to reset the password, which deletes all the data. You'll also see the serial number, manufacturer, and exact partition sizes.

Figure 2: KDE4's Device Notifier sees the special U3 partition.


Sometimes, though not always, you can read these devices on Linux even when they are password-protected. They'll mount like a normal USB stick with no password. Sometimes Linux won't even be able to create the block device and you'll see a ton of these messages in dmesg:


 

[ 5773.262417] sd 8:0:0:0: [sdc] No Caching mode page present
[ 5773.262422] sd 8:0:0:0: [sdc] Assuming drive cache: write through

 


If that's the case all you can do is open it on a Windows machine and reset the password. This wipes all the data, but at least you can still use the device.


Now we come to the fun part: eliminating the U3 partition entirely. Stuff it into your Linux box and run this command, using your own device name:


 

$ sudo u3-tool -p 0 /dev/sdc
WARNING: Loading a new cd image causes the whole device to be whiped. This INCLUDES the data partition.
I repeat: ANY EXCISTING DATA WILL BE LOST!
Are you sure you want to continue? [yn] y

 


And you now have a nice normal universal USB stick without any tricksy Windows-only guff. Enjoy!



Sunday, March 11, 2012

Linux Training Opportunities at Linux Foundation Collaboration Summit

The Linux Foundation's Collaboration Summit is a great time to, well, collaborate. But it's also a really good opportunity to learn.

We're offering three courses at this year's Collaboration Summit, each in a different area, to help build skills while rubbing elbows with other top kernel developers.

Advanced Linux Performance Tuning is a deep dive into proven tools and methods used to identify and resolve performance problems, resulting in a system that is better optimized for specific workloads. It is particularly useful for those who write or use applications with unusual characteristics that behave differently than kernel performance heuristics anticipate. It is a hands-on course that assumes some familiarity with basic performance tools. This course is offered on Monday, April 2nd.

Overview of Open Source Compliance End-to-End Process is for any company that is redistributing Linux or other open source code. It provides a thorough discussion of the processes that should be in place to ensure that all open source code is being tracked and that licensing obligations are being met. This is a very practical course designed to give your company the ability to design its own internal process. This course is offered on Sunday, April 1st.

Practical Guide to Open Source Development is not a course on coding. Rather, it is about maximizing the effectiveness of your contributions. It is structured to give you a thorough understanding of the characteristics that make the open source model work well for corporate development organizations, and covers best practices for joining an external open source project, launching your own, and open sourcing proprietary code. This course is offered on Monday, April 2nd.

All of these courses are available for registered invitees now.  If you've already registered for Collaboration Summit, you can modify your conference registration and add these courses.

See you there!



Tuesday, March 6, 2012

How Pinweel Uses Linux to Power Group Photo Sharing

Pinweel, a group photo sharing service, launched in February 2012. Lead back-end developer Michael De Lorenzo explains how Pinweel is different than other photo sharing services and how Linux and open source are built into the backend.


Linux.com: How did the Pinweel project start?

Michael De Lorenzo: The Pinweel project started almost two years ago when Pinweel co-founders James Tillinghast and Rich Bulman set out to develop a better way for groups of people to easily and instantly share photos with one another. I joined the team as the lead back-end developer, and Tony Amoyal took the lead on the front-end. Richard Paul Guy has also played a key role in mobile development.

Linux.com: What does the road map for the project look like right now?

Michael De Lorenzo: Our development focus continues to be on making it as easy and satisfying as possible for people to share photos among groups. The release that we launched with in the App Store does that really well, but we see lots of opportunities to continue to make photo sharing a better experience. Things like location, hashtagging, video, and search functions are all interesting to us as ways to enhance the sharing process on the Pinweel platform. We're also focused on making Pinweel available across as many mobile platforms and devices as possible.

Linux.com: How do you expect Pinweel to be used?

Michael De Lorenzo: Some of the most powerful features of Pinweel are the immediacy with which you can share photos with your social graph, the ease with which you can organize groups around a shared photo album, and the way you can define privacy levels for each shared album. So Pinweel is going to appeal to users who are attracted to those solutions. If you don't like having to wait to see your friends' photos, Pinweel lets you see them instantly, as they're taken. If you have different groups of friends with whom you want to share different types of photos, Pinweel makes it incredibly fast and easy to create and invite a group. And if you don't like having to always post your photos in a forum as public as a social network, Pinweel lets you pick just who gets to see your photos. This combination of solutions really isn't available in any other photo sharing service.

Linux.com: How is the project funded, and how will it make money?

Michael De Lorenzo: Our funding to date has come from the founders and one outside round of angel funding. Our business model is based on building a large photo-based community that can eventually offer brands an ideal platform for two-way communications with consumers using photos as a medium.

Linux.com: What about privacy? What information are you collecting on your users, and who will you be sharing it with?

Michael De Lorenzo: We're putting privacy front and center as a primary value proposition with our application. It's a fine line to walk — making sharing easy while at the same time maintaining user trust with how their photos are shared. We're also taking care to not cross any lines when it comes to the privacy of user data.

We all saw the backlash that Path – and others – endured over the uploading and storage of address book data without explicit user consent. Pinweel doesn't do that, and we won't do that. We're listening very closely to our early adopters with regard to things they like and don't like in terms of how information and content is shared and making updates to our application(s) as quickly as possible.

Linux.com: Can we peek under the hood and hear about all the open source and specific Linux technologies you are using and how they are being used?

Michael De Lorenzo: We made the decision really early on to use a lot of open source technologies, Linux, of course, being front and center. We chose to standardize our servers with Ubuntu; all our servers now run on Ubuntu. We've started with very bare installations of Ubuntu and only added the packages each server needed to perform its function(s).

Our Web application and APIs are built using Ruby, Rails, Twitter Bootstrap, and jQuery, are served with Passenger and Apache, and make heavy use of REST. We store our data in MongoDB and utilize Memcached for our caching layers.

Linux.com: Why did you choose Linux? And what code will Pinweel contribute back to the open source community?

Michael De Lorenzo: We selected Linux for a few very important reasons. First, we were more comfortable managing our infrastructure on Ubuntu versus Windows. Second, our programming stack, in our opinion, just works better in the Linux environment. Third – and this one probably could be higher, considering we're bootstrapping Pinweel at the moment – it is more cost-effective with Amazon Web Services.

It's still early and we haven't yet needed to alter any of the open source technologies we use, but if we had to and it was something that had value to the open source community Pinweel would most certainly contribute back. In fact, we're looking forward to being successful enough that we could add some real value back to the projects that helped us build Pinweel.

Linux.com: Pinwheel also announced a new photo sharing service in February. How do you plan to differentiate your service and minimize confusion about the nearly identical names?

Michael De Lorenzo: The issue with Pinwheel is really a legal one that will be settled by the lawyers. Our public position has been made clear with respect to our clearly established trademark rights.

We're already seeing really exciting worldwide download and usage numbers for Pinweel. Users around the world are enthusiastically using Pinweel to communicate with photos, and we think that's the result of us having done a good job of creating an easy-to-use solution for sharing photos more effectively.

Linux.com: Thanks for explaining Pinweel to us, and good luck on the new project!


Friday, March 2, 2012

Unknown Bash Tips and Tricks For Linux

Familiarity breeds ennui, and even though Bash is the default Linux command shell used daily by hordes of contented users, it contains a wealth of interesting and useful features that don't get much attention. Today we shall learn about Bash builtins and killing potential.


Bash Builtins

Bash has a bunch of built-in commands, and some of them are stripped-down versions of their external GNU coreutils cousins. So why use them? You probably already do, because of the order of command execution in Bash:

1. Bash aliases
2. Bash keywords
3. Bash functions
4. Bash builtins
5. Scripts and executable programs that are in your PATH
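You can watch this precedence in action. Here's a small sketch – the function deliberately shadows the echo builtin, and nothing about it is distribution-specific:

```shell
# A function named "echo" shadows the echo builtin until it is removed.
echo() { printf 'function echo\n'; }
echo hello            # runs the function, prints: function echo
builtin echo hello    # bypasses the function, prints: hello
unset -f echo
echo hello            # the builtin is back, prints: hello
```

The builtin keyword is the escape hatch: it skips aliases and functions and goes straight to the builtin of that name.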

So when you run echo, kill, printf, pwd, or test most likely you're using the Bash builtins rather than the GNU coreutils commands. How do you know? By using one of the Bash builtins to tell you, the command command:

 

$ command -V echo
echo is a shell builtin
$ command -V ping
ping is /bin/ping

 

The Bash builtins do not have man pages, but they do have a backwards help builtin command that displays syntax and options:

 

$ help echo
echo: echo [-neE] [arg ...]
    Write arguments to the standard output.
    Display the ARGs on the standard output followed by a newline.
    Options:
      -n    do not append a newline
      -e    enable interpretation of the following backslash escapes
[...]

 

I call it backwards because most Linux commands use a syntax of commandname --help, where help is a command option instead of a command.

The type command looks a lot like the command builtin, but it does more:

 

$ type -a cat
cat is /bin/cat
$ type -t cat
file
$ type ll
ll is aliased to `ls -alF'
$ type -a echo
echo is a shell builtin
echo is /bin/echo
$ type -t grep
alias

 

The type utility identifies builtin commands, functions, aliases, keywords (also called reserved words), and also binary executables and scripts, which it calls file. At this point, if you are like me, you are grumbling "How about showing me a LIST of the darned things." I hear and obey, for you can find these delightfully documented in the GNU Bash Reference Manual indexes. Don't be afraid, because unlike most software documentation this isn't a scary mythical creature like Sasquatch, but a real live complete command reference.

The point of this little exercise is so you know what you're really using when you type a command into the Bash shell, and so you know how it looks to Bash. There is one more overlapping Bash builtin, and that is the time keyword:

 

$ type -t time
keyword

 

So why would you want to use Bash builtins instead of their GNU cousins? Builtins may execute a little faster than the external commands, because external commands have to fork an extra process. I doubt this is much of an issue on modern computers because we have horsepower to burn, unlike the olden days when all we had were tiny little nanohertzes, but when you're tweaking performance it's one thing to look at. When you want to use the GNU command instead of the Bash builtin use its whole path, which you can find with command, type, or the good old not-Bash command which:

 

$ which echo
/bin/echo
$ which which
/usr/bin/which

Bash Functions

Run declare -F to see a list of the function names Bash knows about. declare -f prints out the complete functions, and declare -f [function-name] prints the named function. type won't give you a list of functions, but once you know a function name it will also print the definition:

$ type quote
quote is a function
quote () 
{ 
    echo \'${1//\'/\'\\\'\'}\'
}

This even works for your own functions that you create, like this simple example testfunc that does one thing: changes to the /etc directory:

$ function testfunc
> {
> cd /etc
> }

Now you can use declare and type to list and view your new function just like the builtins.
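For example, here is a tiny function (greet is just an arbitrary name for this sketch) with the same inspection commands turned on it:

```shell
# Define a trivial function, then inspect it like any other.
greet() { printf 'hello, %s\n' "$1"; }
declare -F greet    # prints just the name: greet
declare -f greet    # prints the full definition
type greet          # also shows the definition
greet world         # prints: hello, world
```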

Bash's Violent Side

Don't be fooled by Bash's calm, obedient exterior, because it is capable of killing. There have been a lot of changes to how Linux manages processes, in some cases making them more difficult to stop, so knowing how to kill runaway processes is still an important bit of knowledge. Fortunately, despite all this newfangled "progress" the reliable old killers still work.

I've had some troubles with bleeding-edge releases of KMail; it hangs and doesn't want to close by normal means. It spawns a single process, which we can see with the ps command:

$ ps axf|grep kmail
 2489 ?        Sl     1:44 /usr/bin/kmail -caption KMail

You can start out gently and try this:

$ kill 2489

This sends the default SIGTERM (signal terminate) signal, which is similar to the SIGINT (signal interrupt) sent from the keyboard with Ctrl+c. So what if this doesn't work? Then you amp up your stopping power and use SIGKILL, like this:

$ kill -9 2489

This is the nuclear option and it will work. As the relevant section of the GNU C manual says: "The SIGKILL signal is used to cause immediate program termination. It cannot be handled or ignored, and is therefore always fatal. It is also not possible to block this signal." This is different from SIGTERM and SIGINT and other signals that politely ask processes to terminate. They can be trapped and handled in different ways, and even blocked, so the response you get to a SIGTERM depends on how the program you're trying to kill has been programmed to handle signals. In an ideal world a program responds to SIGTERM by tidying up before exiting, like finishing disk writes and deleting temporary files. SIGKILL knocks it out and doesn't give it a chance to do any cleanup. (See man 7 signal for a complete description of all signals.)
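To see the difference, here is a sketch of a child shell that traps SIGTERM and tidies up before exiting; the same trap would never get the chance to run on SIGKILL:

```shell
# Start a child that installs a TERM trap, then terminate it politely.
bash -c 'trap "echo cleaned up; exit 0" TERM; sleep 5 & wait' &
victim=$!
sleep 1            # give the child time to install its trap
kill "$victim"     # default signal is SIGTERM; the trap fires
wait "$victim"     # the child prints "cleaned up" and exits 0
```

Swap that kill for kill -9 and the child dies instantly, with no cleanup and no output.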

 

So what's special about Bash kill over GNU /bin/kill? My favorite is how it looks when you invoke the online help summary:

$ help kill

Another advantage is it can use job control numbers in addition to PIDs. In this modern era of tabbed terminal emulators job control isn't the big deal it used to be, but the option is there if you want it. The biggest advantage is you can kill processes even if they have gone berserk and maxed out your system's process number limit, which would prevent you from launching /bin/kill. Yes, there is a limit, and you can see what it is by querying /proc:

$ cat /proc/sys/kernel/threads-max
61985

With Bash kill there are several ways to specify which signal you want to use. These are all the same:

$ kill 2489
$ kill -s TERM 2489
$ kill -s SIGTERM 2489
$ kill -n 15 2489

kill -l lists all supported signals.
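It also translates between signal names and numbers, which is handy when you're decoding exit statuses or another tool's output. A quick sketch:

```shell
# Bash's kill builtin converts signal names to numbers and back.
kill -l TERM    # prints: 15
kill -l 9       # prints: KILL
kill -l HUP     # prints: 1
```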

If you spend a little quality time with man bash and the GNU Bash Manual I daresay you will learn more valuable tasks that Bash can do for you.


Tuesday, February 21, 2012

The Ever-Changing Linux Skillset

Just because you had what it takes for a good Linux-related job a decade ago, it doesn't mean that you have what it takes today. The Linux landscape has changed a lot, and the only thing that's really stayed constant is that a love of learning is a requirement.

What employers want from Linux job seekers is a topic I've spent a lot of time thinking about, but this post by Dustin Kirkland got me to thinking about just how drastically things have changed in a very short time. The skills that were adequate for a good Linux gig in 2002 may not be enough to scrape by today.

This isn't only true for Linux administrators, of course. If you're in marketing or PR, for example, you probably should know a great deal about social networks that didn't even exist in 2002. Journalists that used to write for print publications are learning to deal with Web-based publications, which often includes expanding to video and audio production. (Not to mention an ever-shrinking number of newspapers to work for...) Very few skilled jobs have the same requirements today as they did 10 years ago.

But if you look back at the skills needed for Linux admins and developers about ten years ago, and now, it's amazing just how much has changed. Kirkland, who's the chief architect at Gazzang, says that he's hired more than a few Linux folks in that time. Kirkland worked at IBM and Canonical before Gazzang, and says that he's interviewed "hundreds" of candidates for dozens of developer, engineer and intern jobs.

Over the years, Kirkland says that the "poking and prodding of a given candidate's Linux skills have changed a bit." What he's mostly looking for is the "candidate's inquisitive nature" but the actual skills he touches on give some insight as well.

Ten years ago, Kirkland says that he'd want to see candidates who were familiar with the LAMP stack. Nine years ago, he'd look for candidates who "regularly compiled their own upstream kernel, maybe tweaked a few configuration options on or off just for fun." (If you've been using Linux this long, odds are you have compiled your own kernels quite a bit.)

Basically, a decade ago, you were looking at folks working with individual machines. In a lot of environments, you could apply what you knew from working with a few machines at home to a work environment. That didn't last.

Six years ago, Kirkland says that he was looking "for someone who had built their own Beowulf cluster, for fun, over the weekend. If not Beowulf, then some sort of cluster computing. Maybe Condor, or MPICH."

Not long after that, Kirkland says that he was looking for experience with open source virtualization – KVM, Xen, QEMU, etc. Three years ago, he wanted developers with Launchpad or GitHub accounts. Note that would have required being an early adopter of GitHub, since the site had only launched in 2008. Git itself was first released in 2005, but it took a few years before really catching on.

Two years ago? Clustering and social development alone weren't enough. Kirkland says that he was looking for folks using cloud technologies like Eucalyptus. (He also mentions OpenStack, but two years ago the number of people who'd actually been using OpenStack would have been fairly negligible, since it was only announced in the summer of 2010.)

Finally, the most recent addition to the list is "cloud-ready service orchestration," which translates to tools like Puppet, Chef, or Juju.

Even that's not enough. What's next? Kirkland says that he's looking for folks who've rooted their phones, tried out big data and thrown together "a map-reduce Hadoop job or two, just for grins."

Naturally, this is just a snapshot of what one interviewer considers important. Kirkland's list of topics may or may not mirror what you'd get in an interview with any other company, but the odds are that you'll see something similar. His post illustrates just how much the landscape has changed in a short time, and the importance of keeping up with the latest technology.

If you're a job seeker, it means a lot of studying. If you're already employed, it means that you should be keeping up with these trends even if you're not using them in your work environment. If you're an employer, it means you should be investing heavily in Linux training and/or finding ways to help your staff stay current.

Even if you're a big data-crunching, cloud computing, GitHub-using candidate, the odds are that next year you'll need to be looking at even newer technology. From the LAMP stack to OpenStack, and beyond, things are not standing still. The one job skill you'll always need is a love of learning.

Friday, February 17, 2012

Accessibility Leaders in Linux

Accessibility to computers for people with vision, hearing, or physical impairments needs to be a part of fundamental design, and not an afterthought. Progress in the proprietary world is slow, and even slower in the Linux/FOSS world. But thanks to some dedicated people some significant work has been accomplished, and the groundwork laid for a common platform for all Linux distributions to build on.


What is Accessibility?

A lot of people have rather negative attitudes towards accessible design, and dismiss it as a lot of bother for a tiny percentage of users. Some are even hostile to the whole concept, like it's conferring special privileges on the undeserving. I had a rather spirited discussion with a building contractor once upon a time who went out of his way to park his equipment on the handicap parking spaces when there was abundant regular parking closer to the work site. (The older I get the less I understand humanity.)

It's expensive to retrofit old buildings, sidewalks, public parks, and other structures to be friendlier to people who can't navigate heavy doors, stairs, narrow crowded store aisles, or walk any distance. My number one peeve is tiny restroom stalls with doors that open inwards. Not everyone is agile enough to climb on the darned seat in order to open the door. Car design is pretty much unchanged from the Model T. Public transit is often a contest of the fittest.

I can't think of a single good reason to write off our fellow humans and say "Well they'll just have to accept limitations" when so many limitations can be conquered because they're just engineering problems. Something I run into all the time is "Oh but there aren't many disabled people, why should I go out of my way for one or two percent of the population." Not only is this heartless, it's blind to reality because something like 50 million Americans have some kind of physical impairment, and the number is growing.

A bonus of accessible design is it's good design for everyone. It's absurd that houses, to give one prominent example, are still built with the same inadequate floor plans as they always have. Narrow doors and hallways are unfriendly to everyone, not just wheelchair users, but to anyone who has to move furniture, or carry laundry baskets or bags of groceries. Bathrooms with grab-bars and sit-down showers are safer for everybody, and kitchens always suck because they are designed by people who never use them. Electric outlets are placed as though we will plug things in only once and will never change our minds, so it's all right to make them inconvenient. What does it take to open a window? Can you arrange furniture sensibly and with good flow? And so on...take a look around your own house; chances are it suffers from a legacy of decades of bad designs mindlessly repeated.

There may not be a lot that FOSS can do about such problems, but one arena where we rule is computing, and that is the essential starting point. Being computer-literate and connected means tapping into a universe of information and essential services, and being able to program and design devices. We have a golden opportunity in FOSS to change people's lives simply by writing code.

Who Will Write It?

Let's think about this for a moment. The most-misused saying in FOSS is "Scratch your own itch," and the context is usually some variation on "Users are annoying and I don't want to think about them." Here is the quotation in context from Eric Raymond's The Cathedral and the Bazaar:

"1. Every good work of software starts by scratching a developer's personal itch.

Perhaps this should have been obvious (it's long been proverbial that "Necessity is the mother of invention") but too often software developers spend their days grinding away for pay at programs they neither need nor love."

Raymond also said, "I sent chatty announcements to the beta list whenever I released, encouraging people to participate. And I listened to my beta-testers, polling them about design decisions and stroking them whenever they sent in patches and feedback."

So a personal itch isn't something that matters only to the programmer; it is something he or she has a strong interest in, including seeing actual real people use it. My interest in accessible technologies is fueled by two things: I've always been for the underdog, and some people close to me have had unnecessarily difficult lives because of a lack of helpful technology and rational design, and a lack of interest in developing any.

It is a large and complicated problem: impaired hearing, impaired vision, can't type, reading disabilities, can't use a mouse, low stamina for sitting at a computer, and any combination of these. How to translate images and audio, how to compose music or edit photos or draw pictures? Maybe someday we'll have something like Braille displays that display images for people who can't see, and a way to translate music into a form for people who can't hear, and tools for people with various impairments to have both input and output in whatever forms they want, and to invent new ones.

The proprietary world is ahead of FOSS in both software and hardware, but even it is not all that advanced; it is largely stuck at turning plain text to speech, and speech to text. Even that is more challenging than it seems, as this interview with some of the GNOME Accessibility developers illustrates.

Accessible FOSS Leaders

The GNOME 2 project has long provided the best accessibility support: complete keyboard control of the desktop, screen magnification, an onscreen keyboard, and the GNOME team even invented an open accessibility architecture, the AT-SPI (Assistive Technology Service Provider Interface), because they recognized that it needs to be a fundamental part of GNOME's architecture. GNOME 3 accessibility support is incomplete, so stick with GNOME 2. KDE4 aims to support AT-SPI someday, but it's not there yet. KDE4 has its own set of assistive technologies, but they only work in KDE4.

The Fedora Accessibility Guide makes the case for Linux and FOSS as the logical accessibility leaders:

"While the Graphical User Interface (GUI) is convenient for sighted users, it is often inhibiting to those with visual impairments because of the difficulty speech synthesizers have interpreting graphics. Linux is a great operating system for users with visual limitations because the GUI is an option, not a requirement. Most modern tools including email, news, web browsers, calendars, calculators, and much more can run on Linux without the GUI. The working environment can also be customized to meet the hardware or software needs of the user."

The Fedora Accessibility Guide has a lot of good information and is updated with nearly every Fedora release. Fedora always bundles a lot of accessibility applications.

Vinux, based on Ubuntu 10.10 Maverick Meerkat, is a complete live Linux distribution optimized for blind and visually impaired users. It bundles screen readers, full-screen magnifiers, built-in support for USB Braille displays, and optimized fonts and colors.

The Braille display support is a big deal because it's ready at bootup. It also comes with Compiz, which has some nice magnifying and navigating features using the mouse. Vinux can be installed to a hard drive, but you'll need someone who can see to do it. Vinux is the best and easiest way to get the best Linux accessibility applications working out of the box. Whatever you do, do not ever upgrade Vinux unless the maintainers say it is OK, because an upgrade will probably break the applications you depend on.

The Orca screen reader is the most fully-featured Linux screen reader. It supports multiple speech synthesizers and Braille displays. You need GNOME 2 for Orca to work well, because there are still many glitches in GNOME 3. As GNOME and KDE continue to present moving targets, stick with Vinux for the best performance and the least hassle.

Linux has several text-to-speech engines, such as eSpeak and Festival, that work fairly well, and Emacspeak, like Emacs, is a self-contained universe that is amazingly functional — inside of Emacs. The only good speech-to-text application for Linux (that I know of) is Julius. Julius is maintained by the Nagoya Institute of Technology and Kyoto University, and is quite sophisticated, though a fair bit of work to set up.
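To get a feel for how scriptable basic text-to-speech is, here's a small sketch: a Python wrapper that shells out to the espeak command-line tool if it happens to be installed. The function name and the fallback behavior are my own invention, not part of eSpeak itself; espeak's `-v` (voice) and `-s` (speed) flags are real options:

```python
import shutil
import subprocess


def speak(text, voice="en", speed=160):
    """Read text aloud via the espeak CLI; return False if espeak is absent."""
    exe = shutil.which("espeak")
    if exe is None:
        return False  # espeak not installed; caller might fall back to Festival
    # -v selects the voice/language, -s sets the speed in words per minute.
    subprocess.run([exe, "-v", voice, "-s", str(speed), text], check=True)
    return True
```

The same approach works for any of the speech engines mentioned here: they are ordinary command-line programs, so any script or application can add spoken output with a few lines of glue.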

There is much untapped potential in Linux and FOSS for leading the way in accessible and assistive technologies. Perhaps someday a desktop roadmap will look like a journey, a succession of improvements that build on what came before, instead of a series of cliffs and do-overs.


Thursday, February 9, 2012

Unsung Heroes of Linux, Part One

Everyone knows and loves Linus Torvalds, the creator of Linux. Mark Shuttleworth, the creator of Ubuntu Linux, is pretty famous. Richard Stallman, the founder of the Free Software Foundation and creator of the GPL, is equal parts famous and infamous. But surely there is more to Linux and Free/Open Source software than these three. And indeed there are thousands upon thousands of people toiling away fueling the mighty FOSS engine; here is a small sampling of these important contributors who make the FOSS world go 'round.

Lady Ada, Adafruit Industries

Lady Ada is Limor Fried, electronics engineer and founder of Adafruit Industries. My fellow crusty old-timers remember way way back when Radio Shack was actually about do-it-yourself electronics hacking instead of the passive brain-decay of cell phones and big-screen TVs.

Adafruit Industries is a welcome replacement for us weirdos who like to take things apart and figure out how they work. Adafruit Industries sells Arduino boards, kits, and related parts and tools. Even more valuable is the wealth of well-illustrated tutorials. You can start from scratch, with no electronics knowledge, and get a solid fundamental education in a few days' of reading and hands-on hacking.

Dr. Tony Sales, Vinux

Linux and FOSS should be leading the way in pioneering accessibility for Linux users with disabilities, because good design for disabled people is good design for everyone. One of the best accessibility projects is the Vinux distribution, which aims for out-of-the-box accessibility for visually impaired Linux users, including installation. This is a lot harder than it sounds — try it for yourself.

If you are looking for a way to make a significant contribution to Linux and to tech, consider the field of accessibility. None of us are getting any younger or healthier.

Dick MacInnis, Dream Studio

Dick MacInnis is a musician, composer, and all-around nerd. He created and maintains Dream Studio, a sleek multi-media Ubuntu spinoff for musicians, photographers, movie makers, and all creative artists. It's a super-nice customization that stays out of your way and lets you get down to business.

Akkana Peck, Renaissance Nerd

Akkana is one of my favorite people. She used to race cars and motorcycles; now she flies little radio-controlled airplanes and is into astronomy, mountain biking, kayaking, photography, and all kinds of fun stuff.

Akkana is a versatile and talented coder who has worked at cool-sounding places like Silicon Graphics and Netscape, and currently works for a startup doing embedded Linux and Android work. Akkana wrote the excellent Beginning GIMP book and a bunch of first-rate Linux howtos for Linux Planet. She also writes all kinds of amazing technical articles on her Shallow Sky blog. What earned Akkana a place on this list is her generosity in sharing knowledge and helping other Linux users. Learning, doing, and sharing – isn't that what it's all about?

John Linville, Linux Wireless

The Linux Wireless project is a model that more FOSS projects should emulate. Back around 2006 or so kernel developer John Linville and his team took on the task of overhauling the Linux wireless stack. It was a mess of multiple wireless subsystems (Wavelan, Orinoco, and MadWifi). Drivers were all over the map in what functions they handled, sometimes conflicting with the kernel.

In just a couple of years, without fanfare, it was all significantly streamlined and improved, with a common driver base (mac80211) and assistance for vendors and end users. There are still some odds and ends to be worked out, but it's at the stage where most wireless network interfaces have plug-and-play native Linux support.

Jean Tourrilhes, Wireless Tools for Linux

Jean Tourrilhes was the core maintainer and primary documenter of the old Linux WLAN drivers and userspace tools. If it were not for Mr. Tourrilhes, Wi-Fi on Linux would have been brutish and nasty. (WLAN and wireless-tools have since been replaced by the new Linux Wireless project.)

JACK

JACK is not a person, but the JACK Audio Connection Kit for Linux. JACK is a professional-level audio server for connecting audio software and hardware, like a switchboard, and brings professional low-latency audio production to Linux. Paul Davis was JACK's original author, and Jack O'Quin, Stephane Letz, Taybin Rutkin, and many other contributors have all added essential features and supported JACK in multiple important ways.

Jon Kuniholm, The Open Prosthetics Project

Jon Kuniholm, an Iraq war veteran who lost part of his arm in the service, is also a biomechanical engineer devoting his talents and open source methods to improving prosthetic limbs, which have advanced far more in cost than in functionality. Decades-old technology shouldn't be priced like it's cutting edge; the project aims to improve functionality and appearance, and make advanced designs available to anyone who wants them.

Linux OEM Vendors

There are doubtless more than the few that I know about, so please feel free to plug your own favorite independent Linux vendor in the comments. System76 and ZaReason are my favorites because they are true independent mom-and-pop shops that sell desktop Linux PCs without drama or excuses, they offer first-rate customer service and customizations without whining, and don't need a year to retool for a new Linux release.

There are other notable Linux OEMs as well.

Greg Kroah-Hartman, Linux Driver Project

Greg Kroah-Hartman launched the Linux Driver Project a few years ago to help vendors get drivers for their devices into the mainline kernel. The project has been a huge success, demonstrating yet again (as with Linux Wireless) that lending a friendly, helpful hand works better than yelling.

Denise Paolucci and Mark Smith, Dreamwidth

Dreamwidth Studios is a fork of LiveJournal by former LiveJournal staffers Denise Paolucci and Mark Smith. It is unusual among FOSS projects in that it has a majority of women developers, and the whole community is known for being friendly and helpful to newcomers.

OpenTox, Cast of Thousands

The OpenTox project, led by coordinator Barry Hardy, is a global data-collection and analysis framework that aims to replace animal testing for chemical interactions and toxicity with predictive computer analysis.

Ken Starks, the Helios Initiative

Ken Starks does the kind of hard, hands-on advocacy that delivers the best results: rehabbing computers with Linux and giving them to children who can't afford to buy their own computers. Since the Helios Project moved into spiffy new quarters in Taylor, Texas they've expanded to building a computer lab and teaching classes.

Walter Bender, Sugar

Walter Bender was one of the chief designers of Sugar, the computer interface for young children that was originally created for the One Laptop per Child XO-1 netbook. When OLPC indicated that it might allow Windows XP on its netbooks, Mr. Bender is credited by some with saving Sugar by leaving OLPC and founding Sugar Labs to continue its development independently. Sugar is included in a number of Linux distributions, including Fedora, Debian, and Mint, and Sugar on a Stick is a complete bootable Sugar system on a USB stick.

Yes, There is a Moral

There is a moral to this story: Linux is more than giant wealthy companies, glamorous celebrity geeks, or an unruly rabble. (Three cheers for unruly rabble!) It is a set of fundamental building blocks that anyone can learn to use to make the world a little bit better.

We know that there are more than a few unsung heroes and heroines of Linux and free software, though. Who do you consider a hero, and why? Stay tuned; we'll have more soon.
