
Friday, February 17, 2012

Weekend Project: Get Started with Tahoe-LAFS Storage Grids

Here's an axiom for every major organization to memorize: if you have data, then you have a data storage problem. But while the cloud service players are happy to compete for your business, you may not need to purchase a solution. A number of open source projects offer flexible ways to build your own distributed, fault-tolerant storage network. This weekend, let's take a look at one of the most intriguing offerings: Tahoe LAFS.


Tahoe is a "Least Authority File System" — the LAFS you often see in concert with its name. The LAFS design is an homage to the security world's "principle of least privilege": simply put, Tahoe uses cryptography and access control to protect your data. Specifically, the host OS on a Tahoe node never has read or write access to any of the data it stores: only authenticated clients can collect and assemble the correct chunks from across the distributed nodes, and decrypt the files.

Beyond that, though, Tahoe offers peer-to-peer distributed data storage with adjustable levels of redundancy. You can tune your "grid" for performance or fault-tolerance, or strike a balance between the two, and you can build your nodes from heterogeneous hardware and service providers, giving you a second layer of protection. Furthermore, although you can use Tahoe-LAFS as a simple distributed filesystem, you can also run web and (S)FTP services directly from your Tahoe grid.

Installation and Testing

The most recent Tahoe release is version 1.9.1, from January 2012. The project provides tarballs for download only, but many Linux distributions now offer the package as well, so check with your package management system first. Tahoe is written in Python, and uses the Twisted framework, as well as an assortment of auxiliary Python libraries (for cryptography and other functions). None are particularly unusual, but if you are installing from source, be sure to double check your dependencies.

Once you have unpacked the source package, execute python ./setup.py build to generate the Tahoe command-line tools, then run python ./setup.py test to execute the installer's sanity-check suite. I found that the Ubuntu package failed to install python-mock, but Tahoe's error messages caught that mistake and allowed me to install the correct library without any additional trouble.

Now that you have the Tahoe tools built, you can connect to the public test grid to get a feel for how the storage system works. This grid is maintained by the project, and is variously referred to as pubgrid or Test Grid. You can experiment with the Tahoe client apps on pubgrid — however, because it is a testbed only, its uptime is not guaranteed, and the maintainers may periodically wipe and rebuild it.

First, run tahoe create-client. This creates a local client-only node on your machine (meaning that it does not offer storage space to the grid), which you will connect to pubgrid by editing the configuration file in ~/.tahoe/tahoe.cfg. Open the tahoe.cfg file and edit the nickname = and introducer.furl = lines.
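
As a sketch, the edited portion of ~/.tahoe/tahoe.cfg might look something like this (the nickname is a made-up example, and the section names follow the Tahoe 1.9-era configuration layout; always verify the current pubgrid FURL on the wiki):

```ini
[node]
nickname = my-test-client

[client]
# The introducer FURL tells this client which grid to join;
# check the Tahoe wiki for the current pubgrid value before pasting.
introducer.furl = pb://tin57bdenwkigkujmh6rwgztcoh7ya7t@pubgrid.tahoe-lafs.org:50528/introducer
```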

The nickname is any moniker you choose for your node. During this testing phase, the name makes no difference, but when deploying a grid, useful names can help you keep better tabs on your nodes' performance and uptime. The "introducer" is Tahoe lingo for the manager node that oversees a grid — keeping track of the participating nodes in a publish/subscribe "hub" fashion. The pubgrid's current "FURL" address is pb://tin57bdenwkigkujmh6rwgztcoh7ya7t@pubgrid.tahoe-lafs.org:50528/introducer — but check the Tahoe wiki before entering it in the configuration file, in case it has changed.

Save your configuration file, then run ./tahoe start at the command line. You're now connected! By default, Tahoe offers a web-based interface running at http://127.0.0.1:3456 ... open that address in your web browser, and you will see both a status page for pubgrid (including the grid IDs of nearby peers), and the controls you need to create your own directories and upload test files.

File Storage and Other Front-Ends

Part of Tahoe's LAFS security model is that the directories owned by other nodes are not searchable or discoverable. When you create a directory (on pubgrid or on any other grid), a unique pseudorandom identifier is generated that you must bookmark or scrawl down someplace where you won't forget it. The project has created a shared public directory on pubgrid at a long, unwieldy URI, which gives you an idea of the hash function used.

You can add directories or files in the shared public directory, or create new directories and upload files of your own. But whenever you do, it is up to you to keep track of the URIs Tahoe generates. You can share Tahoe files with other users by sending them the URIs directly. Note also that whenever you upload a file, you have the option to check a box labeled "mutable." This is another security feature: files created as immutable are write-protected, and can never be altered.

In the default setup, your client-only node is not contributing any local disk space to the grid's shared pool. That setting is controlled in the [storage] stanza of the tahoe.cfg file. Bear in mind that the objects stored on your contribution to the shared pool are encrypted chunks of files from around the grid; you will not be able to inspect their contents. For a public grid, that is something worth thinking about, although when running a grid for your own business or project, it is less of a concern.

If all you need to do is store static content and be assured that it is securely replicated off-site, then the default configuration as used by pubgrid may be all that you need. Obviously, you will want to run your own storage nodes, connected through your own introducer — but the vanilla file and directory structure as exposed through the web GUI will suffice. There are other options, however.

Tahoe includes a REST API in tandem with the human-accessible web front-end. This allows you to use a Tahoe-LAFS grid as the storage engine for a web server. The API exposes standard GET, PUT, POST, and DELETE methods, and supports JSON and HTML output. The API is customized to make Tahoe's long, human-unreadable URIs easier to work with, and provides utilities for operations (such as search) that can take longer on a distributed grid than they would on a static HTTP server.
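
As a rough illustration, requests against the web API address files and directories by capability under the node's /uri/ path. The cap string below is a made-up placeholder, not a real Tahoe cap, and the curl calls are shown only in comments since they require a running node:

```shell
# Sketch of addressing Tahoe's web API; node URL and cap are placeholders.
node=http://127.0.0.1:3456
cap='URI:DIR2:examplewritecap:exampleverifycap'   # hypothetical directory cap

# Listing that directory as JSON would look like (not executed here):
#   curl "$node/uri/$cap?t=json"
# Uploading a file into the directory:
#   curl -T notes.txt "$node/uri/$cap/notes.txt"

url="$node/uri/$cap?t=json"
echo "$url"
```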

There is also an FTP-like front-end, which supports SSH-encrypted SFTP operations, and a command-line client useful for server environments or remote grid administration. Finally, a "drop upload" option is available in the latest builds, which allows Tahoe to monitor an upload directory and automatically copy any new files into the grid.

Running Your Own Grid

The pubgrid is certainly a useful resource for exploring how Tahoe and its various front-ends function, but for any real benefit you need to deploy your own grid. Designing a grid is a matter of planning for the number of storage nodes you will need, determining how to tune the encoding parameters for speed and redundancy, and configuring the special nodes that manage logging, metadata, and other helper utilities.

The storage encoding parameters include shares.needed, shares.total, and shares.happy (all of which are configurable in the tahoe.cfg file). A file uploaded to the grid is encoded into shares.total chunks, distributed across the nodes, such that any shares.needed of them suffice to reconstruct the file; shares.total must therefore be greater than or equal to shares.needed. If they are equal, there is no redundancy.

The third parameter, shares.happy, defines the minimum number of nodes the chunks of any individual file must be spread across. Setting this value too low sacrifices the benefits of redundancy. By default, Tahoe is designed to be tolerant of nodes whose availability comes and goes, not only to cope with failure, but to allow for a truly distributed design where some nodes can be disconnected and not damage the grid as a whole. There is a lot to consider when designing your grid parameters; a good introduction to the trade-offs is hosted on the project's wiki.
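
A quick back-of-the-envelope sketch of the trade-off, using made-up example values (this is plain shell arithmetic, not a Tahoe command):

```shell
# Hypothetical 3-of-10 encoding: any 3 of the 10 shares reconstruct the file.
needed=3      # shares.needed
total=10      # shares.total
happy=7       # shares.happy: spread shares across at least 7 distinct nodes

# The grid tolerates the loss of (total - needed) shares...
tolerated=$((total - needed))
# ...at the cost of storing roughly total/needed times the original size.
expansion=$(( (total * 100) / needed ))

echo "survives loss of $tolerated shares; stores ~${expansion}% of original size"
```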

You can run services — such as the (S)FTP and HTTP front-ends discussed above — on any storage node. But you will also need at least one special node, the introducer node required for clients to make an initial connection to the grid. Introducers maintain a state table keeping track of the nodes in a grid; they look for basic configuration in the [node] section of tahoe.cfg, but ignore all other directives.

To start yours, create a separate directory for it (say, .tahoe-my-introducer/), change into the directory, and run tahoe create-introducer . followed by tahoe start .. When launched, the new introducer creates a file named introducer.furl; this holds the "FURL" address that you must paste into the configuration file on all of your other nodes.
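
A small, hypothetical helper for that last step, which reads a freshly created introducer's FURL and prints the line to paste into every other node's tahoe.cfg (the function name is made up, and the file's exact location can vary by version):

```shell
# Assumes the introducer was created in "$1" with:
#   tahoe create-introducer . && tahoe start .
print_introducer_line() {
  local dir="$1"
  local furl_file="$dir/introducer.furl"
  # Some versions keep the FURL under private/; check there too.
  [ -f "$dir/private/introducer.furl" ] && furl_file="$dir/private/introducer.furl"
  printf 'introducer.furl = %s\n' "$(cat "$furl_file")"
}
```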

You can also (optionally) create helper, key-generator, and stats-gatherer nodes for your grid, to offload some common tasks onto separate machines. A helper node simply assists the chunk replication process, which can be slow if many duplicate chunks are required. You can designate a node as a helper by setting enabled = true under the [helper] stanza in tahoe.cfg.
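
In tahoe.cfg, that helper setting is a short fragment along these lines:

```ini
[helper]
# Opt this node in as a helper for other clients' uploads.
enabled = true
```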

The setup process for key-generators and stats-gatherers is akin to that for introducers: run tahoe create-key-generator . or tahoe create-stats-gatherer . in a separate directory, followed by tahoe start .. Stats gatherers collect logging and statistics data from the nodes in the grid, while key generators simply speed up the computationally expensive process of creating cryptographic tokens. Neither is required, but using them can improve performance.

Setting up a Tahoe LAFS grid to serve your company or project is not a step to be taken lightly — you need to consider the maintenance requirements as well as how the competing speed and security features add up for your use case. But the process is simple enough that you can undertake it in just a few days, and even run multi-node experiments, all with a minimum of fuss.


Friday, February 3, 2012

Weekend project: Zap Your Coworkers' Minds with Multi-Pointer X

Care to rethink your desktop user experience? It may be simpler than you think. Chances are you own more than one pointing device, but for years you've never been able to take full advantage of that hardware, because whenever you plugged in a second hardware mouse, it simply shared control of the same cursor. But with X, it doesn't have to: you can have as many distinct, independent cursors on screen as you have devices to control them. So grab a spare from the parts box in the closet, and get ready for the ol' one-two punch.


What, you don't have a spare mouse? There's a good chance you do have a second pointing device, though — even if it's just your laptop's built-in trackpad alongside a mouse. But this process will work for any pointing hardware recognized by X, including a trackball and a mouse, or even a mouse and a pen tablet. To X, they are all the same. You will need to ensure that you are running the X.org X server 1.7 or later to use the feature in question, Multi-Pointer X (MPX). You almost certainly already are — X.org 1.7 was finalized in 2009, and all mainstream distributions follow X.org closely. Bring up your distribution's package manager and install the libxi and libxcursor development packages as well; they will come in handy later.

What is MPX?

So why don't we use MPX all the time? For starters, most of the time one cursor on screen is all that we need, because there are very few applications that benefit from multiple pointers. But there are several. The simplest use case is for multi-head setups, where two or more separate login sessions run on the same machine, using different video outputs and separate keyboard/mouse combinations. You often see this type of configuration in classrooms and computer labs.

But the more unusual and intriguing alternative is running multiple pointers in a single session, which can simplify tasks that involve lots of manipulating on-screen objects. If you paint with a pressure-sensitive tablet using Krita or MyPaint, for example, a second cursor might allow you to keep your tablet pen on the canvas and alter paint settings or brush dynamics without hopping back-and-forth constantly. The same goes for other creative jobs like animation and 3D modeling: the more control, the easier it gets. Plus, let's be frank, there's always the wow factor — experimenting and showing off what Linux can do that lesser OSes cannot.

The basics

MPX is implemented in the XInput2 extension, which handles keyboards and pointing devices. The package is a core requirement for all desktop systems, but most users don't know that it comes with a handy command-line tool to inspect and alter the X session's input settings. The tool is named xinput, and if you run xinput list, it will print out a nested list of all of your system's active input devices. Mine looks like this:

 

⎡ Virtual core pointer                        id=2    [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer              id=4    [slave  pointer  (2)]
⎜   ↳ Microsoft Microsoft Trackball Explorer® id=9    [slave  pointer  (2)]
⎣ Virtual core keyboard                       id=3    [master keyboard (2)]
    ↳ Virtual core XTEST keyboard             id=5    [slave  keyboard (3)]
    ↳ Power Button                            id=6    [slave  keyboard (3)]
    ↳ Power Button                            id=7    [slave  keyboard (3)]
    ↳ Mitsumi Electric Goldtouch USB Keyboard id=8    [slave  keyboard (3)]

 

As you can see, I have one "virtual" core pointer and a corresponding "virtual" core keyboard, both of which are "master" devices to which the real hardware (the trackball and USB keyboard) are attached as slaves. Normally, whenever you attach a new input device, X attaches it as another slave to the existing (and only) master, so that all of your mice control the same pointer. In order to use them separately, we need to create a second virtual pointer device, then attach our new hardware and assign it as a slave to the second virtual pointer.

From a console, run xinput create-master Secondary to create another virtual device. The command as written will name it "Secondary" but you can choose any name you wish. Now plug in your second pointing device (unless it is already plugged in, of course), and run xinput list again. I plugged in my Wacom tablet, which added three lines to the list:

 

⎜   ↳ Wacom Graphire3 6x8 eraser              id=16   [slave  pointer  (2)]
⎜   ↳ Wacom Graphire3 6x8 cursor              id=17   [slave  pointer  (2)]
⎜   ↳ Wacom Graphire3 6x8 stylus              id=18   [slave  pointer  (2)]

 

The new virtual device also shows up, as a matched pointer/keyboard pair:

⎡ Secondary pointer                           id=10   [master pointer  (11)]
⎜   ↳ Secondary XTEST pointer                 id=12   [slave  pointer  (10)]
⎣ Secondary keyboard                          id=11   [master keyboard (10)]
    ↳ Secondary XTEST keyboard                id=15   [slave  keyboard (11)]

 

That's because so far, X does not know whether we're interested in the keyboard or the pointer. You'll also note that every entry has a unique id, including the new hardware and the new virtual devices. The "cursor" entry for the Wacom tablet is the only one of the three we care about, and its id is 17. Running xinput reattach 17 "Secondary pointer" assigns it as a slave to the Secondary virtual pointer device, and voila — a second cursor appears on screen immediately.

You can start to use both pointers immediately (provided that your hand-eye coordination is up to the task). Grab two icons on the desktop at once, rearrange your windows with both hands, click on menus in two running applications at once.

Cursors, Foiled Again

What you'll soon find out, though, is that life on the bleeding edge comes with some complications. First of all, your mouse cursors look exactly the same, which can be confusing. Secondly, although GTK+3 itself is fully aware of XInput2 and can cope with MPX, individual applications may not be. That means you can uncover a lot of bugs simply by using both pointers at once.

You can tackle the indistinguishable-cursors issue by assigning a different X cursor theme to the second pointer. The normal desktop environments don't support this in their GUI preferences tools, but developer Antonio Ospite wrote a quick utility called xicursorset that fills in the gap. Compiling it requires the libxi and libxcursor development packages mentioned earlier, but it does not require any heavy dependencies, nor do you even need to install it as root — it runs quite happily in your home directory.

To assign a theme to your new cursor, simply run ./xicursorset 10 left_ptr some-cursor-theme-name. In this call, 10 is the id of our "Secondary pointer" entry from above, and left_ptr is the default cursor shape (X changes the cursor to a text-insertion point, resize-tool, and "wait" symbol, among other options, as needed). X.org cursor theme packages are provided by every Linux distribution; the "DMZ" theme is GNOME's default and uses a white arrow.

The simplest option might be to use the DMZ-Black variant: ./xicursorset 10 left_ptr DMZ-Black, but there are far more, including multi-color options. I personally grabbed the DMZ-Red theme from gnome-look.org's X cursor section, to make the auxiliary cursor stand out more.

One thing you will notice about these instructions is that they do not persist between reboots or login sessions. Currently there is no way to configure X.org to remember your wild and crazy cursor setups. I asked MPX author Peter Hutterer what the reasoning was, and he said it simply hadn't been asked for yet. Because MPX usage is limited to a few applications and usage scenarios, most people only use it in a dynamic sense — plugging in a second device when they need it for a particular task.

On the other hand, one of Ospite's readers (an MPX user named Vicente) wrote a small Bash script to automate the process; if you use MPX a lot, it could save you some keystrokes.
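
The core trick in that kind of automation is scraping device ids out of xinput list output. A minimal sketch of the idea (the function name is made up, and the sample line is hard-coded so the sketch runs without an X session):

```shell
# Extract the numeric id of the first device whose name matches $2
# from xinput-list-style output passed in as $1.
device_id_from_list() {
  printf '%s\n' "$1" | grep -F "$2" | sed -n 's/.*id=\([0-9][0-9]*\).*/\1/p' | head -n 1
}

# Canned sample standing in for real `xinput list` output:
sample='Wacom Graphire3 6x8 cursor    id=17   [slave pointer (2)]'
id=$(device_id_from_list "$sample" "Graphire3 6x8 cursor")
echo "$id"
# A real script would then run: xinput reattach "$id" "Secondary pointer"
```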

Multi-Tasking

But, as Hutterer said, apps with explicit support for MPX are pretty limited at this stage. There is a multi-cursor painting application included with Vinput, and the popular multiplayer game World Of Goo can use MPX to enable simultaneous play. There was a dedicated standalone painting application called mpx-paint hosted at GitHub, but the developer's account appears to have been shut down.

Mainstream applications have been slower to pick up on MPX support. Inkscape is a likely candidate, where manipulating two or more control points at once would open up new possibilities. Developer Jon Cruz says it is "on the short list" alongside Inkscape's growing support for extended input devices. Whenever the application completes its port to GTK+3, the MPX support will follow.

Blender has also discussed MPX, but so far the project's main focus has been on six-axis 3D controllers (which are understandably a higher priority). However, for broader adoption we may have to wait. Hutterer himself worked on bringing MPX support to the Compiz window manager, but said that the real sticking point is tackling the "unsolved questions" about window management — which is something the desktop environment will need to tackle. Luckily, the latest X.org release includes multi-touch support — a related technology useful for gestures. As Linux on tablets continues to grow, gestures become an increasingly in-demand option. As applications are adapted to support multi-touch gestures, MPX support should grow as well.

The implications are interesting for a number of tasks, including the gaming and creative applications already discussed. When you throw in the possibility of multiple keyboards, the possibilities get even broader. After all, thanks to network-aware text editors like Gobby and Etherpad, we are getting more comfortable with multiple edit points in our document editors. Imagine what you could do with an IDE that allowed you to edit the code in one window while you interacted with the runtime simultaneously. It can be hard to grasp the implications — but you don't have to dream them all up; with MPX you can experiment today.

]]>


Comming Up:


Asbestos death rates are on the rise, learn what you need to lookout for when it comes to asbestos and mesothelioma.

Weekend project: Zap Your Coworkers' Minds with Multi-Pointer X

Care to rethink your desktop user experience? It may be simpler than you think. Chances are you own more than one pointing device, but for years you've never been able to take full advantage of that hardware, because whenever you plugged in a second hardware mouse, it simply shared control of the same cursor. But with X, it doesn't have to: you can have any many distinct, independent cursors on screen as you have devices to control them. So grab a spare from the parts box in the closet, and get ready for the ol' one-two punch.

What, you don't have a spare mouse? There's a good chance you do have a second pointing device, though — even if it's a mouse and the built-in trackpad on your laptop. But this process will work for any pointing hardware recognized by X, including a trackball and a mouse, or even a mouse and a pen tablet. To X, they are all the same. You will need to ensure that you are running the X.org X server 1.7 or later to use the feature in question, Multi-Pointer X (MPX). You almost certainly already are — X.org 1.7 was finalized in 2009, and all mainstream distributions follow X.org closely. Bring up your distribution's package manager and install the libxi and libxcursor development packages as well; they will come in handy later.
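If you want to double-check, you can ask the running server for its version number. The little helper below is just a sketch; it assumes the standard "X.Org version:" line that xdpyinfo prints in a live session:

```shell
# xorg_ok: succeed when the supplied X.org version string is 1.7 or newer.
xorg_ok() {
  major=${1%%.*}           # text before the first dot
  rest=${1#*.}             # text after the first dot
  minor=${rest%%.*}        # second component, if any
  [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 7 ]; }
}

# Typical use against a live X session (xdpyinfo ships with the X utilities):
#   xorg_ok "$(xdpyinfo | awk '/X.Org version/ {print $NF}')" && echo "MPX-ready"
```

The function just splits the version string on dots and compares the first two components, so it works for strings like "1.7", "1.11.4", or "2.0".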

What is MPX?

Demo of the Vinput demo app

So why don't we use MPX all the time? For starters, most of the time one cursor on screen is all that we need, because there are very few applications that benefit from multiple pointers. But there are several. The simplest use case is for multi-head setups, where two or more separate login sessions run on the same machine, using different video outputs and separate keyboard/mouse combinations. You often see this type of configuration in classrooms and computer labs.

But the more unusual and intriguing alternative is running multiple pointers in a single session, which can simplify tasks that involve lots of manipulating on-screen objects. If you paint with a pressure-sensitive tablet using Krita or MyPaint, for example, a second cursor might allow you to keep your tablet pen on the canvas and alter paint settings or brush dynamics without hopping back-and-forth constantly. The same goes for other creative jobs like animation and 3D modeling: the more control, the easier it gets. Plus, let's be frank, there's always the wow factor — experimenting and showing off what Linux can do that lesser OSes cannot.

The basics

MPX is implemented in the XInput2 extension, which handles keyboards and pointing devices. The package is a core requirement for all desktop systems, but most users don't know that it comes with a handy command-line tool to inspect and alter the X session's input settings. The tool is named xinput, and if you run xinput list, it will print out a nested list of all of your system's active input devices. Mine looks like this:

⎡ Virtual core pointer                        id=2    [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer              id=4    [slave  pointer  (2)]
⎜   ↳ Microsoft Microsoft Trackball Explorer® id=9    [slave  pointer  (2)]
⎣ Virtual core keyboard                       id=3    [master keyboard (2)]
    ↳ Virtual core XTEST keyboard             id=5    [slave  keyboard (3)]
    ↳ Power Button                            id=6    [slave  keyboard (3)]
    ↳ Power Button                            id=7    [slave  keyboard (3)]
    ↳ Mitsumi Electric Goldtouch USB Keyboard id=8    [slave  keyboard (3)]

As you can see, I have one "virtual" core pointer and a corresponding "virtual" core keyboard, both of which are "master" devices to which the real hardware (the trackball and USB keyboard) are attached as slaves. Normally, whenever you attach a new input device, X attaches it as another slave to the existing (and only) master, so that all of your mice control the same pointer. In order to use them separately, we need to create a second virtual pointer device, then attach our new hardware and assign it as a slave to the second virtual pointer.

From a console, run xinput create-master Secondary to create another virtual device. The command as written will name it "Secondary" but you can choose any name you wish. Now plug in your second pointing device (unless it is already plugged in, of course), and run xinput list again. I plugged in my Wacom tablet, which added three lines to the list:

⎜   ↳ Wacom Graphire3 6x8 eraser              id=16   [slave  pointer  (2)]
⎜   ↳ Wacom Graphire3 6x8 cursor              id=17   [slave  pointer  (2)]
⎜   ↳ Wacom Graphire3 6x8 stylus              id=18   [slave  pointer  (2)]

The new virtual device also shows up, as a matched pointer/keyboard pair:

⎡ Secondary pointer                           id=10   [master pointer  (11)]
⎜   ↳ Secondary XTEST pointer                 id=12   [slave  pointer  (10)]
⎣ Secondary keyboard                          id=11   [master keyboard (10)]
    ↳ Secondary XTEST keyboard                id=15   [slave  keyboard (11)]

That's because so far, X does not know whether we're interested in the keyboard or the pointer. You'll also note that every entry has a unique id, including the new hardware and the new virtual devices. The "cursor" entry for the Wacom tablet is the only one of the three we care about, and its id is 17. Running xinput reattach 17 "Secondary pointer" assigns it as a slave to the Secondary virtual pointer device, and voila — a second cursor appears on screen immediately.

You can start to use both pointers immediately (provided that your hand-eye coordination is up to the task). Grab two icons on the desktop at once, rearrange your windows with both hands, click on menus in two running applications at once.
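If you do this often, the lookup-and-reattach dance can be scripted. Here is a minimal sketch; the device name is an assumption, so substitute whatever xinput list reports for your hardware:

```shell
# find_xinput_id: read an `xinput list` dump on stdin and print the
# numeric id of the first entry matching the given device name.
find_xinput_id() {
  grep -F "$1" | sed 's/.*id=\([0-9]*\).*/\1/' | head -n 1
}

# Typical use in a live X session:
#   xinput create-master Secondary
#   id=$(xinput list | find_xinput_id "Wacom Graphire3 6x8 cursor")
#   xinput reattach "$id" "Secondary pointer"
```

The sed expression simply pulls the number out of the id=NN field, so the same helper works for keyboards, too.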

Cursors, Foiled Again

What you'll soon find out, though, is that life on the bleeding edge comes with some complications. First of all, your mouse cursors look exactly the same, which can be confusing. Secondly, although GTK+3 itself is fully aware of XInput2 and can cope with MPX, individual applications may not be. That means you can uncover a lot of bugs simply by using both pointers at once.

You can tackle the indistinguishable-cursors issue by assigning a different X cursor theme to the second pointer. The normal desktop environments don't support this in their GUI preferences tools, but developer Antonio Ospite wrote a quick utility called xicursorset that fills in the gap. Compiling it requires the libxi and libxcursor development packages mentioned earlier, but it does not require any heavy dependencies, nor do you even need to install it as root — it runs quite happily in your home directory.

To assign a theme to your new cursor, simply run ./xicursorset 10 left_ptr some-cursor-theme-name. In this call, 10 is the id of our "Secondary pointer" entry from above, and left_ptr is the default cursor shape (X changes the cursor to a text-insertion point, resize-tool, and "wait" symbol, among other options, as needed). X.org cursor theme packages are provided by every Linux distribution; the "DMZ" theme is GNOME's default and uses a white arrow.

The simplest option might be to use the DMZ-Black variant: ./xicursorset 10 left_ptr DMZ-Black, but there are far more, including multi-color options. I personally grabbed the DMZ-Red theme from gnome-look.org's X cursor section, to make the auxiliary cursor stand out more.

One thing you will notice is that this setup does not persist between reboots or login sessions. Currently there is no way to configure X.org to remember your wild and crazy cursor setups. I asked MPX author Peter Hutterer what the reasoning was, and he said it simply hadn't been asked for yet. Because MPX usage is limited to a few applications and usage scenarios, most people only use it dynamically — plugging in a second device when they need it for a particular task.

On the other hand, one of Ospite's readers (an MPX user named Vicente) wrote a small Bash script to automate the process; if you use MPX a lot, it could save you some keystrokes.

Multi-Tasking

But, as Hutterer said, apps with explicit support for MPX are pretty limited at this stage. There is a multi-cursor painting application included with Vinput, and the popular multiplayer game World of Goo can use MPX to enable simultaneous play. There was a dedicated standalone painting application called mpx-paint hosted at GitHub, but the developer's account appears to have been shut down.

Mainstream applications have been slower to pick up on MPX support. Inkscape is a likely candidate, where manipulating two or more control points at once would open up new possibilities. Developer Jon Cruz says it is "on the short list" alongside Inkscape's growing support for extended input devices. Whenever the application completes its port to GTK+3, the MPX support will follow.

Blender has also discussed MPX, but so far the project's main focus has been on six-axis 3D controllers (which are understandably a higher priority). However, for broader adoption we may have to wait. Hutterer himself worked on bringing MPX support to the Compiz window manager, but said that the real sticking point is tackling the "unsolved questions" about window management — which is something the desktop environment will need to tackle. Luckily, the latest X.org release includes multi-touch support — a related technology useful for gestures. As Linux on tablets continues to grow, gestures become an increasingly in-demand option. As applications are adapted to support multi-touch gestures, MPX support should grow as well.

The implications are interesting for a number of tasks, including the gaming and creative applications already discussed. When you throw in the possibility of multiple keyboards, the possibilities get even broader. After all, thanks to network-aware text editors like Gobby and Etherpad, we are getting more comfortable with multiple edit points in our document editors. Imagine what you could do with an IDE that allowed you to edit the code in one window while you interacted with the runtime simultaneously. In can be hard to grasp the implications — but you don't have to dream them all up; with MPX you can experiment today.




Saturday, January 28, 2012

Weekend Project: Learning Ins and Outs of Arduino

Arduino is an open embedded hardware and software platform designed for rapid creativity. It's both a great introduction to embedded programming and a fast track to building all kinds of cool devices like animatronics, robots, fabulous blinky things, animated clothing, games, your own little fabs... you can build what you imagine. Follow along as we learn both embedded programming and basic electronics.

What Does Arduino Do?

Arduino was invented by Massimo Banzi, a self-taught electronics guru who has been fascinated by electronics since childhood. Mr. Banzi had what I think of as a dream childhood: endless hours spent dissecting, studying, re-assembling things in creative ways, and testing to destruction. Mr. Banzi designed Arduino to be friendly and flexible to creative people who want to build things, rather than a rigid, overly-technical platform requiring engineering expertise.

The microprocessor revolution has removed a lot of barriers for newcomers, and considerably sped up the pace of iteration. In the olden days, building electronic devices meant connecting wires and components, and even small changes were time-consuming hardware modifications. Now a lot of electronics functionality has moved into software, and changes are made in code.

Arduino is a genuinely interactive platform (not fake interactive like clicking dumb stuff on Web pages) that accepts different types of inputs, and supports all kinds of outputs: motion detector, touchpad, keyboard, audio signals, light, motors... if you can figure out how to connect it you can make it go. It's the ultimate low-cost "what-if" platform: What if I connect these things? What if I boost the power this high? What if I give it these instructions? Mr. Banzi calls it "the art of chance." Figure 1 shows an Arduino Uno; the Arduino boards contain a microprocessor and analog and digital inputs and outputs. There are several different Arduino boards.

You'll find a lot of great documentation online at Arduino and Adafruit Industries, and Mr. Banzi's book Getting Started With Arduino is a must-have.

Packrats Are Good

The world is over-full of useful garbage: circuit boards, speakers, motors, wiring, enclosures, video screens, you name it, our throwaway society is a do-it-yourselfer's paradise. With some basic skills and knowledge you can recycle and reuse all kinds of electronics components. Tons of devices get chucked into landfills because a five-cent part like a resistor or capacitor failed. As far as I'm concerned this is found money, and a great big wonderful playground. At the least having a box full of old stuff gives you a bunch of nothing-to-lose components for practice and experimentation.

The Arduino IDE

The Arduino integrated development environment (IDE) is a beautiful creation. The Arduino programming language is based on the Processing language, which was designed for creative projects. It looks a lot like C and C++. The IDE compiles and uploads your code to your Arduino board; it is fast and you can make and test a lot of changes in a short time. An Arduino program is called a sketch. See Installing Arduino on Linux for installation instructions.

Figure 2: A sketch loaded into the Arduino IDE.

Hardware You Need

You will need to know how to solder. It's really not hard to learn how to do it the right way, and the Web is full of good video howtos. It just takes a little practice and decent tools. Get yourself a good variable-heat soldering iron and 60/40 rosin core lead solder, or 63/37. Don't use silver solder unless you know what you're doing, and lead-free solder is junk and won't work right. I use a Weller WLC100 40-Watt soldering station, and I love it. You're dealing with small, delicate components, not brazing plumbing joints, so having the right heat and a little finesse make all the difference.

Another good tool is a lighted magnifier. Don't be all proud and think your eyesight is too awesome for a little help; it's better to see what you're doing.

Adafruit industries sells all kinds of Arduino gear, and has a lot of great tutorials. I recommend starting with these hardware bundles because they come with enough parts for several projects:

Adafruit ARDX – v1.3 Experimentation Kit for Arduino. This has an Arduino board, solderless breadboard, wires, resistors, blinky LEDs, USB cable, a little motor, experimenter's guide, and a bunch more goodies. $85.00.

9-volt power supply. Seven bucks. You could use batteries, but batteries lose strength as they age, so you don't get a steady voltage.

Tool kit that includes an adjustable-temperature soldering iron, digital multimeter, cutters and strippers, solder, vise, and a power supply. $100.

Other good accessories are an anti-static mat and a wrist grounding strap. These little electronics are pretty robust and don't seem bothered by static electricity, but it's cheap insurance in a high-static environment. Check out the Shields page for more neat stuff like the Wave audio shield for adding sound effects to an Arduino project, a touchscreen, a chip programmer, and LED matrix boards.

Essential Electric Terminology

Let's talk about voltage (V), current (I), and resistance (R), because there is much confusion about these. Voltage is measured in volts, current is measured in amps, and resistance is measured in ohms. Electricity is often compared to water because they behave similarly: voltage is like water pressure, current is like flow rate, and resistance is akin to pipe diameter. If you increase the voltage, you also increase the current. A bigger pipe allows more flow. If you decrease the pipe size, you increase resistance.

Figure 3: Circuit boards are cram-full of resistors. You will be using lots of resistors.

Talk is cheap, so take a look at Figure 3. This is an old circuit board from a washing machine. See the stripey things? Those are resistors. All circuit boards have gobs of resistors, because these control how much current flows over each circuit. The power supply always pushes out more power than any individual circuit can handle, because it has to supply multiple circuits. So there are resistors on each circuit to throttle the current down to a level that circuit can safely handle.

Again, there is a good water analogy — out here in my little piece of the world we use irrigation ditches. The output from the ditch is too much for a single row of plants, because its purpose is to supply multiple rows of plants with water. So we have systems of dams and diverters to restrict and guide the flow.

In your electronic adventures you're going to be calculating resistor sizes for your circuits, using the formula R (resistance) = V (voltage) / I (current). This is known as Ohm's Law, named for physicist Georg Ohm who figured out all kinds of neat things and described them in math for us to use. There are nice online calculators, so don't worry about getting it right all by yourself.
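To see the formula at work on a hypothetical circuit: suppose you want to drive an LED that drops about 2 volts at 20 milliamps from a 5-volt supply. R = V / I applies to the voltage left over after the LED's drop, and you can check the arithmetic right from a shell:

```shell
# R = V / I, applied to the leftover voltage in a hypothetical LED circuit:
# a 5 V supply, an LED dropping about 2 V, and a target current of 20 mA.
awk 'BEGIN {
  supply  = 5       # volts
  drop    = 2       # volts consumed by the LED itself
  current = 0.02    # amps (20 mA)
  printf "resistor: %.0f ohms\n", (supply - drop) / current
}'
```

That prints "resistor: 150 ohms"; in practice you round up to the next standard resistor value you have on hand.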

That's all for now. In the next tutorial, we'll learn about loading and editing sketches, and making your Arduino board do stuff.


Weekend Project: Loading Programs Into Arduino

In last week's Weekend Project, we learned what the Arduino platform is for, and what supplies and skills we need on our journey to becoming Arduino aces. Today we'll hook up an Arduino board and program it to do stuff without even having to know how to write code.

Advice From A Guru

Excellent reader C.R. Bryan III (C.R. for short) sent me a ton of good feedback on part 1, Weekend Project: Learning Ins and Outs of Arduino. Here are some selected nuggets:

 

Solder: 63/37 is better, especially for beginning assemblers, because that alloy minimizes the paste phase, where the lead has chilled to solid but the tin hasn't, thus minimizing the chance of movement fracturing the cooling joint and causing a cold solder joint. I swear by Kester "44" brand solder as the best stuff for home assembly and rework.

Wash your hands after handling lead-bearing solder and before touching anything else.

Xuron Micro Shears make great flush cutters.

Stripping down and re-using old electronic components is a worthy and useful skill, and rewards careful solder-sucking skills (melting and removing old solder). Use a metal-chambered, spring-loaded solder sucker, not the squeeze-bulb or cheapo plastic models.

Solder and de-solder in a well-ventilated area. A good project for your new skills is to use an old computer power supply to run a little PC chassis fan on your electronics workbench.

 

Connecting the Arduino

Consult Installing Arduino on Linux for installation instructions, if you haven't already installed it. Your Linux distribution might already include the Arduino IDE; for example, Fedora, Debian, and Ubuntu all include the Arduino software, though Ubuntu is way behind and does not have the latest version, which is 1.0. It's no big deal to install it from source; just follow the instructions for your distribution.

Now let's hook it up to a PC so we can program it. One of my favorite Arduino features is that it connects to a PC via USB instead of a serial port, as so many embedded widgets do. The Arduino can draw its power from the USB port, if you're using a fully-powered port and not an unpowered shared hub. It can also run from an external power supply, such as a 9V AC-to-DC adapter with a 2.1mm barrel plug and a positive tip.

Let's talk about power supplies for a moment. A nice universal AC-to-DC adapter like the Velleman Compact Universal DC Adapter Power Supply means you always have the right kind of power. This delivers 3-12 volts and comes with 8 different tips, and has adjustable polarity. An important attribute of any DC power supply is polarity. Some devices require a certain polarity (either positive or negative), and if you reverse it you can fry them. (I speak from sad experience.) Look on your power supply for one of the symbols in Figure 1 to determine its polarity.

Figure 1 shows how AC-to-DC power adapter polarity is indicated by these symbols. These refer to the tip of the output plug.

Getting back to the Arduino IDE: Go to Tools > Board and click on your board. Then in Tools > Serial Port select the correct TTY device. Sometimes it takes a few minutes for it to display the correct one, which is /dev/ttyACM0, as shown in Figure 2. This is the only part I ever have trouble with on new Arduino IDE installations.

Figure 2: Selecting the /dev/ttyACM0 serial device.
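You can also spot the device from a shell prompt before the IDE catches up. The ttyACM*/ttyUSB* names below cover the common Arduino USB chips, but your board may register differently:

```shell
# List candidate serial devices; most Arduino boards enumerate as ttyACM*
# or ttyUSB*. If nothing matches, the board isn't detected yet.
ls /dev/ttyACM* /dev/ttyUSB* 2>/dev/null || echo "no board detected yet"
```

Plugging the board in and running this before and after is a quick way to see which device node is yours.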

Most Arduino boards have a built-in LED connected to digital output pin 13; Figure 3 shows the onboard LED wired to pin 13. But are we not nerds? Let's plug in our own external LED. Note how the LED leads are different lengths: the long lead is the anode, or positive lead, and the short lead is the cathode, or negative lead.

Plug the anode into pin 13 and the cathode into the ground. Figure 4 shows the external LED in place and all lit up.

Loading a Sketch

The Arduino IDE comes with a big batch of example sketches. Arduino's "hello world"-type sketch is called Blink, and it makes an LED blink.

 

/*
  Blink
  Turns an LED on for one second, then off for one second, repeatedly.

  This example code is in the public domain.
*/

void setup() {
  // initialize the digital pin as an output.
  // Pin 13 has an LED connected on most Arduino boards:
  pinMode(13, OUTPUT);
}

void loop() {
  digitalWrite(13, HIGH);   // set the LED on
  delay(1000);              // wait for a second
  digitalWrite(13, LOW);    // set the LED off
  delay(1000);              // wait for a second
}

Open File > Examples > Basics > Blink. Click the Verify button to check syntax and compile the sketch. Just for fun, type in something random to create an error and then click Verify again. It should catch the error and tell you about it. Figure 5 shows what a syntax error in your code looks like when you click the Verify button.

Remove your error and click the Upload button. (Any changes you make will not be saved unless you click File > Save.) This compiles and uploads the sketch to the Arduino. You'll see all the Arduino onboard LEDs blink, and then the red LED will start blinking in the pattern programmed by the sketch. If your Arduino has its own power, you can unplug the USB cable and it will keep blinking.

Editing a Sketch

Now it gets über-fun, because making and uploading code changes is so fast and easy you'll dance for joy. Change pin 13 to 12 in the sketch. Click Verify, and when that runs without errors click the Upload button. Move the LED's anode to pin 12. It should blink just like it did in pin 13.

Now let's change the blink duration to 3 seconds:

 

digitalWrite(13, HIGH);   // set the LED on
delay(3000);              // wait for three seconds

 

Click Verify and Upload, and faster than you can say "Wow, that is fast" it's blinking in your new pattern. Watch your Arduino board when you click Upload, because you'll see the LEDs flash as it resets and loads the new sketch. Now make it blink in multiple durations, and note how I improved the comments:

 

void loop() {
  digitalWrite(13, HIGH);   // set the LED on
  delay(3000);              // on for three seconds
  digitalWrite(13, LOW);    // set the LED off
  delay(1000);              // off for a second
  digitalWrite(13, HIGH);   // set the LED on
  delay(500);               // on for a half second
  digitalWrite(13, LOW);    // set the LED off
  delay(1000);              // off for a second
}

 

Always comment your code. You will forget what your awesome codes are supposed to do, and it helps you clarify your thinking when you write things down.

Now open a second sketch, File > Examples > Basics > Fade.

 

/*
  Fade
  This example shows how to fade an LED on pin 9 using the analogWrite() function.

  This example code is in the public domain.
*/

int brightness = 0;    // how bright the LED is
int fadeAmount = 5;    // how many points to fade the LED by

void setup() {
  // declare pin 9 to be an output:
  pinMode(9, OUTPUT);
}

void loop() {
  // set the brightness of pin 9:
  analogWrite(9, brightness);

  // change the brightness for next time through the loop:
  brightness = brightness + fadeAmount;

  // reverse the direction of the fading at the ends of the fade:
  if (brightness == 0 || brightness == 255) {
    fadeAmount = -fadeAmount;
  }

  // wait for 30 milliseconds to see the dimming effect
  delay(30);
}

 

Either move the anode of your LED to pin 9, or edit the sketch to use whatever pin you want to use. Click Verify and Upload, and your LED will fade in and out. Note how the Blink sketch is still open; you can open multiple sketches and quickly switch between them.

Now open File > Examples > Basics > BareMinimum:

 

void setup() {
  // put your setup code here, to run once:
}

void loop() {
  // put your main code here, to run repeatedly:
}

 

This doesn't do anything. It shows the minimum required elements of an Arduino sketch, the two functions setup() and loop(). The setup() function runs first and initializes variables, libraries, and pin modes. The loop() function is where the fun stuff happens, the blinky lights or motors or sensors or whatever it is you're doing with your Arduino.

You can go a long way without knowing much about coding, and you'll learn a lot from experimenting with the example sketches, but of course the more you know the more you can do. Visit the Arduino Learning page to get detailed information on Arduino's built-in functions and libraries, and to learn more about writing sketches. In our final part of this series we'll add a Wave audio shield and a sensor, and make a scare-kitty-off-the-kitchen-counter device.


Friday, January 27, 2012

Weekend Project: Take Control of Vim's Color Scheme

Vim's syntax highlighting and color schemes can be a blessing, or a curse, depending on how they're configured. If you're a Vim user, let's take a little time to explore how you can make the most of Vim's color schemes.

One of the things that I love about Vim is syntax highlighting. I spend a lot of time working with HTML, for instance, and the syntax highlighting makes it much easier to spot errors. However, there are times when the default Vim colors and the default terminal colors don't play well with one another.

Consider Figure 1, which shows Vim's default highlighting with the default colors in Xfce's Terminal Emulator. Ouch. The comments are in a dark blue and the background is black. You can sort of read it, but just barely.

So what we'll cover here is how to switch between Vim's native color schemes, and how to modify color schemes.

Installing Vim Full

Before we get started, you'll need to make sure that you install the full Vim package rather than the vim-tiny that many distributions ship by default. The reason that many distributions ship a smaller Vim package is that the vim-tiny package has enough of Vim for vi-compatibility for system administration (if necessary), but it saves space for the users who aren't going to be editing with Vim all the time.

Odds are, if you're using Vim regularly enough to care about the color schemes, you've done this already. If you're on Ubuntu, Linux Mint, etc. then you can grab the proper Vim package with sudo apt-get install vim.

Changing Vim Colors

Before we go trying to edit color schemes, let's just try some of Vim's default schemes. You may not know this, but you can rotate through a bunch of default schemes using the :color command. Make sure you're out of insert mode (hit Esc) and then type the :color command but hit the Tab button instead of Enter.

You should see :color blue. If you hit Tab again, you should see :color darkblue. Find a color you think might work for you and hit Enter. Boom! You've got a whole new color scheme.
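If you'd rather see the whole menu at once, you can list the scheme files from a shell prompt. This is just a sketch: the exact version directory (vim72, vim73, and so on) varies by distribution and Vim release, which is why it's globbed here:

```shell
# Print the names Vim will offer at :color <Tab>: every *.vim file in the
# system and per-user colors directories, with the extension stripped.
ls /usr/share/vim/vim*/colors/ ~/.vim/colors/ 2>/dev/null \
  | grep '\.vim$' | sed 's/\.vim$//' | sort -u
```

Each name printed is a valid argument to the :color command.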

For the record, I find that the "elflord" color scheme works pretty well in a dark terminal. If your terminal is set to a white background, I like the default scheme or "evening" scheme. (The evening scheme sets a dark background anyway, though.)

Where Colors Live

On Linux Mint, Debian, etc. you'll find the color schemes under /usr/share/vim/vimNN/colors. (Where NN is the version number of Vim, like vim72 or whatever.) But you can also store new color schemes under ~/.vim/colors. So if you find something like the Wombat color scheme you can plop that in.
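Dropping a downloaded scheme into place is a one-time copy. The filename and download location below are placeholders; use whatever scheme file you actually grabbed:

```shell
# Create the per-user Vim colors directory and install a downloaded
# scheme into it. wombat.vim and ~/Downloads are assumptions.
mkdir -p ~/.vim/colors
scheme="$HOME/Downloads/wombat.vim"
[ -f "$scheme" ] && cp "$scheme" ~/.vim/colors/ || true
```

After that, :color wombat (and colorscheme wombat in your ~/.vimrc) will find it.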

You might also want to take a look at the Vivify tool, which helps you modify color schemes and see the effects. Then you can download the file and create your own scheme.vim.

Now, if you don't want to be setting the color scheme every single time you open Vim, you just need to add this to your ~/.vimrc:

colorscheme oceandeep

It's just that easy.

If you want to create your own color scheme, I recommend using something like Vivify. The actual color scheme files can be very tedious to edit.


