Wednesday, October 13, 2010

Ritalin for your $PS1

In my last post I shared my colorful but otherwise inert bash prompt. Pedro Melo extended it to integrate __git_ps1 with extra coloring using git status.

Unfortunately this can take a long while on large source trees (git status needs to scan the directory structure), and while smart bash prompts are handy, it can be frustrating if every prompt incurs a delay.

Of course this is annoying for any overzealous $PROMPT_COMMAND, not just git status based ones.

My version of his version has a small trick to add simple but effective throttling:

_update_prompt () {
    if [ -z "$_dumb_prompt" ]; then

        # if $_dumb_prompt isn't set, do something potentially expensive, e.g.:
        git status --porcelain | perl -ne 'exit(1) if /^ /; exit(2) if /^[?]/'

        case "$?" in
             # handle all the normal cases
             ...
             # but also add a case for exit due to SIGINT
             "130" ) _dumb_prompt=1 ;;
        esac
    else
        # in this case the user asked the prompt to be dumbed down
       ...
    fi
}

# helper commands to explicitly change the setting:

dumb_prompt () {
    _dumb_prompt=1
}

smart_prompt () {
    unset _dumb_prompt
}

If the prompt is taking too long to show up I simply hit ^C and my $PROMPT_COMMAND becomes a quicker dumbed down version for the current session.

Tuesday, October 12, 2010

Headless VirtualBox

This being the second time I've set this stuff up, I thought it's worth documenting my VirtualBox development workflow.

A Decent Hacking Environment

The OSX side of my laptop is working pretty smoothly. I've got my stack of tools configured to my liking, from my shell, to my editor, to my documentation browser. I've spent years cargo culting all my dotfiles.

But pick my brain any day and I'll give you a mouthful about Apple and OSX. I also know that there are superior alternatives to most of my software stack.

That said, even if I'm not entirely happy with the setup, I'm definitely content with it, and I have no plans on learning anything new to gain a 3% efficiency in the way I type in text or customize the way I waste time online.

Part of the reason I use OSX is that there is no hope (and therefore no temptation) in trying to fix little annoyances, something that led me to sacrifice countless hours during the brief period of time when I had a fully open source desktop environment.

However, when it comes to installing and configuring various project dependencies (daemons, libraries, etc), OSX can be a real pain compared to a decent Linux distribution.

A Decent Runtime Environment

Disk space is cheap, and virtualization has come a long way in recent years, so it really makes a lot more sense to run my code on a superior platform. One image per project also gives me brainless sandboxing, and snapshots mean I can quickly start over when I break everything.

Sweetening the deal even more, I always seem to be surrounded by people who know how to properly maintain a Debian environment much better than I could ever hope to, so I don't even have to think about how to get things right.

Bridging the Gap

In order to make it easy to use both platforms simultaneously, with cheap context switching (on my wetware, that is), I've written a script that acts as my sole entry point to the entire setup.

I don't use the VirtualBox management GUI, and I run the Linux environment completely headless (not just sans X11, without a virtual terminal either).

I hard link the following script in ~/bin, once per VM. To get a shell on a VM called blah, I just type blah into my shell prompt and hit enter:

#!/bin/bash

VM="$( basename "$0" )"

if [ -n "$1" ]; then
    # explicit control of the VM, e.g. `blah stop`
    # useful commands are 'pause', 'resume', 'stop', etc
    case "$1" in
        status) VBoxManage showvminfo "$VM" | grep -i state ;;
        *)      VBoxManage controlvm "$VM" ${1/stop/acpipowerbutton} ;; # much easier to type
    esac
else
    # otherwise just make sure it's up and provide a shell

    # boot the virtual machine in headless mode unless it's already running
    # note that there is a race condition if the machine is in the process of
    # powering down
    VBoxManage showvminfo --machinereadable "$VM" | grep -q 'VMState="running"' || \
    VBoxManage startvm "$VM" -type vrdp;

    # each VM has an SSH config like this:

    # Host $VM
    #     Hostname localhost
    #     Port 2222 # VBoxManage modifyvm "$VM" --natpf1 ...

    # changing ssh port forwarding doesn't require restarting the VM (whereas
    # fiddling with VirtualBox port forwarding does). The following section
    # should probably just be a per VM include, but for my needs it does the
    # job as is.

    # ControlMaster works nicely with a global 'ControlPath /tmp/%r@%h:%p' in
    # my ~/.ssh/config this means the port forwarding stays up no matter how
    # many shells I open and close (unlike ControlMaster auto in the config)

    # this loop quietly waits till sshd is up
    until nc -z localhost 3000 >/dev/null; do
        echo -n "."
        ssh -N -f -q \
            -L 3000:localhost:3000 \
            -o ConnectTimeout=1 \
            -o ControlMaster=yes \
            "$VM" && echo;
    done

    # finally, start a shell
    exec ssh "$VM"
fi

Once I'm in, I also have my code in a mount point under my home directory. I set up a shared folder using VirtualBox's management GUI (installing the VirtualBox guest additions like this). To mount it automatically I've got this in /etc/fstab on the guest OS:

# <file system>  <mount point>               <type>  <options>                       <dump> <pass>
some_dir         /home/nothingmuch/some_dir  vboxsf  uid=nothingmuch,gid=nothingmuch   0      0

I use the same path on the OSX side and the Linux side to minimize confusion. I decided not to mount my entire home directory because I suspect most of my dotfiles aren't that portable, and I'm not really running anything but Perl and services on the Debian side.

I use all of my familiar tools on OSX, and instantly run the code on Debian without needing to synchronize anything.

When I'm done, blah stop will shut down the VM cleanly.

Finally, as a bonus, my bash prompt helps keep my confusion to a minimum when sshing all over the place.

Wednesday, October 6, 2010

Hire Me

I'm starting my B.A. at Ben Gurion University (Linguistics & Philosophy), and I'm looking for part time work (1-2 days a week) either telecommuting or in or around Beer Sheva or Tel Aviv.

If you're looking for a developer with strong and diverse technical skills, who is able to work either independently or in a team, feel free to contact me.

My CV is available on my website.

Note that I'm not really looking for contract work unless it may lead to part time employment as a salaried employee, as the overheads of being a freelancer are quite high (both financially and temporally).

Sunday, September 19, 2010

Moose has won

Stevan has always characterized Moose as a disruptive technology.

Pre-Moose metaprogramming has a long history, but you were pretty much stuck rolling your own metamodel back then.

Moose changed this by providing extensible class generation. It tries to create a metamodel in which several specialized metamodels can coexist and work together, even on the same class.

Case in point, a little over a week ago Franck Cuny announced his new SPORE project.

SPORE aims to make using REST services much easier by generating a lot of the code that deals with the transport layer and presenting the data from the REST service through simple OO methods.

In this context, what's interesting about SPORE is the way it leverages Moose to do that.

SPORE extends Moose's metamodel objects, specifically the object that represents methods in a class, to create the bridge between the appealing sugar layer (a simple 1:1 mapping between HTTP requests and method calls) and the underlying HTTP client.

Take a look at the Net::HTTP::Spore::Meta::Method class. This is the essence of the sugar layer, bridging the REST client with the sleek OO interface.

Compared with SOAP::Lite (not that the comparison is very fair), SPORE is a far simpler implementation that offers more (e.g. middlewares), even if you ignore the parts of SOAP::Lite that don't apply to SPORE.

Moose made it viable to design such projects "properly", without inflating the scope of the project. In fact, using Moose like this usually reduces the amount of code dramatically.

Before Moose writing a REST toolkit with a similar metaclass based design would be overengineering a simple idea to death. The project would probably never be truly finished due to the competing areas of focus (the metamodel vs. the HTTP client vs. high level REST features).

The alternative design approach is a hand rolled stack that does the bare minimum required for each step. This might do the job, and it probably gets finished on time, but the code is inherently brittle. It's hard to reuse the different parts because they don't stand alone. Most pre-Moose metaprogramming on the CPAN falls into this category.

KiokuDB is another example. Without Moose it's actually quite useless: it can't deal with more than a handful of classes out of the box. Sure, you could specify the appropriate serialization for every class you want to store, but at that point the design just doesn't make sense anymore; the limitations would make it unusable in practice.

Being able to assume that Moose introspection would be available for most objects stored in the database allowed me to remove all guesswork from the serialization, while still providing an acceptable user experience (it's very rare to need a custom typemap entry in practice).

This shortcut automatically reduced the scope of the project immensely, and allowed me to focus on the internals. The only thing that really separates KiokuDB from its predecessors is that I could build on Moose.

I'm really glad to see how Moose has literally changed the way we approach this set of problems. The MIT approach is now a sensible and pragmatic choice more often than before; or in other words we get a cleaner and more reusable CPAN for the same amount of effort.

Wednesday, July 7, 2010

Are we ready to ditch string errors?

I can't really figure out why I'm not in the habit of using exception objects. I seem to only reach for them when things are getting very complicated, instead of by default.

I can rationalize that they are better, but it just doesn't feel right to do this all the time.

I've been thinking about what possible reasons (perhaps based on misconceptions) are preventing me from using them more, but I'm also curious about others' opinions.

These are the trouble areas I've managed to think of:

  • Perl's built in exceptions are strings, and everybody is already used to them. [1]
  • There is no convention for inspecting error objects. Even ->isa() is messy when the error could be a string or an object.[2]
  • Defining error classes is a significant barrier, you need to stop, create a new file, etc. Conversely, universal error objects don't provide significant advantages over strings because they can't easily capture additional data apart from the message.[3]
  • Context capture/reporting is finicky
    • There's no convention like croak for exception objects.
    • Where exception objects become useful (for discriminating between different errors), there are usually multiple contexts involved: the error construction, the initial die, and every time the error is rethrown is potentially relevant. Perl's builtin mechanism for string mangling is shitty, but at least it's well understood.
    • Exception objects sort of imply the formatting is partly the responsibility of the error catching code (i.e. full stack or not), whereas Carp and die $str leave it to the thrower to decide.
    • Using Carp::shortmess(), Devel::StackTrace->new and other caller futzery to capture full context information is perceived as slow.[4]
  • Error instantiation is slower than string concatenation, especially if a string has to be concatenated for reporting anyway.[5]

[1] I think the real problem is that most core errors worth discriminating are usually not thrown at all, but actually written to $! which can be compared as an error code (see also %! which makes this even easier, and autodie which adds an error hierarchy).

The errors that Perl itself throws, on the other hand, are usually not worth catching (typically they are programmer errors, except for a few well known ones like Can't locate Foo.pm in @INC).

Application level errors are a whole different matter though, they might be recoverable, some might need to be silenced while others pass through, etc.

[2] Exception::Class has some precedent here, its caught method is designed to deal with unknown error values gracefully.

[3] Again, Exception::Class has an elegant solution, adhoc class declarations in the use statement go a long way.
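
For reference, this is roughly what the Exception::Class approach looks like; the class names and the path field here are made up:

use Exception::Class (
    'MyApp::Error',
    'MyApp::Error::NotFound' => {
        isa    => 'MyApp::Error',
        fields => ['path'],
    },
);

eval {
    MyApp::Error::NotFound->throw(
        error => "no such template",
        path  => "/some/made/up/path",
    );
};

my $e;
if ( $e = Exception::Class->caught('MyApp::Error::NotFound') ) {
    # a structured error; its fields are real accessors
    warn "missing: " . $e->path;
}
elsif ( $e = Exception::Class->caught() ) {
    # anything else (an unrelated object or a plain string) is rethrown
    ref $e ? $e->rethrow : die $e;
}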

[4] XS based stack capture could easily make this a non issue (just walk the cxstack and save pointers to the COPs of appropriate frames). Trace formatting is another matter.

[5] I wrote a small benchmark to try and put the various runtime costs in perspective.

Solutions

Here are a few ideas to address my concerns.

A die replacement

First, I see merit for an XS based error throwing module that captures a stack trace and the value of $@ using a die replacement. The error info would be recorded in SV magic and would be available via an API.

This could easily be used on any exception object (but not strings, since SV magic is not transitive), without weird globals or something like that.

It could be mixed into any exception system by exporting die, overriding a throw method or even by setting CORE::GLOBAL::die.

A simple API to get caller information from the captured COP could provide all the important information that caller would, allowing existing error formatters to be reused easily.

This would solve any performance concerns by decoupling stack trace capturing from trace formatting, which is much more complicated.

The idea is that die would not merely throw the error, but also tag it with context info, that you could then extract.

Here's a bare bones example of how this might look:

use MyAwesomeDie qw(die last_trace all_traces previous_error); # tentative
use Try::Tiny;

try {
 die [ @some_values ]; # this is not CORE::die
} catch {
 # gets data out of SV magic in $_
 my $trace = last_trace($_);

 # value of $@ just before dying
 my $prev_error = previous_error($_);

 # prints line 5 not line 15
 # $trace probably quacks like Devel::StackTrace
 die "Offending values: @$_" . $trace->as_string;
};

And of course error classes could use it on $self inside higher level methods.

Throwable::Error sugar

Exception::Class got many things right but a Moose based solution is just much more appropriate for this, since roles are very helpful for creating error taxonomies.

The only significant addition I would make is some sort of sugar layer to lazily build a message attribute using a simple string formatting DSL.

I previously thought MooseX::Declare would be necessary for something truly powerful, but I think that can be put on hold for a version 2.0.
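
To make the idea concrete, here's roughly the kind of class I'd want that sugar to boil down to. This is a hand written sketch using plain Moose on top of Throwable::Error, not an existing module, and the class name and path field are invented:

package MyApp::Error::NotFound;
use Moose;

extends 'Throwable::Error';

has path => ( is => 'ro', isa => 'Str', required => 1 );

# the message is derived lazily from the structured fields, so the
# throwing code never concatenates strings up front
has '+message' => (
    lazy    => 1,
    default => sub { "no such file: " . $_[0]->path },
);

__PACKAGE__->meta->make_immutable;

# elsewhere:
# MyApp::Error::NotFound->throw( path => "/some/made/up/path" );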

A library for exception formatting

This hasn't got anything to do with the error message, that's the responsibility of each error class.

This would have to support all of the different styles of error printing we can have with error strings (i.e. die, croak with and without $Carp::Level futzing, confess...), but also allow recursively doing this for the whole error stack (previous values of $@).

Exposed as a role, the base API should complement Throwable::Error quite well.
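
I don't have a concrete API in mind yet, but the rough shape might be something like this entirely hypothetical role, which assumes only the message and stack_trace methods that Throwable::Error already provides:

package MyApp::Role::ErrorFormatting;
use Moose::Role;

requires 'message';       # the error class owns the message itself
requires 'stack_trace';   # e.g. provided by Throwable::Error

# this role only decides how much context gets printed along with it
sub format_error {
    my ( $self, %opts ) = @_;

    my $out = $self->message;

    $out .= "\n" . $self->stack_trace->as_string
        if $opts{with_trace};

    return $out;
}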

Obviously the usefulness should extend beyond plain text, because dealing with all that data is a task better suited for an IDE or a web app debug screen.

Therefore, things like code snippet extraction or other goodness might be nice to have in a plugin layer of some sort, but it should be easy to do this for errors of any kind, including strings (which means parsing as much info from Carp traces as possible).

Better facilities for inspecting objects

Check::ISA tried to make it easy to figure out what object you are dealing with.

The problem is that it's ugly: it exports an inv routine instead of a more intuitive isa. It's now possible to go with isa as long as namespace::clean is used to remove it, so it isn't accidentally called as a method.

Its second problem is that it's slow, but it would be very easy to make it comparable in performance to the totally wrong UNIVERSAL::isa($obj, "foo") by implementing XS acceleration.
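
For reference, this is what using it looks like today; My::Error and its message method are made up:

use Check::ISA qw(obj);

sub report_error {
    my $err = shift;

    if ( obj( $err, "My::Error" ) ) {
        # a blessed reference that isa My::Error; safe to call methods on
        warn $err->message;
    }
    else {
        # a plain string, a class name, or some unrelated reference
        warn "error: $err";
    }
}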

Conclusion

It seems to me if I had those things I would have no more excuses for not using exception objects by default.

Did I miss anything?

Tuesday, July 6, 2010

KiokuDB's Leak Tracking

Perl uses reference counting to manage memory. This means that circular structures cause leaks.

Cycles are often avoidable in practice, but backreferences can be a huge simplification when modeling relationships between objects.

For this reason Scalar::Util exports the weaken function, which demotes a reference so that it no longer contributes to the reference count of its referent.

Since cycles are very common in persisted data (because there are many potential entry points in the data), KiokuDB works hard to support them, but it can't weaken cycles for you and prevent them from leaking.

Apart from the waste of memory, there is another major problem.

When objects are leaked, they remain tracked by KiokuDB, so you might see stale data in a multi worker environment (e.g. preforked web servers).

The new leak_tracker attribute takes a code reference which is invoked with the list of leaked objects when the last live object scope dies.

This can be used to report leaks, to break cycles, or whatever.

The other addition, the clear_leaks attribute, allows you to work around the second problem by forcibly unregistering leaked objects.

This completely negates the effect of live object caching and doesn't solve the memory leak, but guarantees you'll see fresh data (without needing to call refresh).

my $dir = KiokuDB->connect(
    $dsn,

    # this coerces into a new object
    live_objects => {
        clear_leaks  => 1,
        leak_tracker => sub {
            my @leaked = @_;

            warn "leaked " . scalar(@leaked) . " objects";

            # try to mop up.
            use Data::Structure::Util qw(circular_off);
            circular_off($_) for @leaked;
        }
    }
);

These options were both refactored out of Catalyst::Model::KiokuDB.

Friday, July 2, 2010

Why another caching module?

In the last post I namedropped Cache::Ref. I should explain why I wrote yet another Cache:: module.

On the CPAN most caching modules are concerned with caching data in a way that can be used across process boundaries (for example on subsequent invocations of the same program, or to share data between workers).

Persistent caching behaves more like an on disk database (a DBM, or a directory of files), whereas Cache::Ref is like an in memory hash with size limiting:

my %cache;

sub get { $cache{$_[0]} }

sub set {
    my ( $key, $value ) = @_;

    if ( keys %cache > $some_limit ) {
        ... # delete a key from %cache
    }

    $cache{$key} = $value; # not a copy, just a shared reference
}

The different submodules in Cache::Ref are pretty faithful implementations of algorithms originally intended for virtual memory applications, and are therefore appropriate when the cache is memory resident.

The goal of these algorithms is to try and choose the most appropriate key to delete quickly and without storing too much information about the key, or requiring costly updates on metadata during a cache hit.

This also means less control, for example there is no temporal expiry (i.e. cache something for $x seconds).

If most of CPAN is concerned with L5 caching, then Cache::Ref tries to address L4.

High level interfaces like CHI make persistent caching easy and consistent, but seem to add memory only caching as a sort of an afterthought, with most of the abstractions being appropriate for long term, large scale storage.

Lastly, you can use Cache::Cascade to create a multi level cache hierarchy. This is similar to CHI's l1_cache attribute, but you can have multiple levels and you can mix and match any cache implementation that uses the same basic API.
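
Basic usage looks something like this; the sizes are arbitrary, and the second cache in the cascade stands in for whatever bigger or slower cache you already have:

use Cache::Ref::LRU;
use Cache::Ref::CART;
use Cache::Cascade;

# a purely in memory cache; the only knobs are the eviction algorithm
# and the maximum number of entries
my $cache = Cache::Ref::LRU->new( size => 1024 );

my $some_object = { answer => 42 };

$cache->set( some_key => $some_object );   # stores the reference, no copying
my $hit = $cache->get("some_key");         # undef on a miss

# a two level hierarchy: a small, fast cache in front of a bigger,
# slower one (the second entry could just as well be a CHI handle)
my $slow_cache = Cache::Ref::LRU->new( size => 10_000 );

my $cascade = Cache::Cascade->new(
    caches => [
        Cache::Ref::CART->new( size => 128 ),
        $slow_cache,
    ],
);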

Thursday, July 1, 2010

KiokuDB's Immutable Object Cache

KiokuDB 0.46 added integration with Cache::Ref.

To enable it just cargo cult this little snippet:

my $dir = KiokuDB->connect(
    $dsn,
    live_objects => {
        cache => Cache::Ref::CART->new( size => 1024 ),
    },
);

To mark a Moose based object as cacheable, include the KiokuDB::Role::Immutable::Transitive role.
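
For example, with a made up value-like class:

package MyApp::Currency;
use Moose;

# this role promises KiokuDB that the object, and everything reachable
# from it, will never change once stored
with qw(KiokuDB::Role::Immutable::Transitive);

has code => ( is => 'ro', isa => 'Str', required => 1 );

__PACKAGE__->meta->make_immutable;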

Depending on the cache's mood, some of those cacheable objects may survive even after the last live object scope has been destroyed.

Immutable data has the benefit of being cacheable without needing to worry about updates or stale data, so the data you get from lookup will always be consistent; it just might come back faster in some cases.

Just make sure they don't point at any data that can't be cached (that's treated as a leak), and you should notice significant performance improvements.

Monday, June 28, 2010

KiokuDB for DBIC Users

This is the top loaded tl;dr version of the previous post on KiokuDB+DBIC, optimized for current DBIx::Class users who are also KiokuDB non-believers ;-)

If you feel you know the answer to an <h2>, feel free to skip it.

WTF KiokuDB?

KiokuDB implements persistent object graphs. It works at the same layer as an ORM in that it maps between an in memory representation of objects and a persistent one.

Unlike an ORM, where the focus is to faithfully map between relational schemas and an object oriented representation, KiokuDB's main priority is to allow you to store objects freely with as few restrictions as possible.

KiokuDB provides a different trade-off than ORMs.

By compromising control over the precise storage details you gain the ability to easily store almost any data structure you can create in memory.[1]

Why should I care?

Here's a concrete example.

Suppose you have a web application with several types of browsable model objects (e.g. pictures, user profiles, whatever), all of which users can mark as favourites so they can quickly find them later.

In a relational schema you'd need to query a link table for each possible type, and also take care of setting these up in the schema. When marking an item as a favourite you'd need to check what type it is, and add it to the correct relationship.

Every time you add a new item type you also need to edit the favourite management code to support that new item.

On the other hand, a KiokuDB::Set of items can simply contain a mixed set of items of any type. There's no setup or configuration, and you don't have to predeclare anything. This eliminates a lot of boilerplate.

Simply add a favourite_items KiokuDB column to the user, which contains that set, and use it like this:

# mark an item as a favourite
# $object can be a DBIC row or a KiokuDB object
$user->favourite_items->insert($object);
$user->update;

# get the list of favourites:
my @favs = $user->favourite_items->members;
 
# check if an item is a favourite:
if ( $user->favourite_items->includes($object) ) {
    ...
}
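
The set itself is just another value. It can be created when the user object is first built, for instance with the set helper from KiokuDB::Util (the user class here is made up, and $kiokudb is the handle described below):

use KiokuDB::Util qw(set);

my $user = MyApp::User->new(
    name            => "nothingmuch",
    favourite_items => set(),   # starts out empty, can hold any object
);

$kiokudb->insert( $user->name => $user );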

As a bonus, since there's less boilerplate this code can be more generic/reusable.

How do I use it?

First off, at least skim through KiokuDB::Tutorial to familiarize yourself with the basic usage.

In the context of this article you can think of KiokuDB as a DBIC component that adds OODBMS features to your relational schema, as a sort of auxiliary data dumpster.

To start mixing KiokuDB objects into your DBIC schema, create a column that can contain these objects using DBIx::Class::Schema::KiokuDB:

package MyApp::Schema::Result::Foo;
use base qw(DBIx::Class::Core);

__PACKAGE__->load_components(qw(KiokuDB));

__PACKAGE__->kiokudb_column('object');

See the documentation for the rest of the boilerplate, including how to get the $kiokudb handle used in the examples below.
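
In short, it ends up looking something like this (the dsn here is made up):

use KiokuDB;

my $kiokudb = KiokuDB->connect(
    dsn          => "dbi:SQLite:dbname=myapp.db",
    schema_proto => "MyApp::Schema",
);

# the DBIC schema lives inside the backend, so both sides share a
# single database handle
my $schema = $kiokudb->backend->schema;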

In this column you can now store an object of any class. This is like a delegation based approach to a problem typically solved using something like DBIx::Class::DynamicSubclass.

my $rs = $schema->resultset("Foo");

my $row = $rs->find($primary_key);

$row->object( SomeClass->new( ... ) );

# 'store' is a convenience method; it's like insert_or_update for the
# row, and also inserts or updates $row->object in KiokuDB

$row->store;

You can go the other way, too:

my $obj = SomeClass->new(
    some_delegate => $row,
);

my $id = $kiokudb->insert($obj);

And it even works for storing result sets:

use Foo;

my $rs = $schema->resultset("Foo")->search( ... );

my $obj = Foo->new(
    some_resultset => $rs,
);

my $id = $kiokudb->insert($obj);

So you can freely model ad-hoc relationships to your liking.

Mixing and matching KiokuDB and DBIC still lets you obsess over the storage details like you're used to with DBIC.

However, the key idea here is that you don't need to do that all the time.

For example, you can rapidly prototype a schema change before writing the full relational model for it in a final version.

Or maybe you need to preserve an intricate in memory data structure (like cycles, tied structures, or closures).

Or perhaps for some parts of the schema you simply don't need to search/sort/aggregate. You will probably discover parts of your schema are inherently a good fit for graph based storage.

KiokuDB complements DBIC well in all of those areas.

How is KiokuDB different?

There are two main things that traditional ORMs don't do easily, but that KiokuDB does.

First, collections of objects in KiokuDB can be heterogeneous.

At the representation level the lowest common denominator for any two arbitrary objects might be nothing at all. This makes it hard to store objects of different types in the same relational table.

In object oriented design it's the interface that matters, not the representation. Conversely, in a relational database only the representation (the columns) matters, database rows have no interfaces.

Second, in a graph based object database the key of an object in the database should only be associated with a single object in memory, but in an ORM this feature isn't necessarily desirable:

  • It doesn't interact well with bulk fetches (for instance suppose a SELECT query fetches a collection of objects, some of which are already in memory. Should the fetched data be ignored? Should the primary keys of the already live objects be filtered out of the query?)
  • It requires additional APIs to control this tracking behavior (KiokuDB's new_scope stuff)

In the interests of flexibility and simplicity, DBIx::Class simply stays out of the way as far as managing inflated objects goes (one exception being prefetched and cached resultsets). Whenever a query is issued you get fresh objects every time.

KiokuDB does track references and provides a stable mapping between reference addresses and primary keys for the subset of objects that it manages.

What sucks about KiokuDB?

It's harder to search, sort and aggregate KiokuDB objects. But you already know a good ORM that can do those bits ;-)

By letting the storage layer in on your object representation you allow the database to help you in ways that it can't if the data is opaque.

Of course, this is precisely where it makes sense to just create a relational table, because DBIx::Class does those things very well.

Why now?

Previously you could use KiokuDB and DBIx::Class in the same application, but the data was kept separate.

Starting with KiokuDB::Backend::DBI version 1.11 you can store part of your model as relational data using DBIx::Class and the rest in KiokuDB.

[1] You still get full control over serialization if you want, using KiokuDB::TypeMap, but that is completely optional, and most of the time there's no point in doing that anyway, you already know how to do that with other tools.

Sunday, June 27, 2010

KiokuDB 0.46

rafl and I have just uploaded KiokuDB::Backend::DBI version 1.11 and KiokuDB version 0.46.

These are major releases of both modules, and I will post at length on each of these new features in the coming days:

  • Caching live instances of immutable objects. For data models which favour immutability this should provide significant speedups with minimal code changes and no change in semantics.
  • Leak tracking is now in core. This was previously only available in Catalyst::Model::KiokuDB.
  • KiokuDB::Entry objects can be discarded after use to save memory (until now they were always kept around for as long as the object was still live)
  • Integration between KiokuDB managed objects and DBIx::Class managed rows, allowing for mixed relational/graph schemas as in this job queue example.

Friday, June 18, 2010

I hate software

A long standing bug in Directory::Transactional has finally been fixed.

Evidently, universally unique identifiers are only unique as long as the entire universe is contained within a single UNIX process, at least as far as e2fsprogs' libuuid is concerned.

These "unique" strings were used to create names for transaction work directories, so when they in fact turned out to be the same fucking strings across forks, the two processes would overwrite each others' private data.

uuid(3) doesn't even contain any information on how to reseed it, even if I were willing to check for that myself.

I simply cannot fathom how a pseudorandom number generator is being used for such a library without taking forking into account. Isn't this stuff supposed to be reliable?

Friday, April 23, 2010

Where are the open edges?

Forgive my cynicism, but where are the edges in the Open Graph Protocol?

As far as I can tell Facebook's graph has two vertex types, people, and things. The edges go between people and things and between people and other people (i.e. friends).

Facebook rightfully requires authorization to access the other parts of the graph through their API (the data is private, after all), but what bothers me is that there's no way to describe a graph of your own, or share it with anyone else.[1]

In more practical terms, in this supposed graph specification there's no way to link to an og:url from my homepage saying that I like that thing (or maybe dislike, or have any other connection to it).

As a producer of "things", if I tell those things' og:type to the internet, my customers can "Like" my og:type. And then I can contact those customers (apparently for free), and presumably later pay Facebook to tell their friends about that thing more often. And there are a few other perks.

I get that Facebook is just trying to run an advertisement business, but why sell it as some hippy Open thing? Sure, part of the data is open, but the real graphyness is in the hrefs, which are still proprietary.

Illustrating my point, they reinvent hCards with a new XML namespace, instead of supporting hCards in their system. People are even less likely to adopt microformats if in order to use them they must add redundant data formats to their pages. It's not about the data being open to everyone, it's about the data being open to Facebook.

[1] The semantic web's immense success notwithstanding.

Thursday, March 18, 2010

What is a mixed schema?

Yesterday's post is a technical one that says that KiokuDB and DBIx::Class can now be used together on the same schema. What it doesn't explain is what this is actually good for.

Most of the application development we do involves OLTP in one form or another. Some of the apps also do reporting on simple, highly regular data.

KiokuDB grew out of the need to simplify the type of task we do most often. For the reporting side we are still trying to figure out what we like best. For example Stevan has been experimenting with Fey (not Fey::ORM) for the purely relational data.

This approach has been far superior to what we had done before: forcing a loosely constructed, polymorphic set of objects with no reporting requirements into a normalized relational schema that's optimized for reporting applications. There is also a new, worse alternative, which is to run aggregate reports on several million data points as in memory objects with Perl ;-)

However, the two pronged approach still has a major drawback: the two data sets are completely separate. There is no way to refer to data across the two sets without embedding knowledge about the database handles into the domain, which is tedious and annoying.

What the new DBIx::Class integration allows is to bridge that gap.

Concrete Example #1: Mixing KiokuDB into a DBIC centric app

Often times I would find myself making compromises about what sort of objects I put into a relational schema.

There is a tension between polymorphic graphs of objects and a normalized relational schema.

Suppose you're writing an image gallery application, and you decide to add support for YouTube videos. Obviously YouTube videos should be treated as image objects in the UI, they should tile with the images, you should be able to rearrange them, add captions/tags, post comments, etc.

This is precisely where polymorphism makes sense: you have two types of things being used in a single context, but with completely different representations. One is probably represented by a collection of files on disk (the original image, previews, thumbnails, etc.) and a table entry of metadata. The other is represented by an opaque string ID, and most of its functionality is derived by generating calls to a web service.

How do you put YouTube videos into your image table? Do you add a type column? What about a resource table that has NULLable foreign keys to the image table and a NULLable video_id column? What about a blob column containing serialized information about the data?

With a mixed schema you could create a resource table that has a foreign key to the KiokuDB entries table. You could use the resources table for things like random selection, searches, keeping track of view counts, etc.
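
Sketched out with the KiokuDB schema component, that might look like this; the table and column names are invented for the example:

package MyApp::Schema::Result::Resource;
use base qw(DBIx::Class::Core);

__PACKAGE__->load_components(qw(KiokuDB));

__PACKAGE__->table('resources');

# the relational side: whatever we want to select, count and order by
__PACKAGE__->add_columns(qw(id caption view_count resource));
__PACKAGE__->set_primary_key('id');

# the polymorphic side: a MyApp::Resource::Image or
# MyApp::Resource::YouTubeVideo object, stored via KiokuDB
__PACKAGE__->kiokudb_column('resource');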

I'm going to assume that you're not really interested on running reports on which characters show up most often in the YouTube video IDs or what is the average length of image filenames, so that data can be opaque without compromising any features in your application.

On a technical level this is similar to using a serialized blob column, or some combination of DBIx::Class::DynamicSubclass and DBIx::Class::FrozenColumns.

However, by using KiokuDB these objects become first class citizens in your schema, instead of some "extra" data that is tacked on to a data row. You get a proper API for retrieving and updating real graphs of objects, much more powerful and automatable serialization, a large number of standard modules that are supported out of the box, etc.

Perhaps most importantly, the encapsulation and notion of identity is maintained. You can share data between objects, and that data sharing is reflected consistently in memory. You can implement your MyApp::Resource::YouTubeVideo and MyApp::Resource::Image without worrying about mapping columns, or weird interactions with Storable. That, to me, is the most liberating part of using KiokuDB.

Concrete Example #2: Mixing DBIC into a KiokuDB centric app

On the other side of the spectrum (of our apps, anyway) you'll find data models that are just too complicated to put into a relational schema easily; there are mixed data types all over the place, complex networks of data (we've put trees, graphs, DAGs, and other structures, sometimes all in a single app), and other things that are incredibly useful for rapid prototyping or complicated processing.

This usually all works great until you need an aggregate data type at some point. That's when things fall apart. Search::GIN is not nearly as feature complete as I hoped it would be by now; in fact, it's barely a draft of a prototype. The DBI backend's column extraction is a fantastically useful hack, but it's still just a hack at heart.

But now we can freely refer to DBIC rows and resultsets just like we can in memory, from our OO schema, to help with these tasks.

One of our apps used a linked list to represent a changelog of an object graph, somewhat similarly to Git's object store. After a few months of deployment, we got a performance complaint from a client: a specific page was taking about 30 seconds to load. It turned out that normally only the last few revisions had to be queried, but in this specific case a pathological data construction meant that over a thousand revisions were loaded from the database and had their data analyzed. Since this linked list structure is opaque, the page was literally hitting the database thousands of times in a single request.

I ended up using a crude cache to memoize some of the predicates, which let us just skip directly to the revision that had to be displayed.

With the new features in the DBI backend I could simply create a table of revision containers (I would still need to store revisions in KiokuDB, because there were about 6 different revision types), on which I could do the entire operation with one select statement.

Conceptually you can consider the DBIC result set as just an object oriented collection type. It's like any other object in KiokuDB, except that its data is backed by a much smarter representation than a serialized blob, the underlying data store understands it and can query its contents easily and efficiently. The drawback is that it requires some configuration, and it can only contain objects of the same data type, but these are very reasonable limitations, after all we've been living with them for years.

It's all a bit like writing a custom typemap entry to better represent your data to the backend. In fact, this is pretty much exactly what I did to implement the feature ;-)

This still requires making the effort to define a relational schema, but only where you need it, and only for data that make sense in a relational setting anyway. And it's probably less effort than writing a custom typemap to create a scalable/queriable collection type.

Conclusion

Though still far from perfect, I feel that this really brings KiokuDB into a new level of usefulness; you no longer need to drink the kool aid and sacrifice a powerful tool and methodology you already know.

Even though DBIC is not everyone's tool of choice and has its own drawbacks, I feel that it is by far the most popular Perl ORM for a reason, which is why I chose to build on it. However, there's no reason why this approach can't be used for other backend types.

Eventually I'd like to be able to see similar typemaps emerge for other backends. For example the Redis backend could support Redis' different data types, CouchDB has design documents and views, and riak's MapReduce jobs and queries (Franck's backend is on GitHub) could all be reflected as "just objects" that can coexist with other data in a KiokuDB object graph.

Wednesday, March 17, 2010

KiokuDB ♡ DBIx::Class

I just added a feature to KiokuDB's DBI backend that allows freely mixing DBIx::Class objects.

This resolves KiokuDB's limitations with respect to sorting, aggregating and querying by letting you use DBIx::Class for those objects, while still giving you KiokuDB's flexible schema for everything else.

The first part of this is that you can refer to DBIx::Class row objects from the objects stored in KiokuDB:

my $dbic_object = $resultset->find($primary_key);

$dir->insert(
    some_id => Some::Object->new( some_attr => $dbic_object ),
);

The second half is that relational objects managed by DBIx::Class can specify belongs_to type relationships (i.e. an inflated column) to any object in the KiokuDB entries table:

my $row = $rs->create({ name => "blah", object => $anything });

$row->insert;

say "Inserted ID for KiokuDB object: ",
    $dir->object_to_id($row->object);

To set things up you need to tell DBIx::Class about KiokuDB:

package MyApp::Schema;
use base qw(DBIx::Class::Schema);

# load the KiokuDB schema component
# which adds the extra result sources
__PACKAGE__->load_components(qw(Schema::KiokuDB));

__PACKAGE__->load_namespaces;



package MyApp::Schema::Result::Foo;
use base qw(DBIx::Class);

# load the KiokuDB component:
__PACKAGE__->load_components(qw(Core KiokuDB));

# do the normal stuff
__PACKAGE__->table('foo');
__PACKAGE__->add_columns(qw(id name object));
__PACKAGE__->set_primary_key('id');

# setup a relationship column:
__PACKAGE__->kiokudb_column('object');



# connect both together
my $dir = KiokuDB->connect(
    dsn => "dbi:SQLite:dbname=blah",
    schema_proto => "MyApp::Schema",
);

my $schema = $dir->backend->schema;



# then you can do some work:
$dir->txn_do( scope => 1, body => sub {
    my $rs = $schema->resultset("Foo");
    my $obj = $rs->find($primary_key)->object;

    $obj->change_something($something_else);

    $dir->update($obj);
});

There are still a few missing features, and this is probably not production ready, but please try it out! A dev release, KiokuDB::Backend::DBI 0.11_01, will be out once I've documented it.

In the future I hope to match all of Tangram's features, enabling truly hybrid schemas. This would mean that KiokuDB could store objects in more than one table, with objects having any mixture of properly typed, normalized columns, opaque data BLOBs, or something in between (a bit like DBIx::Class::DynamicSubclass and DBIx::Class::FrozenColumns, but with more flexibility and less setup).

Sunday, March 14, 2010

git snapshot

I've just uploaded a new tool, git snapshot, which lets you routinely capture snapshots of your working directory, and records them in parallel to your explicitly recorded history.

The snapshot revisions stay out of the way for the most part, but if you need to view them you can look at them, for example using gitx refs/snapshots/HEAD

For me this is primarily useful when I'm sketching out a new project and forgetting to commit anything. When working on a large patch I usually use git commit -a --amend -C HEAD fairly often, which in conjunction with git reflog provides similar safety. However, git snapshot is designed to work well in either scenario.

I have a crontab set up to use mdfind so that all directories with the red label are snapshotted once an hour.

Wednesday, March 3, 2010

KiokuDB Introduces Schema Versioning

I've just released KiokuDB version 0.37, which introduces class versioning.

This feature is disabled by default to avoid introducing errors to existing schemas[1]. To try it out pass check_class_versions => 1 to connect:

KiokuDB->connect(
    dsn => ...,
    check_class_versions => 1,
);

To use this feature, whenever you make an incompatible change to a class, also change the $VERSION. When KiokuDB tries to load an object that has been stored before the change was made, the version mismatch is detected (versions are only compared as strings, there is no meaning to the values).

Without any configuration this mismatch will result in an error at load time, but the KiokuDB::Role::Upgrade::Handlers::Table role allows you to declaratively add upgrade handlers to your classes:

package Foo;
use Moose;

with qw(KiokuDB::Role::Upgrade::Handlers::Table);

use constant kiokudb_upgrade_handlers_table => {

    # we can mark versions as being equivalent in terms of their
    # data. 0.01 to 0.02 may have introduced an incompatible API
    # change, but the stored data should be compatible
    "0.01" => "0.02",

    # on the other hand, after 0.02 there may have been an
    # incompatible data change, so we need to convert
    "0.02" => sub {
        my ( $self, %args ) = @_;

        return $args{entry}->derive(
            class_version => our $VERSION, # up to date version
            data => ..., # converted entry data
        );
    },
};

For more details see the documentation, especially KiokuDB::TypeMap::Entry::MOP.

[1] In the future this might be enabled by default, but when data without any version information is found in the database it is assumed to be up to date.

Monday, February 1, 2010

$obj->blessed

I've been meaning to write about this gotcha for a long time, but somehow forgot. This was actually an undiscovered bug in Moose for several years:

use strict;
use warnings;

use Test::More;

use Try::Tiny qw(try);

{
    package Foo;

    use Scalar::Util qw(blessed);

    sub new { bless {}, $_[0] }
}

my $foo = Foo->new;

is( try { blessed($foo) }, undef );

is( try { blessed $foo }, undef );

done_testing;

The first test passes. blessed hasn't been imported into main, so the code results in the error Undefined subroutine &main::blessed.

The second test, on the other hand, fails. This is because blessed has been invoked as a method on $foo.

The Moose codebase had several instances of if ( blessed $object ), in packages that did not import blessed at all. This worked for ages, because Moose::Object, the base class for most objects in the Moose ecosystem, didn't clean up that export, and therefore provided an inherited blessed method for pretty much any class written in Moose.

I think this example provides a very strong case for using namespace::clean or namespace::autoclean routinely in your classes.

To cover the other half of the problem, the no indirect pragma allows the removal of this unfortunate feature from specific lexical scopes.
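
Putting the two together, a minimal sketch:

package Foo;
use Moose;
use namespace::autoclean;   # imported functions won't linger as methods
no indirect;                # warn about indirect method call syntax

use Scalar::Util qw(blessed);

sub is_object {
    my ( $self, $thing ) = @_;

    # compiled as a plain function call, so removing the symbol later
    # doesn't affect it
    return blessed($thing) ? 1 : 0;
}

__PACKAGE__->meta->make_immutable;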

Wednesday, January 6, 2010

Importing Keywurl searches to Chrome

I've recently switched to using Chrome. I used Keywurl extensively with Safari. Here's a script that imports the Keywurl searches into Chrome:

#!/usr/bin/perl

use strict;
use warnings;

use Mac::PropertyList qw(parse_plist_file);
use DBI;

my $app_support = "$ENV{HOME}/Library/Application Support";

my $dbh = DBI->connect("dbi:SQLite:dbname=$app_support/Google/Chrome/Default/Web Data");

my $plist = parse_plist_file("$app_support/Keywurl/Keywords.plist");

my $keywords = $plist->{keywords};

$dbh->begin_work;

my $t = time;

my $sth = $dbh->prepare(qq{
    INSERT INTO keywords VALUES (
        NULL, -- id
        ?,    -- name
        ?,    -- keyword
        "",   -- favicon url
        ?,    -- url
        0,    -- show in default list
        0,    -- safe for auto replace
        "",   -- originating URL
        $t,   -- date created
        0,    -- usage count
        "",   -- input encodings
        "",   -- suggest url
        0,    -- prepopulate id
        0     -- autogenerate keyword
    )
});

foreach my $link ( keys %$keywords ) {
    my $data = $keywords->{$link};

    my $url = $data->{expansion}->value;

    $url =~ s/\{query\}/{searchTerms}/g;

    $sth->execute(
        $link, # name
        $link, # keyword
        $url,
    );
}

$dbh->commit;