Nitin Madnani gave a talk at PyCon this weekend about how Dumbo and Amazon EC2 allowed him to process very large text corpora using the machinery provided by NLTK. Unfortunately I wasn’t there but I heard that his talk was very well received, and his slides definitely are pretty awesome.
Consuming Dumbo output with Pig
February 5, 2010

Although it abstracts and simplifies it all quite a bit, Dumbo still forces you to think in MapReduce, which might not be ideal if you want to implement complex data flows in a limited amount of time. Personally, I think that Dumbo still occupies a useful space within the Hadoop ecosystem, but in some cases it makes sense to work at an even higher level and use something like Pig or Hive. In fact, sometimes it makes sense to combine the two and do some parts of your data flow in Dumbo and others in Pig. To make this possible, I recently wrote a Pig loader function for sequence files that contain TypedBytesWritables, which is the file format Dumbo uses by default to store all its output on Hadoop. Here's an example of a Pig script that reads Dumbo output:
register pigtail.jar; -- http://github.com/klbostee/pigtail
a = load '/hdfs/path/to/dumbo/output'
    using fm.last.pigtail.storage.TypedBytesSequenceFileLoader()
    as (artist:int, val:(listeners:int, listens:int));
b = foreach a generate artist, val.listeners as listeners;
c = order b by listeners;
d = limit c 100;
dump d;
You basically just have to specify names and types for the components of the key/value pairs and you’re good to go.
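Just to make the example self-contained, here's a rough sketch of a Dumbo job that produces pairs of that shape. The input layout (one "artistid userid listens" line per artist/user combination) is made up for illustration and isn't the actual data format behind the numbers above:

from dumbo import run

def mapper(key, value):
    # assumed input: one "artistid userid listens" line per artist/user pair
    artistid, userid, listens = value.split()
    yield int(artistid), (1, int(listens))

def reducer(key, values):
    # add up listeners and listens per artist, matching the
    # (artist, (listeners, listens)) schema the Pig script expects
    listeners, listens = 0, 0
    for l1, l2 in values:
        listeners += l1
        listens += l2
    yield key, (listeners, listens)

if __name__ == "__main__":
    run(mapper, reducer)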
A possibly useful side-effect of writing this loader is that it opens up all sorts of file formats to Pig. Everything that Dumbo can read can now also be consumed by Pig scripts; all you have to do is write a simple Dumbo script that converts it to typed bytes sequence files:
from dumbo import run
from dumbo.lib import identitymapper

if __name__ == "__main__":
    run(identitymapper)
The proper solution is of course to write custom Pig loaders, but this gets the job done too and doesn’t slow things down that much.
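Assuming the conversion script above is saved as convert.py, a run could look something like this (the paths are placeholders, and the -outputformat sequencefile option could even be omitted since it's the default when running on Hadoop):

$ dumbo start convert.py -hadoop /usr/lib/hadoop \
    -input /hdfs/path/to/original/data -inputformat text \
    -output /hdfs/path/to/converted -outputformat sequencefile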
Dumbo on Amazon EMR
December 23, 2009

A while ago, I received an email from Andrew in which he wrote:
Now you should be able to run Dumbo jobs on Elastic MapReduce. To start a cluster, you can use the Ruby client as so:
$ elastic-mapreduce --create --alive
SSH into the cluster using your EC2 keypair as user hadoop and install Dumbo with the following two commands:
$ wget -O ez_setup.py http://bit.ly/ezsetup
$ sudo python ez_setup.py dumbo
Then you can run your Dumbo scripts. I was able to run the ipcount.py demo with the following command.
$ dumbo start ipcount.py -hadoop /home/hadoop \
    -input s3://anhi-test-data/wordcount/input/ \
    -output s3://anhi-test-data/output/dumbo/wc/

The -hadoop option is important. At this point I haven't created an automatic Dumbo install script, so you'll have to install Dumbo by hand each time you launch the cluster. Fortunately installation is easy.
There was a minor hiccup that required the Amazon guys to pull the AMI with Dumbo support, but it’s back now and they seem to be confident that Dumbo support is going to remain available from now on. They are also still planning to make things even easier by providing an automatic Dumbo installation script.
As an aside, it’s worth mentioning that a bug in Hadoop Streaming got fixed in the process of adding Dumbo support to EMR. I can’t wait to see what else the Amazon guys have up their sleeves.
Dumbo over HBase
July 31, 2009

This should be old news for dumbo-user subscribers, but Tim has, once again, put his Java coding skills to good use. This time around he created nifty input and output formats for consuming and/or producing HBase tables from Dumbo programs. Here's a silly but illustrative example:
from dumbo import opt, run

@opt("inputformat", "fm.last.hbase.mapred.TypedBytesTableInputFormat")
@opt("hadoopconf", "hbase.mapred.tablecolumns=testfamily:testqualifier")
def mapper(key, columns):
    for family, column in columns.iteritems():
        for qualifier, value in column.iteritems():
            yield key, (family, qualifier, value)

@opt("outputformat", "fm.last.hbase.mapred.TypedBytesTableOutputFormat")
@opt("hadoopconf", "hbase.mapred.outputtable=output_table")
def reducer(key, values):
    columns = {}
    for family, qualifier, value in values:
        column = columns.get(family, {})
        column[qualifier] = value
        columns[family] = column  # store the (possibly new) column dict back
    yield key, columns

if __name__ == "__main__":
    run(mapper, reducer)
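To make the shape of the data concrete: the columns value passed to the mapper maps column families to mappings from qualifiers to cell values, so for a row with a single cell the loops above behave roughly like this plain-Python snippet (no Hadoop or HBase involved, and the values are made up):

columns = {"testfamily": {"testqualifier": "some value"}}
for family, column in columns.iteritems():
    for qualifier, value in column.iteritems():
        print family, qualifier, value
# prints: testfamily testqualifier some value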
Have a look at the readme for more information.
Analysing Apache logs
June 18, 2009

The Cloudera guys blogged about using Pig for examining Apache logs yesterday. Although it nicely illustrates several lesser-known Pig features, I'm not overly impressed with the described program, to be honest. Having to resort to three different scripting languages just to do some GeoIP lookups complicates things too much if you ask me. Personally, I'd much prefer writing something like:
class Mapper:
    def __init__(self):
        from re import compile
        self.regex = compile(r'(?P<ip>[\d\.\-]+) (?P<id>[\w\-]+) '
                             r'(?P<user>[\w\-]+) \[(?P<time>[^\]]+)\] '
                             r'"(?P<request>[^"]+)" (?P<status>[\d\-]+) '
                             r'(?P<bytes>[\d\-]+) "(?P<referer>[^"]+)" '
                             r'"(?P<agent>[^"]+)"')
        from pygeoip import GeoIP, MEMORY_CACHE
        self.geoip = GeoIP(self.params["geodata"], flags=MEMORY_CACHE)

    def __call__(self, key, value):
        mo = self.regex.match(value)
        if mo:
            request, bytes = mo.group("request"), mo.group("bytes")
            if request.startswith("GET") and bytes != "-":
                rec = self.geoip.record_by_addr(mo.group("ip"))
                country = rec["country_code"] if rec else "-"
                yield country, (1, int(bytes))

if __name__ == "__main__":
    from dumbo import run, sumsreducer
    run(Mapper, sumsreducer, combiner=sumsreducer)
After installing Python 2.6, I tested this hits_by_country.py program on my chrooted Cloudera-flavored Hadoop server as follows:
$ wget http://pygeoip.googlecode.com/files/pygeoip-0.1.1-py2.6.egg
$ wget http://bit.ly/geolitecity
$ wget http://bit.ly/randomapachelog  # found via Google
$ dumbo put access.log access.log -hadoop /usr/lib/hadoop
$ dumbo start hits_by_country.py -hadoop /usr/lib/hadoop \
    -input access.log -output hits_by_country \
    -python python2.6 -libegg pygeoip-0.1.1-py2.6.egg \
    -file GeoLiteCity.dat -param geodata=GeoLiteCity.dat
$ dumbo cat hits_by_country/part-00000 -hadoop /usr/lib/hadoop/ | \
    sort -k2,2nr | head -n 5
US      9400    388083137
KR      6714    2655270
DE      1859    32131992
RU      1838    44073038
CA      1055    23035208
At Last.fm, we use the GeoIP Python bindings instead of the pure-Python pygeoip module; pygeoip offers a nearly identical API but is probably a bit slower. Also, we abstract away the format of our Apache logs by using a parser class, and we have some library code for identifying hits from robots as well, much like the IsBotUA() method in the Pig example.
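To give an idea of what I mean, here's a rough sketch of such an abstraction. The names and the robot heuristic are made up for illustration; they're not our actual library code:

import re

_LINE = re.compile(r'(?P<ip>\S+) (?P<id>\S+) (?P<user>\S+) \[(?P<time>[^\]]+)\] '
                   r'"(?P<request>[^"]*)" (?P<status>\d{3}|-) (?P<bytes>\d+|-) '
                   r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"')
_BOTS = re.compile(r'bot|crawler|spider|slurp', re.IGNORECASE)

class ApacheLogParser(object):
    """Hides the concrete log format behind a parse() method."""
    def parse(self, line):
        mo = _LINE.match(line)
        return mo.groupdict() if mo else None

def is_bot(agent):
    """Crude user-agent check, in the same spirit as IsBotUA() from the Pig example."""
    return bool(_BOTS.search(agent))

With something like this in place, the mapper's __call__ could shrink to a parse() call, a robot check on the agent field, and the GeoIP lookup.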
Integration with Java code
June 16, 2009

Although Python has many advantages, you might still want to write some of your mappers or reducers in Java once in a while, with flexibility and speed being the most likely reasons. Thanks to a recent enhancement, this is now easily achievable. Here's a version of wordcount.py that uses the example mapper and reducer from the feathers project (and thus requires -libjar feathers.jar):
import dumbo

dumbo.run("fm.last.feathers.map.Words",
          "fm.last.feathers.reduce.Sum",
          combiner="fm.last.feathers.reduce.Sum")
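Launching it looks the same as launching any other Dumbo program, apart from the extra -libjar option; something along these lines, with placeholder input and output paths:

$ dumbo start wordcount.py -hadoop /usr/lib/hadoop \
    -input brian.txt -output brianwc_java -libjar feathers.jar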
You can basically mix Python and Java in any way you like. There's only one minor restriction: you cannot use a Python combiner when you specify a Java mapper. Things should still work in that case, though; they'll just be slower since the combiner won't actually run. In theory, this limitation could be avoided by relying on HADOOP-4842, but personally I don't think it's worth the trouble.
The source code for fm.last.feathers.map.Words and fm.last.feathers.reduce.Sum is just as straightforward as the code for the OutputFormat classes discussed in my previous post. All you have to keep in mind is that only the mapper input keys and values can be arbitrary writables; every other key or value has to be a TypedBytesWritable. Writing a custom Java partitioner for Dumbo programs is equally easy, by the way. The fm.last.feathers.partition.Prefix class is a simple example. It can be used by specifying -partitioner fm.last.feathers.partition.Prefix.
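For illustration, here's a sketch of how that could be wired in from the Python side. I'm assuming that the -partitioner option can be set through the opt decorator just like the other options shown earlier, and that Prefix partitions on the first component of tuple keys; check the feathers source if you want to be sure:

from dumbo import opt, run

@opt("partitioner", "fm.last.feathers.partition.Prefix")  # assumption: settable via opt
def mapper(key, value):
    for word in value.split():
        # tuple keys whose first component is the initial letter, so a
        # prefix-based partitioner would send all words with the same
        # initial to the same reducer (requires -libjar feathers.jar)
        yield (word[0].upper(), word), 1

def reducer(key, values):
    yield key, sum(values)

if __name__ == "__main__":
    run(mapper, reducer)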
As you probably expected already, none of this will work for local runs on UNIX, but you can still test things locally fairly easily by running on Hadoop in standalone mode.
Multiple outputs
June 8, 2009

Dumbo 0.21.20 adds support for multiple outputs by providing a -getpath option. Here's an example:
from dumbo import run, sumreducer, opt

def mapper(key, value):
    for word in value.split():
        yield word, 1

@opt("getpath", "yes")
def reducer(key, values):
    yield (key[0].upper(), key), sum(values)

if __name__ == "__main__":
    run(mapper, reducer, combiner=sumreducer)
Running this splitwordcount.py program on my chrooted Cloudera-flavored Hadoop server (after updating Dumbo and building feathers.jar) gave me the following results:
$ dumbo start splitwordcount.py -input brian.txt -output brianwc \
    -hadoop /usr/lib/hadoop/ -python python2.5 -libjar feathers.jar
[...]
$ dumbo ls brianwc -hadoop /usr/lib/hadoop/
Found 17 items
drwxr-xr-x   - klaas [...] /user/klaas/brianwc/A
drwxr-xr-x   - klaas [...] /user/klaas/brianwc/B
drwxr-xr-x   - klaas [...] /user/klaas/brianwc/C
[...]
$ dumbo cat brianwc/B -hadoop /usr/lib/hadoop/
be      2
boy     1
Brian   6
became  2
So each ((<path>, <key>), <value>) pair got stored as (<key>, <value>) in <outputdir>/<path>. This only works when running on Hadoop, by the way. For a local run on UNIX everything would still end up in one file.
Under the hood, -getpath yes basically just makes sure that -outputformat sequencefile (which is the default when running on Hadoop) and -outputformat text get translated to -outputformat fm.last.feathers.output.MultipleSequenceFiles and -outputformat fm.last.feathers.output.MultipleTextFiles, respectively. These OutputFormat implementations are nice illustrations of how easy it can be to integrate Java code with Dumbo programs. The brand-new feathers project already provides a few other Java classes that can also easily be used by Dumbo programs, including a mapper and a reducer. I’ll try to find some time to ramble a bit about those as well, but that’s for another post.