ExoMemory - Steve Liles http://steveliles.github.com Overspill from my brain - because I'll forget it if I don't write it down... http://steveliles.github.com/images/viking.png ExoMemory - Steve Liles http://steveliles.github.com en-gb Copyright 2011 Steve Liles. The contents of this feed are available for non-commercial use only. Steve's own home-grown blog generator <![CDATA[ Building OpenCV as an .aar for Android]]> http://steveliles.github.com/building_opencv_as_an_aar_for_android.html After a couple of years doing Node.js and React.js I've just come back to Android to do a refresh of an app I originally built in 2012 using OpenCV 2.4.5.

Since then OpenCV 3.0.0 has been released and I was full of hope that the static integration route would have gotten easier (it has, sort of) and that the OpenCV guys would have stopped pushing the insane OpenCV Manager integration approach (they haven't).

As it turns out, things are easier now, but not because of changes in OpenCV - rather it's because of changes in Android tooling. The relatively new Android Library format (.aar) means that it is very easy to create a reusable library that bundles OpenCV for Android, and the Gradle build system and its integration with Android Studio makes it super-simple to use a library once it is published somewhere, or to use it as a local module dependency if that's your thing.

I've made an example Android Studio project, and deployed the resulting .aar to bintray. I can't / won't stop anyone using my build, although of course it isn't an official build or anything ;)

I figured I'd document the steps I took, since I'll probably have forgotten by tomorrow otherwise.

  1. Download the OpenCV4Android bundle from sourceforge
  2. Unpack the bundle on your machine
  3. Fire up Android Studio and create a Library Project
  4. Copy the java sources from the OpenCV4Android bundle into src/main/java in your library project
  5. Copy the native library directories from the OpenCV4Android bundle into src/main/jniLibs
  6. Build the .aar

If you only want to integrate OpenCV into one application you could do the above steps as an additional module in your existing app project, then include the .aar in your app module as a module dependency.

If you want to use the library in several apps or whatever, you'll probably want to push it to a maven repo - bintray gives you a nice free way of doing that.

Once you've published your .aar, using it is a doddle. First, reference your maven repo in the top-level build.gradle of your project:

allprojects {
  repositories {
    jcenter()
    maven {
        url  "http://dl.bintray.com/steveliles/maven"
    }
  }
}

Next, add the .aar dependency in your app's build.gradle file:

    dependencies {
      compile 'org.opencv:OpenCV-Android:3.0.0'
    }

Finally, bootstrap OpenCV in your Java code:

    if (OpenCVLoader.initDebug()) {
      // do some opencv stuff
    }
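In practice you'd call this early in an Activity lifecycle method, before touching any other OpenCV classes. A minimal sketch (the Activity, log tag and Mat usage here are illustrative, not part of the published example project):

    public class MainActivity extends Activity {

        @Override
        protected void onResume() {
            super.onResume();
            // initDebug() loads the native OpenCV libraries bundled in the .aar
            if (OpenCVLoader.initDebug()) {
                Mat m = new Mat(3, 3, CvType.CV_8UC1);
                Log.i("OpenCV", "loaded OK, created " + m);
            } else {
                Log.e("OpenCV", "failed to load native libraries");
            }
        }
    }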

Awesome!

Bonus: Split your APKs per architecture

OpenCV includes native libraries for each of the possible device architectures, which means that if you bundle them all in one APK you'll be looking at a minimum 40MB download without counting any of your own assets, code, and what have you.

It is actually very easy to split your APKs by ABI so that the end-user download is kept as small as possible - around 8-9MB - and so that the build+deploy time onto your development device is kept to a minimum if you only target your specific hardware architecture during development.

Setting up APK splitting is very easy with Gradle. Just modify your app module's build.gradle file, adding the following to the 'android' block:

splits {
  abi {
    enable true
    reset()
    include 'x86', 'x86_64', 'armeabi', 'armeabi-v7a', 'mips', 'mips64', 'arm64-v8a'
    universalApk false
  }
}

Enjoy!

]]>
Android OpenCV Library .aar Mon, 25 Jan 2016 00:00:00 +0000
<![CDATA[Is my Android app currently foreground or background?]]> http://steveliles.github.com/is_my_android_app_currently_foreground_or_background.html Update, 2015-03-27: Finally got around to having another look at this, attempting to take into account the feedback from commenters.

I just drafted a new version that tries to respond immediately using onStart/onStop when possible, and deals with edge cases like received phone-calls using a delayed Runnable posted to a Handler like the original.

This time I posted an AndroidStudio project as a new github repo rather than just a gist of the interesting bit.

I'm not convinced the defensive WeakReferences to Listeners are strictly necessary, but it seems pear cider brings out my cautious side (it's Friday night, what can I say).

Fair warning: I haven't tested this exhaustively, YMMV.

Update, 2015-01-30: Lots of interesting discussion in the comments. Nobody, myself included, is particularly happy with the non-determinism inherent in posting runnables to a handler with an arbitrary delay.

Graham Borland pointed out that if you use onStart/onStop rather than onResume/onPause, you no longer need clever strategies or hacks to determine whether you really have gone background, but others have raised edge cases that complicate matters: phone calls trigger onPause but not onStop, and configuration changes (e.g. rotating the device) call onPause->onStop->onStart->onResume which would toggle our state from foreground to background and back to foreground again.

Original post:

Android doesn't directly provide a way to know if your app is currently foreground or background, by which I mean actively running an Activity (screen on, user present, and your app currently presenting UI to the user).

Obviously if you're coding in an Activity then for almost all of the time (e.g. in any callbacks other than onPause, onStop, or onDestroy) you already know you are foreground; however, if you have Services or BroadcastReceivers that need to adjust their behaviour when the app is foreground vs. background, you need a different approach.

Since API level 14 (Android 4, ICS) we can easily obtain this information by hooking into the activity lifecycle events using Application.registerActivityLifecycleCallbacks.

Using this method we can register a single listener that will be called back whenever an activity lifecycle method is called on any activity in our application. We could call our listener class Foreground, and hook it in to the activity lifecycle methods by providing a custom Application class for our application:

public class MyApplication extends Application {

    @Override
    public void onCreate(){
        super.onCreate();
        Foreground.init(this);
    }
}

Of course, we need to register this custom Application class in our Manifest.xml:

<application
    android:label="@string/app_name"
    android:theme="@style/AppTheme"
    android:name=".MyApplication">

So, what does our Foreground class look like? Let's begin by creating a class that implements ActivityLifecycleCallbacks and allows only one instance of itself to be created via a static method:

class Foreground
implements Application.ActivityLifecycleCallbacks {

    private static Foreground instance;

    public static void init(Application app){
        if (instance == null){
            instance = new Foreground();
            app.registerActivityLifecycleCallbacks(instance);
        }
    }

    public static Foreground get(){
        return instance;
    }

    private Foreground(){}

    // TODO: implement the lifecycle callback methods!

}

This approach of using singletons is used a lot in Android programming, as it is a technique recommended by Google.

OK, so we have a class that we can initialise from our Application and then retrieve from any code in our app using Foreground.get(). Now we need to implement the lifecycle callbacks to track the foreground/background status of our app.

To do that we'll use the onActivityPaused/onActivityResumed method-pair, using paused to signal a potential shift to background, and resumed to know we are in the foreground.

private boolean foreground;

public boolean isForeground(){
    return foreground;
}

public boolean isBackground(){
    return !foreground;
}

public void onActivityPaused(Activity activity){
    foreground = false;
}

public void onActivityResumed(Activity activity){
    foreground = true;
}

// other ActivityLifecycleCallbacks methods omitted for brevity
// we don't need them, so they are empty anyway ;)

Nice, so now from any code in our application we can test whether we're currently foreground or not, like this:

Foreground.get().isForeground()
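For example, a BroadcastReceiver might use it to decide whether to raise a notification or leave the visible UI to update itself (the receiver below is illustrative, not part of the original code):

public class MessageReceiver extends BroadcastReceiver {

    @Override
    public void onReceive(Context context, Intent intent) {
        if (Foreground.get().isBackground()) {
            // no Activity is visible - tell the user some other way
            Log.i("MessageReceiver", "app is background, post a notification");
        } else {
            // an Activity is showing - let it update itself
            Log.i("MessageReceiver", "app is foreground, no notification needed");
        }
    }
}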

Cool. Are we done? We-ell, depends.

There are three potential issues here:

  1. The app might go to background at any time, so it would be nice if we could get notified instead of having to continually poll the isForeground method.
  2. When an application transitions between two Activities there is a brief period during which the first Activity is paused and the second Activity has not yet resumed ... during this period isForeground will return false, even though our application is the foreground app.
  3. Application.registerActivityLifecycleCallbacks is only available from API-level 14 onwards.

Can we address all three of these issues? You betcha!

First let's make it possible to get notified of foreground/background state transitions. We'll add a Listener interface to our Foreground class:

class Foreground
implements Application.ActivityLifecycleCallbacks {

    public interface Listener {
        public void onBecameForeground();
        public void onBecameBackground();
    }

    ...
}

We'll also need to manage any registered listeners and allow listeners to be added and removed. We'll manage registered listeners using a thread-safe and efficient List implementation from java.util.concurrent - CopyOnWriteArrayList:

private List<Listener> listeners =
    new CopyOnWriteArrayList<Listener>();

public void addListener(Listener listener){
    listeners.add(listener);
}

public void removeListener(Listener listener){
    listeners.remove(listener);
}

And, of course, we'll need to notify our listeners whenever we transition between foreground and background states, which we'll do by updating our onActivityPaused and onActivityResumed methods:

public void onActivityPaused(Activity activity){
    foreground = false;
    for (Listener l : listeners){
        try {
            l.onBecameBackground();
        } catch (Exception exc) {
            Log.e("Foreground", "Unhappy listener", exc);
        }
    }
}

public void onActivityResumed(Activity activity){
    foreground = true;
    for (Listener l : listeners){
        try {
            l.onBecameForeground();
        } catch (Exception exc) {
            Log.e("Foreground", "Unhappy listener", exc);
        }
    }
}

Alright, now we're able to register listeners with our Foreground class, which will be called back when we transition from foreground to background and vice-versa.

Bear in mind that the callback is invoked from the lifecycle callbacks and therefore on the main thread. Remember the golden rule of Android development: do not block the main thread. If you don't know what that means you should buy my book :)
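For example, a Service might register itself as a Listener and adjust how aggressively it syncs (again, this Service is illustrative rather than part of the original code):

public class SyncService extends Service implements Foreground.Listener {

    @Override
    public void onCreate() {
        super.onCreate();
        Foreground.get().addListener(this);
    }

    @Override
    public void onDestroy() {
        Foreground.get().removeListener(this);
        super.onDestroy();
    }

    @Override
    public void onBecameForeground() {
        // the user is looking at the app - sync eagerly
    }

    @Override
    public void onBecameBackground() {
        // back off to battery-friendly sync intervals
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}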

Right, that's problem 1 sorted, what about problem 2? (What, you forgot it already? I mean the brief period between onPause being called in Activity A before onResume is called in Activity B).

OK, the issue here is that if we blindly update our foreground/background state in onActivityPaused and onActivityResumed we will always have a period where we're reporting incorrect values. Worse, if we're firing events we'll even tell everyone who's listening that we just went background when we didn't really!

Let's fix that by giving ourselves a brief period of grace before announcing that we've gone background. This, like many things in engineering, is a compromise - in this case between immediacy and correctness. We'll accept a small delay in order not to falsely report that we went to background.

To do this we'll use one of the nice features of Android's Handler class - the ability to post a Runnable onto the main-thread's event-loop to be executed after a specified delay.

Things are getting a bit more complex now, and we've some extra state to juggle. We're going to introduce another boolean to track whether we're paused or not, and we'll also need to keep a reference to the Runnable that we post to the main thread, so that we can cancel it when necessary.

private boolean foreground = false, paused = true;
private Handler handler = new Handler();
private Runnable check;

A quick note on Handlers: A Handler created with the no-arg constructor will perform all of its work on the thread that created it. Since we're instantiating this Handler inline in the Foreground class, and the Foreground instance is being created on the main thread during our Application's onCreate method callback, any work we post to this Handler will execute on the main thread.
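If you wanted to be explicit about the thread - or might ever construct Foreground off the main thread - you could tie the Handler to the main looper directly; a small variation, not what the original code does:

private Handler handler = new Handler(Looper.getMainLooper());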

Here's what our updated onActivityPaused and onActivityResumed methods look like:

@Override
public void onActivityResumed(Activity activity) {
    paused = false;
    boolean wasBackground = !foreground;
    foreground = true;

    if (check != null)
        handler.removeCallbacks(check);

    if (wasBackground){
        Log.i(TAG, "went foreground");
        for (Listener l : listeners) {
            try {
                l.onBecameForeground();
            } catch (Exception exc) {
                Log.e(TAG, "Listener threw exception!", exc);
            }
        }
    } else {
        Log.i(TAG, "still foreground");
    }
}

@Override
public void onActivityPaused(Activity activity) {
    paused = true;

    if (check != null)
        handler.removeCallbacks(check);

    handler.postDelayed(check = new Runnable(){
        @Override
        public void run() {
            if (foreground && paused) {
                foreground = false;
                Log.i(TAG, "went background");
                for (Listener l : listeners) {
                    try {
                        l.onBecameBackground();
                    } catch (Exception exc) {
                        Log.e(TAG, "Listener threw exception!", exc);
                    }
                }
            } else {
                Log.i(TAG, "still foreground");
            }
        }
    }, CHECK_DELAY);
}

A couple of things worth pointing out here:

  1. onActivityPaused schedules a Runnable to execute after CHECK_DELAY milliseconds (CHECK_DELAY is set to 500), and captures the Runnable in the check member variable so it can be cancelled if necessary (the field declarations are sketched just below this list)
  2. onActivityResumed removes (cancels) the check callback if there is one, to cancel the pending notification of going background.
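The code above refers to a TAG and a CHECK_DELAY that aren't declared in the snippets; presumably they are fields along these lines (the 500ms value is stated above, the tag string is a guess):

private static final String TAG = "Foreground";
private static final long CHECK_DELAY = 500;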

So now we have a nice neat mechanism for making direct checks for foreground/background status (Foreground.get().isBackground(), etc), and for being notified of changes to this status using the Listener interface.

To support API levels below 14 we'd need to hook our Foreground class more directly from the onPause and onResume methods of each individual Activity. This is most easily done by extending all activities in our application from a common base class and implementing the calls to Foreground from there.
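A sketch of that base class might look like this (illustrative only - it simply calls the same Foreground methods that the lifecycle callbacks invoke on API 14+, and Foreground.init would also need to skip registerActivityLifecycleCallbacks on older API levels):

public abstract class ForegroundAwareActivity extends Activity {

    @Override
    protected void onResume() {
        super.onResume();
        Foreground.get().onActivityResumed(this);
    }

    @Override
    protected void onPause() {
        Foreground.get().onActivityPaused(this);
        super.onPause();
    }
}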

For completeness, here's the github gist containing the full code for the Foreground class we've just explored.

]]>
Android Foreground Background Detect Mon, 21 Apr 2014 00:00:00 +0100
<![CDATA[Asynchronous Android]]> http://steveliles.github.com/asynchronous_android.html My book is published!

It's been an incredible learning experience - I read a lot of tech books, and find them invaluable, but if you really want to learn about something, try writing a book about it - the desire to deliver value to your readers with broad and deep coverage, and to not get things wrong, really focuses the mind!

Here's the cover and the marketing blurb I originally wrote, but which the publishers only used in part. I laboured over it for ages, so I figured I'd post it all here:

Asynchronous Android - Cover

With more than a million apps available from Google Play, it is more important than ever to build apps that stand out from the crowd. To be successful, apps must react quickly to user-input, deliver results in a flash, and sync data in the background. The key to this is understanding the right way to implement asynchronous operations that work with the platform, instead of against it.

Asynchronous Android is a practical book that guides you through the concurrency constructs provided by the Android platform, illustrating the applications, benefits, and pitfalls of each.

There's so much more to Android than Activities and Fragments. To get the best from modern Android devices, developers must look to unlock the power of multi-core CPUs. Concurrency is hard, but Google's engineers have worked miracles to build a platform that makes asynchronous programming natural and fun.

In this book you will:

Learn to use AsyncTask correctly to perform operations in the background, keeping user-interfaces running smoothly while avoiding treacherous memory leaks.

Discover Handler, HandlerThread and Looper, the related and fundamental building blocks of asynchronous programming in Android.

Escape from the constraints of the Activity lifecycle to load and cache data efficiently across your entire application with the Loader framework.

Keep your data fresh with scheduled tasks, and understand how Services let your application continue to run in the background, even when the user is busy with something else.

Key Points:

  • Understand Android's process model and its implications for your applications
  • Exercise multi-threading correctly to build well-behaved Android apps that work with the platform, not against it
  • Apply and control concurrency to deliver results quickly and keep your apps responsive to user-input
  • Avoid the common pitfalls that catch out even experienced developers
  • Discover Android-specific constructs that make asynchronous programming easy and efficient
  • Learn how, when, and why to apply Android's concurrency constructs to build smooth, responsive apps

Asynchronous Android will help you to build well-behaved apps with smooth, responsive user-interfaces; delight your users with speedy results and data that's always fresh; and keep the system happy and the battery charged by playing by the rules.

Right now you can get the book direct from Packt Publishing, and Barnes and Noble.

The links for Amazon US, Amazon UK, OReilly, and Safari aren't active yet, but should be by the time you read this.

A free sample chapter should be available from the Packt page soon - I believe it will be Chapter 7, Scheduling work with AlarmManager.

The source code for all of the examples is available from GitHub, and you can download a pre-built app that runs all of the examples from Google Play.

]]>
Asynchronous Android Book AsyncTask Handler HandlerThread Loader Service AlarmManager Wed, 18 Dec 2013 00:00:00 +0000
<![CDATA[Porting isChangingConfigurations to API-levels below 11]]> http://steveliles.github.com/porting_ischangingconfigurations_to_api_levels_below_11.html A really handy method since API-level 11 is isChangingConfigurations(). When you need to make a decision about which objects to tear down, observers to unregister, etc., you really want to know if your Activity is restarting, going onto the back-stack, or finishing for good.

isFinishing() differentiates between going to the back-stack and the other two cases, but doesn't help us to figure out if we're finishing for good or coming right back following a configuration change, say.

At API-level 11 we got a new method to help address that - isChangingConfigurations(). This is great - in lifecycle methods (typically onPause) we can check to see why we're pausing and potentially leave some of our long lived objects alone, being careful to avoid memory leaks, of course!

What options do we have prior to API-level 11? Not a whole lot, actually. The best I could come up with was to create a base Activity class (sub-classing FragmentActivity, obviously) and override two methods:

  1. onSaveInstanceState - overridden to set a boolean property isConfigChange to true.
  2. isChangingConfigurations - overridden to either invoke the super-class method or return the value of isConfigChange, depending on the API level running the app.

There is one big downside - onSaveInstanceState is not invoked until after onPause has completed, so isChangingConfigurations() will only return a correct value when invoked from onStop pre API-level 11.

Full source code below.
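A minimal sketch of those two overrides, reconstructed from the description above (the class name is illustrative, and this is not the author's original listing):

public class ConfigurationAwareActivity extends FragmentActivity {

    private boolean isConfigChange = false;

    @Override
    protected void onSaveInstanceState(Bundle aOutState) {
        super.onSaveInstanceState(aOutState);
        // runs after onPause when the Activity may be re-created,
        // which is why the value is only reliable from onStop onwards
        isConfigChange = true;
    }

    @Override
    public boolean isChangingConfigurations() {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
            // API-level 11+ can answer this properly
            return super.isChangingConfigurations();
        }
        return isConfigChange;
    }
}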

]]>
Fragmentation Android Mon, 21 Oct 2013 00:00:00 +0100
<![CDATA[Android SSL - Certificate not trusted]]> http://steveliles.github.com/android_ssl_certificate_not_trusted.html I hit a problem in Android, trying to talk HTTPS with an Apache web-server that has an SSL certificate purchased from Dynadot/AlphaSSL:

javax.net.ssl.SSLHandshakeException: 
  java.security.cert.CertPathValidatorException: 
    Trust anchor for certification path not found.

The code to talk to the server looks something like this:

...
HttpsURLConnection _conn = null;
try {
    _conn = (HttpsURLConnection) 
        new URL(aUrl).openConnection();
    if ((_conn.getResponseCode() >= 200) && 
        (_conn.getResponseCode() < 300)) {
        return handleSuccessResponse(_conn);
    } else if (...) {
        ...
    } else {
        return handleErrorResponse(_conn);
    }
} finally {
    _conn.disconnect();
}
...

I googled the message "Trust anchor for certification path not found". Unsurprisingly, StackOverflow shows up several times in the results, but several of the top hits I got suggest reaching immediately for a custom trust manager as the preferred solution.

This set off alarm bells for me, ranging from "that sounds like a lot of effort" through "so, why does connecting to another https site (e.g. my bank) work?".

You do need a custom trust manager if you signed your own certificate.

I haven't thought about it much, but the extra effort of a custom trust manager actually seems to outweigh the cost of buying a certificate.

You do not need a custom trust manager if you bought your certificate!

If you bought a certificate, building a custom trust manager is a complicated, slow, high-effort workaround for your actual problem, and - worse - you'd have to repeat the workaround on each client (imagine you are building apps for Android, iOS and Windows Mobile).

Certificates are signed in a "chain", where the chain eventually leads back to a set of root authority certificates that are known and trusted by Android. The point is to be able to trace a certificate back to a trusted signing authority without having to have any advance knowledge of the certificate.

So why is my certificate not trusted?

It turns out the server I was connecting to was misconfigured. It was returning its own certificate, but it was not returning the intermediate certificate, so there was no chain, and Android was unable to verify its authenticity, hence "trust anchor not found". Happily I was able to get access to the server to fix up the configuration.

One way to investigate your certificate chain is with the openssl s_client tool:

openssl s_client -debug -connect www.thedomaintocheck.com:443

This lists some useful information about your cert, including the chain. If you are experiencing a 'trust anchor not found' error, your chain will probably only contain one element, like this:

Certificate chain
  0 s:/OU=Domain Control Validated/CN=www.thedomaintocheck.com
    i:/O=AlphaSSL/CN=AlphaSSL CA - G2

.. and openssl will finish its output with something like

Verify return code: 21 (unable to verify the first certificate)

A server configured correctly with intermediate certificates will return a complete chain, like this:

Certificate chain
  0 s:/OU=Domain Control Validated/CN=www.thedomaintocheck.com
    i:/O=AlphaSSL/CN=AlphaSSL CA - G2
  1 s:/O=AlphaSSL/CN=AlphaSSL CA - G2
    i:/C=BE/O=GlobalSign nv-sa/OU=Root CA/CN=GlobalSign Root CA
  2 s:/C=BE/O=GlobalSign nv-sa/OU=Root CA/CN=GlobalSign Root CA
    i:/C=BE/O=GlobalSign nv-sa/OU=Root CA/CN=GlobalSign Root CA

.. and Android's HttpsURLConnection will happily accept the certificate - no custom trust manager keystore nonsense here, thank you.

]]>
SSL Android Trust Manager Certificate URLConnection Wed, 01 May 2013 00:00:00 +0100
<![CDATA[Roman numeral conversion in Clojure, part II]]> http://steveliles.github.com/roman_numeral_conversion_in_clojure_part_ii.html Previously I translated Roman numerals to decimals with this code:

(def numerals {\I 1, \V 5, \X 10, \L 50, \C 100, \D 500, \M 1000})

(defn add-numeral [n t]
  (if (> n (* 4 t)) (- n t) (+ t n)))

(defn roman [s]
  (reduce add-numeral (map numerals (reverse s))))

Now I want to continue my Clojure practise by doing the reverse: translating from decimals to Roman numerals.

I started where I left off previously, so I already have the numerals map defined. I figure the inverse of this map will be handy for doing look-ups of decimals, and sure enough clojure has a handy function - map-invert:

=> (use '[clojure.set :only (map-invert)])
nil

=> (doc map-invert)
----------------
clojure.set/map-invert
([m])
  Returns the map with the vals mapped to the keys.
nil

Inverting my numerals map gives:

=> (map-invert numerals)
{10 \X, 5 \V, 1000 \M, 50 \L, 1 \I, 500 \D, 100 \C}

Great, but that highlights a lack of foresight in my original naming of the numerals map, so I define a new one named numerals-to-decimals, and decimals-to-numerals as its inverse:

(def numerals-to-decimals {\I 1, \V 5, \X 10, \L 50, \C 100, \D 500, \M 1000})
(def decimals-to-numerals (map-invert numerals-to-decimals))

Now I can convert easily to numerals where there is an exact match for the decimal:

=> (decimals-to-numerals 10)
\X

But I get nil if there's no match:

=> (decimals-to-numerals 11)
nil

In Roman numerals there is no zero, but they did use the concept of nul (nulla), so I start composing a function which I will incrementally improve as I figure out more steps:

(defn decimal-to-roman [d]
  (cond
    (= 0 d) "nulla"
    :else 
      (decimals-to-numerals d)))

Which yields the following results:

=> (decimal-to-roman 0)
"nulla"
=> (decimal-to-roman 5)
\V
=> (decimal-to-roman 11)
nil

So now I need to tackle the case where the decimal value can only be represented by some composite of the available Roman numerals. Clearly this is going to need to involve dividing the decimal number by the largest available numeral, then repeating for the remainder. Let's try for a single numeral:

(defn decompose-decimal-with-numeral [d v n]
  [n (quot d v) (rem d v)])

Testing that gives:

=> (decompose-decimal-with-numeral 23 10 \X)
[\X 2 3]
=> (decompose-decimal-with-numeral 3 10 \X)
[\X 0 3]

The resulting vector contains the numeral, the number of times it divides into the decimal value, and the remainder. Let's apply that function to our map of decimals-to-numerals:

(defn decompose-decimal-with-numerals [d]
  (for [[v n] decimals-to-numerals]
    (decompose-decimal-with-numeral d v n)))

Applying this function to the decimal 123 gives me the following:

=> (decompose-decimal-with-numerals 123)
([\X 12 3] [\V 24 3] [\M 0 123] [\L 2 23] [\I 123 0] [\D 0 123] [\C 1 23])

That's useful progress, but now I want to get the result of the biggest divisor only. To do that I need to filter out divisors that are larger than the input, and sort the results. Filtering is relatively straightforward: supply a :when filter to the comprehension.

(defn decompose-decimal-with-numerals [d]
  (for [[v n] decimals-to-numerals :when (#(>= d v))]
    (decompose-decimal-with-numeral d v n)))

I could sort at various points along the way. Most efficient is probably to work with a sorted list of numerals from the outset, so let's create one:

(def numerals-desc (sort-by first > decimals-to-numerals))

Printing the new list gives:

=> (println numerals-desc)
([1000 M] [500 D] [100 C] [50 L] [10 X] [5 V] [1 I])

Using that in the decompose method gives:

(defn decompose-decimal-with-numerals [d]
  (for [[v n] numerals-desc :when (#(>= d v))]
    (decompose-decimal-with-numeral d v n)))

=> (decompose-decimal-with-numerals 123)
([\C 1 23] [\L 2 23] [\X 12 3] [\V 24 3] [\I 123 0])

Sweet! We only actually need the result for the largest available divisor, so let's re-write this function to do that:

(defn largest-divisor [d]
  (first (for [[v n] numerals-desc :when (#(>= d v))]
    (decompose-decimal-with-numeral d v n))))

=> (largest-divisor 123)
[\C 1 23]

Actually what I really want is the string representation of the numeral repeated the appropriate number of times, so I create a new function, n-numerals, which concatenates N copies of the numeral into a string, and modify the decompose-decimal-with-numeral function:

(defn n-numerals [n num]
  (apply str (for [n (range 0 n)] num)))

(defn decompose-decimal-with-numeral [d v n]
  [(n-numerals (quot d v) n) (rem d v)])

Now I can apply decompose-decimal-with-numerals and see the break-down for each numeral:

=> (decompose-decimal-with-numerals 23)
(["XX" 3] ["VVVV" 3] ["IIIIIIIIIIIIIIIIIIIIIII" 0])

Good to see that it is giving the correct values, but more importantly I can use largest-divisor to see the Roman numeral form of the quotient:

=>(largest-divisor 323)
["CCC" 23]

What remains is to apply this recursively until the remainder is zero:

(defn decompose-decimal-recursively [d]
  (let [[n r] (largest-divisor d)]
    (if (= 0 r) n (apply str n (decompose-decimal-recursively r)))))

Here I'm using recursion (not strictly tail recursion, since the recursive call is wrapped in apply str), but the recursion can never be very deep because the set of numerals is small, so there should be no danger of blowing the stack. Testing the function gives:

=> (decompose-decimal-recursively 424)
"CCCCXXIIII"
=> (decompose-decimal-recursively 1984)
"MDCCCCLXXXIIII"

These are correct answers, but there are two problems: First, the results are not optimal - we have runs of 4 numerals, for example IIII which can be written more succinctly as IV. Second, I think I've over-complicated things by using division instead of subtraction.

The decimal-to-roman translation program so far looks like:

(use '[clojure.set :only (map-invert)])

(def numerals-to-decimals {\I 1, \V 5, \X 10, \L 50, \C 100, \D 500, \M 1000})
(def decimals-to-numerals (map-invert numerals-to-decimals))

(def numerals-desc (sort-by first > decimals-to-numerals))

(defn n-numerals [n num]
  (apply str (for [n (range 0 n)] num)))

(defn decompose-decimal-with-numeral [d v n]
  [(n-numerals (quot d v) n) (rem d v)])

(defn largest-divisor [d]
  (first (for [[v n] numerals-desc :when (#(>= d v))]
    (decompose-decimal-with-numeral d v n))))

(defn decompose-decimal-recursively [d]
  (let [[n r] (largest-divisor d)]
    (if (= 0 r) n (apply str n (decompose-decimal-recursively r)))))

I'm going to try to simplify before fixing the optimisation problem. I can find the next largest numeral that can be subtracted from the total like this:

(defn largest-numeral-in [d]
  (first (for [[v n] numerals-desc :when (#(>= d v))] [v n])))

Then I can recursively use this to eat away at the target decimal, like this:

(defn decimal-to-roman [d]
  (let [[v n] (largest-numeral-in d) [r] [(- d v)]]
    (if (= 0 r) (str n) (apply str n (decimal-to-roman r)))))

The program just got a lot simpler! From 4 functions in 10 lines to 3 functions in 6 lines. Here it is in entirety:

(use '[clojure.set :only (map-invert)])

(def numerals-to-decimals {\I 1, \V 5, \X 10, \L 50, \C 100, \D 500, \M 1000})
(def decimals-to-numerals (map-invert numerals-to-decimals))

(def numerals-desc (sort-by first > decimals-to-numerals))

(defn largest-numeral-in [d]
  (first (for [[v n] numerals-desc :when (#(>= d v))] [v n])))

(defn decimal-to-roman [d]
  (let [[v n] (largest-numeral-in d) [r] [(- d v)]]
    (if (= 0 r) (str n) (apply str n (decimal-to-roman r)))))

Now to optimise the results. After noodling with this for a while I realise that there really aren't all that many optimisation cases. They are: 4, 9, 40, 45, 49, 90, 95, 99, 400, 450, 490, 495, 499, 900, 950, 990, 995, and 999 - 18 cases. I could generate these and then just do a lookup:

(def optimisations (apply hash-map (flatten 
  (for [[n1 d1] numerals-to-decimals]
    (for [[n2 d2] numerals-to-decimals :when (#(< d2 (quot d1 2)))]
      [(- d1 d2) (str n2 n1)])))))

I'm sure there's a much neater way of doing that, but it does the trick and produces this map:

=> (println optimisations)
{450 LD, 99 IC, 995 VM, 4 IV, 900 CM, 999 IM, 40 XL, 9 IX, 
490 XD, 45 VL, 495 VD, 400 CD, 49 IL, 499 ID, 950 LM, 90 XC, 
990 XM, 95 VC}

Now I can simply check this map in addition to the map of single numerals. Even better would be a single map containing all of these numerals, so:

(def opt-decimals-to-numerals
  (merge decimals-to-numerals optimisations))

Redefining numerals-desc to include the optimisations should be all I need to do:

(def numerals-desc (sort-by first > opt-decimals-to-numerals))

So now when I invoke decimal-to-roman I get:

=> (decimal-to-roman 1999)
"MIM"
=> (decimal-to-roman 1954)
"MLMIV"
=> (decimal-to-roman 1984)
"MLMXXXIV"
=> (decimal-to-roman 313)
"CCCXIII"
=> (decimal-to-roman 413)
"CDXIII"
=> (decimal-to-roman 419)
"CDXIX"

The program in entirety now looks like this:

(use '[clojure.set :only (map-invert)])

(def numerals-to-decimals {\I 1, \V 5, \X 10, \L 50, \C 100, \D 500, \M 1000})
(def decimals-to-numerals (map-invert numerals-to-decimals))

(def optimisations (apply hash-map (flatten 
  (for [[n1 d1] numerals-to-decimals]
    (for [[n2 d2] numerals-to-decimals :when (#(< d2 (quot d1 2)))]
      [(- d1 d2) (str n2 n1)])))))

(def opt-decimals-to-numerals
  (merge decimals-to-numerals optimisations))

(def numerals-desc (sort-by first > opt-decimals-to-numerals))

(defn largest-numeral-in [d]
  (first (for [[v n] numerals-desc :when (#(>= d v))] [v n])))

(defn decimal-to-roman [d]
  (let [[v n] (largest-numeral-in d) [r] [(- d v)]]
    (if (= 0 r) (str n) (apply str n (decimal-to-roman r)))))

There's quite a chunk of code there simply for converting the map of numerals-to-decimals to include the optimisations. The code would be shorter if I simply created that map to begin with:

(def numerals-desc 
  '([1000 "M"] [999 "IM"] [995 "VM"] [990 "XM"] [950 "LM"] [900 "CM"] 
   [500 "D"] [499 "ID"] [495 "VD"] [490 "XD"] [450 "LD"] [400 "CD"] 
   [100 "C"] [99 "IC"] [95 "VC"] [90 "XC"] [50 "L"] [49 "IL"] [45 "VL"] 
   [40 "XL"] [10 "X"] [9 "IX"] [5 "V"] [4 "IV"] [1 "I"]))

(defn largest-numeral-in [d]
  (first (for [[v n] numerals-desc :when (#(>= d v))] [v n])))

(defn decimal-to-roman [d]
  (let [[v n] (largest-numeral-in d) [r] [(- d v)]]
    (if (= 0 r) (str n) (apply str n (decimal-to-roman r)))))

Nice! Now to put everything together so that I can convert both ways - decimal-to-roman and roman-to-decimal:

(use '[clojure.set :only (map-invert)])

(def numerals-desc 
  '([1000 "M"] [999 "IM"] [995 "VM"] [990 "XM"] [950 "LM"] [900 "CM"] 
   [500 "D"] [499 "ID"] [495 "VD"] [490 "XD"] [450 "LD"] [400 "CD"] 
   [100 "C"] [99 "IC"] [95 "VC"] [90 "XC"] [50 "L"] [49 "IL"] [45 "VL"] 
   [40 "XL"] [10 "X"] [9 "IX"] [5 "V"] [4 "IV"] [1 "I"]))

(def numerals-to-decimals
  (map-invert (apply hash-map (flatten numerals-desc))))

(defn largest-numeral-in [d]
  (first (for [[v n] numerals-desc :when (#(>= d v))] [v n])))

(defn decimal-to-roman [d]
  (let [[v n] (largest-numeral-in d) [r] [(- d v)]]
    (if (= 0 r) (str n) (apply str n (decimal-to-roman r)))))

(defn add-numeral [n t]
  (if (> n (* 4 t)) (- n t) (+ t n)))

(defn roman-to-decimal [r]
  (reduce add-numeral 
    (map numerals-to-decimals (map str (reverse r)))))

(defn optimise-roman [r]
  (decimal-to-roman (roman-to-decimal r)))

For fun I added one extra function optimise-roman which takes a roman-numeral string and returns the optimal representation, for example:

=> (optimise-roman "VIIII")
"IX"
=> (optimise-roman "XCVIIII")
"IC"
=> (optimise-roman "MDCCCCLXXXXVIIII")
"MIM"

I confess I am struggling with the mind-shift from imperative to functional programming. I find myself thinking of various convoluted ways to avoid recursion (why!?) and struggling to remember to use list comprehensions, even though (I think) I understand them well enough.

I'm not sure if it was because of this or in spite of it that it took me quite a while to realise that I was starting with a too-complex algorithm (division) when a much simpler one existed (subtraction).

Still, Clojure is fun :)

]]>
clojure exercise roman-numerals Sun, 03 Mar 2013 00:00:00 +0000
<![CDATA[Roman numeral conversion in Clojure]]> http://steveliles.github.com/roman_numeral_conversion_in_clojure.html (see also part II)

I've been reading Stu Halloway's Programming Clojure (which is excellent by the way). When I reached chapter 6 "Concurrency" I thought I'd better pause and practise some of the basics before venturing into deeper waters!

I once saw a programming challenge posted with a job ad, which asked applicants to write some code to convert numbers from the decimal numeral system to Roman. This seemed like a sufficiently challenging and self-contained thing to try for a first attempt at Clojure.

I started by creating a map of the individual numerals:

(def numerals {\I 1, \V 5, \X 10, \L 50, \C 100, \D 500, \M 1000})

Maps are functions, so invoking a map with a key gives the value, like this:

=> (numerals \X)
10

Next I created a function that maps roman numerals to decimal values (baby steps!):

(defn decimal-values [s]
  (map numerals s))

Testing this at the REPL I get:

=> (decimal-values "MCXVII")
(1000 100 10 5 1 1)

Great, I have a list of the values which I need to compose together. Next I want to combine the values in this list by adding them together.

(defn roman [s]
  (reduce + (decimal-values s)))

Testing again at the REPL I get:

=> (roman "MCXVII")
1117

Whoop! OK, we have our first conversion. Are we done? Unfortunately not - there's a complication.

The "simple" way of writing 4 in Roman numerals is IIII. That repetition is allowed, but not commonly used. The short-hand is to prefix a higher numeral with a lower one, e.g. IV, which means subtract the lower number from the higher. This works for all of the other numerals too, so 999 can be written IM (1000 - 1).

I pondered this for a while, then decided to write a combine-numerals function to use in place of + in the reduce call. I want my roman function to look like this:

(defn roman [s]
  (reduce combine-numerals (decimal-values s)))

Now, combine-numerals needs to account for the more complicated logic described above. Here's what I came up with:

(defn combine-numerals [a b]
  (if (> a b)
    (+ a b)
    (- b a)))

This simply checks that the value seen first is larger than the following value - if it is we add, if it isn't we subtract. Testing that at the REPL gives:

=> (combine-numerals 1 10)
9
=> (combine-numerals 10 1)
11

Perfect: if I see a 1 before a 10 it is subtracted to give 9, otherwise it is added to give 11. Let's try our roman function again:

=> (roman "XVII")
17
=> (roman "MCMXVII")
2117

Hmm, that last one isn't right, it should give 1917. What's going on?

reduce takes each value in the given list and applies the function to it and the reduced result so far. That means that when combine-numerals is called it isn't called with two adjacent values except in the case where the numeral string only has two numerals. Bum.

What I really need is to have the sum so far, the previous numeral seen, and the current numeral being handled, so that I can decide whether to add/subtract the current numeral by comparing it with the previous numeral.

I think I could do this with a recursive function, but I'm struggling to remember the syntax, so I ponder a little more and decide to try running in reverse through the numerals, and add if the total so far is less than 4 times the current numeral's value.

This involves changing combine-numerals to multiply the current numeral's value by 4 before comparing, and renaming it to add-numeral:

(defn add-numeral [n t]
  (if (> n (* 4 t))
    (- n t)
    (+ t n)))

I also need to change roman to reduce the list in reverse with the new add-numeral function:

(defn roman [s]
  (reduce add-numeral (map numerals (reverse s))))

Testing at the REPL gives:

=> (roman "IX")
9
=> (roman "MCMXIX")
1919
=> (roman "MIM")
1999
=> (roman "MLMXXXIIII")
1984
=> (roman "MCMLXXXIV")
1984

So, let's see the whole program:

(def numerals {\I 1, \V 5, \X 10, \L 50, \C 100, \D 500, \M 1000})

(defn add-numeral [n t]
  (if (> n (* 4 t))
    (- n t)
    (+ t n)))

(defn roman [s]
  (reduce add-numeral (map numerals (reverse s))))

I confess myself truly amazed that -

  1. I managed to write some Clojure that actually works
  2. I think it is nearly idiomatic Clojure (is it? please comment!)
  3. It is incredibly concise and moderately readable (even to my unfamiliar eyes)

Here's a rough translation of the same algorithm to Java -

import java.util.*;

public class RomanNumerals {

    private static Map<Character, Integer> numerals = 
        new HashMap<Character, Integer>();

    static {
        numerals.put('I', 1);
        numerals.put('V', 5);
        numerals.put('X', 10);
        numerals.put('L', 50);
        numerals.put('C', 100);
        numerals.put('D', 500);
        numerals.put('M', 1000);
    }

    public int convert(String aRoman) {
        int total = 0;      
        for (int i=aRoman.length()-1; i>=0; i--) {
            char c = aRoman.charAt(i);
            total = combine(numerals.get(c), total);
        }
        return total;
    }

    private int combine(int aCurrent, int aRunningTotal) {
        return (aRunningTotal > (aCurrent * 4)) ?
            aRunningTotal - aCurrent : aRunningTotal + aCurrent;
    }

    public static void main(String[] args) {
        System.out.println(new RomanNumerals().convert(args[0]));
    }
}

25 lines of Java (not counting the main method) to 7 lines of Clojure, but I'm not judging (yet).

Java pads out with boiler-plate, and there are big losses in populating the HashMap. I walked the String in reverse rather than actually reversing it. I could have reversed it with new StringBuilder(aRoman).reverse().toString() I suppose, but that's pretty ugly.

Fun stuff! I plan to come back to this exercise over time as I learn more Clojure - pretty sure there are several dozen alternative versions that are more idiomatic/elegant, and I haven't made any attempt at handling bad input, etc.

]]>
clojure exercise roman-numerals Thu, 28 Feb 2013 00:00:00 +0000
<![CDATA[GWT i18n using browser locale]]> http://steveliles.github.com/gwt_i18n_using_browser_locale.html I was trying to get GWT to load a localised permutation based on the user's browser locale. For whatever reason I didn't have much luck coming up with a good combination of search keywords to get a good hit from Google.

After rummaging around in the bowels of GWT I found that it is possible to use the browser locale to determine the build permutation to use.

It doesn't seem all that well publicised or documented (it does not figure in the i18n docs, for example), although it is used in the showcase demo app.

There are clues as to why it isn't well publicised in the bug containing the original patch.

To use the browser's locale to determine the permutation, simply add the following to the module.gwt.xml:

<set-configuration-property name="locale.useragent" value="Y"/>

Setting the locale of Chrome for testing is explained for Windows, Linux, and Mac.

]]>
GWT i18n l10n locale Wed, 27 Feb 2013 00:00:00 +0000
<![CDATA[Cross-domain inter-frame communication in javascript]]> http://steveliles.github.com/cross_domain_inter_frame_communication_in_javascript.html Requirement:

Web-page A from domain A' loads web-page B from domain B' into an iframe. Web-page B wants to be able to render some content into the DOM of web-page A (outside of the view-port described by B's iframe). The content which B renders into A needs to be able to HTTP GET and POST data back to the domain B', handle the responses, and update the rendered content in web-page A.

Problems:

  • Scripts loaded into pages from different domains cannot interact so, for example, page B's scripts cannot simply* render content into the parent frame (page A)
  • Page A cannot simply* use XMLHTTPRequest to GET/POST/PUT/DELETE to host B

[*] I say "simply" because you can't just expect it to work like it would in a same-domain environment, but it is possible!

Escaping from an iframe

Let's start with rendering content into a parent frame. To get concrete, let's say that domain A is www.domain.com, while domain B is sub.domain.com.

Yes, they are sub-domains of a common root. No, B is not allowed to modify the DOM of A, or communicate with scripts in A, because web-browsers won't allow that unless the ports, protocols, and domains match exactly.

The only way for B to escape from the iframe is to have co-operation from A. That co-operation can come in one of two forms:

  1. Both pages must explicitly set the document.domain property to the same value (e.g. domain.com). Even if one of the pages is served directly from "domain.com", the act of explicitly setting the domain is required for this technique to work - it signals to the browser that the two pages want to collaborate.
  2. Have host A serve an iframe-buster page (more below)

Iframe Buster

Host A can serve a page which loads scripts on behalf of B, sometimes known as an iframe-buster.

This is a common technique in the ad-delivery world to allow complex ads like page take-over's to escape from the iframe they are loaded into. Note that this is not an exploit as such, since it requires host A to be complicit.

To illustrate how it works, here's what a very simple iframe-buster might look like:

<!DOCTYPE HTML PUBLIC 
  "-//W3C//DTD HTML 4.01 Transitional//EN" 
  "http://www.w3.org/TR/html4/strict.dtd">
<html>
  <body>
    <script type="text/javascript" language="javascript">
      var _url = "http://www.domain.com/bust.js?" + document.location.search;
      var _script = document.createElement("script");
      _script.setAttribute("type", "text/javascript");
      _script.setAttribute("src", _url);
      document.body.appendChild(_script);        
    </script>    
  </body>
</html>

The host-page A will initially load a page from host B. That page B will load the iframe-buster on host A with some parameters which can be used to direct what the bust-out actually does.

Rendering into the parent DOM

Now that we have assistance from the host-page domain, our iframe can communicate directly with the DOM and scripts in the parent frame, using the window.parent handle.

var _ctx = window.parent.document;
var _div = _ctx.createElement("div");
_div.innerHTML = '<h1>Hooray!</h1>';
_ctx.body.appendChild(_div);

Great!

Making HTTP requests

Now we want to fetch some JSON data from host B by HTTP GET, and render it in the parent frame. For GET requests we might be OK - we just need the host API to support JSONP. If it doesn't, we need one of the techniques described below for making POST requests …

What if we want to POST some data to host B? We can't use XMLHTTPRequest to POST from A to B, as the browser security policies won't allow it. So, what are our options?

  1. HTML Form POST
  2. CORS (Cross Origin Resource Sharing)
  3. Pipelined communication through another frame

HTML Form POST

We could use a form POST, which is allowed to POST to another domain (mostly because the HTML Form POST spec pre-dates the tightened security policies), and will receive the response.

You'll need to do a bit of scripting to wrap things up so that you can register callbacks and have things behave similarly to an XMLHTTPRequest.

This method has the advantage of broad browser compatibility, but the implementation is by necessity less clean, and you lose some of the advantages of XMLHTTPRequest (e.g. the ability to check the response status code).

If you're dealing with a pure RESTful API you'll struggle without the ability to check status codes.

If you have help from the server-side you can probably engineer your way around most of the problems, and even tunnel non-POST API calls by using hidden FORM params and a server-side intercept (e.g. Servlet Filter) to translate the request for you before it hits the API handlers.

That said, if you have control of (or co-operation from) the server-side you'll probably want to look at one of the other methods below.

Advantages:

  1. Good browser compatibility
  2. Easily understood

Disadvantages:

  1. Poor handling of pure RESTful APIs

CORS (Cross Origin Resource Sharing)

We could use CORS, which involves the web-server B checking and sending additional HTTP headers.

This requires a relatively modern browser and some server-side work to check and set additional HTTP headers. CORS is nice because it allows us to conveniently use XMLHTTPRequest for all of our requests (and no need for JSONP).

CORS might put a little extra demand on your servers, as browsers "pre-flight" requests as part of the CORS protocol.

Advantages:

  1. Just like working in a same-domain environment (good for RESTful API handling)
  2. CORS is an emerging standard, so you don't necessarily need to own/operate the host for this method to be a realistic possibility

Disadvantages:

  1. Requires modern browser
  2. Requires that the host supports CORS
  3. Some HTTP request overhead (pre-flight)

Pipeline communication through another iframe

A third option is to pipeline your HTTP calls through another iframe - loaded from the domain of the host you want to make calls to.

In newer browsers we can use window.postMessage to send text between frames loaded from different domains.

Since this text can be JSON, and you can register event-handlers for the "message" event, you can set up a communication-frame per host that you need to talk to, and from inside that frame you can use straight-forward XMLHTTPRequest calls, same-domain style.

There are some neat libraries that use a variety of fallback methods (message-passing via window.name; flash) to make this work in older browsers. The most popular one seems to be EasyXDM.

Advantages:

  1. Good browser compatibility (use libraries like EasyXDM)
  2. Good for RESTful API handling

Disadvantages:

  1. More complex set-up
  2. You need control of the host
  3. There's some small overhead in piping everything as strings through nested iframe's

Summary

As with everything, there is no one-size-fits-all solution, and some flexibility and compromise is likely to be necessary. For the project I'm working on currently I'm using iframe busters, a little CORS, and a lot of pipelining through another frame, but YMMV.

]]>
javascript iframe window cross-domain postMessage Tue, 05 Feb 2013 00:00:00 +0000
<![CDATA[Subversion 1.7 Eclipse integration in Ubuntu 12]]> http://steveliles.github.com/subversion_1_7_eclipse_integration_in_ubuntu_12.html Almost a year ago I posted about getting Subclipse/Subversion/Eclipse/Javahl to play nicely together in Ubuntu 11.10.

Things have changed a little with Quantal Quetzal (notably that Canonical have updated their repos to support SVN 1.7.7 and that the libsvn-java installation has moved), so here's an updated note for getting Subversion 1.7.x integration working with Eclipse (3.7.x) and Subclipse 1.8.x on Ubuntu 12.10.

I'm assuming you already have Eclipse and Subclipse installed (with all the optional extras).

To use the native svn integration you will of course need subversion installed, so install Subversion from Canonical's repos - sudo apt-get install subversion.

You'll also need libsvn-java, to allow subclipse to talk to svn - sudo apt-get install libsvn-java.

To enable Eclipse to see your libsvn-java installation, go to the eclipse install directory (I install in /home/steve/dev/tools/eclipse) and edit the eclipse.ini file.

You need to add -Djava.library.path=/usr/lib/x86_64-linux-gnu/jni/, which is where libsvn-java's native libraries get installed. Add it immediately following -vmargs. My eclipse.ini file now looks like this:

-startup
plugins/org.eclipse.equinox.launcher_1.2.0.v20110502.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.gtk.linux.x86_64_1.1.100.v20110505
-showsplash
org.eclipse.platform
--launcher.XXMaxPermSize
256m
--launcher.defaultAction
openFile
-vmargs
-Djava.library.path=/usr/lib/x86_64-linux-gnu/jni/
-Xms40m
-Xmx600m

If you use Subclipse but never previously installed Javahl you probably see irritating warning dialogs the first time you do anything in Eclipse after a restart. Installing javahl correctly will prevent those :).

]]>
ubuntu eclipse subversion javahl subclipse quantal Tue, 13 Nov 2012 00:00:00 +0000
<![CDATA[Setting up embedded Jetty 8 and Spring MVC with Maven and NO XML]]> http://steveliles.github.com/setting_up_embedded_jetty_8_and_spring_mvc_with_maven_and_no_xml.html You can check out the complete source of this simple project from github. If you want to set up with XML configuration, check my earlier post.

Starting a new project, and irritated by xml configuration, I thought I'd try Spring @MVC (annotation-configured MVC) with Jetty 8 embedded, using the no-xml Servlet 3.0 configuration approach.

Initializing the Servlet Context

The Servlet 3 spec introduces the ability to configure the servlet context from code, via implementations of ServletContainerInitializer. You can dynamically configure Servlets and Filters here.

Spring @MVC provides an implementation of ServletContainerInitializer (SpringServletContainerInitializer) which tells the container to scan for classes which implement WebApplicationInitializer, so when using @MVC we need to provide an implementation of WebApplicationInitializer.

Here's a simple one that gets us up and running with a Spring DispatcherServlet mapped to "/" and JSP processing for *.jsp requests (including those forwarded from Controllers):

public class WebAppInitializer implements WebApplicationInitializer
{
    private static final String JSP_SERVLET_NAME = "jsp";
    private static final String DISPATCHER_SERVLET_NAME = "dispatcher";

    @Override
    public void onStartup(ServletContext aServletContext) 
    throws ServletException
    {       
        registerListener(aServletContext);
        registerDispatcherServlet(aServletContext);
        registerJspServlet(aServletContext);
    }

    private void registerListener(ServletContext aContext)
    {
        AnnotationConfigWebApplicationContext _root = 
            createContext(ApplicationModule.class);
        aContext.addListener(new ContextLoaderListener(_root));
    }

    private void registerDispatcherServlet(ServletContext aContext)
    {
        AnnotationConfigWebApplicationContext _ctx = 
            createContext(WebModule.class);
        ServletRegistration.Dynamic _dispatcher = 
            aContext.addServlet(
                DISPATCHER_SERVLET_NAME, new DispatcherServlet(_ctx));
        _dispatcher.setLoadOnStartup(1);
        _dispatcher.addMapping("/");
    }

    private void registerJspServlet(ServletContext aContext) {
        ServletRegistration.Dynamic _dispatcher = 
            aContext.addServlet(JSP_SERVLET_NAME, new JspServlet());
        _dispatcher.setLoadOnStartup(1);
        _dispatcher.addMapping("*.jsp");
    }

    private AnnotationConfigWebApplicationContext createContext(
        final Class<?>... aModules)
    {
        AnnotationConfigWebApplicationContext _ctx = 
            new AnnotationConfigWebApplicationContext();
        _ctx.register(aModules);
        return _ctx;
    }
}

Notice here that I am registering two "Modules" (a naming convention I've adopted for my Spring @Configuration classes) - ApplicationModule and WebModule. I like to configure the various layers of the application separately.

In ApplicationModule I'll put things like scheduled operations and any dependencies those operations need, while anything that is only needed during web request handling I'll put in WebModule.

ApplicationModule for a simple web-app might be unnecessary.

@Configuration
public class ApplicationModule
{
    // Declare "application" scope beans here (ie., 
    // beans that are not _only_ used by the web context)
}

WebModule will be used to configure Spring MVC, and for a simple web-app might look like this:

@EnableWebMvc
@Configuration
@ComponentScan(basePackages={"com.sjl"})
public class WebModule extends WebMvcConfigurerAdapter
{
    @Override
    public void addViewControllers(ViewControllerRegistry aRegistry)
    {
        aRegistry.addViewController("/").setViewName("index");
    }

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry aRegistry)
    {
        ResourceHandlerRegistration _res = 
            aRegistry.addResourceHandler("/WEB-INF/view/**/*");
        _res.addResourceLocations(
            "classpath:/META-INF/webapp/WEB-INF/view/");
    }

    @Bean
    public ViewResolver viewResolver() 
    {
        UrlBasedViewResolver _viewResolver = 
            new UrlBasedViewResolver();
        _viewResolver.setViewClass(JstlView.class);
        _viewResolver.setPrefix("WEB-INF/view/");
        _viewResolver.setSuffix(".jsp");
        return _viewResolver;
    }
}

I'm extending Spring's WebMvcConfigurerAdapter, which provides a host of conveniences. Note that this WebModule sets the annotations @EnableWebMvc and @ComponentScan, which are equivalent to the XML configuration you're probably familiar with:

<mvc:annotation-driven/>   
<context:component-scan base-package="com.sjl" />

The ResourceHandlerRegistration provides a mapping from requests forwarded to /WEB-INF/view/ onto the classpath location of the actual files. Without this, for example, Jetty won't be able to find your jsp files when Controllers forward requests to Views (the ViewResolver's prefix must be matched by the ResourceHandler's path-pattern).

What remains is to instantiate Jetty and have it find its configuration from the classpath. I won't list that here as it's quite long, and the full working example is on github.

An important thing to point out is that there is a problem with current versions of Jetty (8.1.7) where Jetty won't find your WebApplicationInitializer classes unless they are either inside a jar or in WEB-INF/classes. When running embedded from your IDE neither of these will be true.

This results in log output like "No Spring WebApplicationInitializer types detected on classpath" and is why, in my WebServer class, I set a subclass of AnnotationConfiguration which overrides the default Jetty behaviour to also search for non-jar'd classes on the classpath (see the code from around line 75).
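
To give a flavour of that wiring, here's a rough sketch - ClasspathAwareAnnotationConfiguration is a made-up name standing in for the AnnotationConfiguration subclass described above, and I've trimmed the list of Configuration classes down to the interesting ones (the WebServer class in the github project is the authoritative version):

import org.eclipse.jetty.server.*;
import org.eclipse.jetty.webapp.*;

public class Main
{
    public static void main(String[] aArgs) throws Exception
    {
        WebAppContext _ctx = new WebAppContext();
        _ctx.setContextPath("/");
        _ctx.setWar("src/main/java/META-INF/webapp"); // adjust to your layout

        // swap Jetty's default AnnotationConfiguration for our subclass
        // that also scans non-jar'd classes on the classpath
        _ctx.setConfigurations(new Configuration[] {
            new WebInfConfiguration(),
            new WebXmlConfiguration(),
            new ClasspathAwareAnnotationConfiguration(), // hypothetical name
            new JettyWebXmlConfiguration()
        });

        Server _server = new Server(8080);
        _server.setHandler(_ctx);
        _server.start();
        _server.join();
    }
}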

]]>
Jetty Embedded Spring MVC Maven no-xml Tue, 13 Nov 2012 00:00:00 +0000
<![CDATA[Configuring global exception-handling in Spring MVC]]> http://steveliles.github.com/configuring_global_exception_handling_in_spring_mvc.html It took a couple of hours to figure this out - the mighty Google and even StackOverflow let me down - in the end I had to actually read Spring's DispatcherServlet code! (I know, right!?)

Here's the problem I was having - I'm using Spring MVC's data-binding tricks to inject objects into my @Controller's methods like this:

@Controller
@RequestMapping("things/{thing}.html")
class MyController {
    public ModelAndView thing(@PathVariable Thing aThing) {
        // Thing should be magically mapped from 
        // the {thing} part of the url

        return new ModelAndView(..blah..);
    }
}

I have global Formatters configured as described in my previous post, and I want my method parameters to be automatically conjured from @PathVariables and so on.

So far so good .. until I make a screw-up and parameter binding fails for any reason, at which point Spring's exception-handling kicks in. When that happens, Spring eats the exception and dumps me on the world's shittiest error page saying:

HTTP ERROR 400
Problem accessing /your/url/whatever.html. Reason:
    Bad Request

Wow, thanks Spring!

To blame here are Spring's default set of HandlerExceptionResolvers, which are specified in DispatcherServlet.properties in the spring-webmvc jar. In 3.1.2 it says:

org.springframework.web.servlet.HandlerExceptionResolver=
    org.springfr..AnnotationMethodHandlerExceptionResolver,
    org.springfr..ResponseStatusExceptionResolver,
    org.springfr..DefaultHandlerExceptionResolver

(I've shortened the package-names to keep things readable)

Beats me why the default is to eat the exception without even logging it when Spring is normally so chatty about everything it does, but there you go. OK, so we need to configure some custom exception-handling so we can find out what's actually going wrong. There are two ways (that I know of) to do that:

  1. Use @ExceptionHandler annotated methods in our @Controller's to handle exceptions on a per-controller basis (or across more than one @Controller if you have a hierarchy and implement the @ExceptionHandler method high-up in the hierarchy).
  2. Register a HandlerExceptionResolver implementation to deal with exceptions globally (ie. across all @Controller's, regardless of hierarchy).

@ExceptionHandler

These bad-boys are straight-forward to use - just add a method in your @Controller and annotate it with @ExceptionHandler(SomeException.class) - something like this:

@Controller
class MyExceptionalController {
    @ExceptionHandler(Exception.class) 
    public void handleExceptions(Exception anExc) {
        anExc.printStackTrace(); // do something better than this ;)
    }

    @RequestMapping("/my/favourite/{thing}")
    public void showThing(@PathVariable Thing aThing) {
        throw new RuntimeException("boom");
    }
}

That exception-handler method will now be triggered for any exceptions that occur while processing this controller - including any exceptions that occur while trying to format the Thing parameter.

There's a bit more to it, for example you can parameterise the annotation with an array of exception-types. Shrug.

Just for completeness, it's worth mentioning that when formatting/conversion fails the exception presented to the @ExceptionHandler will be a TypeMismatchException, possibly wrapping a ConversionFailedException which in turn would wrap any exception thrown by your Formatter classes.
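
A minimal sketch of unwrapping that chain in an @ExceptionHandler method, added alongside the others in your @Controller (the "error" view name is just an assumption, not something from my actual code):

@ExceptionHandler(TypeMismatchException.class)
public ModelAndView handleBindingFailure(TypeMismatchException anExc) {
    Throwable _cause = anExc.getCause(); // often a ConversionFailedException
    if (_cause instanceof ConversionFailedException) {
        _cause = _cause.getCause(); // whatever your Formatter threw
    }
    // log it properly and render something friendlier in real code
    return new ModelAndView("error", "message", String.valueOf(_cause));
}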

Custom HandlerExceptionResolver

This is the better approach, IMHO: set up a HandlerExceptionResolver to deal with exceptions across all @Controllers and override with @ExceptionHandlers if you have specific cases that need special handling.

A dead-simple HandlerExceptionResolver might look like this:

package com.sjl.web;

import javax.servlet.http.*;

import org.springframework.core.*;
import org.springframework.web.servlet.*;

public class LoggingHandlerExceptionResolver 
implements HandlerExceptionResolver, Ordered {
    public int getOrder() {
        return Integer.MIN_VALUE; // we're first in line, yay!
    }

    public ModelAndView resolveException(
        HttpServletRequest aReq, HttpServletResponse aRes,
        Object aHandler, Exception anExc
    ) {
        anExc.printStackTrace(); // again, you can do better than this ;)
        return null; // trigger other HandlerExceptionResolver's
    }
}

Two things worth pointing out here:

  1. We are implementing Ordered and returning Integer.MIN_VALUE - this puts us at the front of the queue for resolving exceptions (and ahead of the default). If we don't implement Ordered we won't see the exception before one of the default handlers grabs and handles it. The default handlers appear to be registered with orders of Integer.MAX_VALUE, so any int below that will do.
  2. We are returning null from the resolveException method - doing this means that the other handlers in the chain get a chance to deal with the exception. Alternatively we can return a ModelAndView if we want to (and if we know how to deal with this particular kind of exception), which will prevent handlers further down the chain from seeing the exception.

There are some classes in Spring's HandlerExceptionResolver hierarchy that you might want to look at sub-classing - AbstractHandlerMethodExceptionResolver and SimpleMappingExceptionResolver are good ones to check first.

Of course we need to make Spring's DispatcherServlet aware of our custom HandlerExceptionResolver. The only configuration we need is:

<bean class="com.sjl.web.LoggingHandlerExceptionResolver"/>

No really, that's it.

There's an unusually high level of magic surrounding the DispatcherServlet, so although you must define your resolver as a bean in your spring config you do not need to inject it into any other spring beans. The DispatcherServlet will search for beans implementing the interface and automagically use them.
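
If you're using Java config rather than XML (as in the no-xml Jetty post), the equivalent is just declaring it as a @Bean in one of your @Configuration classes - the DispatcherServlet detects HandlerExceptionResolver beans by type in exactly the same way. A sketch:

@Configuration
public class WebModule {
    @Bean
    public HandlerExceptionResolver loggingHandlerExceptionResolver() {
        return new LoggingHandlerExceptionResolver();
    }
}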

]]>
Spring MVC @RequestMapping data-binding exception handling Fri, 05 Oct 2012 00:00:00 +0100
<![CDATA[Configuring global data-binding formatters in Spring MVC]]> http://steveliles.github.com/configuring_global_data_binding_formatters_in_spring_mvc.html Here's a very quick how-to for configuring Spring-MVC (Spring 3.1.x) to use a global set of formatters for converting (data-binding) web-request and form parameters for use in Controllers, rather than having an @InitBinder-annotated method in every @Controller.

In your spring-web configuration:

<mvc:annotation-driven conversion-service="conversionService"/>

<!-- just to show that we can wire other beans into our registrar -->
<bean id="serviceA" class="com.sjl.myproject.ServiceA"/>
<bean id="serviceB" class="com.sjl.myproject.ServiceB"/>

<!-- Binding -->    
<bean 
  id="customFormatterRegistrar" 
  class="com.sjl.myproject.web.config.CustomFormatterRegistrar">
  <constructor-arg ref="serviceA"/>        
  <constructor-arg ref="serviceB"/>
</bean>

<bean 
  id="conversionService"
  class="
    org.springframework.format.support.FormattingConversionServiceFactoryBean">
  <property name="formatterRegistrars">
    <set>
      <ref local="customFormatterRegistrar"/>
    </set>
  </property>
</bean>

Be sure to add the "conversion-service" attribute (pointing at your conversionService bean) to the <mvc:annotation-driven> element, otherwise it won't work!

The CustomFormatterRegistrar class:

package com.sjl.myproject.web.config;

import org.springframework.format.*;
import com.sjl.myproject.*;

public class CustomFormatterRegistrar implements FormatterRegistrar {
    private ServiceA serviceA;
    private ServiceB serviceB;      

    // construct the registrar with other spring-beans as constructor args
    public CustomFormatterRegistrar(
        ServiceA aServiceA,
        ServiceB aServiceB) {
        serviceA = aServiceA;
        serviceB = aServiceB;
    }

    @Override
    public void registerFormatters(FormatterRegistry aRegistry) {
        aRegistry.addFormatter(new SomeTypeFormatter(serviceA));
        aRegistry.addFormatter(new OtherTypeFormatter(serviceB));
    }
}

An example formatter:

package com.sjl.myproject.web.config;

import java.text.*;
import java.util.*;

import org.springframework.format.Formatter;

import com.sjl.myproject.*;

public class SomeTypeFormatter implements Formatter<SomeType> {
    private ServiceA serviceA;

    public SomeTypeFormatter(ServiceA aServiceA) {
        serviceA = aServiceA;
    }

    @Override
    public String print(SomeType aSomeType, Locale aLocale) {
        return aSomeType..; // produce some string-based identifier
    }

    @Override
    public SomeType parse(String aText, Locale aLocale) throws ParseException {
        return serviceA.lookupByNameOrIdOrSomething(aText);
    }
}

And a Controller that benefits from it:

package com.sjl.myproject.web.controllers;

import org.springframework.stereotype.*;
import org.springframework.web.bind.annotation.*;

import com.sjl.myproject.*;

@Controller
public class SomeController {
    public static final String URL = "path/with/{param1}/and/{param2}";

    @RequestMapping(SomeController.URL)
    public String blah(
        @PathVariable SomeType param1, 
        @PathVariable OtherType param2) {

        // .. do stuff with our typed params

        return "view-name";
    }
}
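
For completeness: if you're using Java config rather than XML (as in the no-xml Jetty post), a roughly equivalent sketch - re-using the ServiceA/ServiceB beans from above, autowired rather than handed to a registrar - is to override addFormatters in your WebMvcConfigurerAdapter:

@EnableWebMvc
@Configuration
public class WebModule extends WebMvcConfigurerAdapter {
    @Autowired
    private ServiceA serviceA;

    @Autowired
    private ServiceB serviceB;

    @Override
    public void addFormatters(FormatterRegistry aRegistry) {
        aRegistry.addFormatter(new SomeTypeFormatter(serviceA));
        aRegistry.addFormatter(new OtherTypeFormatter(serviceB));
    }
}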
]]>
Spring MVC data-binding convert format Mon, 01 Oct 2012 00:00:00 +0100
<![CDATA[Spring config for parameterised, non-static factory methods]]> http://steveliles.github.com/spring_config_for_parameterised_non_static_factory_methods.html I recently discovered a nice way of using beans defined in your spring config as factories in the definition of other beans.

It's great, for example, when you want a factory that has non-static factory methods, or that relies on a bunch of other dependencies, or when you want to instrument beans via some service which is itself a bean (this is what I was doing when I made this discovery). Here's how it looks..

A factory class whose factory-method is non-static and requires parameters:

package com.sjl;

class Factory {
    private DependencyA depA;

    public Factory(DependencyA aDepA) {
        depA = aDepA;
    }

    public ResultType newInstance(DependencyB aDepB) {
        ResultType _result = ..; // use the deps to cook up result
        return _result;
    }
}

.. and a Spring XML config:

<bean id="depA" class="com.sjl.DependencyA"/>
<bean id="depB" class="com.sjl.DependencyB"/>

<bean id="factory" class="com.sjl.Factory">
  <constructor-arg ref="depA"/>
</bean>

<bean id="result" class="com.sjl.ResultType"
       factory-bean="factory" factory-method="newInstance">
   <constructor-arg ref="depB"/>
</bean>

So what we have here is an instance of Factory, created with a dependency (depA), on which we invoke a non-static method with arguments to create our ResultType.

The bit that surprised me was the use of <constructor-arg> elements to define the parameters to pass to the factory method.
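
As an aside (purely my own sketch, not part of the original config), the same wiring in a Java @Configuration class needs no special treatment at all - the factory method is just a method call:

@Configuration
public class FactoryModule {
    @Bean
    public DependencyA depA() {
        return new DependencyA();
    }

    @Bean
    public DependencyB depB() {
        return new DependencyB();
    }

    @Bean
    public Factory factory() {
        return new Factory(depA());
    }

    @Bean
    public ResultType result() {
        return factory().newInstance(depB());
    }
}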

Instrumentation

If you followed any of my recent posts you'll know that I've been playing with dynamic proxies to create services that automagically decorate objects with instrumented versions.

As an example, in this post I showed an InstrumentationService which adds timing around method invocations.

I wanted to instrument several (about 8 actually) of my beans via a service that adds health monitoring, where the healthiness of a service is measured as a ratio of successful method invocations to unsuccessful ones (that throw exceptions).

The interface for instrumenting objects for health-monitoring looks like this:

interface HealthServiceInstrumenter {
    public <T> T instrument(T aT);
}

So what I needed from Spring is:

  1. to create the instance of my HealthServiceInstrumenter,
  2. to create the instances of various different T to pass through the HealthServiceInstrumenter, and
  3. the tricky part - to get spring to create the instrumented bean of type T by passing the original bean through the instrumenter.

Here's what the spring wiring looks like for that:

<bean id="health-instrumenter" class="com.sjl.HealthInstrumentationService"/>

<bean id="uninstrumented-bean-A" class="com.sjl.BeanA" 
        autowire-candidate="false"/>

<bean id="bean-A" class="com.sjl.BeanA"
       factory-bean="health-instrumenter" 
       factory-method="instrument">
   <constructor-arg ref="uninstrumented-bean-A"/>
</bean>

<bean id="uninstrumented-bean-B" class="com.sjl.BeanB" 
        autowire-candidate="false"/>

<bean id="bean-B" class="com.sjl.BeanB"
       factory-bean="health-instrumenter" 
       factory-method="instrument">
   <constructor-arg ref="uninstrumented-bean-B"/>
</bean>
]]>
Spring Factory Fri, 14 Sep 2012 00:00:00 +0100
<![CDATA[Implicit Future's, aka Promises]]> http://steveliles.github.com/implicit_future_s_aka_promises.html java.util.concurrent.Future<T> is an example of an explicit future, where client code is well aware that the object it is handling is not a direct reference to the value of interest, and must invoke a method to obtain the value (Future.get() in the case of java.util.concurrent.Future).

That's all very well, but if you have collaborators that expect to deal with the value T you have limited options:

You could invoke get() on your future, wait for it to be realised, then pass the realised value to the collaborators. This defeats the purpose of Futures, since what you really want is to do as much other work as possible before future.get() is called.

Alternatively, you could modify the collaborators to know that they are dealing with a Future. But you don't really want to do that either - it's an implementation detail that they should not be concerned with.

What you really want is to pass around implicit futures that hide the fact that the object is anything other than a pojo.

You can create implicit futures by wrapping an explicit Future<T> in an implementation of interface T and delegating all of the methods to future.get().xxx(). Here's what that might look like:

// the type expected by client code
interface ExpensiveToCompute {
    public BigDecimal getValue1() throws Exception;
    public BigInteger getValue2() throws Exception;
}

interface Computer {
    public ExpensiveToCompute compute() throws Exception;
}

class SynchronousComputer implements Computer {
    public ExpensiveToCompute compute() throws Exception {
        // ..
    }
}

// the implicit future, delegating to an explicit future
class ImplicitFutureExpensiveToCompute implements ExpensiveToCompute {
    private Future<ExpensiveToCompute> delegate;

    public ImplicitFutureExpensiveToCompute(
        Future<ExpensiveToCompute> aDelegate) {
        delegate = aDelegate;
    }

    public BigDecimal getValue1() throws Exception {
        return delegate.get().getValue1();
    }

    public BigInteger getValue2() throws Exception {
        return delegate.get().getValue2();
    }
}

// the async version that returns implicit futures
class AsynchronousComputer implements Computer {
    private ExecutorService executor = ..;
    private SynchronousComputer sync = ..;

    public ExpensiveToCompute compute() throws Exception {
        return new ImplicitFutureExpensiveToCompute(
            executor.submit(new Callable<ExpensiveToCompute>() {
                public ExpensiveToCompute call() throws Exception {
                    return sync.compute();
                }
            }));
    }
}

Pretty straight-forward, although there's quite a bit of boiler-plate, and I've passed the buck on exception handling.

This example is very simple, but things can get more involved if, for example, you want to use Future's overloaded get(long timeout, TimeUnit units) and handle timeouts appropriately (say, by returning a default value).

What if, instead of all this, you could pass your current synchronous implementation through some machinery that converted appropriately annotated methods to run asynchronously and return implicit futures, without the chore of having to create those classes yourself?

It might look like this:

// the type expected by client code
interface ExpensiveToCompute {
    public BigDecimal getValue1() throws Exception;
    public BigInteger getValue2() throws Exception;
}

interface Computer {
    @ComputationallyExpensive
    public ExpensiveToCompute compute() throws Exception;
}


// the synchronous implementation - exact same as before
class SynchronousComputer implements Computer {
    public ExpensiveToCompute compute() throws Exception {
        // ..
    }
}

// the async version, returning implicit futures
class AsynchronousComputer implements Computer {
    private Computer async;

    public AsynchronousComputer(
        AsyncificationService anAsyncifier, Computer aDelegate) {
        async = anAsyncifier.makeAsync(aDelegate);
    }

    public ExpensiveToCompute compute() throws Exception {
        return async.compute();
    }
}

This time we didn't need to create the implicit future implementation, cutting a whole lot of boiler-plate, and the async implementation got a fair bit simpler too. We marked the expensive method with an annotation so that the AsyncificationService knew to work its magic on that method.

There's a lot more useful stuff we can do when we have the machinery for converting synchronous methods to asynchronous methods that return implicit futures. For example we can transparently handle exceptions and return default values, or we can impose timeouts and return default values if we don't get a result in time, etc., etc.
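
To make the idea concrete, here's a rough sketch (not the github project's actual code) of how such an AsyncificationService might be built on a dynamic proxy. It assumes the @ComputationallyExpensive annotation is retained at runtime and that annotated methods return interface types:

import java.lang.reflect.*;
import java.util.concurrent.*;

public class AsyncificationService {
    private final ExecutorService executor = Executors.newCachedThreadPool();

    @SuppressWarnings("unchecked")
    public <T> T makeAsync(final T aDelegate) {
        return (T) Proxy.newProxyInstance(
            aDelegate.getClass().getClassLoader(),
            aDelegate.getClass().getInterfaces(),
            new InvocationHandler() {
                public Object invoke(
                    Object aProxy, final Method aMethod, final Object[] aArgs)
                throws Throwable {
                    if (!aMethod.isAnnotationPresent(ComputationallyExpensive.class))
                        return aMethod.invoke(aDelegate, aArgs);

                    // run the expensive method on the executor
                    final Future<Object> _future =
                        executor.submit(new Callable<Object>() {
                            public Object call() throws Exception {
                                return aMethod.invoke(aDelegate, aArgs);
                            }
                        });

                    // an implicit future: an instance of the method's return
                    // type that only blocks on the Future when it is used
                    return Proxy.newProxyInstance(
                        aMethod.getReturnType().getClassLoader(),
                        new Class<?>[] { aMethod.getReturnType() },
                        new InvocationHandler() {
                            public Object invoke(Object aP, Method aM, Object[] aA)
                            throws Throwable {
                                return aM.invoke(_future.get(), aA);
                            }
                        });
                }
            });
    }
}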

If you want to see how we might implement such machinery, or want to try using it, fork the code for Implicit-Futures from github.

]]>
Threads Concurrency Java Future Promise Sun, 09 Sep 2012 00:00:00 +0100
<![CDATA[Dynamic Proxies in Java]]> http://steveliles.github.com/dynamic_proxies_in_java.html Dynamic Proxies are a fantastic tool to have in your kit, and pretty easy to get up and running with.

A Dynamic Proxy is just what the name suggests: a proxy to a "normal" Java class, where the proxy is created dynamically - at runtime - and can be substituted instead of the proxied class.

If that still doesn't make sense, hopefully the example below will clear it up.

Let's imagine we want to be able to time the execution of any method on any implementation of any interface. We don't know or care what the interface is. We'll do this by passing the class that implements the interface to an "InstrumentationService", whose interface looks like this:

public interface InstrumentationService {
    /**
     * @param aT - the object to be instrumented for monitoring
     * @return A polymorphically equivalent T which has been instrumented
     */
    public <T> T instrument(T aT);
}

We'll get to the implementation of InstrumentationService shortly, but for now it should be clear that to instrument a class is as simple as this:

private InstrumentationService instr;

public void doSomeStuff() {
    SomeInterface _si = new SomeClassThatImplementsIt();

    _si.doSomething(); // won't be timed

    _si = instr.instrument(_si);

    _si.doSomething(); // will be timed!
}

OK, so how can we implement InstrumentationService so that it can decorate arbitrary methods on as-yet-unknown interfaces? Enter Dynamic Proxies.

There are several rules and caveats to follow which I won't go into - they are documented pretty well here. For now it should suffice to say that you can only proxy interfaces (which is ok because you always program to interfaces / design by contract anyway, right?)

Here's an implementation of InstrumentationService that uses Dynamic Proxying:

package com.sjl.example;

import java.lang.reflect.*;
import java.util.*;
import java.util.concurrent.*;

public abstract class DynamicInstrumentationService 
implements InstrumentationService {

    protected abstract void record(String anEvent, long aNanos);

    @SuppressWarnings("unchecked")
    @Override
    public <T> T instrument(final T aT)
    {
        return (T) Proxy.newProxyInstance(
            aT.getClass().getClassLoader(), 
            aT.getClass().getInterfaces(), 
            new InvocationHandler() {
                @Override
                public Object invoke(
                    Object aProxy, Method aMethod, Object[] aArgs) 
                    throws Throwable 
                {
                    long _start = System.nanoTime();
                    try {
                        return aMethod.invoke(aT, aArgs);
                    } catch (InvocationTargetException anExc) {
                        throw anExc.getCause();
                    } finally {
                        record(aMethod.getName(), System.nanoTime() - _start);
                    }
                }
            });
    }
}

So what do we have here?

  1. An abstract implementation of InstrumentationService which defers the actual recording of the timed value to a concrete subclass - you could extend it and implement the record method to log to stdout, for example.
  2. The instrument method creates a new Dynamic Proxy around the given object by invoking Proxy.newProxyInstance. Notice that we use the classloader of the given class, and pass all of the interfaces it implements as types to be proxied.
  3. The details of what to do when any method of the proxy is invoked are in the InvocationHandler, implemented here as an anonymous inner class. It's pretty simple - capture the clock time before the method is invoked; invoke the method; capture the clock time after the method completes; record the difference in time (after - before).

Notice that when we invoke the proxied class's method, we wrap the invocation with a try/catch that catches InvocationTargetException, and if such an exception is thrown we propagate its cause, not the InvocationTargetException itself. This is just unwrapping an uninteresting layer of exceptions (which we added by using reflection to invoke the method) to get to the real problem.

This is a pretty simple example of what you can do with Dynamic Proxies. Even with this simple example it's clear that you could modify it to, for example, record separate timings for successful invocations vs those that throw exceptions, or to only record timings for methods with annotations (e.g. you might create an @Timed annotation - a sketch of that follows), etc., etc.
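
A hypothetical @Timed annotation (not part of the example above) just needs runtime retention so the InvocationHandler can see it on the proxied interface's methods:

import java.lang.annotation.*;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Timed {}

The check inside the InvocationHandler's invoke method would then be something like:

if (!aMethod.isAnnotationPresent(Timed.class)) {
    return aMethod.invoke(aT, aArgs); // not annotated - just delegate, no timing
}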

I should mention that there is a down-side to Dynamic Proxies: they use reflection to invoke the method of the proxied class, so there is a small performance penalty.

Lately I've been having all kinds of fun with Dynamic Proxies, from instrumentation (somewhat more complex than the above example) to monitoring service health (by escalating through warning statuses based on the ratio of successful/exceptional completion).

My favourite use so far: asynchronous execution of synchronous service calls, returning the result as a disguised/implicit Future/Promise with a coordinated Service Level Agreement cut-off . . . yeah anyways, that's a blog post for another day :) (update 09-09-2012: see Implicit-Futures)

]]>
Java Dynamic Proxy Instrumentation Timing Fri, 07 Sep 2012 00:00:00 +0100
<![CDATA[Setting up embedded Jetty 8 and Spring MVC with Maven]]> http://steveliles.github.com/setting_up_embedded_jetty_8_and_spring_mvc_with_maven.html This is intended to be the first post in a series on building straight-forward web-apps with Spring-MVC and embedded Jetty. You can check out the complete source of this simple project from github.

This post describes configuring your Jetty and Spring MVC with XML-based configuration. If you want to use annotations/java-config take a look at this post.

It's been a while since I last posted. I've been busy starting up two big projects at work, one using Google App Engine (and Spring MVC ;)), the other using Jetty 8 (embedded), Spring MVC, MongoDB and ElasticSearch.

The Jetty/Spring combination is one which I've used before - several times since Jetty-5/Spring-1.2 - and I really like. You just can't beat it for quick debug cycles, complete control of the environment, minimal configuration, single-jar deployment, etc., etc.

On this most recent project I'm using jsp as the view technology - this is probably the easiest way to go since most things "just work" out of the box. Other view technologies can be used too - for example last year we used this same combination with FreeMarker - it's just a little more effort to get things wired up right.

Here's how I set my projects up:

Maven POM

<?xml version="1.0" encoding="UTF-8"?>
<project 
 xsi:schemaLocation="
 http://maven.apache.org/POM/4.0.0 
 http://maven.apache.org/xsd/maven-4.0.0.xsd" 
 xmlns="http://maven.apache.org/POM/4.0.0" 
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<modelVersion>4.0.0</modelVersion>

<groupId>com.sjl</groupId>
<artifactId>webapp</artifactId>
<version>1.0-SNAPSHOT</version>
<name>WebApp</name>
<description></description>

<repositories>
  <repository>
    <id>springsource-repo</id>
    <name>SpringSource Repository</name>
    <url>http://repo.springsource.org/release</url>
  </repository>
</repositories>

<properties>
  <jetty.version>8.1.5.v20120716</jetty.version>
  <jetty.jsp.version>8.1.4.v20120524</jetty.jsp.version>
  <spring.version>3.1.2.RELEASE</spring.version>
</properties>

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-source-plugin</artifactId>
      <version>2.1.2</version>
      <executions>
        <execution>
          <id>attach-sources</id>
          <phase>verify</phase>
          <goals>
            <goal>jar-no-fork</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
    <plugin>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>2.3.2</version>
      <configuration>
        <source>1.6</source>
        <target>1.6</target>
      </configuration>
    </plugin>
    <plugin> 
      <artifactId>maven-eclipse-plugin</artifactId> 
      <configuration> 
        <downloadSources>true</downloadSources>
      </configuration> 
    </plugin> 
  </plugins>    
</build>

<dependencies>

  <!-- SPRING DEPENDENCIES -->
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>${spring.version}</version>
  </dependency> 
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>${spring.version}</version>
  </dependency>

  <!-- JETTY DEPENDENCIES -->
  <dependency>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-server</artifactId>
    <version>${jetty.version}</version>
  </dependency>
  <dependency>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-servlet</artifactId>
    <version>${jetty.version}</version>
  </dependency>
  <dependency>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-webapp</artifactId>
    <version>${jetty.version}</version>
  </dependency>
  <dependency>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-servlets</artifactId>
    <version>${jetty.version}</version>
  </dependency>

  <!-- JSP and JSTL SUPPORT -->
  <dependency>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-jsp</artifactId>
    <version>${jetty.jsp.version}</version>
  </dependency>    
  <dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>jstl</artifactId>
    <version>1.2</version>
    <scope>provided</scope>
  </dependency>
</dependencies>

</project>

A couple of things worth pointing out:

  • I've added the spring-source maven repo in order to get the spring dependencies
  • I'm using Jetty 8 from eclipse, who have taken over from codehaus (who took over from mortbay)
  • I'm attaching sources for all dependencies that make them available, so that you can drill down into the source while debugging
  • I'm setting the source and target jdk compliance to 6
  • I'm using Jetty-jsp 8.1.4 - many other jstl implementations I tried have bugs, including a nasty one where recursive calls in tag-files would not compile.
  • I've never really bothered with things like maven archetypes, which is probably lazy-stupid of me. I tend to start from a pom that I've created previously then modify it to suit my needs.

After creating the pom in my project directory I create the src/main/java and src/test/java directories and then run mvn eclipse:eclipse to fetch the dependencies and create the eclipse .project and .classpath files. Having done that I import the project to Eclipse.

Once I have the project in Eclipse I create a few more directories - specifically the META-INF/webapp/WEB-INF directory to host my web.xml and spring context files (amongst other things).

I typically start with a spring application context and a spring web context, so I can specify beans at a larger scope than the web-application. Here's some simple example web and spring configs, all of which I place in WEB-INF:

web.xml

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://java.sun.com/xml/ns/javaee" 
xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
xsi:schemaLocation="
  http://java.sun.com/xml/ns/javaee 
  http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
id="WebApp_ID" version="3.0">

<listener>
  <listener-class>
    org.springframework.web.context.ContextLoaderListener
  </listener-class>
</listener>

<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>
        /WEB-INF/application-context.xml
    </param-value>
</context-param>

<!-- Handles all requests into the application -->
<servlet>
    <servlet-name>Spring MVC Dispatcher Servlet</servlet-name>
    <servlet-class>
      org.springframework.web.servlet.DispatcherServlet
    </servlet-class>
    <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/web-context.xml</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>  

<servlet-mapping>
    <servlet-name>Spring MVC Dispatcher Servlet</servlet-name>
    <url-pattern>/</url-pattern>
</servlet-mapping>      

</web-app>

application-context.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:context="http://www.springframework.org/schema/context"
  xsi:schemaLocation="
http://www.springframework.org/schema/beans 
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/context 
http://www.springframework.org/schema/context/spring-context-3.0.xsd">

<!-- Define your application beans here. They will be available to the
   beans defined in your web-context because it is a sub-context.

   Beans defined in the web-context will not be available in the 
   application context.
-->

</beans>

web-context.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:context="http://www.springframework.org/schema/context"
  xmlns:mvc="http://www.springframework.org/schema/mvc"
  xsi:schemaLocation="
http://www.springframework.org/schema/mvc 
http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd
http://www.springframework.org/schema/beans 
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/context 
http://www.springframework.org/schema/context/spring-context-3.0.xsd">

<!-- Configures the @Controller programming model -->
<mvc:annotation-driven/>

<!-- Forwards requests to the "/" resource to the "home" view -->
<mvc:view-controller path="/" view-name="index"/>    

<mvc:resources mapping="/i/**" location="WEB-INF/images/" />
<mvc:resources mapping="/c/**" location="WEB-INF/css/" />
<mvc:resources mapping="/s/**" location="WEB-INF/scripts/" />
<mvc:resources mapping="/favicon.ico" 
  location="WEB-INF/images/favicon.ico" />

<!-- Resolve jsp's -->
<bean id="viewResolver" 
  class="org.springframework.web.servlet.view.UrlBasedViewResolver">
    <property name="viewClass" 
      value="org.springframework.web.servlet.view.JstlView"/>
    <property name="prefix" value="/WEB-INF/views/"/>
    <property name="suffix" value=".jsp"/>
</bean>

<!-- i18n message source -->
<bean id="messageSource" 
  class="
    org.springframework.context.support.
      ReloadableResourceBundleMessageSource">
    <property name="basename" value="/WEB-INF/i18n/messages" />
    <property name="defaultEncoding" value="UTF-8"/>
    <property name="cacheSeconds" value="30" />
</bean>

</beans>

Some important things to note in this file are:

  • We're using the annotation model for spring Controllers, hence there aren't any Controller beans specified in this xml
  • We're mapping static resources to be served efficiently by the Spring dispatcher servlet
  • We've set up a view resolver to look for jsp's in /WEB-INF/views. Note that when specifying a view in Controller code you drop the .jsp suffix.
  • We're configuring an internationalization message source, just so we can demonstrate use of a spring taglib a bit later...

Now we need to create a class to set up the embedded Jetty.

Embedded Jetty

package com.sjl;

import java.io.*;
import java.net.*;
import java.util.*;

import org.eclipse.jetty.server.*;
import org.eclipse.jetty.server.handler.*;
import org.eclipse.jetty.server.nio.*;
import org.eclipse.jetty.util.thread.*;
import org.eclipse.jetty.webapp.*;

/**
 * Example WebServer class which sets up an embedded Jetty 
 * appropriately whether running in an IDE or in "production" 
 * mode in a shaded jar.
 */
public class WebServer
{
    // TODO: You should configure this appropriately for 
    // your environment
    private static final String LOG_PATH = 
      "./var/logs/access/yyyy_mm_dd.request.log";

    private static final String WEB_XML = 
      "META-INF/webapp/WEB-INF/web.xml";
    private static final String CLASS_ONLY_AVAILABLE_IN_IDE = 
      "com.sjl.IDE";
    private static final String PROJECT_RELATIVE_PATH_TO_WEBAPP = 
      "src/main/java/META-INF/webapp";

    public static interface WebContext
    {
        public File getWarPath();
        public String getContextPath();
    }

    private Server server;
    private int port;
    private String bindInterface;

    public WebServer(int aPort)
    {
        this(aPort, null);
    }

    public WebServer(int aPort, String aBindInterface)
    {        
        port = aPort;
        bindInterface = aBindInterface;
    }

    public void start() throws Exception
    {
        server = new Server();

        server.setThreadPool(createThreadPool());
        server.addConnector(createConnector());
        server.setHandler(createHandlers());        
        server.setStopAtShutdown(true);

        server.start();       
    }

    public void join() throws InterruptedException
    {
        server.join();
    }

    public void stop() throws Exception
    {        
        server.stop();
    }

    private ThreadPool createThreadPool()
    {
        // TODO: You should configure these appropriately
        // for your environment - this is an example only
        QueuedThreadPool _threadPool = new QueuedThreadPool();
        _threadPool.setMinThreads(10);
        _threadPool.setMaxThreads(100);
        return _threadPool;
    }

    private SelectChannelConnector createConnector()
    {
        SelectChannelConnector _connector = 
            new SelectChannelConnector();
        _connector.setPort(port);
        _connector.setHost(bindInterface);
        return _connector;
    }

    private HandlerCollection createHandlers()
    {                
        WebAppContext _ctx = new WebAppContext();
        _ctx.setContextPath("/");

        if(isRunningInShadedJar())
        {          
            _ctx.setWar(getShadedWarUrl());
        }
        else
        {            
            _ctx.setWar(PROJECT_RELATIVE_PATH_TO_WEBAPP);
        }

        List<Handler> _handlers = new ArrayList<Handler>();

        _handlers.add(_ctx);

        HandlerList _contexts = new HandlerList();
        _contexts.setHandlers(_handlers.toArray(new Handler[0]));

        RequestLogHandler _log = new RequestLogHandler();
        _log.setRequestLog(createRequestLog());

        HandlerCollection _result = new HandlerCollection();
        _result.setHandlers(new Handler[] {_contexts, _log});

        return _result;
    }

    private RequestLog createRequestLog()
    {
        NCSARequestLog _log = new NCSARequestLog();

        File _logPath = new File(LOG_PATH);
        _logPath.getParentFile().mkdirs();

        _log.setFilename(_logPath.getPath());
        _log.setRetainDays(90);
        _log.setExtended(false);
        _log.setAppend(true);
        _log.setLogTimeZone("GMT");
        _log.setLogLatency(true);
        return _log;
    }  

    private boolean isRunningInShadedJar()
    {
        try
        {
            Class.forName(CLASS_ONLY_AVAILABLE_IN_IDE);
            return false;
        }
        catch(ClassNotFoundException anExc)
        {
            return true;
        }
    }

    private URL getResource(String aResource)
    {
        return Thread.currentThread().
            getContextClassLoader().getResource(aResource); 
    }

    private String getShadedWarUrl()
    {
        String _urlStr = getResource(WEB_XML).toString();
        // Strip off "WEB-INF/web.xml"
        return _urlStr.substring(0, _urlStr.length() - 15);
    }
}

Notice that here we try to load a class that will only ever be available if we're running in test mode (e.g. directly from Eclipse).

If the class is found we assume we're running in exploded form, otherwise we assume we're running in a shaded jar - this is so that we can use the correct path to locate the web resources.

If you're trying this out you must make sure that the com.sjl.IDE class exists in your test source tree!
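
It needn't contain anything - something like this, living only in src/test/java, is enough:

package com.sjl;

// Marker class: present on the classpath when running from the IDE
// (because test classes are included), absent from the shaded jar.
public class IDE
{
}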

Spring MVC Controller

package com.sjl.web;

import org.springframework.stereotype.*;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.servlet.*;

@Controller
public class Home {
    @RequestMapping("/")
    public ModelAndView home()
    {
        return new ModelAndView("index");
    }
}

A very simple Controller that tells Spring to render the index.jsp view when a request is made for the root of the web-app.

index.jsp

<%@taglib prefix="spring" uri="http://www.springframework.org/tags"%>
<html>
  <body>
    <p><spring:message code="hello"/></p>
  </body>
</html>

messages.properties

hello=Hi

That's almost everything. You will need a couple of other little pieces, for example a main class to instantiate the WebServer.
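
That main class can be as small as this (a sketch - the hard-coded port and lack of argument handling are my own shortcuts, not the github project's code):

package com.sjl;

public class Main
{
    public static void main(String[] aArgs) throws Exception
    {
        WebServer _server = new WebServer(8080);
        _server.start();
        _server.join(); // block until the server is stopped
    }
}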

Project directory structure

Your project directory structure should end up looking something like this:

|_src
|___main
|_____java
|_______META-INF
|_________webapp
|___________WEB-INF
|_____________[web.xml and spring context configs here]
|_____________css
|_____________i18n
|_____________images
|_____________scripts
|_____________views
|_______________[jsp files here]
|_______com
|_________sjl
|___________web
|_____________[Spring MVC Controllers here]
|___test
|_____java
|_______com
|_________sjl

Check out the github project for the complete working example.

]]>
Jetty Embedded Spring MVC Maven Sat, 25 Aug 2012 00:00:00 +0100
<![CDATA[Android Activity Lifecycle Gotcha]]> http://steveliles.github.com/android_activity_lifecycle_gotcha.html My Android app has been live in Google Play for 6 months, but I'm still encountering strange bugs and behaviours, making big mistakes, and learning surprising things. The latest surprise has come with a recent flush of ICS users whose devices are putting more significant demands on my handling of the Activity lifecycle, specifically in relation to managing state.

tl;dr: beware when invoking startActivityForResult that onActivityResult is invoked before onResume!

Before I get to the problems, let's have a quick look at the Activity lifecycle. I'm going to "borrow" google's lifecycle diagram:

Android Activity Lifecycle

Some important things to remember here are:

  • Apps typically consist of multiple Activity's, and each Activity follows the above lifecycle while your app is running.
  • When your Activity starts a child Activity (with startActivity or startActivityForResult), both the onPause and onStop lifecycle methods of the parent Activity should be called, in that order.
  • When an Activity is invoked as a child Activity, its lifecycle will be completed by the time the parent Activity is fully in control again (at least onCreate, onStart, onResume, and onPause will have been invoked).
  • Your Activity can be killed off at any time after its onPause has completed, without necessarily passing through onStop or onDestroy. It is critically important to remember that this includes situations where your Activity is on the back-stack waiting for a result from a child Activity, or even when it is still visible but mostly covered by a dialog!

With regard to the last point it's worth familiarising yourself with the way Android manages Processes.

State, and the Application object

One simple way, you might think, to manage state without worrying too much about the Activity lifecycle is to use the Application object. Android allows you to specify your own class (extends Application), which your Activities can access through getApplication.

That's nice. It needs care though, since the process that your Application object lives in can be killed and restarted at (perhaps) unexpected junctures. Take this scenario:

  1. App A starts with Activity A1, which sets up some state in the Application object.
  2. Activity A1 starts Activity A2, which uses the state in the Application object.
  3. Activity A2 fires an Intent for Activity B1 of App B and expects some result (let's say we fired an Intent asking for an image to be captured by the camera app).
  4. App B starts, and launches Activity B1.
  5. Activity B1 is memory-heavy, so the system shuts down App A (completely kills its process), even though it is on the back-stack waiting for a result.
  6. Activity B1 returns; app A's Application object is re-created and Activity A2 is started again, but Activity A1 never launched in the lifetime of this new Application object, so it does not get the opportunity to set up the state of the Application object.

The sequence diagram might look something like this:

Sequence diagram

Clearly, if Activity A2 relies on A1 having run first to set up the application state, there's going to be trouble as soon as A2 starts trying to access that state after resuming from B1. If you're going to use the Application object to manage state, make sure that it is set up as part of the Application's own lifecycle methods.
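
A minimal sketch of what I mean (the class and method names are made up):

import android.app.Application;

public class MyApplication extends Application {
    private AppState state; // hypothetical state-holder

    @Override
    public void onCreate() {
        super.onCreate();
        // (re-)initialise here, not in an Activity, so the state exists
        // no matter which Activity the system resurrects first
        state = AppState.restoreFrom(this); // e.g. from SharedPreferences
    }

    public AppState getState() {
        return state;
    }
}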

Now, the gotcha that's been hurting me is this: I assumed that onActivityResult would be invoked after onResume. Turns out this is not the case, and in fact onActivityResult was getting called long before my state was re-initialised in onResume.

On my devices I never suffered from this because my process was not being killed and the state was still present in memory at the point when onActivityResult was invoked!
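
A defensive sketch (again with made-up names, building on the MyApplication sketch above):

import android.app.Activity;
import android.content.Intent;

public class CaptureActivity extends Activity {
    private AppState state;

    @Override
    protected void onActivityResult(int aRequestCode, int aResultCode, Intent aData) {
        super.onActivityResult(aRequestCode, aResultCode, aData);
        // this runs BEFORE onResume, and possibly in a freshly re-created
        // process, so don't rely on state initialised in onResume
        ensureStateInitialised();
        // ... handle the result using 'state'
    }

    @Override
    protected void onResume() {
        super.onResume();
        ensureStateInitialised();
    }

    private void ensureStateInitialised() {
        if (state == null) {
            state = ((MyApplication) getApplication()).getState();
        }
    }
}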

]]>
Android Activity Lifecycle Large Heap Fragmentation Mon, 18 Jun 2012 00:00:00 +0100
<![CDATA[Testing Android apps]]> http://steveliles.github.com/testing_android_apps.html I've been developing an Android app as a spare-time project for 6 months now. I have access to 3 Android devices:

  • Samsung Galaxy SII running ICS (my personal mobile)
  • Samsung Galaxy Mini running Gingerbread (bought as a temporary dev device when my old HTC Desire gave up the ghost)
  • Motorola Xoom II Media Edition (tablet used for day job)

My app runs beautifully on all these devices. I cannot make it crash. It really frustrates me that I get crash reports from other devices and cannot replicate them. For a spare-time project I can't afford to buy lots of devices to test with. Enter AppThwack.

AppThwack

I first heard about AppThwack after I tweeted about one of its competitors - testdroid - trying to find some testimonials from other users. I didn't find any, but I did get a reply from @tdpeterson inviting me to AppThwack's beta.

Some weeks later I finally found a spare few minutes to take a look, and I'm so glad I did!

As they are currently in beta, you need a code to be able to sign up. Mine arrived a few hours after requesting it, and I immediately signed up to give it a try. The service makes it very easy to get started, which is clever, because once you're in you really are hooked - it's just too good.

Chaos Monkeys

I uploaded the latest release apk I had to hand and launched the UI Exerciser Monkey tests (these are automatic and randomised tests that poke all the buttons and what-not). From sign-up to running tests took less than 2 minutes.

The monkeys found at least two bugs that I was not aware of on that first run. I am torn between horror and delight. The horror and delight turn to embarrassment when I realise that I have uploaded an obfuscated apk, and the stack-traces are in gibberese. Lesson#1 right there.

A test run takes a while to complete, but you can watch the results coming in in real time. When you realise what is going on behind the scenes it's really impressive: your app is installed, launched, monkeyed, and then uninstalled simultaneously across a whole farm of different devices running different versions of Android. I'd love to see what that looks like :)

You can view the results of your test runs in a number of different ways. I like the "issues by device" mode, which lists all the devices that experienced problems or warnings, and allows you to drill down to see what happened in the monkey log - with a full stack trace of the crash - and then down again into log-cat.

Pause a second and let that sink in. I just ran my app on 43 different devices and Android versions in under 5 mins and now I have instant access to the stack traces from the crashing devices and the full log-cat output. I even get screen-shots from some of the devices - one of them from an HTC Evo 4G running 2.3.3. Holy shitcakes!

Oh, and you can filter the log-cat output directly in the web-app. Nice.

Programmed Tests

OK, now I have some crashes that I want to try to replicate which are going to involve some complex interactions. It's going to take a lot of chaos monkeys a long time to replicate those tests, but the AppThwack guys have thought of that and integrate Robotium testing too.

Robotium was also new to me, but again it's a great tool and a doddle to get started:

  1. download the robotium jar from here
  2. create an Android test project, set up to test "this" project (not the project you actually want to test)
  3. set the manifest <instrumentation> tag's targetPackage attribute to be the package of the app you want to test
  4. copy the basic test code from this tutorial (I confess I find it weird that the tutorial is a pdf, not a web page!)
  5. tweak the test code to test your app - it's a very straight-forward API (see the sketch after this list)
  6. Run the app you want to test in an emulator
  7. Run your test and watch as it drives your app in the emulator - great fun
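
Here's roughly what a minimal Robotium test looks like (MainActivity, the button text and the expected text are placeholders for your own app's):

import android.test.ActivityInstrumentationTestCase2;
import com.jayway.android.robotium.solo.Solo;

public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity> {
    private Solo solo;

    public MainActivityTest() {
        super(MainActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        solo = new Solo(getInstrumentation(), getActivity());
    }

    public void testCanAddAFrame() {
        solo.clickOnButton("Add");               // placeholder button text
        assertTrue(solo.waitForText("Caption")); // placeholder expected text
    }

    @Override
    protected void tearDown() throws Exception {
        solo.finishOpenedActivities();
        super.tearDown();
    }
}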

Once you have a working test that you want to run over at AppThwack's device farm its really easy to get that going too:

  1. export a signed apk of the app you want to test (app-under-test)
  2. export a signed apk of the test project (sign with the same key!)
  3. upload app-under-test apk
  4. click the configure icon
  5. click "add test apk", and upload the test project apk
  6. click go
  7. marvel as the results start rolling in

Early days

I haven't really tried to fix my crashing bugs yet, but at least there is light at the end of the tunnel now, thanks to AppThwack and Robotium.

I noticed a few funnies along the way, which I plan to mention to the AppThwack guys when I've had a bit more experience:

  • Many devices crashed at the same point, but there's no way to collapse common issues together - I found myself opening the log for each device to check the stack-traces to find out the total number of different problems. update: Pawel from AppThwack tells me a new view mode "By Failure" is in the works and will be landing in the beta very soon!)
  • Some devices timed out while installing. I'm not sure if this is a problem with my app (seems unlikely?) or some issue with the devices themselves? It's not a catastrophe if it's a device problem, but then probably they should be marked as warnings rather than errors. update: Pawel confirmed there were timeouts within the device-farm which I'm sure they'll resolve soon
  • I noticed lots of these in the monkey logs: java.io.FileNotFoundException: /mnt/sdcard/scriptlog.txt (Permission denied) which seem to be just noise (at least as far as I am concerned)

So far so wonderful as far as I'm concerned: my thanks to Trellis Automation for AppThwack, and I really hope the beta goes well and the business is successful!

]]>
Android Testing Robotium AppThwack Fri, 08 Jun 2012 00:00:00 +0100
<![CDATA[Minimum Viable Products]]> http://steveliles.github.com/minimum_viable_products.html I haven't blogged in a while. I've been very busy with lots of different things, but nothing sufficiently technical and juicy to write about. Some less techy and more entrepreneurial adventures have been floating around, so I thought I'd make some notes on those.

Things have taken a strange turn at work, with a very ambitious new project in the pipeline that is quite a departure from our usual business. I am tasked to deliver both development-wise and in terms of defining the business model.

The development alone will be a monumental task. Defining a business model is … quite some way outside my experience. The budget is almost non-existent. The time-scales are horribly short. I'm not even sure I completely buy the basic premise of the project. And yet … it is still a strangely attractive prospect, and it's all Eric Ries' fault.

The Lean Startup

In September last year I read Eric Ries' excellent book - The Lean Startup - and was inspired to put it into practice on my personal projects. I stopped getting hung up on the infinite details of feature-complete perfection, and started thinking in terms of Minimum Viable Product.

It led me to release my first ever solo "for profit" work, a super-simple mortgage repayment calculator.

The development part was fun, and I got an excuse to use one of my open-source libraries.

What turned out to be really interesting though, was that to get it online I had to register a domain-name and figure out how to point it at an Amazon S3 bucket, which I had to configure to host a website. Two things I'd never done before.

I also spent time researching keywords, used multiple landing pages on different sub-domains with different keyword content, and set up analytics to see which gets most traffic - following Eric's advice to scientifically test a theory.

For my second project I had a bit more time - I had a week off in the run-up to Christmas to use up my 2011 vacation entitlement. My wife was working, which meant I had a guilt-free coding week (my wife is great about me coding at home, but it doesn't stop me feeling guilty).

I used the time to introduce myself properly to Android and build upon a little idea that I'd had. It started out by me wanting to use comic-book style pictures of myself on my blog to represent topics.

I had spent time working on some java code to take a bitmap and process it to resemble an SVG. From there it wasn't much of a leap to try to make an app that lets you build comic strips from your photos, and so Comic Strip It! was born on Christmas Eve, 2011.

It really was a minimum viable product. The feature set of release 1.0 was:

  • add photos to your comic strip, in 350x350 pixel frames
  • zoom and rotate photos to fit the frames
  • the comic strip would be laid out automatically in blocks of 3 frames across
  • add a caption below each frame
  • set a title for the comic strip
  • share the comic strip (email/twitter/facebook/etc - Android made this super-easy to do)

I've added many features since, as directed by user feedback. Much of the best feedback has come from people who emailed me directly with suggestions, especially in the early days, but some of the real kickers have come from disgruntled commenters.

In the spirit of Lean Startup experimentation I launched two versions - starting with the free version as described above then, in early January, a paid version that included speech balloons.

I've learned so much about people, product development/marketing/monetisation, and myself from having an app out there in the wild with no barrier between me and the users! My main problem now is finding a block of time to work on accelerating the growth of the user base.

More Experiments

Given the lack of a decent block of time to work on the app, I've spent a little time here and there on a couple of new, small, experiments. These aren't really MVP's as such, but they are in the spirit of Lean Startup.

www.tribyute.com

The first is a teaser website for an idea I had about expressing admiration for people you know, in such a way that over time an online character reference builds up, and the recipient gets an occasional boost when someone praises them.

The point here was just to test if there was any interest in the idea by asking people to leave their email to get notified later when the service goes live.

No real ideas about how to monetise it, but that wasn't the point of this one; it was just an idle thought that expressing admiration like that would be a nice thing to be able to do.

I registered a domain and pointed it at an app-engine web-app, thrown together with some simple html and css and a servlet. All in all just a few hours work. It's only just started to appear in Google searches, so it's too early to say how the experiment will pan out.

Zazzle t-shirts

sleep tees

The second experiment is something quite different. For some work-related research I'd been reading about affiliate-marketing.

I was aware of the Amazon affiliate system (indeed the book links on my blog are affiliate links), but I hadn't really considered that it was a common practice til now.

Zazzle's affiliate system allows anyone to set up a "store" under the main zazzle website, and populate it with their own products.

These products are customised versions of the products that zazzle themselves sell - ie. t-shirts, hoodies, decorated mugs, etc. You even get to set your own level of commission on each product. You can see one of my designs here :)

Zazzle help with tools for marketing your products too, but because it's so easy, lots of people have set up stores and there's lots of competition. For me it's really just a fun experiment, and a chance to do a bit of artwork, so it's all good really.

Conclusion, sort of

My main point throughout all this waffle is probably this: Stop trying to do something and do it! Create the smallest possible version of your product that can deliver value, release it, and use scientific measurement to direct your efforts for what to do next. If nothing else you will learn so much!

If you are involved at some level in the development of products, buy The Lean Startup - it will change how you think about product development.

While you're there, you might want to look at The 4-hour work week, which shattered lots of my illusions about how the world works, despite the fact that I don't like a fair number of the prescribed methods.

I'll leave you with this superb Lean Startup story of how a startup saved themselves millions of dollars with a $40 prototype:

Ignite: Lean Startup - Paul Howe, Founder & CEO of NeedFeed "How $40 Saved Us 9 Months and $2MM" from DreamSimplicity on Vimeo.

]]>
MVP entrepreneurship Lean Startup Thu, 07 Jun 2012 00:00:00 +0100
<![CDATA[SAX based dsl4xml for Android]]> http://steveliles.github.com/sax_based_dsl4xml_for_android.html I just checked in a SAX parser based version of dsl4xml to github, and finally got a chance to run the perf tests on an Android device. This is how it looks:

Notice that while it loses about 15% to raw SAX parsing, it still provides approximately an order of magnitude greater throughput than the next best (raw pull parsing). And of course, it's damn easy to write readable unmarshalling code with :)

I also added SimpleXML parsing to my performance tests - it ties for last place (performance-wise) with W3C DOM parsing. Arguably it is more readable and requires less code than the others, though personally I'm not a huge fan.

]]>
java xml parse dsl performance Mon, 23 Apr 2012 00:00:00 +0100
<![CDATA[Android Market Comments - Severe Fail]]> http://steveliles.github.com/android_market_comments_severe_fail.html I'm so fed up with the Google Play comments and rating system. How can they have got it so wrong? Comments and user-ratings are a bad system to start with, but Google's system is broken beyond belief.

Comment/Rating is a broken idea

Commenting and rating systems are a bad idea to begin with. They pander to the extremes no matter what the subject under comment.

Check any app - any app - on the market: I defy you to find one that is not overwhelmingly rated 1 or 5 stars. Why? Because most people between the two extremes don't care enough or can't be bothered to rate an app.

That leaves the big fans who love the apps unquestionably but, bless 'em, unhelpfully ("★★★★★ Love this app"), and the haters ("★☆☆☆☆ Gross! sucks! don't download!").

The big fans are wonderful; they aren't helpful, but neither are they destructive. The haters though, they really get to me. I want my apps to be good. I want them to work for everyone. I want them to be liked. I feel it really personally when my app gets a bad rating. I shouldn't, everyone tells me, but I can't help it - I'm invested in my work.

I've put a lot of time and effort into this thing - poured myself into it. A lot of that time I've given away free. Gratis. No charge. The "pro" version is the price of a newspaper that you'd read once and throw away.

If you download, don't like, ★☆☆☆☆ and walk away, I'm stuck with a bad rating that I can do nothing about, and it's cost you nothing (Google Play allows a refund within a couple of hours of the download).

Please don't misunderstand me - I'm not railing against the people who comment and rate negatively - everyone is entitled to an opinion. The problem is that the system Google have created seems designed to make life difficult for app developers, and frustrating for users.

Negative commenters seem to fall into a few categories:

The Unwittingly Unhelpful

★☆☆☆☆ N. O'Tunreasonable on April 20th, 2012 (Motorola shit-hot-superphone-II with version 2.0.1)

"Doesn't work on Motorola shit-hot-superphone-II. FIX IT OR REFUND ME!".

Dude, I'd love to fix it. No really, I would! Ask any one of the dozen or so people who've emailed or tweeted me about a problem and who got a response within hours and a fix within two days at most (best I can do on a personal project - I have a day job!).

Unfortunately I don't have a Motorola shit-hot-superphone-II (it's always Motorola, why is that? Oh, occasionally it's an HTC, but nearly always Motorola. Curse them).

It works great on my Samsungs, including the Galaxy mini that cost £50 in Tesco. No new crash reports or freezes in my developer console. How in the living hells do you expect me to FIX IT or, for that matter, to refund you? I do not know who you are, because Google anonymise you!

I don't blame these guys actually - they rightly expect the app to work, and equally they expect a commenting system to allow some kind of conversation. Unfortunately, the Android eco-system is fragmented to hell and back, bugs are a universal truth of software, and Play's commenting system does not allow threaded responses and does not give the developer access to the commenter's identity.

The first two I can handle - fragmentation requires more work, and bugs can be fixed, but I need to be able to communicate with commenters or I can't help them. I'm looking at you Google. With beetled brows.

The Blackmailer

Uses the power of a bad rating to demand whatever features he feels the app should have before beneficently conferring his generous 5 stars.

★☆☆☆☆ Dick Dastardly on April 16th, 2012 (HTC Wonderful with version 2.0.1)

"Great app, will rate 5* when you add XYZ feature"

Really, this has happened to me several times. Luckily so far the "requests" have been for features I was already working on, so I've managed to satisfy these without having to bow to any whims. Strangely they do get all gushy afterwards.

The Affronted

Affronted that after skipping the description and reading just the name and maybe glancing at the icon of the app, it turns out not to do what they wanted - the confusion thus engendered renders the app's very existence a personal insult.

★☆☆☆☆ D. Idnot RTFM on April 16th, 2012 (Samsung Universe XVI with version 2.0.1)

"I ecspected this app to XXX but it dusnt it only YYY sooooo ridicolus OMFG I waisted nerly 30 seconds of my life on this thing and then I culdnt make it XXX but it shoud and like whatever this sux, dont waist ur life on this"

OK, nothing much I can do about that, except try to come up with a better name / more descriptive icon / shorter and more pointed description. I guess I just have to hope that other potential downloaders do read the app description and take these kinds of comments with a pinch of salt.

The Cryptic Critic

★☆☆☆☆ on April 16th, 2012 (HTC Wonderful with version 2.0.1)

"Shocking. So baaad. Even the icons suck. WTF!?"

OK, come on guys! Shocking how? What's bad? Why do the icons suck? Give us a frigging clue here! There must have been something about the app that tempted you to download (unless I've mislabelled one of The Affronted), so presumably the issues with the app could have been worked out.

Except Google didn't give me the chance to help you, or to improve the app for those that come after, because I have no way to answer the comment, publicly or privately, or to try to get any further information from the complainant.

Options for Developers on Google Play

Build great apps

OK, this one sounds obvious, but it's hard to build a great app on the first shot without any helpful feedback.

As a solo developer it's especially hard to see your own mistakes and evaluate the quality of something when you are so close to the work. If you have a company or a team working on an app there are lots of eyes and minds to spot mistakes, find bugs, and think of improvements.

There are a few things we could do as developers, but I'm not sure of the efficacy of these strategies:

  • Build diagnostics into our own apps - crash reports are very useful, but miss a lot of vital information (Android version!? Heap size on all memory errors!?). We either have to wait for Google to make improvements or DIY it (a rough sketch follows this list).
  • Build feedback into our own apps - ask the user to submit feedback from within the app, and record and publish that feedback on the developer's website. Doesn't solve the problem of comments on the Google market, but might make it possible to engage in a conversation with a percentage of the users you would otherwise be unable to talk to.
  • Switch allegiance to Amazon's store. Of course, then you have to pay a fee to join, and are subject to an Apple-like review process, and I've no idea if the result is worth it - do Amazon solve any of the problems with Google's market?
  • Build our own market that does a better job. No seriously, I'd love to do this. Of course, it's a massive undertaking, and would require an incredible confluence of circumstances (or marketing budget) to really take off.
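As a rough sketch of the DIY-diagnostics idea from the first bullet: wrap the default exception handler, grab the extra device details that the console doesn't report, then let the normal crash handling continue. The class name and the sendReport(..) hook are illustrative, not a real API - you'd have to persist/upload the report yourself.

import android.os.Build;
import android.util.Log;

public class DiagnosticExceptionHandler implements Thread.UncaughtExceptionHandler {

    private final Thread.UncaughtExceptionHandler delegate;

    public DiagnosticExceptionHandler(Thread.UncaughtExceptionHandler aDelegate) {
        delegate = aDelegate;
    }

    @Override
    public void uncaughtException(Thread aThread, Throwable aThrowable) {
        // capture the details the standard crash report leaves out
        StringBuilder _report = new StringBuilder();
        _report.append("android=").append(Build.VERSION.RELEASE);
        _report.append(", device=").append(Build.MANUFACTURER).append(" ").append(Build.MODEL);
        _report.append(", maxHeap=").append(Runtime.getRuntime().maxMemory());
        _report.append(", stack=").append(Log.getStackTraceString(aThrowable));

        sendReport(_report.toString()); // illustrative hook, not a real API

        // hand off so the stock crash dialog / market crash report still happen
        delegate.uncaughtException(aThread, aThrowable);
    }

    private void sendReport(String aReport) {
        // left as an exercise: persist locally and post to your own server on next launch
    }
}

You'd install it early - say in Application.onCreate() - with Thread.setDefaultUncaughtExceptionHandler(new DiagnosticExceptionHandler(Thread.getDefaultUncaughtExceptionHandler())).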

Self comment

On my free app I added a comment of my own, and I periodically freshen it up so that it stays near the top of the stack and new commenters see it. Here's what it says:

★★★★★ steve on March 10, 2012 (Samsung Galaxy S2 with version 1.5.4) I am the developer...

Hi all! Please consider contacting me before leaving a negative review - I am very keen to improve the app, will fix reported bugs quickly, and will add popularly requested features!

If you only leave a comment like "forces close" I can't fix it because it doesn't give me any info to work with - that makes me sad.

Yes, I five-starred my own app. I don't feel bad actually, because how else can I contend with Google's broken ratings/comment system?

Unfortunately I'm not allowed to buy my own paid app, so this technique doesn't work there. I have added more or less the same information in the app description but, as we know, not everyone reads the descriptions.

Please please, if you have any problems at all contact us directly by email - crash reports and comments are great but Google don't give us any way to contact you back!!

To be fair, many (paying) users have emailed me, as has one person who wanted the app but couldn't download either free or paid versions on her device (apparent Market bug! I sent her the paid app for free and contacted the device manufacturer - Samsung - as there's no clear way to contact Google).

Sometimes when I've had negative comments from paying users I've been able to contact them by matching up the purchase record with the information in the comment.

I can't always match them up, but when I can they are never surprised that I was able to contact them. They expect that commenting makes their information available to the developer. Why wouldn't it?

Mark as spam

The only tool Google give developers to deal with comments is a spam/not-spam toggle. I don't think it's appropriate to mark genuine comments (and I do think all of the above are genuine comments) as spam.

I understand this tool was added some time ago because there were big spam problems. So far I haven't had a single spam comment.

Paid vs Free

Strangely, the users of paid apps are typically much more polite and less inclined to review negatively. They are more inclined to email for help, and delighted to get a response.

Could it be the Principle of Commitment and maybe Post-purchase Rationalisation at work? Or maybe it's just the demographic of free vs paid users?

I'm strongly considering never publishing a free app again, just for the reduced hassle.

What Google can do to help developers

I don't pretend to have all the answers, but a few small things would make my life as an Android developer much easier:

  • Give developers access to the email address of commenters, or even a way to contact them via Google that keeps them anonymous. I imagine Google think that they are protecting users by anonymising them, but it's really not helping anyone.
  • Allow threaded comments. That would be enough - if I could respond, and the commenter was alerted to the response, an awful lot of problems and misunderstandings could be cleared up quickly. This would be massively to Google's benefit, reducing user and developer frustration!
  • Include more device info in crash reports and comments (e.g. Android build versions and heap size for starters)
  • Maintain a list of device specs or emulator configurations that can be used to replicate crashes or bugs reported by users - including things like the damn heap size. I can't afford to buy one of every device on the market - my developer console states "This application is available to over 1245 devices.".

Please Google, please just help us to help our users. Help us to make the Android eco-system better. Help us to generate more profits for you. Everyone wins.

p.s. I found a product-forums thread on this topic where the originating comment dates back to May 2009 and raises many of the same points I've raised here. Clearly Google's priorities lie elsewhere :(

]]>
Android Market Google Play Comments Fail Sat, 21 Apr 2012 00:00:00 +0100
<![CDATA[Comparing methods of XML parsing in Android]]> http://steveliles.github.com/comparing_methods_of_xml_parsing_in_android.html This post details my experiments parsing the same document with the usual-suspects - DOM, SAX, and Pull parsing - and comparing the results for readability and performance - especially for Android. The parsing mechanisms compared here are:

  1. W3C DOM parsing
  2. W3C DOM and XPath
  3. SAX Parsing
  4. Pull Parsing
  5. dsl4xml (dsl around Pull-parser)
  6. SJXP (thin Pull-parser wrapper using xpath-like expressions)

I hope to add more later - some contenders include: jaxb; xstream; and Simple.

The code for the entire project is in github. You will need to Maven install the dsl4xml library if you want to run the tests yourself, as I'm afraid I don't have a public repo for it yet.

Important Note: This experiment was inspired by some work I did to optimise a slow Android app, where the original authors had used mostly DOM parsing with a sprinkling of XPath.

My ultimate aim was to run these perf tests on one or more real Android devices and show how they compare there.

For this reason if you look at the project in github, you'll see that I've imported the Android 4 jar and used only the parser implementations that are available without additional imports in Android. (OK, the two pull-parser wrappers require very small standalone jars, sorry).

The Android project and Activity for running the tests on a device is in a separate project here.

The XML

The XML file being parsed is a Twitter search result (Atom feed). You can see the actual file here, but this is a snippet of the parts I'm interested in parsing for these tests (the 15 <entry>'s in the document):

<?xml version="1.0" encoding="UTF-8"?>
<feed .. >
  ..
  <entry>
    ..
    <published>2012-04-09T10:10:24Z</published>
    <title>Tweet title</title>
    <content type="html">Full tweet content</content>
    ..
    <twitter:lang>en</twitter:lang>
    <author>
        <name>steveliles (Steve Liles)</name>
        <uri>http://twitter.com/steveliles</uri>
    </author>
  </entry>
  ..
</feed>

The POJO's

The Java objects we're unmarshalling to are very simple and don't need any explanation. You can see them in Github here.

Parsing the Twitter/Atom feed

First, just a few notes on what I'm trying to do. I basically want to compare two things:

  1. Readability/maintainability of typical parsing code.
  2. Parsing performance with said typical parsing code, incl. under concurrent load.

With that in mind, I've tried to keep the parsing code small, tight, and (AFAIK) typical for each mechanism, but without layering any further libraries or helper methods on top.

In working with each parsing mechanism I have tried to choose more performant approaches where the readability trade-off is not high.

Without further ado, let's see what parsing this document and marshalling to Java objects is like using the various libraries.

W3C DOM

DOM (Document Object Model) parsing builds an in-memory object representation of the entire XML document. You can then rummage around in the DOM, going back and forth between elements and reading data from them in whatever order you like.

Because the entire document is read into memory, there is an upper limit on the size of document you can read (constrained by the size of your Java heap).

Memory is not used particularly efficiently either - a DOM may consist of very many sparsely populated List objects (backed by mostly empty arrays). A side effect of all these objects in memory is that when you're finished with them there's a lot for the Garbage Collector to clean up.

On the plus side, DOM parsing is straight-forward to work with, particularly if you don't care much about speed and use getElementsByTagName() wherever possible.

The actual code I used for the performance test is here, but this is roughly what it ended up looking like:

private DocumentBuilder builder;
private DateFormat dateFormat;

public DOMTweetsReader() 
throws Exception {
    DocumentBuilderFactory factory = 
        DocumentBuilderFactory.newInstance();
    builder = factory.newDocumentBuilder();
    dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
}

@Override
public String getParserName() {
    return "W3C DOM";
}

public Tweets read(InputStream anInputStream) 
throws Exception {
    Document _d = builder.parse(anInputStream, "utf-8");
    Tweets _result = new Tweets();
    unmarshall(_d, _result);
    return _result;
}

public void unmarshall(Document aDoc, Tweets aTo) 
throws Exception {
    NodeList _nodes = aDoc.getChildNodes().item(0).getChildNodes();
    for (int i=0; i<_nodes.getLength(); i++) {
        Node _n = _nodes.item(i);
        if ((_n.getNodeType() == Node.ELEMENT_NODE) && 
            ("entry".equals(_n.getNodeName()))
        ){
            Tweet _tweet = new Tweet();
            aTo.addTweet(_tweet);
            unmarshallEntry((Element)_n, _tweet);
        }
    }
}

private void unmarshallEntry(Element aTweetEl, Tweet aTo)
throws Exception {
    NodeList _nodes = aTweetEl.getChildNodes();
    for (int i=0; i<_nodes.getLength(); i++) {
        Node _n = _nodes.item(i);
        if (_n.getNodeType() == Node.ELEMENT_NODE) {                    
            if ("published".equals(_n.getNodeName())) {                         
                aTo.setPublished(dateFormat.parse(getPCData(_n)));
            } else if ("title".equals(_n.getNodeName())) {
                aTo.setTitle(getPCData(_n));
            } else if ("content".equals(_n.getNodeName())) {
                Content _content = new Content();
                aTo.setContent(_content);
                unmarshallContent((Element)_n, _content);
            } else if ("lang".equals(_n.getNodeName())) {
                aTo.setLanguage(getPCData(_n));
            } else if ("author".equals(_n.getNodeName())) {
                Author _author = new Author();
                aTo.setAuthor(_author);
                unmarshallAuthor((Element)_n, _author);
            }
        }
    }
}

private void unmarshallContent(Element aContentEl, Content aTo) {
    aTo.setType(aContentEl.getAttribute("type"));
    aTo.setValue(getPCData(aContentEl));
}

private void unmarshallAuthor(Element anAuthorEl, Author aTo) {
    NodeList _nodes = anAuthorEl.getChildNodes();
    for (int i=0; i<_nodes.getLength(); i++) {
        Node _n = _nodes.item(i);
        if ("name".equals(_n.getNodeName())) {
            aTo.setName(getPCData(_n));
        } else if ("uri".equals(_n.getNodeName())) {
            aTo.setUri(getPCData(_n));
        }
    }
}

private String getPCData(Node aNode) {
    StringBuilder _sb = new StringBuilder();
    if (Node.ELEMENT_NODE == aNode.getNodeType()) {
        NodeList _nodes = aNode.getChildNodes();
        for (int i=0; i<_nodes.getLength(); i++) {
            Node _n = _nodes.item(i);
            if (Node.ELEMENT_NODE == _n.getNodeType()) {
                _sb.append(getPCData(_n));
            } else if (Node.TEXT_NODE == _n.getNodeType()) {
                _sb.append(_n.getNodeValue());
            }
        }
    }
    return _sb.toString();
}

It's worth noting that I would normally extract some useful utility classes/methods - for example getPCData(Node) - but here I'm trying to keep the sample self-contained.

Note that this code is not thread-safe because of the unsynchronized use of SimpleDateFormat. I am using separate instances of the Reader classes in each thread for my threaded tests.

W3C DOM and XPath

XPath is a language for describing locations within an XML document as paths from a starting location (which can be the root of the document (/), the current location (.//) or anywhere (//)).

I've used XPath on and off for years, mostly in XSLT stylesheets, but also occasionally to pluck bits of information out of documents in code. It is very straight-forward to use.

Here's a sample for parsing our Twitter Atom feed. The actual test code is in github.

private DocumentBuilder builder;
private XPathFactory factory;

private XPathExpression entry;
private XPathExpression published;
private XPathExpression title;
private XPathExpression contentType;
private XPathExpression content;
private XPathExpression lang;
private XPathExpression authorName;
private XPathExpression authorUri;

private DateFormat dateFormat;

public DOMXPathTweetsReader() 
throws Exception {
    DocumentBuilderFactory _dbf = 
        DocumentBuilderFactory.newInstance();
    _dbf.setNamespaceAware(true);
    builder = _dbf.newDocumentBuilder();
    factory = XPathFactory.newInstance();

    NamespaceContext _ctx = new NamespaceContext() {
        public String getNamespaceURI(String aPrefix) {
            String _uri;
            if (aPrefix.equals("atom"))
                _uri = "http://www.w3.org/2005/Atom";
            else if (aPrefix.equals("twitter"))
                _uri = "http://api.twitter.com/";
            else
                _uri = null;
            return _uri;
        }

        @Override
        public String getPrefix(String aArg0) {
            return null;
        }

        @Override
        @SuppressWarnings("rawtypes")
        public Iterator getPrefixes(String aArg0) {
            return null;
        }
    };

    entry = newXPath(factory, _ctx, "/atom:feed/atom:entry");
    published = newXPath(factory, _ctx, ".//atom:published");
    title = newXPath(factory, _ctx, ".//atom:title");
    contentType = newXPath(factory, _ctx, ".//atom:content/@type");
    content = newXPath(factory, _ctx, ".//atom:content");
    lang = newXPath(factory, _ctx, ".//twitter:lang");
    authorName = newXPath(factory, _ctx, ".//atom:author/atom:name");
    authorUri = newXPath(factory, _ctx, ".//atom:author/atom:uri");

    dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
}

private XPathExpression newXPath(
    XPathFactory aFactory, NamespaceContext aCtx, String anXPath
) throws Exception {
    XPath _xp = aFactory.newXPath();
    _xp.setNamespaceContext(aCtx);
    return _xp.compile(anXPath);
}

@Override
public String getParserName() {
    return "W3C DOM/XPath";
}

@Override
public Tweets read(InputStream anInputStream)
throws Exception {
    Tweets _result = new Tweets();
    Document _document = builder.parse(anInputStream);

    NodeList _entries = (NodeList) 
        entry.evaluate(_document, XPathConstants.NODESET);                  
    for (int i=0; i<_entries.getLength(); i++) {
        Tweet _tweet = new Tweet();
        _result.addTweet(_tweet);

        Node _entryNode = _entries.item(i);

        _tweet.setPublished(getPublishedDate(_entryNode));
        _tweet.setTitle(title.evaluate(_entryNode));
        _tweet.setLanguage(lang.evaluate(_entryNode));

        Content _c = new Content();
        _tweet.setContent(_c);

        _c.setType(contentType.evaluate(_entryNode));
        _c.setValue(content.evaluate(_entryNode));

        Author _a = new Author();
        _tweet.setAuthor(_a);

        _a.setName(authorName.evaluate(_entryNode));
        _a.setUri(authorUri.evaluate(_entryNode));
    }

    return _result;
}

private Date getPublishedDate(Node aNode) 
throws Exception {
    return dateFormat.parse(published.evaluate(aNode));
}

The code ends up being quite easy to read and can be written to nest in a way that mimics the document structure. There is a very big downside - as you'll see later - the performance is atrocious.

SAX Parser

SAX stands for Simple API for XML. It uses a "push" approach: whereas with DOM you can dig around in the document in whatever order you like, SAX parsing is event-driven which means you have to handle the data as it is given to you.

SAX parsers fire events when they encounter the various components that make up an XML file. You register a ContentHandler whose methods are called-back when these events occur (for example when the parser finds a new start element, it invokes the startElement method of your ContentHandler).

The API assumes that the consumer (ContentHandler) is going to maintain some awareness of its state (e.g. where it currently is within the document). I sometimes use a java.util.Stack to push/pop/peek at which element I'm currently working in, but here I can get away with just recording the name of the current element.
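For deeper or more complicated documents, that Stack approach looks roughly like this - a minimal sketch, not the code used in the tests below:

import java.util.Stack;

import org.xml.sax.Attributes;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;

class StackTrackingHandler extends DefaultHandler {

    // the full path of elements we are currently inside
    private final Stack<String> path = new Stack<String>();

    @Override
    public void startElement(
        String aUri, String aLocalName, String aQName, Attributes aAttributes
    ) throws SAXException {
        path.push(aQName);
    }

    @Override
    public void endElement(String aUri, String aLocalName, String aQName)
    throws SAXException {
        path.pop();
    }

    @Override
    public void characters(char[] aCh, int aStart, int aLength)
    throws SAXException {
        // peek to see exactly which element this text belongs to
        if (!path.isEmpty() && "title".equals(path.peek())) {
            // ... accumulate the title text here
        }
    }
}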

I'm extending DefaultHandler because I'm not interested in many of the events (it provides a default empty implementation of those methods for me).

The actual test code is in github, and is actually more complex in order to handle entity-refs via a LexicalHandler, but here's the gist of it:

private XMLReader reader;
private TweetsHandler handler;

public SAXTweetsReader() 
throws Exception {
    SAXParserFactory _f = SAXParserFactory.newInstance();
    SAXParser _p = _f.newSAXParser();
    reader = _p.getXMLReader();
    handler = new TweetsHandler();
    reader.setContentHandler(handler);
}

@Override
public String getParserName() {
    return "SAX";
}

@Override
public Tweets read(InputStream anInputStream) 
throws Exception {
    reader.parse(new InputSource(anInputStream));
    return handler.getResult();
}

private static class TweetsHandler extends DefaultHandler {

    private DateFormat dateFormat = 
        new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
    private Tweets tweets;
    private Tweet tweet;
    private Content content;
    private Author author;
    private String currentElement;

    public Tweets getResult() {
        return tweets;
    }

    @Override
    public void startDocument() throws SAXException {
        tweets = new Tweets();
    }

    @Override
    public void startElement(
        String aUri, String aLocalName, 
        String aQName, Attributes aAttributes
    ) throws SAXException {
        currentElement = aQName;
        if ("entry".equals(aQName)) {
            tweets.addTweet(tweet = new Tweet());
        } else if ("content".equals(aQName)) {
            tweet.setContent(content = new Content());
            content.setType(aAttributes.getValue("type"));
        } else if ("author".equals(aQName)) {
            tweet.setAuthor(author = new Author());
        }
    }

    @Override
    public void endElement(
        String aUri, String aLocalName, String aQName
    ) throws SAXException {
        currentElement = null;
    }

    @Override
    public void characters(char[] aCh, int aStart, int aLength)
    throws SAXException {
        if ("published".equals(currentElement)) {
            try {
                tweet.setPublished(dateFormat.parse(
                    new String(aCh, aStart, aLength))
                );
            } catch (ParseException anExc) {
                throw new SAXException(anExc);
            }
        } else if (
            ("title".equals(currentElement)) &&
            (tweet != null)
        ) {
            tweet.setTitle(new String(aCh, aStart, aLength));
        } else if ("content".equals(currentElement)) {
            content.setValue(new String(aCh, aStart, aLength));
        } else if ("lang".equals(currentElement)) {
            tweet.setLanguage(new String(aCh, aStart, aLength));
        } else if ("name".equals(currentElement)) {
            author.setName(new String(aCh, aStart, aLength));
        } else if ("uri".equals(currentElement)) {
            author.setUri(new String(aCh, aStart, aLength));
        }
    }
}

One downside when handling more complicated documents is that the ContentHandler can get littered with intermediate state objects - for example here I have the tweet, content, and author fields.

Another is that SAX is very low level and you have to handle pretty much everything yourself - including the fact that text nodes are passed to you in pieces when there are entity-references present (one common workaround is sketched below).
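The usual workaround - sketched here rather than used in the sample above - is to accumulate character data in a StringBuilder and only consume it when the element ends, so text split across several characters() calls is re-assembled first:

// fragment of a ContentHandler, assuming the usual org.xml.sax imports
private final StringBuilder buffer = new StringBuilder();

@Override
public void startElement(
    String aUri, String aLocalName, String aQName, Attributes aAttributes
) throws SAXException {
    buffer.setLength(0); // reset for each new (leaf) element
}

@Override
public void characters(char[] aCh, int aStart, int aLength) {
    buffer.append(aCh, aStart, aLength); // may be invoked several times per element
}

@Override
public void endElement(String aUri, String aLocalName, String aQName)
throws SAXException {
    if ("title".equals(aQName)) {
        String _text = buffer.toString(); // the complete, re-assembled text
        // ... assign it to the object being built, e.g. tweet.setTitle(_text)
    }
}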

Pull Parser

Pull-parsing is the "pull" to SAX parsing's "push". SAX pushes content at you by firing events as it encounters constructs within the xml document. Pull-parsing lets you ask for (pull) the next significant construct you are interested in.

You still have to take the data in the order it appears in the document - you can't go back and forth through the document like you can with DOM - but you can skip over bits you aren't interested in.

Test code is in github, this is roughly what it looks like:

private DateFormat dateFormat;
private XmlPullParserFactory f;
private Tweets tweets;
private Tweet currentTweet;
private Author currentAuthor;

public PullParserTweetsReader() 
throws Exception {
    dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
    f = XmlPullParserFactory.newInstance();
    f.setNamespaceAware(true);
}

@Override
public String getParserName() {
    return "Pull-Parser";
}

@Override
public Tweets read(InputStream anInputStream) throws Exception {
    XmlPullParser _p = f.newPullParser();
    _p.setInput(anInputStream, "utf-8");
    return parse(_p);
}

private Tweets parse(XmlPullParser aParser) 
throws Exception {
    tweets = new Tweets();

    int _e = aParser.next();
    while (_e != XmlPullParser.END_DOCUMENT) {
        if (_e == XmlPullParser.START_TAG) {
            startTag(aParser.getPrefix(), aParser.getName(), aParser);
        }
        _e = aParser.next();
    }

    return tweets;
}

private void startTag(String aPrefix, String aName, XmlPullParser aParser)
throws Exception {
    if ("entry".equals(aName)) {
        tweets.addTweet(currentTweet = new Tweet());
    } else if ("published".equals(aName)) {
        aParser.next();
        currentTweet.setPublished(dateFormat.parse(aParser.getText()));
    } else if (("title".equals(aName)) && (currentTweet != null)) {
        aParser.next();
        currentTweet.setTitle(aParser.getText());
    } else if ("content".equals(aName)) {
        Content _c = new Content();
        _c.setType(aParser.getAttributeValue(null, "type"));
        aParser.next();
        _c.setValue(aParser.getText());
        currentTweet.setContent(_c);
    } else if ("lang".equals(aName)) {
        aParser.next();
        currentTweet.setLanguage(aParser.getText());
    } else if ("author".equals(aName)) {
        currentTweet.setAuthor(currentAuthor = new Author());
    } else if ("name".equals(aName)) {
        aParser.next();
        currentAuthor.setName(aParser.getText());
    } else if ("uri".equals(aName)) {
        aParser.next();
        currentAuthor.setUri(aParser.getText());
    }
}

SJXP (Pull-Parser wrapper)

This is the first of the pull-parser wrappers under test - I stumbled upon it yesterday, liked the idea behind it, and decided to give it a try.

I'm a big fan of callbacks generally, and having spent quite some time working with XPath in the past the idea of using XPath-like syntax to request callbacks from the pull-parser seems tempting.

There was one problem I couldn't work around which seems like either a gap in my knowledge (and the documentation) or an irritating bug - when declaring the paths you have to use the full namespace uri even on elements in the default namespace.

This means that my path declarations even on this shallow document are enormous and I had to split them onto three lines to fit the width of my blog.

Code is in github, this is the gist of it:

private Tweet currentTweet;
private DateFormat dateFormat;
private XMLParser<Tweets> parser; 

private IRule<Tweets> tweet = new DefaultRule<Tweets>(Type.TAG, 
    "/[http://www.w3.org/2005/Atom]feed" +
    "/[http://www.w3.org/2005/Atom]entry"
) {
    public void handleTag(
        XMLParser<Tweets> aParser, boolean aIsStartTag, Tweets aUserObject) {
        if (aIsStartTag)
            aUserObject.addTweet(currentTweet = new Tweet());
    }   
};

private IRule<Tweets> published = new DefaultRule<Tweets>(Type.CHARACTER, 
    "/[http://www.w3.org/2005/Atom]feed" +
    "/[http://www.w3.org/2005/Atom]entry" +
    "/[http://www.w3.org/2005/Atom]published"
) {
    public void handleParsedCharacters(
        XMLParser<Tweets> aParser, String aText, Tweets aUserObject
    ) {
        try {                   
            currentTweet.setPublished(dateFormat.parse(aText));
        } catch (ParseException anExc) {
            throw new XMLParserException("date-parsing problem", anExc);
        }
    }           
}; 

private IRule<Tweets> title = new DefaultRule<Tweets>(Type.CHARACTER, 
    "/[http://www.w3.org/2005/Atom]feed" +
    "/[http://www.w3.org/2005/Atom]entry" +
    "/[http://www.w3.org/2005/Atom]title"
) {
    public void handleParsedCharacters(
        XMLParser<Tweets> aParser, String aText, Tweets aUserObject
    ) {
        currentTweet.setTitle(aText);
    }           
};

private IRule<Tweets> content = new DefaultRule<Tweets>(Type.TAG, 
    "/[http://www.w3.org/2005/Atom]feed" +
    "/[http://www.w3.org/2005/Atom]entry" +
    "/[http://www.w3.org/2005/Atom]content"
) {
    public void handleTag(
        XMLParser<Tweets> aParser, boolean aIsStartTag, Tweets aUserObject
    ) {
        if (aIsStartTag)
            currentTweet.setContent(new Content());
        super.handleTag(aParser, aIsStartTag, aUserObject);
    }
};

private IRule<Tweets> contentType = new DefaultRule<Tweets>(Type.ATTRIBUTE, 
    "/[http://www.w3.org/2005/Atom]feed" +
    "/[http://www.w3.org/2005/Atom]entry" +
    "/[http://www.w3.org/2005/Atom]content", "type"
) {
    public void handleParsedAttribute(
        XMLParser<Tweets> aParser, int aIndex, String aValue, Tweets aUserObject
    ) {                 
        currentTweet.getContent().setType(aValue);
    }
};

private IRule<Tweets> contentText = new DefaultRule<Tweets>(Type.CHARACTER, 
    "/[http://www.w3.org/2005/Atom]feed" +
    "/[http://www.w3.org/2005/Atom]entry" +
    "/[http://www.w3.org/2005/Atom]content"
) {
    public void handleParsedCharacters(
        XMLParser<Tweets> aParser, String aText, Tweets aUserObject
    ) {                 
        currentTweet.getContent().setValue(aText);
    }
};

private IRule<Tweets> lang = new DefaultRule<Tweets>(Type.CHARACTER, 
    "/[http://www.w3.org/2005/Atom]feed" +
    "/[http://www.w3.org/2005/Atom]entry" +
    "/[http://api.twitter.com/]lang"
) {
    public void handleParsedCharacters(
        XMLParser<Tweets> aParser, String aText, Tweets aUserObject
    ) {
        currentTweet.setLanguage(aText);
    }
};

private IRule<Tweets> author = new DefaultRule<Tweets>(Type.TAG, 
    "/[http://www.w3.org/2005/Atom]feed" +
    "/[http://www.w3.org/2005/Atom]entry" +
    "/[http://www.w3.org/2005/Atom]author"
) {
    public void handleTag(
        XMLParser<Tweets> aParser, boolean aIsStartTag, Tweets aUserObject
    ) {
        if (aIsStartTag)
            currentTweet.setAuthor(new Author());
        super.handleTag(aParser, aIsStartTag, aUserObject);
    }
};

private IRule<Tweets> authorName = new DefaultRule<Tweets>(Type.CHARACTER, 
    "/[http://www.w3.org/2005/Atom]feed" +
    "/[http://www.w3.org/2005/Atom]entry" +
    "/[http://www.w3.org/2005/Atom]author" +
    "/[http://www.w3.org/2005/Atom]name"
) {
    public void handleParsedCharacters(
        XMLParser<Tweets> aParser, String aText, Tweets aUserObject
    ) {
        currentTweet.getAuthor().setName(aText);
    }
};

private IRule<Tweets> authorUri = new DefaultRule<Tweets>(Type.CHARACTER, 
    "/[http://www.w3.org/2005/Atom]feed" +
    "/[http://www.w3.org/2005/Atom]entry" +
    "/[http://www.w3.org/2005/Atom]author" +
    "/[http://www.w3.org/2005/Atom]uri"
) {
    public void handleParsedCharacters(
        XMLParser<Tweets> aParser, String aText, Tweets aUserObject
    ) {
        currentTweet.getAuthor().setUri(aText);
    }
};

@SuppressWarnings("all")
public SJXPTweetsReader() {
    dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
    parser = new XMLParser<Tweets>(
        tweet, published, title, content, contentType, 
        contentText, lang, author, authorName, authorUri
    );
}

@Override
public String getParserName() {
    return "SJXP (pull)";
}

@Override
public Tweets read(InputStream anInputStream) 
throws Exception {
    Tweets _result = new Tweets();  
    parser.parse(anInputStream, "utf-8", _result);
    return _result;
}

I like the idea of SJXP and I think that - particularly on more complex documents - it will lead to code that is easier to understand and maintain, because you can consider each part entirely separately. It bulks up with boiler-plate though, especially with that namespace issue I mentioned.

Like SAX and "straight" Pull parsing it also suffers the problem of having to manage intermediate state (in my sample it's currentTweet). It does allow a state/context object to be pushed into the callback methods, so I could have passed a customised context class to manage my state instead of passing Tweets - something like the sketch below.
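For illustration only (this is not in the test code), such a context class and one rule using it might look like this - the remaining rules would follow the same pattern:

class ParseContext {
    final Tweets tweets = new Tweets();
    Tweet currentTweet;
    Author currentAuthor;
}

private IRule<ParseContext> entryRule = new DefaultRule<ParseContext>(Type.TAG, 
    "/[http://www.w3.org/2005/Atom]feed" +
    "/[http://www.w3.org/2005/Atom]entry"
) {
    public void handleTag(
        XMLParser<ParseContext> aParser, boolean aIsStartTag, ParseContext aCtx
    ) {
        if (aIsStartTag)
            aCtx.tweets.addTweet(aCtx.currentTweet = new Tweet());
    }
};

You would then pass a new ParseContext() as the user object to parser.parse(..) and pull the Tweets out of it afterwards, keeping the reader itself free of shared mutable fields.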

dsl4xml (Pull-parser wrapper)

This is my own small wrapper around XMLPullParser. The goals and reasons for it are stated at length else-where, but suffice to say that readability without sacrificing speed was my main aim.

Dsl4xml parsing code has a declarative style, is concise, and uses reflection to cut boiler-plate to a minimum.

Actual code is in Github, here's what it looks like:

private DocumentReader<Tweets> reader;

public Dsl4XmlTweetsReader() {
    reader = mappingOf(Tweets.class).to(
        tag("entry", Tweet.class).with(
            tag("published"),
            tag("title"),
            tag("content", Content.class).with(
                attribute("type"),
                pcdataMappedTo("value")
            ),
            tag("twitter", "lang").
                withPCDataMappedTo("language"),
            tag("author", Author.class).with(
                tag("name"),
                tag("uri")
            )
        )
    );

    reader.registerConverters(
        new ThreadUnsafeDateConverter("yyyy-MM-dd'T'HH:mm:ss")
    );
}

@Override
public String getParserName() {
    return "DSL4XML (pull)";
}

@Override
public Tweets read(InputStream anInputStream) throws Exception {
    return reader.read(anInputStream, "utf-8");
}

There are two things I want to point out, which I guess you will have noticed already:

  1. This is by far the shortest and simplest code of all the samples shown.
  2. The code is slightly unusual in its style because it uses an Internal Domain Specific Language. The nice thing (IMHO) is that it is very readable, and even mimics the structure of the XML itself.

It's still early days for dsl4xml, so the DSL may evolve a bit with time. I'm also looking into ways to keep the same tight syntax without resorting to reflection - the aim being to narrow the performance gap between the raw underlying parser (currently a Pull parser) and dsl4xml.

Performance Comparison

I built some performance tests using the mechanisms described above to parse the same document repeatedly.

The tests are run repeatedly with increasing numbers of threads, from 1 to 8, parsing 1000 documents in each thread. The xml document is read into a byte array in memory before the test starts to eliminate disk IO from consideration.
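The harness is roughly equivalent to this sketch (the real code is in github; SAXTweetsReader is the reader shown earlier in this post, and loadDocument() is a hypothetical helper that returns the feed as a byte array):

import java.io.ByteArrayInputStream;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThroughputTest {

    public static void main(String[] args) throws Exception {
        final byte[] _xml = loadDocument(); // hypothetical: feed pre-loaded into memory
        final int _docsPerThread = 1000;
        int _threads = 4;

        ExecutorService _pool = Executors.newFixedThreadPool(_threads);
        final CountDownLatch _done = new CountDownLatch(_threads);

        long _start = System.currentTimeMillis();
        for (int i = 0; i < _threads; i++) {
            _pool.execute(new Runnable() {
                public void run() {
                    try {
                        // a separate reader instance per thread (see the SimpleDateFormat note)
                        SAXTweetsReader _reader = new SAXTweetsReader();
                        for (int j = 0; j < _docsPerThread; j++) {
                            _reader.read(new ByteArrayInputStream(_xml));
                        }
                    } catch (Exception anExc) {
                        throw new RuntimeException(anExc);
                    } finally {
                        _done.countDown();
                    }
                }
            });
        }
        _done.await();

        double _seconds = (System.currentTimeMillis() - _start) / 1000d;
        System.out.println((_threads * _docsPerThread) / _seconds + " docs/sec");
        _pool.shutdown();
    }

    private static byte[] loadDocument() {
        // hypothetical: read the Atom feed from disk/classpath into a byte[]
        return new byte[0];
    }
}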

When the statistics for each method have been collected, the test generates a html document that uses Google charts to render the results.

Each parsing method is tested several times and the results averaged to smooth out some of the wilder outliers (still far from perfect, partly due to garbage collection). I ran the tests on my Linux desktop, MacBook Air, Samsung Galaxy S2 and Motorola Xoom 2 Media Edition.

Here is the chart for the desktop (Core i7 (quad) 1.8GHz, 4GB RAM, Ubuntu 11.10, Sun JDK 1.6.0-26). There is a noticeable hump at 4 threads, presumably because it's a quad-core. Performance keeps rising up to 8 threads, presumably because the CPU has hyper-threading. After 8 threads the performance slowly drops off as the context-switching overhead builds up (not shown here):

And here's the chart from my MacBook Air (Core i5 (dual) 1.7GHz, 4GB RAM, OSX Lion, Apple JDK 1.6.0-31):

The difference running under Android is, to put it mildly, astonishing. Here's the chart from my Samsung Galaxy S2 running Android 2.3.4 with a 64MB heap. I reduced the max concurrency to 4 and the number of documents parsed per thread to 10, otherwise my phone would be obsolete before the results came back :)

Yep, SAX kicking ass right there.

Here's how it looks on a Motorola Xoom 2 Media Edition running Android 3.2.2 (with a 48MB heap):

Confirming that SAX is the way to go on Android!

Quick side note about iOS

My friend Matt Preston did a quick port of the DOM and SAX parsing tests to iOS.

He didn't produce a chart (yet!), but the DOM parsing throughput on an iPhone 4S was approximately twice as good as SAX parsing on my Samsung. SAX Parsing on the iPhone churned through on average 150 docs/sec!

It's interesting to note that the iPhone 4S runs a 1GHz Cortex-A9 CPU clocked down to 800MHz, while my Samsung is running a 1.2GHz Cortex-A9.

Why XPath parsing sucked so bad

The observant will have noticed the charts do not contain figures for the XPath parsing. That's because I dropped it when I realised it was two orders of magnitude slower even than DOM parsing.

This appalling performance seems to be because each time an xpath expression is executed a context object is created, which involves looking up several files on the classpath (and all the inherent synchronisation this entails). I don't intend to waste my time digging into why this can't be done once and cached :(.

If you're interested, this is what my threads spent most of their time doing in the XPath test:

"Thread-11" prio=5 tid=7fcf544d2000 nid=0x10d6bb000 
    waiting for monitor entry [10d6b9000]
    java.lang.Thread.State: BLOCKED (on object monitor)
    at java.util.zip.ZipFile.getEntry(ZipFile.java:159)
    - locked <7f4514c88> (a java.util.jar.JarFile)
    at java.util.jar.JarFile.getEntry(JarFile.java:208)
    at java.util.jar.JarFile.getJarEntry(JarFile.java:191)
    at sun.misc.URLClassPath$JarLoader.getResource(URLClassPath.java:757)
    at sun.misc.URLClassPath$JarLoader.findResource(URLClassPath.java:735)
    at sun.misc.URLClassPath.findResource(URLClassPath.java:146)
    at java.net.URLClassLoader$2.run(URLClassLoader.java:385)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findResource(URLClassLoader.java:382)
    at java.lang.ClassLoader.getResource(ClassLoader.java:1002)
    at java.lang.ClassLoader.getResource(ClassLoader.java:997)
    at java.lang.ClassLoader.getSystemResource(ClassLoader.java:1100)
    at java.lang.ClassLoader.getSystemResourceAsStream(ClassLoader.java:1214)
    at com.sun.org.apache.xml.internal.dtm.SecuritySupport12$6.run
        (SecuritySupport12.java:117)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    com.sun.org.apache.xml.internal.dtm.SecuritySupport12.
        getResourceAsStream(SecuritySupport12.java:112)
    at com.sun.org.apache.xml.internal.dtm.ObjectFactory.
        findJarServiceProviderName(ObjectFactory.java:549)
    at com.sun.org.apache.xml.internal.dtm.ObjectFactory.
        lookUpFactoryClassName(ObjectFactory.java:373)
    at com.sun.org.apache.xml.internal.dtm.ObjectFactory.
        lookUpFactoryClass(ObjectFactory.java:206)
    at com.sun.org.apache.xml.internal.dtm.ObjectFactory.
        createObject(ObjectFactory.java:131)
    at com.sun.org.apache.xml.internal.dtm.ObjectFactory.
        createObject(ObjectFactory.java:101)
    at com.sun.org.apache.xml.internal.dtm.DTMManager.
        newInstance(DTMManager.java:135)
    at com.sun.org.apache.xpath.internal.XPathContext.
        <init>(XPathContext.java:100)
    at com.sun.org.apache.xpath.internal.jaxp.XPathExpressionImpl.
        eval(XPathExpressionImpl.java:110)

Conclusions

Readability

Of the mechanisms tested so far, and from the code samples above, I think that dsl4xml produces by far the most readable and maintainable parsing code. Of course I am biased.

I think SAX parsing would have worked out to be the most readable of the other mechanisms if it hadn't been for those pesky entity-refs. As it is I have to recommend Pull-parsing as the way to go for readability.

Desktop/laptop xml parsing performance

SAX parsing and the pull-parsing wrappers give comparable performance. Raw Pull-parsing beats the lot by a margin of around 15%. DOM performs relatively badly - around twice as slow as any of the others. Don't go near XPath based parsing unless you like watching paint dry.

Recommendation: Pull Parser for max performance and relative ease of use. Dsl4xml if you want performance and great readability :)

Android xml parsing performance

Avoid XPath at all costs. DOM and pull-parsing appear to have similarly poor performance characteristics. SAX absolutely destroys all the others - roughly an order of magnitude quicker.

Recommendation: SAX, every time. I'll get working on a SAX-based dsl4xml implementation :)

Update (23rd April 2012): Just finished a SAX-based dsl4xml - here's the performance chart for my Samsung Galaxy SII again (also includes figures for SimpleXML):

Final words

The Twitter Atom feed is not particularly complicated - tags are not deeply nested, not too many attributes, no nested tags of the same name, no mixed content (tags and text-nodes as siblings), etc.

I suspect that the performance gap between the different mechanisms widens as the document complexity increases, but as yet I have no real evidence to back that up.

]]>
java xml parse unmarshall comparison Tue, 10 Apr 2012 00:00:00 +0100
<![CDATA[DSL for XML parsing in Android]]> http://steveliles.github.com/dsl_for_xml_parsing_in_android.html For a readability and performance comparison of different parsing mechanisms available in Android, have a look at my more recent post that compares parsing a Twitter search result using DOM, SAX, and various Pull parsing methods.

The short story: Always use SAX in Android (here's why).

SAX and Pull-parsing are fast, but don't lead to the most readable/maintainable code. Instead, how about a super-simple internal DSL for describing the mapping for unmarshalling from XML to POJO's, with pull-parsing performance? Quick example:

<books>
  <book>
    <title>The Hobbit</title>
    <synopsis>
        A little guy goes on an adventure, 
        finds ring, comes back.
    </synopsis>
  </book>
  <book>
    <title>The Lord of the Rings</title>
    <synopsis>
        A couple of little guys go on an adventure, 
        lose ring, come back.
    </synopsis>
  </book>
</books>

Can be unmarshalled to simple POJO's with this:

import static com.sjl.dsl4xml.DocumentReader.*;

class BooksReader {
    private DocumentReader<Books> reader;

    public BooksReader() {
        reader = mappingOf(Books.class).to(
           tag("book", Book.class).with(
               tag("title"),
               tag("synopsis")
           )
        );
    }

    public Books read(Reader aReader) {
        return reader.read(aReader);
    }
}

The long story:

I recently had occasion to work on an Android app that was suffering horrible performance problems on startup (approx 7-12 seconds before displaying content).

A look at the code showed up several possible contenders for the source of the problem:

  1. Many concurrent http requests to fetch XML content from web
  2. Parsing the returned XML documents concurrently
  3. Parsing documents using DOM (and some XPath)

A quick run through with the excellent profiler built in to DDMS immediately showed lots of time spent in DOM methods, and masses of heap being consumed by sparsely populated java.util.List objects (used to represent the DOM in memory).

Since the app was subsequently discarding the parsed Document object, the large heap consumption was contributing a huge garbage-collection load as a side-effect.

Parsing many documents at once meant that the app suffered a perfect storm of exacerbating issues: Slow DOM traversal with XPath; constant thread context-switching; massive heap consumption; and huge object churn.

The network requests - even over 3G - were comparatively insignificant in the grand scheme.

Reducing thread context switching

An obvious and inexpensive thing to try at this point was reducing the concurrency to minimise the overhead of context-switching and hopefully enable the CPU caches to be used to best advantage.
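Concretely that just meant capping the worker pool - something along these lines (illustrative only, assuming the usual java.util.concurrent imports; feedUrls and fetchAndParse stand in for the app's real requests, which I can't show here):

// cap the pool so only a couple of fetch+parse jobs run concurrently
ExecutorService _workers = Executors.newFixedThreadPool(2);

for (final String _url : feedUrls) {
    _workers.submit(new Runnable() {
        public void run() {
            fetchAndParse(_url); // hypothetical fetch + parse of one document
        }
    });
}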

I confess I hoped for a significant improvement from this small change, but the difference, while measurable, was too small to be significant (~5-10%).

More efficient parsing

XPath is easy to use, and typically makes it possible to write straight-forward code for marshalling data from an XML document into Java objects. It is, however, horribly slow and a terrible memory hog.

I decided to try an experiment with an alternative parsing method, to see if a worthwhile performance gain could be achieved on one of the smaller documents that could then be applied to others.

I wrote a small test-case confirming the correctness of the existing parsing mechanism and testing the throughput in documents per second, then extracted an interface and created a new implementation that used Pull-Parsing instead of DOM and XPath.

The result was quite pleasing: 5x faster on a simple document. I fully expected the performance gains to be even better on more complex documents, so was quite eager to repeat the process for one of the most complex documents.

However, I had one major concern that put me off: the code for parsing even a simple document was already quite long and had a nasty whiff of conditional-overkill (think: lots of if statements). I wasn't too happy about trading code readability for performance.

I pondered a few alternatives like XStream, which I've used a lot for converting from Java to XML but not much the other way around, and SimpleXML, which I have used previously and can be nice, but which pollutes your model objects with annotations and in some situations can be a real pain to get working.

An Internal DSL for mapping XML to POJO's

In the end I decided to spend just a few hours turning the problem over in code to see if I could come up with something more readable for working with the pull-parser directly.

The result, after an afternoon of attempting to parse the most complex XML file the app consumed, was a small Internal DSL (Domain Specific Language) for declaratively describing the mapping between an XML and the Java model classes, and a 15x performance improvement in startup time for the app (7-12 seconds down to ~0.5s).

The DSL I originally came up with required some boiler-plate code to do the final mapping between text nodes / attributes and the model classes being populated. If Java had a neat syntax for closures this would have been much less irritating :)

As it was the boiler plate irked me - too much stuff getting in the way of reading what was really important. I thought about it a bit in my spare time, and had another shot at it. My aims were:

  1. To make readable, maintainable, declarative code that unmarshalls XML documents to Java objects.
  2. To make unmarshalling XML documents to Java objects very fast (sax/pull-parsing speeds).
  3. To avoid polluting model classes with metadata about xml parsing (no annotations).
  4. To avoid additional build-time steps or "untouchable" code (code generators, etc).
  5. To produce a very small jar with no large dependencies.

The result is starting to take shape in github as dsl4xml. It removes all of the boiler plate in exchange for a small performance penalty due to use of reflection. I don't have comparative performance figures yet, but will post some when I get time.

Another example

XML:

<hobbit>
  <name firstname="Frodo" surname="Baggins"/>
  <dob>11400930</dob>
  <address>
    <house>
      <name>Bag End</name>
      <number></number>
    </house>
    <street>Bagshot Row</street>
    <town>Hobbiton</town>
    <country>The Shire</country>
  </address>
</hobbit>

POJO's: See the source-code of the test-case

Unmarshalling code:

private static DocumentReader<Hobbit> newReader() {
    DocumentReader<Hobbit> _reader = mappingOf(Hobbit.class).to(
        tag("name", Name.class).with(
            attributes("firstname", "surname")
        ),
        tag("dob"),
        tag("address", Address.class).with(
            tag("house", Address.House.class).with(
                tag("name"),
                tag("number")
            ),
            tag("street"),
            tag("town"),
            tag("country")
        )
    );

    _reader.registerConverters(new ThreadUnsafeDateConverter("yyyyMMdd"));

    return _reader;
}

A DocumentReader, once constructed, is intended to be re-used repeatedly. The DocumentReader itself is completely thread-safe as unmarshalling does not modify any of its internal state. To ensure thread-safety you must use only thread-safe type converters (see type conversion section below).
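In practice that just means building the reader once and sharing it - for example (an illustrative fragment, not part of the library: parseHobbit is a made-up method, and for genuinely concurrent use the ThreadUnsafeDateConverter registered in newReader() above would need swapping for the ThreadSafeDateConverter shown below):

// built once, re-used for every document
private static final DocumentReader<Hobbit> READER = newReader();

public Hobbit parseHobbit(InputStream anInputStream) throws Exception {
    return READER.read(anInputStream, "utf-8");
}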

A minimum of garbage is generated because we're using a pull parser to skip over parts of the document we don't care about, and the only state maintained along the way (in a single-use context object for thread safety) is the domain objects we're creating.

Type conversion

You can create and register your own type converters. They are used only to map the lowest level xml data to your Java objects - attribute values and CData Strings. The Converter interface looks like this:

package com.sjl.dsl4xml.support;

public interface Converter<T> {
    public boolean canConvertTo(Class<?> aClass);
    public T convert(String aValue);
}

An example Converter for converting String values to primitive int's looks like this:

class PrimitiveIntConverter implements Converter<Integer> {
    @Override
    public boolean canConvertTo(Class<?> aClass) {
        return aClass.isAssignableFrom(Integer.TYPE);
    }

    @Override
    public Integer convert(String aValue) {
        return ((aValue == null) || ("".equals(aValue))) ? 
            0 : new Integer(aValue);
    }
}

Most converters can be thread-safe, but some may require concurrency control for multi-threaded use (example: when converting dates using SimpleDateFormat).

You can use optimised type converters in situations where you know you will not be unmarshalling from multiple threads concurrently. An example is the ThreadUnsafeDateConverter which is used in the example above because it came from a test-case that will only ever run single-threaded.

public class ThreadUnsafeDateConverter implements Converter<Date> {
    private DateFormat dateFormat;

    public ThreadUnsafeDateConverter(String aDateFormatPattern) {
        // SimpleDateFormat is NOT thread-safe
        dateFormat = new SimpleDateFormat(aDateFormatPattern);
    }

    @Override
    public boolean canConvertTo(Class<?> aClass) {
        return aClass.isAssignableFrom(Date.class);
    }

    @Override
    public Date convert(String aValue) {
        try {
            return ((aValue == null) || ("".equals(aValue))) ? 
                null : dateFormat.parse(aValue);
        } catch (ParseException anExc) {
            throw new XmlMarshallingException(anExc);
        }
    }
}

The alternative ThreadSafeDateConverter looks like this:

class ThreadSafeDateConverter implements Converter<Date> {
    private ThreadLocal<DateFormat> dateFormat;

    public ThreadSafeDateConverter(final String aDateFormatPattern) {
        dateFormat = new ThreadLocal<DateFormat>() {
            protected DateFormat initialValue() {
                return new SimpleDateFormat(aDateFormatPattern);
            }
        };
    }

    @Override
    public boolean canConvertTo(Class<?> aClass) {
        return aClass.isAssignableFrom(Date.class);
    }

    @Override
    public Date convert(String aValue) {
        try {
            return ((aValue == null) || ("".equals(aValue))) ? 
                null : dateFormat.get().parse(aValue);
        } catch (ParseException anExc) {
            throw new XmlMarshallingException(anExc);
        }
    }
}
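
To illustrate the Converter contract with one more (sketched, not shipped-with-the-library) example, a converter for BigDecimal might look something like this:

import java.math.BigDecimal;

import com.sjl.dsl4xml.support.Converter;

class BigDecimalConverter implements Converter<BigDecimal> {
    @Override
    public boolean canConvertTo(Class<?> aClass) {
        return aClass.isAssignableFrom(BigDecimal.class);
    }

    @Override
    public BigDecimal convert(String aValue) {
        // BigDecimal is immutable, so this converter is naturally thread-safe
        return ((aValue == null) || ("".equals(aValue))) ?
            null : new BigDecimal(aValue);
    }
}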

Missing features

This is still a very new project, and in an experimental stage. There's loads still to do:

  • Experiment with more documents to drive improvements to the DSL
  • More converters for the obvious types (e.g., BigDecimal, BigInteger, File, URI, etc.)
  • Support for namespaced documents
  • Support for CDATA (so far only tested with PCDATA)
  • Performance comparisons with DOM, SAX and non-DSL'd Pull parsing
  • Support for explicit (non-reflective) marshalling of properties
  • Support for SAX parsing instead of Pull-Parsing (see notes below)
  • Performance tests
  • Performance optimisations

Notes

I came across some interesting comments by Dianne Hackborn (Android platform developer) in this thread.

Dianne points out that SAX parsing is faster than Pull Parsing (at least on Android). I had been under the impression it was the other way around, hence I went with Pull parsing.

Later perf tests show SAX to be much faster on Android, so I will probably refactor to use SAX.

android parser performance

]]>
xml unmarshall android pull-parsing dsl Thu, 05 Apr 2012 00:00:00 +0100
<![CDATA[Android's AsyncTask]]> http://steveliles.github.com/android_s_asynctask.html The Android platform allows you to use all of the normal Java concurrency constructs. You should use them if you need to do any long-running* operations: you must do these off the main UI thread if you want to keep your users happy, and the platform even enforces this by displaying an Application Not Responding dialog if an app does not respond to user-input within 5 seconds.

The platform provides a couple of mechanisms to facilitate communication between background threads and the main application thread: Handlers and AsyncTasks. In this article I want to concentrate on AsyncTask.

update, December 2013: I came across a StackOverflow post referencing this article, where the poster was - understandably - confused by my use of "long-running" in the first paragraph. Long-running is a relative term, and here I mean 'more than a few milliseconds and less than ~500ms'.

The main thread has just 16.67ms to render a single frame at 60Hz, so if you do anything on the main thread that gets even close to using up that 16ms you're risking skipped frames.

For operations that are long-running in human terms (500ms+) there are other constructs which may be more appropriate (Loaders/Services) - you might find my book useful, which covers all of the Android concurrency constructs in detail, including much more up-to-date and in-depth coverage of AsyncTask.

There are all kinds of things you will want to do off the UI thread, but AsyncTask (when used from an Activity or Fragment) is only really appropriate for relatively short operations. Ideal uses include CPU-intensive tasks, such as number crunching or searching for words in large text strings, blocking I/O operations such as reading and writing text files, or loading images from local files with BitmapFactory.

The basics of AsyncTask

AsyncTask provides a simple API for doing work in the background and re-integrating the result with the main thread. Here's what it looks like:

new AsyncTask<Param, Progress, Result>() {
    protected void onPreExecute() {
        // perhaps show a dialog 
        // with a progress bar
        // to let your users know
        // something is happening
    }

    protected Result doInBackground(Param... aParams) {
        // do some expensive work 
        // in the background here
        return null; // the Result handed on to onPostExecute
    }

    protected void onPostExecute(Result aResult) {
        // background work is finished, 
        // we can update the UI here
        // including removing the dialog
    }
}.execute();

The template methods onPreExecute() and onPostExecute(Result) are invoked on the main thread, so you can safely update the UI from them.

There is a fourth template method - onProgressUpdate(Progress[]) - which you can implement if you want to update the UI to show progress is being made within the background thread. For this to actually work you will need to invoke publishProgress(Progress[]) regularly from within doInBackground(Param[]).

AsyncTask is generic, and presents three type variables:

class AsyncTask<Params, Progress, Result>

They are used as follows:

  1. Params is the argument type for the varargs array passed in to doInBackground.
  2. Progress is the argument type for the varargs array passed in to onProgressUpdate, and so is also the type (of array) you must use when invoking publishProgress.
  3. Result is the return type of doInBackground, which in turn is the argument type passed in to onPostExecute.
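
To make that concrete, a task that binds Params to String, Progress to Integer and Result to Bitmap might look something like this (progressBar, imageView and the file path are just hypothetical stand-ins):

new AsyncTask<String, Integer, Bitmap>() {
    protected Bitmap doInBackground(String... aPaths) {
        publishProgress(0);
        // decode an image file off the main thread
        Bitmap _b = BitmapFactory.decodeFile(aPaths[0]);
        publishProgress(100);
        return _b;
    }

    protected void onProgressUpdate(Integer... aValues) {
        // invoked on the main thread
        progressBar.setProgress(aValues[0]);
    }

    protected void onPostExecute(Bitmap aResult) {
        // invoked on the main thread
        imageView.setImageBitmap(aResult);
    }
}.execute("/sdcard/some-image.png");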

What happens when you execute()?

When execute(Object... params) is invoked on an AsyncTask the task is executed in a background thread. Depending on the platform AsyncTasks may be executed serially (pre 1.6 and potentially again in 4+), or concurrently (1.6-3.2).

To be sure of running serially or concurrently as you require, from API Level 11 onwards you can use the executeOnExecutor(Executor executor, Object... params) method instead, and supply an executor. The platform provides two executors for convenience, accessible as AsyncTask.SERIAL_EXECUTOR and AsyncTask.THREAD_POOL_EXECUTOR respectively. (Note: If you are targeting earlier API levels executeOnExecutor is not available, but you have several options - see below).

I have not tested exhaustively, but at least on tablets running Honeycomb the THREAD_POOL_EXECUTOR is set up with a maximum pool size of 128 and an additional queue length of 10.

If you exhaust the pool by submitting too many AsyncTasks concurrently you will receive a RejectedExecutionException - a subclass of RuntimeException which, unless handled, will crash your application.

I suspect that on a resource-constrained device it is probably quite a disaster if you actually have that many AsyncTasks active concurrently - context-switching all those threads will render the CPU cache ineffective, cost a lot in terms of CPU time, and anyway all those concurrently active threads will likely be using a good chunk of your heap and generating garbage for the GC to contend with.

You might want to consider an alternative Executor configured with fewer maximum threads and a longer queue, or a more appropriate strategy for managing the background work: for example, if you have many files to download you could enqueue a url and a callback to a download-manager instead of executing an AsyncTask for each one.
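
As a rough sketch of that first idea (DownloadTask and aUrl are hypothetical stand-ins for your own task and parameter), you could build a small pool with a longer queue and hand it to executeOnExecutor on API level 11+:

import java.util.concurrent.Executor;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// 2 core threads growing to at most 4, with a longer queue so that
// bursts of tasks wait their turn instead of quickly triggering
// RejectedExecutionException
Executor _smallPool = new ThreadPoolExecutor(
    2, 4, 30, TimeUnit.SECONDS,
    new LinkedBlockingQueue<Runnable>(64));

new DownloadTask().executeOnExecutor(_smallPool, aUrl);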

executeOnExecutor for API levels below 11

AsyncTask gained a nice new method at API level 11 - executeOnExecutor - allowing you some control of the concurrency of your AsyncTasks. If you need to support older API levels you have a choice to make: do you absolutely have to have executeOnExecutor, or do you simply want to use it when it is available, and fall back to execute otherwise?

The fallback approach

If you want a simple way to take some measure of control where possible, you can subclass AsyncTask, test for the API level at runtime, and invoke the executeOnExecutor method if it is available - something like this:

abstract class MyAsyncTask<Params, Progress, Result>
    extends AsyncTask<Params, Progress, Result> {

    private static final boolean API_LEVEL_11
        = android.os.Build.VERSION.SDK_INT >= 11;

    public void execute(Executor aExecutor, Params... aParams) {
        if (API_LEVEL_11)
            executeOnExecutor(aExecutor, aParams);
        else
            super.execute(aParams);
    }

}

I know that at first glance something appears wrong here:

private static final boolean API_LEVEL_11 
    = android.os.Build.VERSION.SDK_INT >= 11;

This looks like it will be optimised out by the compiler - a static comparison of a literal integer (11) with what appears to be another static integer (android.os.Build.VERSION.SDK_INT) - but in fact the upper-case VERSION.SDK_INT is slightly misleading: the values in VERSION are extracted at runtime from system properties, so the comparison is not baked in at compile-time.

executeOnExecutor for all API levels

If you insist on having executeOnExecutor available for all API levels you might try this: copy the code for AsyncTask from API level 15, rename it (and make a few small changes as described here), and use that everywhere in place of the SDK version.

AsyncTask and the Activity lifecycle

The Activity lifecycle is well defined and provides template methods which are invoked when critical events occur in the life of an Activity.

AsyncTasks are typically started by an Activity that needs some potentially blocking work done off the UI thread, and unless you really, really know what you are doing they should live and die with that Activity.

If your AsyncTask retains a reference to the Activity, not cancelling the task when the Activity dies wastes CPU resources on work that cannot update its UI, and creates a memory leak (the Activity and all its View hierarchy will be retained until the task completes).

Don't forget that the Activity is destroyed and re-created even on something as simple as a device orientation change, so if a user rotates their device you will have two copies of your Activity retained until the last AsyncTask completes. In a memory constrained environment this can be a disaster!
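
One simple way to arrange that - a sketch only, with MyTask standing in for your own AsyncTask subclass - is to keep a reference to the task in the Activity and cancel it from a lifecycle method such as onPause:

private MyTask task;

private void startWork() {
    task = new MyTask();
    task.execute();
}

@Override
protected void onPause() {
    super.onPause();
    if (task != null) {
        // true asks for the background thread to be interrupted;
        // doInBackground should check isCancelled() periodically
        task.cancel(true);
        task = null;
    }
}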

For longer operations, e.g. networking, consider whether IntentService or Service are more appropriate to your needs (be aware, though, that Service does not automatically offload work from the main thread!).

For some useful discussion and ideas related to AsyncTask and lifecycle management, see this stackoverflow post.

AsyncTask and good Android citizenship

AsyncTask is in the SDK because it fulfils a common need, but it does not enforce a usage pattern that makes your app a good Android citizen.

In an environment where users switch contexts frequently and quickly (example: receive a phone call while in the midst of writing an email on your phone), it is probably important that your app does not hog resources whilst it is not the current focus.

If, as described above, you've set yourself up to cancel tasks according to the Activity lifecycle methods then you're all set and should not face any issues here.

My book, Asynchronous Android, goes into a lot more detail of how to use AsyncTask, including things like showing progress bars, continuing work across Activity restarts, understanding how AsyncTask works, cancelling running tasks, exception handling, and how to avoid some very common issues that developers face.

AsyncTask is just one of the 7 chapters; the book aims to arm you with a complete toolkit for building smooth, responsive apps by working with the platform to get work done off the main thread.

]]>
Android AsyncTask Threads Mon, 26 Mar 2012 00:00:00 +0100
<![CDATA[Returning a result from an Android Activity]]> http://steveliles.github.com/returning_a_result_from_an_android_activity.html The Android platform prescribes a number of patterns for putting together an application that plays well with the platform and feels familiar to users.

One of those patterns is the hierarchical use of Activities to segregate the application, and to provide re-usable chunks of application that can service certain requirements.

The higher design goal is to create an eco-system of separable Activities that fulfil Intents that can be re-used by other applications - for example: if my application needs an image, it can request one by invoking an Intent to use an image, and all Activities that can fulfil that Intent will be offered as a choice to the user.

Let's see what that looks like with a code example.

Invoking an Activity with an Intent

First of all, let's look at how to invoke an activity with an Intent. Let's say we want to explicitly open the Gallery app to select an image to use in our application. It's very simple:

private static final int PICK_IMAGE_REQUEST = 1;

public void selectImageFromGallery() {
    Intent _intent = new Intent();
    _intent.setType("image/*");
    _intent.setAction(Intent.ACTION_GET_CONTENT);
    startActivityForResult(
        Intent.createChooser(_intent, "Select Picture"), 
        PICK_IMAGE_REQUEST
    );
}

This will open the gallery app and allow the user to select an image. Notice that in the call to startActivityForResult we provided an int value, PICK_IMAGE_REQUEST - this request code is handed back to us when the invoked Activity completes, so that we can tell which request the result belongs to and respond correctly.

Let's see how we do that...

@Override
protected void onActivityResult(
    int aRequestCode, int aResultCode, Intent aData
) {
    switch (aRequestCode) {
        case PICK_IMAGE_REQUEST:
            handleUserPickedImage(aData);
            break;
        case SOME_OTHER_REQUEST:
            handleSomethingElse(aData);
            break;
    }
    super.onActivityResult(aRequestCode, aResultCode, aData);
}

Here we're overriding a method of Activity to handle results being passed back from invoked activities.

The value of aRequestCode is the value passed to the startActivityForResult method (so for us it's PICK_IMAGE_REQUEST), and is how we distinguish which activity is returning a result.

aResultCode will contain the value set by the invoked Activity's setResult(int), while the aData Intent contains any data returned by the Activity. In our example the Intent contains the Uri of the selected image, which we can access like this:

private void handleUserPickedImage(Intent aData) {
    if ((aData != null) && (aData.getData() != null)) {
        Uri _imageUri = aData.getData();
        // Do something neat with the image...
    } else {
        // We didn't receive an image...
    }
}

Returning values from an Activity

Great, we can invoke existing Activities and collect results. What does it look like from the other side - how does the Activity return its results to us?

Uri _resultUri = .. // Some uri we want to return  
Intent _result = new Intent();              
_result.setData(_resultUri);
setResult(Activity.RESULT_OK, _result);
finish();

It's as simple as that:

  1. Create an Intent (the result object)
  2. Set the result data (you don't have to return a Uri - you can use the putExtra methods to set any values you want - see the sketch after this list)
  3. Call setResult on your Activity, giving it the result Intent
  4. Call finish on your Activity
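
As a sketch of the extras-based variant (PICK_COLOUR_REQUEST and the "picked_colour" key are made up for illustration), the two sides might look like this:

// in the invoked Activity - return a plain String via the extras
Intent _result = new Intent();
_result.putExtra("picked_colour", "red");
setResult(Activity.RESULT_OK, _result);
finish();

// back in the calling Activity's onActivityResult
if ((aRequestCode == PICK_COLOUR_REQUEST)
    && (aResultCode == Activity.RESULT_OK)) {
    String _colour = aData.getStringExtra("picked_colour");
    // do something with _colour ...
}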
]]>
Android Activity Result Sat, 03 Mar 2012 00:00:00 +0000
<![CDATA[Custom fonts in Android]]> http://steveliles.github.com/custom_fonts_in_android.html I've been playing with Android since late December (2011). It's been fun. I've been meaning to document some things I've picked up, but I've been pretty busy hacking away. Time to write a few things down before I forget!

In my - admittedly limited - experience, Android devices typically come pre-installed with just the one font family. Before Ice-Cream Sandwich that font was Droid. In Ice-Cream Sandwich it's Roboto.

If you want to use other fonts in your app, you must package them as assets. I believe Android has supported true-type fonts since the beginning, but now also supports open-type fonts (since 1.6). I always use true-type fonts anyway.

To bundle the font, simply place the .ttf file in your project's assets directory. Here's how you load the font in your Activity code:

Typeface t = Typeface.createFromAsset(getAssets(), "my-font.ttf");

If you want to reference the font from xml markup you're in for a frustrating time. If you do want to take that path, check out this handy StackOverflow post.

If, like me, you prefer to set the font programmatically on the necessary views, you can call setTypeface(Typeface) on TextViews and EditTexts.

In some of my layouts (for example "help" screens) I have many TextViews interspersed with ImageViews. To make life a bit easier I use the following utility method to set the font on all TextViews in the view hierarchy:

public static void setFontForAllTextViewsInHierarchy(
    ViewGroup aViewGroup, Typeface aFont) {
    for (int i=0; i<aViewGroup.getChildCount(); i++) {
        View _v = aViewGroup.getChildAt(i);
        if (_v instanceof TextView) {
            ((TextView) _v).setTypeface(aFont);
        } else if (_v instanceof ViewGroup) {
            setFontForAllTextViewsInHierarchy((ViewGroup) _v, aFont);
        }
    }
}

Using this utility method is as simple as finding the ViewGroup whose descendants need a font change, then invoking the method with that ViewGroup:

// somewhere in an Activity..
Typeface font = Typeface.createFromAsset(getAssets(), "my-font.ttf");
ViewGroup vg = (ViewGroup) findViewById(R.id.myViewGroup);
Utils.setFontForAllTextViewsInHierarchy(vg, font);

I worried at first that this would cause a noticeable re-draw where you first see the system font before it flashes over to the custom font. In practice it doesn't: since I tend to set my fonts in my Activity's onCreate method immediately following the call to setContentView, my custom font is already applied before the first onDraw invocation hits any of my Views.

]]>
fonts android Mon, 20 Feb 2012 00:00:00 +0000
<![CDATA[Android 2.1 - trouble with bitmaps]]> http://steveliles.github.com/android_2_1_trouble_with_bitmaps.html I got the following mail from a user of my app:

Hi,

Comic strip is a great app and i love it very much. But there are couple of problem

  1. It force close when i apply FX
  2. When i preview the strip, the pictures turn out black

Please fix it My handphone is samsung galaxy beam andriod 2.1

This user has paid for the "pro" version. He also added the following comment in the market:

(2 stars) on February 18, 2012 (Samsung Galaxy Beam with version 1.5.0) It force close when i fx the pics and the pics turn out blank in the preview page.. Will upgrade to 5 stars when fixed

Interesting use of both carrot ("will upgrade to 5 stars") and stick (current rating: 2 stars). I think the rating mechanism is pretty harsh on developers, but that's a topic for another post :)

Black images in preview

I started by looking into problem (2) - the black images in preview. This one sounded unusual - I've had no other reports of this problem at all.

I set up an Android 2.1 device in my emulator and set about trying to replicate the issue. I allowed it a 24Mb heap per app, and started to look into the black images in preview.

Galaxy-Beam virtual device

I was pretty surprised to see that there was indeed a problem. In all my testing using other Android API levels I hadn't encountered any such issue. Every time I tried to re-load one of the images for preview I saw the following error in log-cat:

Resolve uri failed on bad Bitmap uri: ...

The strange thing, of course, is that the uri was perfectly fine, works great in all API levels greater than 7, was created by the system using Uri.fromFile(file), and is working fine even in API level 7 when I reload the image in the scene editor activity!

Fix for "resolve uri failed on bad bitmap uri"

Given that the scene-editor was able to load the image just fine, I compared the code I was using to load images in the scene-editor with the code in the preview-activity. I had the following:

// scene-editor-activity snippet - works fine!
Bitmap _b = BitmapFactory.decodeStream(
    rslv.openInputStream(aScene.getBackgroundUri()), null, _opts
);
_img.setImageBitmap(_b);

// preview-activity snippet - fails with 'bad bitmap uri'
ImageView _image = new ImageView(ctx);  
_image.setImageURI(aScene.getBackgroundUri());

It seems that ImageView in Android versions less than 2.2 (API level 8) has a problem with directly resolving perfectly valid Bitmap URIs.

In the scene-editor I was always resolving the Uri to an InputStream using ContentResolver (which you can obtain from the Activity with getContentResolver()), whilst in my preview activity I was simply expecting ImageView to resolve the uri.

The fix for all Android versions was to use the slightly more laborious method of loading the Bitmap via ContentResolver and setting the Bitmap to the ImageView, like this:

// Resolve a uri to a Bitmap
// (rslv is the ContentResolver obtained via getContentResolver(),
// config is a Bitmap.Config field of the Activity)
private Bitmap getImageBitmap(Uri aUri) {
    try {
        BitmapFactory.Options _opts = new BitmapFactory.Options();
        _opts.inScaled = false;
        _opts.inSampleSize = 1;
        _opts.inPreferredConfig = config;

        return BitmapFactory.decodeStream(
            rslv.openInputStream(aUri), null, _opts
        );
    } catch (Exception anExc) {
        L.e("loading bitmap", anExc);
        return null;
    }
}

// set a bitmap instead of a uri...
_image.setImageBitmap(getImageBitmap(aScene));

If the Uri really resolves to a missing file then I'll still get black images, but this is only likely in fairly extreme circumstances, and at least the app doesn't crash :). I suppose a better solution would be to return the app icon in such cases.
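
If I did want that fallback, a minimal sketch might look like this (with R.drawable.icon standing in for whichever icon resource the app actually uses):

private Bitmap getImageBitmapOrFallback(Uri aUri) {
    Bitmap _b = getImageBitmap(aUri);
    if (_b == null) {
        // fall back to a bundled drawable so the frame is never blank
        _b = BitmapFactory.decodeResource(getResources(), R.drawable.icon);
    }
    return _b;
}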

So to the next problem...

Force-Close while applying FX

My first guess was that this was going to be a VM budget issue. I've had them before, but with the v1.5.0 release I seemed to have largely solved them (no crash reports at all since). Here are some of the issues:

  1. Older Android devices only allow 16Mb to each running app. The generation of devices from about 2 years ago (e.g. original Motorola Droid) often allow 24Mb per app, which is still pretty small for dealing with large images. Current generation (e.g. Samsung Galaxy Mini and S2) allow 64Mb (yay!).
  2. One of the nice things about Android devices is the way in which they integrate with Google's eco-system. For example, the "Gallery" app on most devices shows images from your Picasa Web Albums, as well as photos taken directly on the device. Of course, this means that the phone has access to potentially very large images taken with a "real" camera.
  3. Many mobiles these days have 8MP cameras built in, therefore a single photo can be very large!
  4. The nature of my app (making comic strips from your own images) means that I am dealing with potentially many images at any given time. Applying FX requires at least two such images concurrently in-memory (the source, and the target). The finished strips are rendered as rows of 350x350 images, so the size of that final bitmap depends on how many frames you add to your strip.
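
As an aside, the usual way to keep such large source photos within a small heap is to subsample them at decode time using BitmapFactory.Options - a rough sketch, assuming a ContentResolver and a target maximum dimension:

// decode only the bounds first, then decode the pixels subsampled so
// that the longest side is roughly no bigger than aMaxSize
private Bitmap decodeScaled(ContentResolver aResolver, Uri aUri, int aMaxSize)
throws IOException {
    BitmapFactory.Options _bounds = new BitmapFactory.Options();
    _bounds.inJustDecodeBounds = true;
    BitmapFactory.decodeStream(aResolver.openInputStream(aUri), null, _bounds);

    int _sample = 1;
    while ((Math.max(_bounds.outWidth, _bounds.outHeight) / _sample) > aMaxSize) {
        _sample *= 2;
    }

    BitmapFactory.Options _opts = new BitmapFactory.Options();
    _opts.inSampleSize = _sample;
    return BitmapFactory.decodeStream(aResolver.openInputStream(aUri), null, _opts);
}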

A quick investigation revealed that I wasn't exceeding the VM budget - nowhere near in fact: the app crashed frequently with the VM size still less than 7Mb. Switching back and forth between API levels 7 and 8 showed that this was definitely only a problem at API level 7 (Android 2.1).

In 2.2 and above I can go through all of the FX several times over with no problems. In 2.1 the app usually crashes at the application of the second effect, but sometimes goes at the first or third attempt.

SIGSEGV while recycling Bitmap's

My app allows you to apply some image effects to the photos you select for each frame, to give a more comic-book feel. For example, you can apply a half-tone print effect, or a quantised and outlined "cartoon" effect.

To process these effects I have to juggle multiple Bitmap's and Canvas's, and - because of the resource-constrained environment of a mobile device - clean up the memory that these objects were using as soon as they are no longer needed.

To make the user-experience more friendly the FX are processed in a background thread. On the UI thread I show a dialog with a spinner to let the user know something is happening. This is nothing special - I'm using the AsyncTask class provided by the Android framework for exactly this purpose.

In Android - pre Honeycomb - Bitmap memory is allocated off-heap by JNI calls in the Bitmap class. It doesn't gain you extra memory to play with in your VM - the bitmap pixel data is still counted within the total memory used by your app (witness the number of StackOverflow questions pertaining to Bitmaps and VM budget!). In Honeycomb the bitmap pixel data has moved into the VM heap.

As soon as you're done with a Bitmap, you are supposed to let the Runtime know, by invoking Bitmap.recycle(), then null'ing the reference to the Bitmap. Fine, my app works great on API levels above 7 - no crashes, no warnings, no memory leaks.

At API level 7 (Android 2.1) however, this is what happens:

02-19 09:41:19.710: I/DEBUG(28): *** *** *** *** *** *** *** *** *** 
    *** *** *** *** *** *** ***
02-19 09:41:19.710: I/DEBUG(28): Build fingerprint: 
    'generic/sdk/generic/:2.1-update1/ECLAIR/35983:eng/test-keys'
02-19 09:41:19.710: I/DEBUG(28): pid: 224, tid: 234  
    >>> com.roundwoodstudios.comicstripitpro <<<
02-19 09:41:19.710: I/DEBUG(28): signal 11 (SIGSEGV), fault addr 00000028
02-19 09:41:19.720: I/DEBUG(28):  
    r0 00000000  r1 0012715c  r2 00000000  r3 0012715c
02-19 09:41:19.720: I/DEBUG(28):  
    r4 00137e18  r5 0012719c  r6 00000000  r7 00000000
02-19 09:41:19.720: I/DEBUG(28):  
    r8 00000001  r9 00000000  10 00000000  fp 00000000
02-19 09:41:19.720: I/DEBUG(28):  
    ip ff000000  sp 47285c58  lr 00000000  pc ac065288  
    cpsr 60000010
02-19 09:41:19.840: I/DEBUG(28):  #00  pc 00065288  /system/lib/libskia.so
02-19 09:41:19.840: I/DEBUG(28):  #01  pc 00065dcc  /system/lib/libskia.so
02-19 09:41:19.840: I/DEBUG(28):  #02  pc 00064148  /system/lib/libskia.so
02-19 09:41:19.840: I/DEBUG(28):  #03  pc 00041986  
    /system/lib/libandroid_runtime.so
02-19 09:41:19.850: I/DEBUG(28):  #04  pc 0000f1f4  /system/lib/libdvm.so
02-19 09:41:19.850: I/DEBUG(28):  #05  pc 00037f90  /system/lib/libdvm.so
02-19 09:41:19.850: I/DEBUG(28):  #06  pc 00031612  /system/lib/libdvm.so
02-19 09:41:19.860: I/DEBUG(28):  #07  pc 00013f58  /system/lib/libdvm.so
02-19 09:41:19.860: I/DEBUG(28):  #08  pc 00019888  /system/lib/libdvm.so
02-19 09:41:19.860: I/DEBUG(28):  #09  pc 00018d5c  /system/lib/libdvm.so
02-19 09:41:19.880: I/DEBUG(28):  #10  pc 0004d6d0  /system/lib/libdvm.so
02-19 09:41:19.880: I/DEBUG(28):  #11  pc 0004d702  /system/lib/libdvm.so
02-19 09:41:19.880: I/DEBUG(28):  #12  pc 00041c78  /system/lib/libdvm.so
02-19 09:41:19.890: I/DEBUG(28):  #13  pc 00010000  /system/lib/libc.so
02-19 09:41:19.890: I/DEBUG(28):  #14  pc 0000fad4  /system/lib/libc.so
02-19 09:41:19.890: I/DEBUG(28): code around pc:
02-19 09:41:19.890: I/DEBUG(28): ac065278 e1d4e2f4 e1d472f6 e5946004 e197200e 
02-19 09:41:19.890: I/DEBUG(28): ac065288 e5969028 e596a024 0a00002e e59db00c 
02-19 09:41:19.900: I/DEBUG(28): ac065298 e2848028 e1a0c008 e8bb000f e8ac000f 
02-19 09:41:19.900: I/DEBUG(28): code around lr:
02-19 09:41:19.900: I/DEBUG(28): stack:
02-19 09:41:19.900: I/DEBUG(28):     47285c18  4001d001  
    /dev/ashmem/mspace/dalvik-heap/zygote/0 (deleted)
02-19 09:41:19.900: I/DEBUG(28):     47285c1c  ad04d21d  /system/lib/libdvm.so
02-19 09:41:19.900: I/DEBUG(28):     47285c20  00000000  
02-19 09:41:19.910: I/DEBUG(28):     47285c24  00010002  [heap]
02-19 09:41:19.910: I/DEBUG(28):     47285c28  00010002  [heap]
02-19 09:41:19.910: I/DEBUG(28):     47285c2c  418ab254  
    /dev/ashmem/dalvik-LinearAlloc (deleted)
02-19 09:41:19.910: I/DEBUG(28):     47285c30  0012a0f8  [heap]
02-19 09:41:19.910: I/DEBUG(28):     47285c34  ad04d6d9  /system/lib/libdvm.so
02-19 09:41:19.910: I/DEBUG(28):     47285c38  ad07ff50  /system/lib/libdvm.so
02-19 09:41:19.910: I/DEBUG(28):     47285c3c  42ab4edd  
    /data/dalvik-cache/system@framework@framework.jar@classes.dex
02-19 09:41:19.910: I/DEBUG(28):     47285c40  47285c48  
02-19 09:41:19.910: I/DEBUG(28):     47285c44  00000001  
02-19 09:41:19.910: I/DEBUG(28):     47285c48  00000001  
02-19 09:41:19.910: I/DEBUG(28):     47285c4c  00000007  
02-19 09:41:19.910: I/DEBUG(28):     47285c50  df002777  
02-19 09:41:19.920: I/DEBUG(28):     47285c54  e3a070ad  
02-19 09:41:19.920: I/DEBUG(28): #00 47285c58  44ebe8a0  
    /dev/ashmem/mspace/dalvik-heap/2 (deleted)
02-19 09:41:19.920: I/DEBUG(28):     47285c5c  0012a0f8  [heap]
02-19 09:41:19.920: I/DEBUG(28):     47285c60  418ab254  
    /dev/ashmem/dalvik-LinearAlloc (deleted)
02-19 09:41:19.920: I/DEBUG(28):     47285c64  00127174  [heap]
02-19 09:41:19.920: I/DEBUG(28):     47285c68  47285c70  
02-19 09:41:19.920: I/DEBUG(28):     47285c6c  47285cd4  
02-19 09:41:19.930: I/DEBUG(28):     47285c70  000000f0  
02-19 09:41:19.930: I/DEBUG(28):     47285c74  00127128  [heap]
02-19 09:41:19.930: I/DEBUG(28):     47285c78  000000e4  
02-19 09:41:19.930: I/DEBUG(28):     47285c7c  0012a0f8  [heap]
02-19 09:41:19.930: I/DEBUG(28):     47285c80  00000001  
02-19 09:41:19.930: I/DEBUG(28):     47285c84  00000007  
02-19 09:41:19.930: I/DEBUG(28):     47285c88  00000001  
02-19 09:41:19.941: I/DEBUG(28):     47285c8c  ad040a89  /system/lib/libdvm.so
02-19 09:41:19.941: I/DEBUG(28):     47285c90  00000000  
02-19 09:41:19.941: I/DEBUG(28):     47285c94  0012a0f8  [heap]
02-19 09:41:19.941: I/DEBUG(28):     47285c98  ad07ecc0  /system/lib/libdvm.so
02-19 09:41:19.941: I/DEBUG(28):     47285c9c  ad03775b  /system/lib/libdvm.so
02-19 09:41:19.941: I/DEBUG(28):     47285ca0  ad037745  /system/lib/libdvm.so
02-19 09:41:19.941: I/DEBUG(28):     47285ca4  47285d2c  
02-19 09:41:19.941: I/DEBUG(28):     47285ca8  47285cd0  
02-19 09:41:19.941: I/DEBUG(28):     47285cac  00127128  [heap]
02-19 09:41:19.941: I/DEBUG(28):     47285cb0  00000000  
02-19 09:41:19.941: I/DEBUG(28):     47285cb4  00000001  
02-19 09:41:19.950: I/DEBUG(28):     47285cb8  00000000  
02-19 09:41:19.950: I/DEBUG(28):     47285cbc  00000000  
02-19 09:41:19.950: I/DEBUG(28):     47285cc0  00000000  
02-19 09:41:19.950: I/DEBUG(28):     47285cc4  ac065dd0  
    /system/lib/libskia.so
02-19 09:41:19.950: I/DEBUG(28): #01 47285cc8  00000000  
02-19 09:41:19.950: I/DEBUG(28):     47285ccc  00000000  
02-19 09:41:19.950: I/DEBUG(28):     47285cd0  00000000  
02-19 09:41:19.950: I/DEBUG(28):     47285cd4  afe0f2c0  /system/lib/libc.so
02-19 09:41:19.950: I/DEBUG(28):     47285cd8  47285d28  
02-19 09:41:19.950: I/DEBUG(28):     47285cdc  00000000  
02-19 09:41:19.950: I/DEBUG(28):     47285ce0  00000000  
02-19 09:41:19.950: I/DEBUG(28):     47285ce4  00000000  
02-19 09:41:19.950: I/DEBUG(28):     47285ce8  00127128  [heap]
02-19 09:41:19.960: I/DEBUG(28):     47285cec  afe0f3b0  /system/lib/libc.so
02-19 09:41:19.960: I/DEBUG(28):     47285cf0  00000000  
02-19 09:41:19.960: I/DEBUG(28):     47285cf4  afe0f2c0  /system/lib/libc.so
02-19 09:41:19.960: I/DEBUG(28):     47285cf8  00000003  
02-19 09:41:19.960: I/DEBUG(28):     47285cfc  afe3b9bc  
02-19 09:41:19.960: I/DEBUG(28):     47285d00  00137e18  [heap]
02-19 09:41:19.960: I/DEBUG(28):     47285d04  47285d2c  
02-19 09:41:19.960: I/DEBUG(28):     47285d08  00127128  [heap]
02-19 09:41:19.960: I/DEBUG(28):     47285d0c  00000003  
02-19 09:41:19.960: I/DEBUG(28):     47285d10  ffffffff  
02-19 09:41:19.960: I/DEBUG(28):     47285d14  47285d88  
02-19 09:41:19.960: I/DEBUG(28):     47285d18  42f0cd88  
02-19 09:41:19.970: I/DEBUG(28):     47285d1c  42f0cd74  
02-19 09:41:19.980: I/DEBUG(28):     47285d20  0012a0f8  [heap]
02-19 09:41:19.980: I/DEBUG(28):     47285d24  ac06414c  /system/lib/libskia.so
02-19 09:41:21.230: D/Zygote(30): Process 224 terminated by signal (11)
02-19 09:41:21.230: I/WindowManager(52): WIN DEATH: 
    Window{44d330a0 Just a sec! paused=false}
02-19 09:41:21.240: I/ActivityManager(52): Process 
    com.roundwoodstudios.comicstripitpro (pid 224) has died.
02-19 09:41:21.250: I/WindowManager(52): WIN DEATH: Window{44d72738
    com.roundwoodstudios.comicstripitpro/
        com.roundwoodstudios.comicstripit.SceneActivity paused=false}
02-19 09:41:21.320: I/UsageStats(52): Unexpected resume of com.android.launcher 
    while already resumed in com.roundwoodstudios.comicstripitpro

Yep, that's a seg-fault of the Dalvik VM triggered in the libskia library (Android's graphics lib), so I'm pretty screwed here - there's no catch and recover strategy for that! I've tried all sorts of things to try to work around it for Eclair, but so far no joy.

I got quite a few hits on StackOverflow for similar problems. Most seemed to be related to calling recycle, but I often hit the problem even before I recycle - I get blow-outs when creating Bitmaps (and yes, I'm still well within the VM budget, and I even tried allowing a 64Mb heap per app).

This looks like a monstrous bug in Android-2.1 to me. If I can't work around it I'll refund my user (he's a user of the paid version), but I doubt if that will lead to recovery of my previously 4.8 star rating.

Did I mention that I thought the rating mechanism was pretty harsh on developers? :(

Update - a few hours later :)

After some more debugging, I isolated the problem and created the simplest re-construction possible. The following code crashes reliably under API level 7, but runs to completion under API level 8 or above:

package com.roundwoodstudios.bitmaptest;

import android.app.Activity;
import android.graphics.Canvas;
import android.graphics.Color;
import android.os.Bundle;
import android.util.Log;

public class BitmapActivity extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        for (int i=0; i<100; i++) {
            Log.i("Bitmap Test", "Iteration: " + i);
            Canvas _c = new Canvas();
            _c.drawColor(Color.WHITE);
        }
    }
}

When you look at it like that it's fairly clear what's wrong: the canvas isn't really initialised properly yet - it doesn't know how large it is, for example. If I set a bitmap on it first it runs fine even under API level 7:

package com.roundwoodstudios.bitmaptest;

import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.os.Bundle;
import android.util.Log;

public class BitmapActivity extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        for (int i=0; i<100; i++) {
            Log.i("Bitmap Test", "Iteration: " + i);
            Canvas _c = new Canvas();
            Bitmap _b = Bitmap.createBitmap(350, 350, Bitmap.Config.ARGB_8888);
            _c.setBitmap(_b);
            _c.drawColor(Color.WHITE);
        }
    }
}

Phew, that's a relief :)

]]>
android eclair bitmap crash sigsegv Sun, 19 Feb 2012 00:00:00 +0000
<![CDATA[Paths and Selections in Gimp]]> http://steveliles.github.com/paths_and_selections_in_gimp.html As I mentioned in my previous post, I've been learning to use Gimp so I can produce icons for my apps. The most significant new finding for me has been the discovery of paths and how to use them in combination with selections.

The ability to use set operations to combine selections in different ways, and to remember selections as paths, opens up a whole world of possibilities. Let's look at a really simple example:

Fire up the Gimp, and create a new image to work on. I'm going with a 512x512 canvas again. Choose the oval selection tool and draw an oval on your canvas, then hold down shift and draw another oval that overlaps the first a little. (I was pleasantly surprised that I could add to the selection like this!)

multiple concurrent selections with gimp

Now, in the "Layers, Channels, Paths, .." window, open the paths tab, and click the "Selection to Path" icon (red circle with black lines above and below). You'll see a new entry added in the palette for your current selection shape. Here's how mine looks at this point:

gimp path dialog

There are immediately some neat things we can do:

  • We can recreate this same selection at any time by selecting the path in the paths palette and clicking the "path to selection" icon (the red square icon with dotted outline).
  • We can "Stroke" this path (draw around its outline), using the currently selected tool, brush and colour, by clicking the paint-brush icon on the paths panel.
  • We can add to or subtract from this selection using other selections, by drawing a new selection then holding down shift/ctrl/shift+ctrl and clicking the "path to selection" icon.
  • We can add hand-drawn shapes to our path by using the path tool to create them, then adding them to a selection as described above.

One thing I found very useful, is that you can convert text to a path, which allows you to get creative with the stroke and fill used on that text. I'm going to clear my canvas and add some text:

  1. Select the text tool, click on the canvas and enter some text (I went with "Hello!")
  2. Increase the font size so that the text just fits on the canvas (I'm using Trebuchet MS bold italic at 175px). You might have to grab the corners of the text selection and expand it so that you can see all of the text.
  3. Move the text into the centre of the canvas.
  4. On the tool window, in the section at the bottom containing the tool controls, look for a button that says "Path from Text" - should be right at the bottom. Click it.
  5. Open the paths dialog again (from the layers, channels, paths.. window) - you should see that you now have a path called "Hello!", which looks like an outlined version of your text.

Now you can stroke and fill your text, convert it to a selection (allowing you to paint only inside or only outside), and generally do all kinds of neat things. For example:

  1. Switch to the "layers" dialog, and delete the floating text layer.
  2. Switch back to the "Paths" dialog again, click the "path to selection" icon
  3. Pick a nice colour to outline your text with (I've selected a deep blue)
  4. Invert your selection (ctrl-i) - this is so that we only paint around the outside in the next step
  5. From the "Paths" dialog, click the stroke icon
  6. In the dialog that pops up, set the stroke width to be around 10. Because we are stroking the line at the edge of the selection, half of the stroke width falls inside the text outline and will not be painted because it isn't part of our current selection. Click OK to stroke the path.

Now lets paint a gradient inside our text:

  1. From the "paths" dialog, click the "path to selection" icon
  2. From the tool palette, select the gradient tool
  3. Choose a foreground and background colour for the end-colours of your gradient (I'm going with red and yellow)
  4. Click and hold the left button in the "H" of your text, then drag over to the "!" before letting go of the button. A gradient will be painted inside your text outline only.

]]>
Gimp Icon Path Selection Tue, 14 Feb 2012 00:00:00 +0000
<![CDATA[Creating comic-book style icons with Gimp]]> http://steveliles.github.com/creating_comic_book_style_icons_with_gimp.html I've long had a love-hate relationship with Gimp. I hate the MDI interface - all those windows really grate on my nerves. I love that its a really great and full-featured graphics package, and free.

In the past I've fired it up to do all kinds of little jobs, but never really tried to use it to compose artwork of any complexity from scratch. That changed recently when I needed to create some nice icons for my Android app Comic Strip It!.

Finally, and almost by accident, I've learned how to use paths, masks, and layers, as well as a few of the filters, and managed to turn out some icons that I'm not too embarrassed by! Here's some examples:

(icons: pick from gallery, next arrow, accept, magnifier preview, camera, piggy-bank save, double-arrow share, text-size, help, speech bubble, special effects / fx, new strip)

Since the app is all about creating comic strips I wanted icons that really fit with that idea, so I spent some time looking at comics and comic graphics.

The half-tone effect - coloured dots printed in rows at different sizes and angles to create the illusion of different shades of colour - figures heavily in print comic-books, and has such a recognisable stylistic effect that I settled on using that as much as possible. My attempts at re-creating it aren't "real" half-tones, but I think the effect works.

A few working practices

I drew all my icons at 512x512 pixels, then re-scaled for use. Android wants 4 different sizes to cater for four different screen densities (low - 32x32, medium - 48x48, high - 72x72, extra-high - 96x96).

Drawing at the much larger size of 512x512 just makes it much easier to work, covers minor errors once you scale down, and means that you usually end up with a decent quality scaled image.

I save all the original icons using Gimp's native XCF file format, which retains the layers and masks and so on - this means I can always go back later and make small changes if I need to (and I did, many times).

Because my laptop runs Ubuntu I saved my icons in a folder shared by Ubuntu One (cloud backup and replication), but this didn't always work out well - connectivity issues on a few evenings meant that I got conflicting versions between my laptop and my desktop, and it wasn't too easy to sort out.

In future I will use git, as that puts control of when you update in your hands, and provides explicit versioning and access to old versions. I've upgraded to a paid github account now that my app is live in the Android Market, and all my source-code lives there now.

So, to the icons...

Creating a halftone background

Almost all of my icons use a circular splash of half-tone colour as a background. It took me a good long time to figure out how to do this properly, but once you know how it's done it's actually very quick to re-create.

Step 1: The background gradient

First, create a new 512x512 file, and paint a linear gradient filling the entire image. To paint a gradient:

  1. select the "blend" tool, and make sure it's set to "linear blend"
  2. select foreground and background colours as the start and end colours for your gradient
  3. hold down the left mouse-button in one corner of your image, move to the diagonally opposite corner and release the button to paint the gradient

I picked a deep orange foreground colour, a paler orange-yellow background colour, and drew the gradient from bottom-left to top-right, resulting in the following image:

step 1 - the background gradient

Step 2: The half-tone mask

To create the half-tone dot effect we're going to use one of Gimp's built in filters - "News-print". We want the dots to cluster heavily in the middle, then space further apart and get smaller the further they are from the centre of the image. Here's how we do that:

  1. Create a new layer, above the existing colour gradient. I like to do that directly in the "Layers" palette by right-clicking and choosing "New Layer", making sure to select "Transparency" as the layer fill type, and give it a name (I usually call this layer "bg_halftone").
  2. Make sure the layer is above your colour gradient in the layer palette. If it isn't, click and drag and drop it above the background layer.
  3. Select the "blend" tool again, and choose full black as the foreground colour, and full white as the background colour. This time we want to do a "radial blend".
  4. Move the mouse to the centre of your image, hold down the left button, and drag towards the right-hand edge. When you release the mouse you'll get a disc with a black centre, graduating through grey on a white background, as shown below.

black-to-white gradient disc

Now we're going to convert the smooth black-to-white gradient of our disc to a half-tone effect. We'll do that using the News-print filter:

  1. From Gimp's "Filters" menu, choose "Distorts -> Newsprint"
  2. Increase the cell-size to something reasonably large - 25 to 30 works well if you're going to scale the final image down as I did. If you won't be scaling the image down, stick to something less than 10.
  3. Visit the three colour channel tabs (red/green/blue) and set the same angle in all of them. I'm using 15 in this example.
  4. Finally, set the anti-aliasing oversample to 15, and apply the filter.

newsprint effect

Step 3: Allowing the colour to shine through the mask

Almost done! We just need to let the colour shine through from the background layer. I tried various ways of doing this. Masks worked well, and "Select->By Color" isn't bad (but doesn't capture anti-aliased areas perfectly). Eventually I got into the habit of a much simpler way:

  1. From Gimp's "Colors" menu choose "Color to Alpha"
  2. Click the colour-box (from:) and select full black
  3. Click OK and you'll see your background gradient shine through the dots

Note: If you noticed that the dots don't quite match up between the following image and the others on this page, it's because I added this image after I originally posted, and had to re-create it because I had already deleted the .xcf file I originally used while writing this post. Oops.

For completeness, here's the "Select->By Color" method, which I don't use any more because it involves more steps and leaves the dots with slightly jagged edges:

  1. From Gimp's "Select" menu, choose "By Color", and then select the white background of the image.
  2. Invert the selection (ctrl-i) so that the black dots are now selected. (We could have selected the black dots initially, but I find that if you do this you end up with black artefacts after the next step)
  3. From the "Edit" menu choose "Clear", or hit the delete key, if you have one (my laptop doesn't). You should now see the gradient colour from your background layer shining through the holes you just made in the top-most layer.
  4. De-select (ctrl-shift-a) and merge the two layers (right-click in the layers palette and choose "merge visible..")

finished half-tone background

Notice that the edges of the dots in this version of the image are decidedly jagged, compared to the version created by using the colour-to-alpha technique. This is to do with the select-by-colour method not including anti-aliasing pixels that are closer to white than black in the selection.

Step 4: Optional - de-focus for use as a background

I used this technique to create the backgrounds for my icons, adding one more step: gaussian blur to de-focus the background and make the foreground icon stand out nice and sharp.

I used the same technique to pattern-fill the icon detail by using paths and selections. I'll save the details of that for another post.

]]>
Gimp Icon Comic-Book Half-tone Sun, 12 Feb 2012 00:00:00 +0000
<![CDATA[Creating iOS style icons with ImageMagick]]> http://steveliles.github.com/creating_ios_style_icons_with_imagemagick.html Icons for iOS apps are generally provided by the app with square corners and no "sheen". The rounded corners and glossy sheen are added by iOS.

If you want to achieve the iOS icon look on icons used elsewhere (Android, Web, etc), you need to round the corners and apply the sheen yourself.

What follows is my quick attempt to give the iOS treatment to this 512x512 icon:

Original Icon

Transparent Rounded Corners

OK, transparent rounded corners are actually the easy part. There are several ways to get ImageMagick to do this. The easiest (read: shortest) command I've found looks like this:

convert -size 512x512 xc:none -fill white -draw \
    'roundRectangle 0,0 512,512 50,50' in.png \
    -compose SrcIn -composite rounded.png

So now we have this:

Rounded Corners

Overlay some sheen

The sheen is trickier. I imagine that real ImageMagick pro's could generate the sheen mask with some deft command-line, but I'm nowhere near that proficient.

Instead I created the following image in Gimp - exactly how is a topic for another post :). The black background is coming from the div containing the image - where you see black is actually transparent in my png, and the grey highlight at the top is semi-transparent:

To composite the rounded-corner image with the glossy overlay (gloss-over.png), I use this ImageMagick command:

convert -draw "image Screen 0,0 0,0 'gloss-over.png'" \
    rounded.png final.png

Final iOS style image

]]>
ImageMagick iOS Icon Fri, 27 Jan 2012 00:00:00 +0000
<![CDATA[Invoking Processes from Java]]> http://steveliles.github.com/invoking_processes_from_java.html Invoking an external process from Java appears easy enough but there are sooo many gotchas to watch out for. Typical problems that arise include:

  1. Hanging Processes - The invoked process "hangs" and never completes (because it is waiting for input that never comes, or for the output buffer(s) to be drained).
  2. Failure to execute - Commands that work fine from the cmdline refuse to run when invoked from Java (because the parameters are passed incorrectly).
  3. Mysterious issues in production - Peculiar situations where processes cease to work after running happily for some time (the file-handle quota is exhausted because the IO streams are not being correctly closed).

The first two are irritating, but at least they present themselves immediately and are typically fixed before the code leaves the developer.

The last problem is much more insidious and often only rears its head after some time in production (sometimes this is because it takes time and a significant number of executions before it manifests, other times it is because of differences between the development and production environments).

Lets have a look at the general solution to each of these problems. Later I'll list some code that I've been using to invoke processes safely.

Hanging Processes

Symptoms: When invoked, the process starts but does not complete. Sometimes this may appear to be caused by the input that is being fed to the process (e.g. with input A it works but with input B it does not), which adds to the confusion over why the problem occurs.

Cause: The most common reason for this problem is failing to pump input into the program, and drain output buffers from the program, using separate threads.

If a program consumes a significant amount of input via standard-input (stdin), or produces a significant amount of output via stdout or stderr, the limited buffers connecting it to your process will fill up. Until those buffers are drained (and fed) the process will block on IO to them, so the process is effectively hung.

Solution: When you invoke any process from Java, you must use separate threads to pump data to/from stdin, stdout, and stderr:

// invoke the process, keeping a handle to it for later...
final Process _p = Runtime.getRuntime().exec("some-command-or-other");

// Handle stdout...
new Thread() {
    public void run() {
        try {
            Streams.copy(_p.getInputStream(), System.out);
        } catch (Exception anExc) {
            anExc.printStackTrace();
        }
    }
}.start();

// Handle stderr...
new Thread() {
    public void run() {
        try {
            Streams.copy(_p.getErrorStream(), System.out);
        } catch (Exception anExc) {
            anExc.printStackTrace();
        }
    }
}.start();

Correctly pumping data into and out of the std io buffers will keep your processes from hanging.

Failure to communicate

Symptoms: You have a command-line that works perfectly when executed at the shell prompt, but invoking it from Java results in strange errors and, perhaps, complaints about invalid parameters.

Cause: Typically this occurs when you try to pass parameters which include spaces - for example file-names - which you escape or quote at the shell prompt.

Java invoking ImageMagick - a lego comic strip created with Comic Strip It! for Android

Example: Running ImageMagick "convert" to add transparent rounded corners to an icon:

convert -size 72x72 xc:none -fill white -draw \
  'roundRectangle 0,0 72,72 15,15' in.png \
  -compose SrcIn -composite out.png

This command-line works fine at a bash prompt, but if you try to invoke it naively from Java it will likely fail in a variety of interesting ways depending on your platform:

public static void main(String... anArgs) throws Exception {
    // invoke the process, keeping a handle to it for later...
    final Process _p = Runtime.getRuntime().exec(
        "/usr/bin/convert -size 72x72 xc:none -fill white -draw" +
        " 'roundRectangle 0,0 72,72 15,15' /home/steve/Desktop/in.png" +
        " -compose SrcIn -composite /home/steve/Desktop/out.png"
    );

    // Handle stdout...
    new Thread() {
        public void run() {
            try {
                Streams.copy(_p.getInputStream(), System.out);
            } catch (Exception anExc) {
                anExc.printStackTrace();
            }
        }
    }.start();

    // Handle stderr...
    new Thread() {
        public void run() {
            try {
                Streams.copy(_p.getErrorStream(), System.out);
            } catch (Exception anExc) {
                anExc.printStackTrace();
            }
        }
    }.start();

    // wait for the process to complete
    _p.waitFor();
}

Whilst the command-line worked fine at the bash prompt, running the same command from Java results in an error message!:

convert: non-conforming drawing primitive definition 
    `roundRectangle' @ error/draw.c/DrawImage/3143.
convert: unable to open image `0,0':  @ error/blob.c/OpenBlob/2489.
convert: unable to open image `72,72':  @ error/blob.c/OpenBlob/2489.
convert: unable to open image `15,15'':  @ error/blob.c/OpenBlob/2489.
convert: non-conforming drawing primitive definition 
    `roundRectangle' @ error/draw.c/DrawImage/3143.

What's going on!? Basically the command we gave to Runtime.exec has been sliced up at spaces, ignoring the single quotes, and so ImageMagick has seen a very different command-line to the one we presented via the shell.

Solution: The solution this time is very easy: use the overloaded Runtime.exec(..) methods that accept the command and the parameters as an array of Strings. Re-writing our previous example:

public static void main(String... anArgs) 
throws Exception {
    // invoke the process, keeping a handle to it for later...
    // note that we pass the command and its params as String's in
    // the same String[]
    final Process _p = Runtime.getRuntime().exec(
        new String[]{
            "/usr/bin/convert",
            "-size", "72x72", "xc:none", "-fill", "white", "-draw",
            "roundRectangle 0,0 72,72 15,15", 
            "/home/steve/Desktop/in.png", "-compose", "SrcIn",
            "-composite", "/home/steve/Desktop/out.png"
        }
    );

    // Handle stdout...
    new Thread() {
        public void run() {
            try {
                Streams.copy(_p.getInputStream(), System.out);
            } catch (Exception anExc) {
                anExc.printStackTrace();
            }
        }
    }.start();

    // Handle stderr...
    new Thread() {
        public void run() {
            try {
                Streams.copy(_p.getErrorStream(), System.out);
            } catch (Exception anExc) {
                anExc.printStackTrace();
            }
        }
    }.start();

    // wait for the process to complete
    _p.waitFor();
}

Passing your cmdline parameters in a String array instead of as one long String should prevent your parameters from being chewed up and mis-interpreted.

Mysterious issues in production

Symptoms: For a good while things appear to be working fine. Processes are invoked, do their work, and shut-down. After a while a problem occurs - the processes are no longer being invoked, or hang.

Cause: The cause of this is usually exhaustion of the available file-handles, which in turn is caused by failing to correctly close all of the IO streams opened to handle the process IO.

Solution: Careful closure of all standard IO streams opened by the process and streams opened by you to consume the data from the standard streams opened by the process. Note: That's SIX streams in total, not just the three that you open to deal with stdin, stdout and stderr! I also recommend calling destroy on the Process object.

I may be being over-cautious in closing the process's own std streams, but I have seen many cases where closing these streams solved problems of leaked file-handles. (btw., A handy tool if you're running a *nix is lsof, which lists open file handles).

Here's how I recommend cleaning up after your process completes (this assumes that you did provide input via stdin):

public static void main(String... anArgs) throws Exception {
    Process _process = null;
    InputStream _in = null;
    OutputStream _out = null;
    OutputStream _err = null;
    try {
        _process = Runtime.getRuntime().exec( ... );
        // ... don't forget to initialise in, out, and error,
        // .... and consume the streams in separate threads!
        _process.waitFor();
    } finally {
        if( _process != null ) {
            close(_process.getErrorStream());
            close(_process.getOutputStream());
            close(_process.getInputStream());
            _process.destroy();
        }
        close(_in);
        close(_out);
        close(_err);
    }
}

private static void close(InputStream anInput) {
    try {
        if (anInput != null) {
            anInput.close();
        }
    } catch (IOException anExc) {
        anExc.printStackTrace();
    }
}

private static void close(OutputStream anOutput) {
    try {
        if (anOutput != null) {
            anOutput.close();
        }
    } catch (IOException anExc) {
        anExc.printStackTrace();
    }
}

These days I usually use some utility classes which I've written to wrap all this stuff up and make life a little easier. You can find them in my sjl.io project at github. There's an example of usage in the test source tree - ExternalProcessTest - which invokes ImageMagick.
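For reference, the general shape of such a wrapper - a minimal sketch only, not the actual sjl.io API - is something like this:

import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class SimpleExec {

    // Runs the command, pumps stdout and stderr to the given sink,
    // waits for completion and closes everything it opened. The sink
    // belongs to the caller, so it is not closed here.
    public static int run(String[] aCommand, final OutputStream aSink)
    throws IOException, InterruptedException {
        Process _p = Runtime.getRuntime().exec(aCommand);
        try {
            Thread _stdout = pump(_p.getInputStream(), aSink);
            Thread _stderr = pump(_p.getErrorStream(), aSink);
            int _result = _p.waitFor();
            _stdout.join();
            _stderr.join();
            return _result;
        } finally {
            closeQuietly(_p.getErrorStream());
            closeQuietly(_p.getOutputStream());
            closeQuietly(_p.getInputStream());
            _p.destroy();
        }
    }

    // copies the stream on a background thread so the process can't block
    private static Thread pump(final InputStream anIn, final OutputStream anOut) {
        Thread _t = new Thread() {
            public void run() {
                try {
                    byte[] _buf = new byte[4096];
                    for (int _read = anIn.read(_buf); _read != -1; _read = anIn.read(_buf)) {
                        anOut.write(_buf, 0, _read);
                    }
                } catch (IOException anExc) {
                    anExc.printStackTrace();
                }
            }
        };
        _t.start();
        return _t;
    }

    private static void closeQuietly(Closeable aCloseable) {
        try {
            if (aCloseable != null) {
                aCloseable.close();
            }
        } catch (IOException anExc) {
            anExc.printStackTrace();
        }
    }
}

With something along those lines in place, the ImageMagick example above collapses to a single call, passing the same String[] and (say) System.out as the sink.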

]]>
Java Process Thu, 26 Jan 2012 00:00:00 +0000
<![CDATA[Another mini-figure, another comic strip...]]> http://steveliles.github.com/another_mini_figure_another_comic_strip.html Here's a quick taster of the new speech-balloon styles and colours available in Comic Strip It! v1.4.2 (and featuring my latest lego minifigure - the Mad Scientist).

Joe Hazmat vs Mad Scientist, a lego comic strip made with Comic Strip It!

Sorry iOS folks, Comic Strip It! is only available for Android at this time. Android folks can jump to the Market with this QR code.
]]>
Comic Strip It Android App Comic Strip Lego Wed, 25 Jan 2012 00:00:00 +0000
<![CDATA[Maven, Android and Eclipse - joining the team]]> http://steveliles.github.com/maven_android_and_eclipse_joining_the_team.html I blogged recently about building Android projects with Maven, and how we've set things up for team development using Maven, Android and Eclipse. This follow-up post describes how someone joining the team would go about setting up and getting to work...

To join in the fun you need a straight-forward installation of the following pre-requisite tools:

  • Eclipse (I usually go for Eclipse classic)
  • Subversion
  • Maven 3.0.3
  • Android SDK

You will then need the following Eclipse plugins:

  • Android Development Tools (ADT) - update site: https://dl-ssl.google.com/android/eclipse/
  • m2eclipse - update site: http://download.eclipse.org/technology/m2e/releases
  • m2e-android - Whilst you can install this like a normal plugin (thx Ricardo for the correction), I recommend that you don't use an update site to install this - instead, follow the instructions at the bottom of this post (after the comic), or at the m2e-android site.

Since you are setting up to join an existing team, most of the maven configuration has presumably already been done for you. To get working on a project (assuming it is set up as I described) you need to check out two projects:

  1. The "parent" project containing the common configuration for all Android-Maven projects
  2. The project you actually need to work on
  3. (OK, yes, also any apklib library projects if you need to debug or work on those too)

I highly recommend checking these out so that all of the projects are siblings in a common projects directory.

The biggest single difference from ADT's usual working style is that you can't (currently) work with the apklib projects as project dependencies because of this issue. Instead, if you make any changes to an apklib project, you'll need to mvn install or mvn deploy it before you can see the change in your dependent apk projects.

I found that I had to "mvn install" each of the apklib projects locally before the dependent projects would build, as the remotely deployed projects for some reason did not include the pom resource - I haven't yet had time to investigate why.

Converting Eclipse ADT projects to build with Maven, a lego comic strip made with Comic Strip It!

Installing M2E-Android

Don't install this like a normal Eclipse feature! To install M2E-Android, open an Android-Maven project and open the pom.xml. You should see that the <packaging> element is highlighted as an error, because without M2E-Android, M2Eclipse does not understand the apk or apklib packaging types.

In the header of the pom.xml editor you should see a red error message: plugin execution not covered by lifecycle configuration....

error when pom packaging set to apk or apklib

Click the error and some details open up, including two quick fixes. Click the first quick fix ("discover new m2e connectors"). The following dialog pops up and after a short search, shows the m2e-android connector:

discover connectors dialog

Install the connector and the warnings should go away. Actually on one of my two machines they did not - I don't know why, but I had to take the 2nd quick-fix option of turning it off in Eclipse. For me that's just about ok, as I want the maven build to be the master anyway.

]]>
Maven Android Eclipse m2e-Android m2eclipse ADT Mon, 23 Jan 2012 00:00:00 +0000
<![CDATA[Creating colourised icon theme-sets with Image-Magick]]> http://steveliles.github.com/creating_colourised_icon_theme_sets_with_image_magick.html While automating production of customised applications I needed to automatically create a set of icons that match the colour scheme selected by the customer. This article describes some of my experiments (details follow the strip...).

ImageMagick comic strip made with @ComicStripIt

My aim was to find a way to take a target colour and re-create the entire icon set matched as closely as possible to that input colour, using a single command-line.

Let's start with a quick look at a sample icon - this was created by our UX designer and used in building the prototypical instance of the application. It is part of a set of around 30:

Original icon

Given that I want to be able to apply a colour selected by a customer, I need to start from a neutral state, so my first step is to de-colourise the original icon, producing this grey-scale version:

Example icon to be coloured

Note that the icons all have transparency, but otherwise are largely made from shades of a single colour (a gradient) with white highlights.

I started by looking at the simple built-in ImageMagick commands. Given that I'm converting a large batch of icons, I'm using mogrify instead of convert, which also requires that the command-line is re-ordered slightly. For example:

convert in.png -brightness-contrast 20x20 out.png

becomes:

mogrify -brightness-contrast 20x20 *

My first attempts used the Image-Magick commands tint, colorize, and +level-colors individually, as I was hoping for a very simple solution to present itself. Let's look at what each of those commands produces if we try to create icons with the following base colour:

The background here is our base colour
mogrify -fill "#0000cc" -tint 100 *

imagemagick tint

mogrify -colorize 100,0,0 *

imagemagick colorize

mogrify +level-colors "#000066","#0000cc" *

imagemagick +level-colors

As you can see from those examples, tint does the best job of retaining the fidelity of the icon, but doesn't really get close to the target colour.

Colorize has also kept most of the fidelity, but the white foreground has tended towards the target blue colour along with the grey background parts, though neither has really got very close to our intended colour.

+level-colors has got us closer to our target colour, but we've almost completely lost the white and the fidelity of the icon is, as a result, pretty much destroyed.

Reduce and re-compose

OK, so we can't get there with a simple one-liner. What if we strip out different aspects of the image, perform different operations on each part, and then re-combine them later?

This is ultimately what I ended up doing:

  1. Extract the white part only
  2. Brighten the grey part (helps the later stages to get closer to the target colour)
  3. Adjust the grey (background) part towards our target colour
  4. Composite the white foreground back over the re-coloured background

Here are the commands to achieve that (note: I switched to using convert instead of mogrify because it was easier to test incremental changes this way):

# extract the white parts
convert -fuzz 60% -transparent black in.png 2.png

# lighten the original image
convert in.png -brightness-contrast 20x0 3.png

# level colours ...
convert 3.png +level-colors "#000066","#0000cc" 3.png

# composite together ...
convert 3.png 2.png in.png -composite out.png

It shouldn't be too difficult to follow that.

The first command extracts the white-ish parts of the image (foreground) by making shades of grey - from black through 60% grey - transparent. The fuzz factor is what determines the cut-off point. We produce this white-foreground as a separate image (2.png) because we still need the original for later steps.

Next we create a 3rd image (background) as a lightened version of the original (3.png) then colourise it using the +level-colors command we used earlier.

Finally we composite together the background image as the base, the foreground image on top, and use the original image as a mask so that we don't lose the transparency. The final result looks like this:

final

This is the best I've managed so far with my rudimentary knowledge of ImageMagick.

Since I'm invoking this conversion from a Java process I think I'll try something a little more low-level in Java next. I want the fidelity of the "tint" operation with the precise colour targeting of the composite approach - I just don't know how to get there with ImageMagick.
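Just to capture the direction I have in mind, here's a rough, untested sketch of a pure-Java version using BufferedImage - it maps dark pixels onto the target colour and light pixels towards white, preserving the alpha channel. The class name is arbitrary, and the paths and the #0000cc target are simply the ones from the examples above:

import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class IconColouriser {

    public static void main(String... anArgs) throws Exception {
        BufferedImage _in = ImageIO.read(new File("/home/steve/Desktop/in.png"));
        BufferedImage _out = new BufferedImage(
            _in.getWidth(), _in.getHeight(), BufferedImage.TYPE_INT_ARGB);
        Color _target = new Color(0x00, 0x00, 0xcc);

        for (int _y = 0; _y < _in.getHeight(); _y++) {
            for (int _x = 0; _x < _in.getWidth(); _x++) {
                int _argb = _in.getRGB(_x, _y);
                int _alpha = _argb >>> 24;
                int _r = (_argb >> 16) & 0xff;
                int _g = (_argb >> 8) & 0xff;
                int _b = _argb & 0xff;
                // luminance of the grey-scale source, 0.0 (black) to 1.0 (white)
                double _lum = (0.299 * _r + 0.587 * _g + 0.114 * _b) / 255d;
                // blend from the target colour (dark areas) towards white (highlights)
                int _newR = (int) (_target.getRed() + (_lum * (255 - _target.getRed())));
                int _newG = (int) (_target.getGreen() + (_lum * (255 - _target.getGreen())));
                int _newB = (int) (_target.getBlue() + (_lum * (255 - _target.getBlue())));
                _out.setRGB(_x, _y, (_alpha << 24) | (_newR << 16) | (_newG << 8) | _newB);
            }
        }
        ImageIO.write(_out, "png", new File("/home/steve/Desktop/out.png"));
    }
}

The appeal of doing it in Java is that the black-to-target, white-to-white mapping is explicit, rather than being approximated through fuzz factors and compositing.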

]]>
colour icon Image Magick automation Wed, 18 Jan 2012 00:00:00 +0000
<![CDATA[Capture screenshots from the Android Emulator or Mobile Device]]> http://steveliles.github.com/capture_screenshots_from_the_android_emulator_or_mobile_device.html The first few times I needed screenshots of an Android app for the Android Market description I alt-printscreen'd the emulator then sliced the app screenshot out of the resulting image. This is a pain and - as it turns out - completely unnecessary.

For capturing screenshots from physical devices there are (paid) apps in the store, but again, this is completely unnecessary if you are a developer and have set up the Android Development Tools.

Why? Because a screenshot tool comes packaged as part of the Android SDK!

From Eclipse you can grab a screenshot by opening DDMS (Window -> Open Perspective -> DDMS), then in the Device pane, select the device you want to take a screenshot from (which can be the emulator or a "real" mobile device), then click the camera icon (top right in the following screenshot):

screenshot showing the take-screenshot icon in DDMS

From the command-line I'm afraid you're pretty much out of luck right now unless you feel like a bit of hacking to create your own cmdline screenshot grabber by connecting to the same service that DDMS connects to.
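If you do fancy that bit of hacking, the starting point is ddmlib (the library DDMS itself is built on), which ships with the SDK tools. Very roughly - and this is only a sketch with an arbitrary class name, I haven't hardened it and the exact ddmlib API may differ between SDK releases - grabbing a frame from the first connected device looks something like this:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import com.android.ddmlib.AndroidDebugBridge;
import com.android.ddmlib.IDevice;
import com.android.ddmlib.RawImage;

public class GrabScreenshot {

    public static void main(String... anArgs) throws Exception {
        AndroidDebugBridge.init(false); // no debugger/client support needed
        AndroidDebugBridge _bridge = AndroidDebugBridge.createBridge();
        // give adb a moment to populate the device list
        while (!_bridge.hasInitialDeviceList()) {
            Thread.sleep(100);
        }
        // assumes at least one device or emulator is connected
        IDevice _device = _bridge.getDevices()[0];
        RawImage _raw = _device.getScreenshot();

        // convert the raw framebuffer data into a BufferedImage
        BufferedImage _img = new BufferedImage(
            _raw.width, _raw.height, BufferedImage.TYPE_INT_ARGB);
        int _bytesPerPixel = _raw.bpp >> 3;
        int _index = 0;
        for (int _y = 0; _y < _raw.height; _y++) {
            for (int _x = 0; _x < _raw.width; _x++) {
                _img.setRGB(_x, _y, _raw.getARGB(_index));
                _index += _bytesPerPixel;
            }
        }
        ImageIO.write(_img, "png", new File("screenshot.png"));
        AndroidDebugBridge.terminate();
    }
}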

Android Emulator Screenshots, a lego comic strip made with Comic Strip It!

]]>
android emulator eclipse screenshot ddms Thu, 12 Jan 2012 00:00:00 +0000
<![CDATA[The dreaded UNEXPECTED TOP-LEVEL EXCEPTION]]> http://steveliles.github.com/the_dreaded_unexpected_top_level_exception.html Library projects with Eclipse Android and Maven, a lego comic strip made with Comic Strip It!

I'm working on extracting library projects to factor out common code shared between multiple projects. With everything compiling successfully I attempted to run my apk project in an emulator, and got hit with the following:

UNEXPECTED TOP-LEVEL EXCEPTION:
java.lang.IllegalArgumentException: 
  already added: 
    Lcom/android/vending/licensing/Manifest$permission;

Now it seems there have been a lot of problems with this recently due to changes in ADT, but the added complexity of Maven in my setup throws a few more spanners into the machinery. Robert Schmid describes a project hierarchy very similar to mine here, and actually gave me the final clue I needed to unravel the mess.

The difference between my situation and Robert's is that I'm using Maven for release builds and continuous integration - and so far it's proving to be ... tricky ... to get the combination of Eclipse, Maven and ADT to play well together.

I got the dreaded UNEXPECTED TOP-LEVEL EXCEPTION because somewhere in the build cycle the Maven-Eclipse plugin was injecting the apklib dependencies into my Eclipse build in addition to the projects already referenced in Eclipse - so the same classes were fed to dx twice, hence "already added". Having finally worked out what was causing my problem it was pretty easy to resolve:

  • right-click the project, select properties
  • go to the Maven pane
  • uncheck "Resolve dependencies from workspace projects"
  • repeat for all of the apklib projects referenced by your apk project

The down-side of this is that if I make changes in my eclipse apklib projects I have to build the jars with Maven before the changes are available to the dependent apk projects. I actually slightly prefer working this way anyway - I find that a little bit of isolation helps.

I should probably point out that I am using Maven-3.0.3, the m2eclipse and m2e-android Eclipse plugins, and the very latest SDK at time of writing (r16). YMMV.

]]>
android eclipse ADT maven Wed, 11 Jan 2012 00:00:00 +0000
<![CDATA[Setting up Maven, Android and SVN for team development of multiple applications]]> http://steveliles.github.com/setting_up_maven_android_and_svn_for_team_development_of_multiple_applications.html If you don't yet have your Eclipse - ADT - Maven tool-chain set up you might be interested in the previous post. If you are joining a team that is already set up as I describe here, you probably want this post instead.

Google's ADT is great if you're working alone, but falls short when a team needs to work on the same Android project. It gets worse when you have multiple projects - especially if some are library projects.

It gets worse still if the development team is distributed (as we are) and/or running different development platforms - Windows, Linux, Mac OSX - (as we do). A description of how I've set things up follows this brief interlude:

Android Maven Eclipse Subversion, a lego comic strip made with Comic Strip It!

The important things I wanted to enable in our team environment are:

  1. That the whole team can "get" the latest code quickly and easily
  2. That the whole team can contribute updates to the codebase quickly and easily
  3. That any new team member coming on-board can build immediately from check-out (given a short list of pre-requisites)
  4. That any team member can easily, consistently and correctly build a signed apk for release to the market
  5. New projects can be created quickly and easily with a minimum of re-work and copy-paste in configuration
  6. Componentisation (e.g. jars and apklibs) is a Good Thing, and should be encouraged by making it as straight-forward as possible
  7. Developers have their choice of OS

Here's how I've set things up to support these goals...

Pre-requisites

I am assuming that:

  • You use some form of source-code control (Subversion/GIT/other...). Of course you do :)
  • All developers will install Eclipse and ADT for themselves as a pre-requisite.
  • If, as a team, you use Maven and/or Continuous integration, all developers will also install m2eclipse and m2e-android eclipse plugins and Maven 3 (see previous article).
  • You have some common practices in your team like, for example, checking out all projects as siblings in a single workspace directory (otherwise you'll have problems with sharing relative paths to referenced projects between developers).

Our Setup

I've set up projects in the workspace such that all of the following are siblings in a single workspace directory:

  • A parent project that hosts most of the maven-android config as a parent pom.
  • A project that hosts the keystore, and is checked in to source-code control (I actually use the same project for both the parent pom and keystore).
  • An Android Library project that contains a copy of the market licensing code (Google recommend keeping a separate copy outside of the SDK install directory). Ours is checked in to SVN for convenient sharing.
  • Multiple Android library (apklib) projects for our own code that is shared between multiple apps (apk's).
  • Multiple Android (apk) projects

Since most of the maven configuration is provided by the parent pom, each new project requires only minimal configuration. The parent pom for our android projects currently looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<project xsi:schemaLocation="
  http://maven.apache.org/POM/4.0.0 
  http://maven.apache.org/xsd/maven-4.0.0.xsd" 
  xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany</groupId>
  <artifactId>android</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>pom</packaging>
  <build>
    <sourceDirectory>src/java/main</sourceDirectory>
    <pluginManagement>
      <plugins>
        <!--This plugin's configuration is used to store 
                Eclipse m2e settings only. It has no influence 
                on the Maven build itself.-->
        <plugin>
          <groupId>org.eclipse.m2e</groupId>
          <artifactId>lifecycle-mapping</artifactId>
          <version>1.0.0</version>
          <configuration>
            <lifecycleMappingMetadata>
              <pluginExecutions>
                <pluginExecution>
                  <pluginExecutionFilter>
                    <groupId>
                      com.jayway.maven.plugins.android.generation2
                    </groupId>
                    <artifactId>android-maven-plugin</artifactId>
                    <versionRange>[3.0.0,)</versionRange>
                    <goals>
                      <goal>proguard</goal>
                    </goals>
                  </pluginExecutionFilter>
                  <action>
                    <ignore></ignore>
                  </action>
                </pluginExecution>
              </pluginExecutions>
            </lifecycleMappingMetadata>
          </configuration>
        </plugin>
      </plugins>
    </pluginManagement>
    <plugins>
      <plugin>
        <groupId>com.jayway.maven.plugins.android.generation2</groupId>
        <artifactId>android-maven-plugin</artifactId>
        <version>3.0.0</version>
        <configuration>
          <androidManifestFile>
            ${project.basedir}/AndroidManifest.xml
          </androidManifestFile>
          <assetsDirectory>${project.basedir}/assets</assetsDirectory>
          <resourceDirectory>${project.basedir}/res</resourceDirectory>
          <nativeLibrariesDirectory>
            ${project.basedir}/src/main/native
          </nativeLibrariesDirectory>
          <sdk>
            <platform>14</platform>
          </sdk>
          <proguard>
            <skip>false</skip>
          </proguard>
          <sign>
            <debug>false</debug>
          </sign>
          <deleteConflictingFiles>true</deleteConflictingFiles>
          <undeployBeforeDeploy>true</undeployBeforeDeploy>
        </configuration>
        <extensions>true</extensions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jarsigner-plugin</artifactId>
        <version>1.2</version>
        <executions>
          <execution>
            <id>signing</id>
            <goals>
              <goal>sign</goal>
            </goals>
            <phase>package</phase>
            <inherited>true</inherited>
            <configuration>
              <archiveDirectory></archiveDirectory>
              <includes>
                <include>target/*.apk</include>
              </includes>
              <keystore>../android/keystore</keystore>
              <storepass>keystore-password-goes-here</storepass>
              <keypass>key-password-goes-here</keypass>
              <alias>key-alias-goes-here</alias>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.3.2</version>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

An example of a pom from a library (apklib) project looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<project xsi:schemaLocation="
    http://maven.apache.org/POM/4.0.0 
    http://maven.apache.org/xsd/maven-4.0.0.xsd" 
  xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.mycompany</groupId>
    <artifactId>android</artifactId>
    <version>1.0-SNAPSHOT</version>
  </parent>
  <artifactId>android.util</artifactId>
  <version>1.0.1-SNAPSHOT</version>
  <name>Android Utils</name>
  <packaging>apklib</packaging>
  <description></description>
  <dependencies>
    <dependency>
      <groupId>com.google.android</groupId>
      <artifactId>android</artifactId>
      <version>2.2.1</version>
      <scope>provided</scope>
    </dependency>
        <!-- made available to android by 
             "maven android sdk deployer" -->
    <dependency>
      <groupId>android.support</groupId>
      <artifactId>compatibility-v13</artifactId>
      <version>r6</version>
    </dependency>
    <dependency>
      <groupId>oauth.signpost</groupId>
      <artifactId>signpost-core</artifactId>
      <version>1.2</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>oauth.signpost</groupId>
      <artifactId>signpost-commonshttp4</artifactId>
      <version>1.2</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>org.twitter4j</groupId>
      <artifactId>twitter4j-core</artifactId>
      <version>2.1.0</version>
    </dependency>
  </dependencies>
  <scm>
    <connection>scm:svn:svn://repo/project/trunk</connection>
    <developerConnection>
      scm:svn:svn://repo/project/trunk
    </developerConnection>
  </scm>
</project>

An example pom for an app (apk) project looks like this:

<project xsi:schemaLocation="
  http://maven.apache.org/POM/4.0.0 
  http://maven.apache.org/xsd/maven-4.0.0.xsd" 
  xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.mycompany</groupId>
    <artifactId>android</artifactId>
    <version>1.0-SNAPSHOT</version>
  </parent>
  <artifactId>android.ui</artifactId>
  <version>1.0.1-SNAPSHOT</version>
  <packaging>apk</packaging>
  <description></description>
  <dependencies>
    <dependency>
      <groupId>com.mycompany</groupId>
      <artifactId>domain</artifactId>
      <version>1.0.1-SNAPSHOT</version>
      <type>jar</type>
    </dependency>
    <dependency>
      <groupId>com.mycompany</groupId>
      <artifactId>android.util</artifactId>
      <version>1.0.1-SNAPSHOT</version>
      <type>apklib</type>
    </dependency>
    <!-- this project contains a copy of 
             the sdk licensing code -->
    <dependency>
      <groupId>com.mycompany</groupId>
      <artifactId>android.licensing</artifactId>
      <version>1.0.0-SNAPSHOT</version>
      <type>apklib</type>
    </dependency>
    <dependency>
      <groupId>com.google.android</groupId>
      <artifactId>android</artifactId>
      <version>2.2.1</version>
      <scope>provided</scope>
    </dependency>
        <!-- made available to android 
             by "maven android sdk deployer" -->
    <dependency>
      <groupId>android.support</groupId>
      <artifactId>compatibility-v13</artifactId>
      <version>r6</version>
    </dependency>
    <dependency>
      <groupId>oauth.signpost</groupId>
      <artifactId>signpost-core</artifactId>
      <version>1.2</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>oauth.signpost</groupId>
      <artifactId>signpost-commonshttp4</artifactId>
      <version>1.2</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>org.twitter4j</groupId>
      <artifactId>twitter4j-core</artifactId>
      <version>2.1.0</version>
    </dependency>
    <!-- prevent commons-logging from being included by 
         the Google HTTP client dependencies, which creates 
         a truck load of warnings and eventually kills eclipse -->
    <dependency>
        <groupId>commons-logging</groupId>
        <artifactId>commons-logging</artifactId>
        <version>1.1.1</version>
        <scope>provided</scope>
    </dependency>
  </dependencies>
  <scm>
    <connection>scm:svn:svn://repo/project/trunk</connection>
    <developerConnection>
      scm:svn:svn://repo/project/trunk
    </developerConnection>
  </scm>
</project>

Building a Release

Building a release, including running proguard to optimise and obfuscate the apk, and signing the apk from the shared keystore is now available from the maven cmdline with (as you'd expect):

mvn clean package

It's still early days for us, so I'm sure there are still wrinkles to iron out, but so far it seems to be working pretty well.

You can start the emulator and deploy the packaged apk into it with two further goals:

mvn android:emulator-start android:deploy

Enjoy!

]]>
android ADT maven team subversion release Mon, 09 Jan 2012 00:00:00 +0000
<![CDATA[Converting Eclipse ADT Android projects to build with Maven]]> http://steveliles.github.com/converting_eclipse_adt_android_projects_to_build_with_maven.html Getting Android Development Tools (ADT) for Eclipse to play nicely with Maven is quite a fiddle, involving a bunch of plugins for both Eclipse and Maven. Here's how I got it working (details after the comic-strip...). You might also be interested in two follow-up posts - setting up for team development with Android, Maven and Eclipse and joining a team developing with Android, Maven and Eclipse:

Converting Eclipse ADT projects to build with Maven, a lego comic strip made with Comic Strip It!

Plugins, Tools and Dependencies

  1. The Maven-Android plugin is Maven-3.0.3+ only, so you'll need to upgrade Maven if you are running an older version. The good news for Maven-2 users is the Maven guys worked hard to make 3 backwards compatible - and so far I've had no problems on some pretty complex projects.
  2. Eclipse Helios (3.6) or Indigo (3.7)
  3. The Android Developer Tools and SDK (of course).
  4. The m2eclipse Eclipse plugin (supposedly not required with Eclipse Indigo, but I had to install it)

Setup and Configuration

First, install maven 3.0.3 (or whatever newer maven is available).

Next install the m2eclipse plugin (you might want to check if you have it already - Indigo is supposed to come pre-supplied, but that probably depends on which Eclipse bundle you install. I usually go with Classic, and did not have m2eclipse. YMMV).

Now update your android sdk:

  • Using sdk manager, install all api levels you are interested in, including "google apis by google inc."
  • note: be sure to accept the license agreement for each selected jar (the ? should change to a green tick for ALL).
  • note: I find that the sdk manager either does not install all ticked packages in one go, or incorrectly reports the number of packages remaining to be installed - it "completes" but there are still pending installs (the "install N packages..." button re-enables with N > 0). I find it safest to restart SDK manager between each attempt so that it correctly shows what is installed.

If you want to work with Android 3 you need to perform an additional step. Maven Central does not have the jars available, so you'll need to use the maven-android-sdk-deployer to push them into your repository.

  • check out with git (git clone https://github.com/mosabua/maven-android-sdk-deployer.git)
  • install android jars as required by running mvn from inside the sdk deployer project directory, example: mvn install -P 1.6, or install the whole lot with mvn install
  • if you have a shared / remote / central repository as we do, you will want to deploy the android jars there too. To do this you need to fill two fields in the android-sdk-deployer's pom.xml that the creator Manfred Moser helpfully separated out

    <repo.id>kv-repository</repo.id>
    <repo.url>scp://my-repo-host/repository</repo.url>
    

OK, we're done with installing!

Create your pom.xml

There are various ways you can create a pom for your existing Android projects. I went with the simple expedient of using mvn archetype:generate ...

  • from a directory you are happy to create projects in, execute mvn archetype:generate
  • you will be presented with an enormous list of archetypes - type android and hit return
  • the list should have been filtered down to about 3 from "de.akquinet.android..."
  • select "de.akquinet.android.archetypes:android-quickstart" - for me this was option 1
  • follow the prompts to conclusion - this will create a simple android project, including the pom.xml for an apk project.

Once you've done that you can copy the pom to your existing project(s) and modify it manually - this is what I did.

(Note: If you are starting a fresh new project you can just run mvn clean eclipse:eclipse to generate the Eclipse project and classpath, then "import" the project into Eclipse. After importing, your project will just appear as a normal Java project (neither the Maven nor Android natures will be ascribed). To remedy that, right-click your project and go to Configure -> Convert to Maven Project; both natures are added automatically and you're ready to rock'n'roll.)

Integrate Eclipse and Maven

OK, last part ... Getting Eclipse and Maven to play nicely.

If you open the project in Eclipse now you'll probably find that it doesn't like your pom.xml. When you open the pom with m2eclipse installed it will open with the graphical xml editor. You'll notice that there's an error plugin execution not covered by lifecycle configuration....

error when pom packaging set to apk or apklib

Click the error and some details open up, including two quick fixes. Click the first quick fix ("discover new m2e connectors"). The following dialog pops up and after a short search, shows the m2e-android connector:

discover connectors dialog

Install the connector and the warnings should go away. Actually on one of my two machines they did not - I don't know why, but I had to take the 2nd quick-fix option of turning it off in Eclipse. For me that's just about ok, as I want the maven build to be the master anyway.

Congrats, you should now have a happy Eclipse project, and be able to build it using maven as expected.

What about android library projects?

Well, basically it's the same deal. I actually started with the library projects. The main difference is that the <packaging> element in the pom should be set to apklib instead of apk.

]]>
android eclipse ADT maven Wed, 04 Jan 2012 00:00:00 +0000
<![CDATA[Android Source Code available!]]> http://steveliles.github.com/android_source_code_available.html Android Source Code released, a lego minifigure comic strip made with Comic Strip It!

I'm quite surprised it took this long, but as of 13th December 2011 the source-code for Android is finally available as part of the SDK downloads, and can be grabbed with the SDK Manager.

There's no jar file available yet, just a folder containing the source for API level 14 and upwards only. The Android dev who announced this stated that it is "extremely unlikely" that the source for earlier API levels would ever be made available as part of the SDK.

Android Developer Tools (Eclipse plugin) doesn't automatically link to this folder either, so you need to set it up manually.

First, get the source:

  1. Fire up the SDK Manager - from Eclipse's "Window" menu, select "Android SDK Manager".
  2. Check the boxes next to "Sources for Android SDK" (at time of writing this is available for both API level 14 and 15).
  3. Click the "Install X packages..." button (bottom right corner), and wait while your new goodies download and install.

Once you're done downloading, head back to Eclipse and configure your Android project to use the source directory:

  1. In the package-explorer, locate the Android jar file (you may have to turn on "Show referenced libraries" from the little down-arrow icon in the top-right corner of the Package Explorer toolbar before it will show up in the list).
  2. Right-click the jar and choose "Properties" - a dialog pops up.
  3. Select "Java Source Attachment", then click "External Folder..."
  4. Navigate to the source directory, which will be located inside your Android SDK install directory (mine is at /home/steve/dev/sdks/android-sdk-linux/sources/android-14). Select OK to leave the file-chooser dialog, and OK again to leave the jar Properties dialog.
  5. Hover any Android class and ctrl-click (or place the caret on the class name and hit F3) to enjoy the source code for that class :)

For the full source-code saga check out the issue-tracker entry requesting the source to be made available (be prepared for a long read!)

]]>
Android Source Code SDK Tue, 20 Dec 2011 00:00:00 +0000