I faced this dilemma recently, when I was preparing the first release of the Cerberus utility for Android. On one hand, in Cerberus I used a tiny subset of Guava features which could be trivially rewritten in vanilla Java in 15 minutes, so maybe I shouldn't force Guava down people's throats? On the other hand I'm a huge fan of Guava and I think you should definitely use it in anything more complicated than a "Hello, world!" tutorial, because it either reduces boilerplate or replaces your handrolled utilities with better, faster and more thoroughly tested implementations.
The "this library bloats my apk" argument is moot, because you can easily set up a ProGuard configuration which only strips the unused code, without doing any expensive optimizations. It's a good idea anyway: the dex input will be smaller, which speeds up the build, and the apk will be smaller, which reduces the time required to upload and install the app on a device.
I did find one problem, though, which is a bit harder to solve. Modern versions of Guava use some Java 1.6 APIs, which are available from API level 9, so when you try to use them on Android 2.2 (API level 8), you'll get a NoSuchMethodError or some other unpleasant runtime error (side note: position #233 on my TODO list is a jar analyzer which finds this kind of problem). On Android 2.2 you're stuck with Guava 13.0.1.
This also extends to Guava as a library dependency. If one library supports Android 2.2 and older, it forces an old version of Guava as a dependency. And if another library depends on a more recent version of Guava, you're basically screwed.
One conclusion you could draw from this blog post is that you shouldn't use Guava in your open source libraries to prevent dependency hell, but that's throwing the baby out with the bathwater. The problem is not Guava or any other library, the problem is the Java 1.6 methods missing from Android API level 8! The statistics from Google indicate that Froyo is used by 1.6% of devices; in the case of the Base CRM user base it's only 0.2%. So the more reasonable course of action is to finally bump minSdkVersion to 10 (or even 14), both in your applications and in all the libraries.
Friday, December 27, 2013
Thursday, December 26, 2013
Offline mode in Android apps, part 1 - data migrations
This year I gave a talk at the Krakdroid conference about offline mode in Android apps. By offline mode I mean implementing the app in such a way that network availability is completely transparent to the end users. The high level implementation idea is to decouple the operations changing the data from sending these changes through the unreliable network, by saving the changes in a local database and sending them at a convenient moment. We encountered two major problems when we implemented this behavior in Base CRM: data migrations and identifying entities. This blog post describes the first issue.
It might not be obvious why you need data migrations, so let's clear this up. Let's say that on your mobile you have some data synced with the backend (green squares on the left and right) and some unsynced data created locally on the mobile (red squares on the left).
Now let's say that we introduce new functionality to our app, which changes the schema of our data models (the squares on the backend side are changed to circles).
The schema of the local database has to be changed as well. The naive way of handling this situation is dropping the old database with the old schema, creating a new one with the new schema and resyncing all the data from the backend, but there are two issues with this approach. The first one is that if there is a lot of data, the resyncing might take a while, which negates the most important advantage of offline mode - that the app is fully functional all the time.
The more serious issue is that dropping the old database means that the unsynced data will be dropped along with it.
The only way to provide the optimal user experience is to perform schema migrations locally for both synced and unsynced data:
Migrating the data doesn't sound like a challenging thing to code, but the combination of obscure SQLite and Android issues complicates the matter, and without the proper tools it's quite easy to make your code unmaintainable in the long run. I'll describe these issues and our solutions in further posts.
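To make the general idea concrete, here's a minimal sketch of an in-place migration in a SQLiteOpenHelper; the deals table and the new currency column are made up for illustration and have nothing to do with the real Base CRM schema:

public class DataOpenHelper extends SQLiteOpenHelper {
  private static final int DATABASE_VERSION = 2;

  public DataOpenHelper(Context context) {
    super(context, "data.db", null, DATABASE_VERSION);
  }

  @Override
  public void onCreate(SQLiteDatabase db) {
    // fresh installs get the new schema right away
    db.execSQL("CREATE TABLE deals (id INTEGER, name TEXT, currency TEXT)");
  }

  @Override
  public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
    // migrate the existing rows (synced and unsynced alike) instead of
    // dropping the table and resyncing everything from the backend
    if (oldVersion < 2) {
      db.execSQL("ALTER TABLE deals ADD COLUMN currency TEXT DEFAULT 'USD'");
    }
  }
}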
Tuesday, December 24, 2013
Krakdroid 2013
At the beginning of December I had an opportunity to give a talk at the Krakdroid conference. The organizers outdid themselves this year; the venue, the other speakers and the overall atmosphere of the event were amazing. Definitely the place to be if you're in Krakow at the end of the year.
This year I talked about the offline mode in Android apps. The talk was 30% sales pitch, 10% shameless plug and 60% describing the pitfalls one can fall into when implementing offline mode. I'm going to describe the two major problems with offline mode in detail on my blog; here are the slides:
Protip on giving a public speech - take a sip of water every 2-3 slides, especially if you're not feeling well and you have a sore throat.
I got an atrocious headache in the afternoon and went home, so I didn't see all the talks, but I did see a rare thing - a successful live coding session - by +Wojtek Erbetowski, who presented the RoboSpock testing framework and incrementally turned meh Java code into concise Groovy goodness. +Maciej Górski did a maps framework overview, and although he's the author of the excellent Android Maps Extensions he managed to be surprisingly objective. The first talk, by +Wojtek Kaliciński, triggered an internal discussion at Base about reducing the support for old Android versions. It won't happen overnight, but at least we've moved away from the dangerous "c'mon, it's not that hard to support Froyo" mindset. I'll definitely write more about this.
To summarise, it was a great event. I learned a lot, I met some interesting people and I gave another talk, which completes one of the goals I set for myself for 2013. Good stuff.
Tuesday, December 3, 2013
SQLite views gotcha
tl;dr: don't LEFT JOIN on a view, or you're gonna have a bad time.
Today I investigated a database performance issue in an Android app. The symptoms looked like a classic case of a missing index: the performance degraded as more data was added to certain tables. However, a quick check of the sqlite_master table and a look at some EXPLAIN QUERY PLAN queries indicated that everything was properly indexed (which is not very surprising, given that we use android-autoindexer).
I started dumping the explain query plans for every query and it turned out that some queries perform multiple table scans instead of a single scan of the main table plus indexed searches for the joined tables. In other words, the indices were in place, but they weren't used.
The common denominator of these queries was joining with a view. Here's the simplest schema which demonstrates the issue:
sqlite> create table x (id integer);
sqlite> create table y (id integer, x_id integer);
sqlite> explain query plan select * from x left join y on x.id = x_id;
selectid  order  from  detail
0         0      0     SCAN TABLE x (~1000000 rows)
0         1      1     SEARCH TABLE y USING AUTOMATIC COVERING INDEX (x_id=?) (~7 rows)

sqlite> create view yyy as select * from y;
sqlite> explain query plan select * from x left join yyy on x.id = x_id;
selectid  order  from  detail
1         0      0     SCAN TABLE y (~1000000 rows)
0         0      0     SCAN TABLE x (~1000000 rows)
0         1      1     SEARCH SUBQUERY 1 USING AUTOMATIC COVERING INDEX (x_id=?) (~7 rows)
Of course this behaviour is documented in the SQLite Query Planner overview (point 3 of the "Subquery flattening" paragraph), and I even remember reading these docs a few times, but I guess something like this has to bite me in the ass before I memorize it.
Everything works fine if you copy-paste the view's SELECT in place of the joined view, which makes me a sad panda, because I wish SQLite could do this for me. On the other hand it's a very simple workaround for this issue, and, with the right library, the code might even be manageable.
Wednesday, November 6, 2013
SQL injection through ContentProvider projection
SQL injection through query parameters is a common security issue in any system using an SQL database. Android is no different from any other system, so if you're using an SQLite database in your Android app, you should always sanitize the database inputs.
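As a reminder, the basic rule is to never concatenate user input into the SQL and to bind it through selectionArgs instead. A quick sketch (the contacts table and the userInput variable are made up for illustration):

// vulnerable: userInput becomes part of the raw SQL
Cursor bad = db.query("contacts", null,
    "name = '" + userInput + "'", null,
    null, null, null);

// safe: userInput is bound as a query parameter
Cursor good = db.query("contacts", null,
    "name = ?", new String[] { userInput },
    null, null, null);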
If you are also using an exported ContentProvider, you need to take care of one more attack vector: the projection parameter of the queries. Just like SQLiteDatabase, the ContentProvider allows its users to specify which columns they want to retrieve. It makes sense, because it reduces the amount of data fetched, which might improve performance and reduce the RAM footprint of your app. Unlike the SQLiteDatabase, however, the ContentProvider might be exported, which means that external applications can query the data from it requesting an arbitrary projection, which is then turned into a raw SQL query. For example:
Obligatory XKCD
'Bobby Tables was here'; DROP TABLE Students; --
* FROM sqlite_master; --
* FROM non_public_table_I_found_out_about_using_previous_query; --
Basically it means that if you expose a single Uri without sanitizing the projection, you have exposed your entire db.
So how do you sanitize your projections? I've given it some thought and it seems that the only sensible thing to do is to allow only subsets of a predefined set of columns.
You cannot allow arbitrary expressions, because that would include SELECTs from other tables, and allowing only certain expressions is not a trivial task.
You shouldn't ignore the provided projection and return all columns either, because one of the benefits of using projections is limiting the amount of data retrieved from the database. Besides, a certain widely used Google application ignores the existence of the Cursor.getColumnIndex() method and assumes that the columns will be returned in the same order they were specified in the projection. That app won't work correctly, and the users will probably blame you.
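A minimal sketch of such a check inside ContentProvider.query() - the set of allowed column names is of course app-specific and made up here:

private static final Set<String> ALLOWED_COLUMNS = ImmutableSet.of("_id", "name", "created_at");

private String[] validateProjection(String[] projection) {
  if (projection == null) {
    return null; // null means "all columns", which is safe to pass through
  }
  for (String column : projection) {
    if (!ALLOWED_COLUMNS.contains(column)) {
      throw new IllegalArgumentException("Column not allowed in projection: " + column);
    }
  }
  return projection;
}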
Tuesday, November 5, 2013
Android drawables stroke inconsistency
I've run into a funny little problem when creating custom drawables recently - some of the lines were crisp and some were blurred:
After a few debug iterations I was able to narrow the difference down to the shapes drawn using Canvas.drawRoundRect and Canvas.drawPath. The former looked much crisper. I dug down into the Skia classes and it turns out that they reach the same drawing function through slightly different code paths, and I guess at some point some rounding is applied in one of them, but I haven't verified this.
The minimal example which demonstrates the issue is two solid XML shape drawables (which are parsed into GradientDrawables): one with the radius defined in the radius attribute, the other with four radii defined (they can all be the same).
Besides satisfying my idle curiosity and honing my AOSP code diving skills, I have learned something useful: do not mix paths and round rects on a Canvas, and use Path.addRoundRect with a radii array when your path contains other curved shapes.
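For reference, here's a rough sketch of that second rule; it assumes it runs inside a custom drawable's draw() with a canvas and a paint already set up, and the sizes are arbitrary:

RectF bounds = new RectF(0, 0, 100, 50);
float r = 8f;

// instead of mixing canvas.drawRoundRect(bounds, r, r, paint) with drawPath() calls:
Path path = new Path();
float[] radii = { r, r, r, r, r, r, r, r }; // x/y radius for each of the four corners
path.addRoundRect(bounds, radii, Path.Direction.CW);
// ...add the remaining curved shapes to the same path here...
canvas.drawPath(path, paint);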
Sunday, November 3, 2013
Thneed, notes and db design
We're starting to find more and more interesting use cases for Thneed in the Base CRM codebase. The first release using it, and a few other libraries we recently developed, shipped just before Halloween, and we haven't registered any critical issues related to it. All in all, the results look very promising. I won't recommend using Thneed in your production builds yet, but I urge you to star the project on Github and watch its progress.
Thneed was created as an answer to some issues we faced when developing and maintaining Base CRM, and this fact is sometimes reflected in the API. An example of this is something we internally called PolyModels.
Let's start with a scenario where we have some objects we'd like to add notes to. It's a classic one-to-many relationship, which I'd model with a foreign key in the notes table:
CREATE TABLE some_entity (id INTEGER); CREATE TABLE notes ( id INTEGER, some_entity_id INTEGER REFERENCES some_entity(id), content TEXT );
Now let's introduce another type of object, which can also have notes attached to it. We have a few options now. The simplest thing to do is to keep these notes in a completely separate table:
CREATE TABLE other_entity (id INTEGER); CREATE TABLE other_entity_notes ( id INTEGER, other_entity_id INTEGER REFERENCES other_entity(id), content TEXT );
The issue with this solution is that we have two separate schemas that need to be updated in parallel, and which in 95% of cases would be exactly the same. Another approach is making the objects which contain notes sort of inherit from a base class:
CREATE TABLE notables (id INTEGER); CREATE TABLE some_entity (id INTEGER, notable_id INTEGER REFERENCES notables(id)); CREATE TABLE other_entity (id INTEGER, notable_id INTEGER REFERENCES notables(id)); CREATE TABLE notes ( id INTEGER, notable_id INTEGER REFERENCES notables(id), content TEXT );
These two solutions work perfectly in the "give me all notes for object X" scenario, but it gets ugly if you want to display a single note with a simple "Associated with object X" info. In this case you have to query every model which can contain notes to see if this particular association references an object from that model. On top of that, the notables table approach requires some additional work to create the entry in the notables table for every new noteable object.
You can always have several mutually exclusive foreign keys in your notes table:
CREATE TABLE some_entity (id INTEGER); CREATE TABLE other_entity (id INTEGER); CREATE TABLE notes ( id INTEGER, some_entity_id INTEGER REFERENCES some_entity(id), other_entity_id INTEGER REFERENCES other_entity(id), content TEXT );
But this solution doesn't really scale well as the number of models which can contain notes increases. Also, your DBAs will love you if you go this way.
The solution to this problem we used in Base was to have two columns in the notes table: one holding the type of the "noteable" object and the other holding the id of this object:
CREATE TABLE some_entity (id INTEGER); CREATE TABLE other_entity (id INTEGER); CREATE TABLE notes ( id INTEGER, noteable_id INTEGER, noteable_type TEXT, content TEXT );
The glaring issue with this approach is losing the consistency guarantee - no database I know of supports this kind of foreign key. But when you have SOA on the backend and the notes are stored in a different database than the noteable objects, this is not your top concern. In the mobile apps, even though we have a single database, we use the same structure, because all the other approaches have some implementation issues and worse performance characteristics.
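For completeness, fetching the notes for a given object with this schema is just a parametrized query. A sketch assuming the schema above; the "Contact" type tag, the db handle and the contactId variable are made up for illustration:

// all notes attached to a given contact
Cursor notes = db.query("notes", null,
    "noteable_type = ? AND noteable_id = ?",
    new String[] { "Contact", String.valueOf(contactId) },
    null, null, null);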
I'm not a db expert, and I haven't found any discussion of similar cases, which means that either a) we're doing something very wrong or b) we just have very specific requirements. Let me know if it's the former.
I needed to model these relationships in Thneed, which turned out to be quite tricky, but that's a topic for another blog post.
Monday, October 28, 2013
Mobilization 2013
I gave a talk at the Mobilization 2013 conference this weekend. I presented a few libraries created at Base CRM to keep the data model maintainable. Here are the slides:
Besides giving the talk, I attended a few other very interesting talks: +Wojtek Erbetowski showed the way they test Android apps at Polidea using RoboSpock; +Mateusz Grzechociński introduced Android devs to the new build system and shared an awesome Gradle protip: use the --daemon command line parameter to shave a few seconds off Gradle startup; +Mateusz Herych described the Dagger basics and warned about a few pitfalls; Mieszko Lassota described some UI blunders, not only from the programming world; finally, Krzysztof Kocel and Paweł Urban summarised the security pitfalls.
All in all, this year's Mobilization conference was a great place to be. See you there next year!
Friday, October 4, 2013
More Guava goodies - AbstractIterator
A while ago I wanted to perform a certain operation for every subsequent pair of elements in a collection, i.e. for the list [1, 2, 3, 4, 5] I wanted to do something with the pairs (1, 2), (2, 3), (3, 4), (4, 5). In Haskell that would be easy:
Prelude> let frobnicate list = zip (init list) (tail list)
Prelude> frobnicate [1..5]
[(1,2),(2,3),(3,4),(4,5)]

The problem is, I needed this stuff in my Android app, which means Java. The easiest thing to write would obviously be:
List<T> list;
for (int i = 1; i != list.size(); ++i) {
  T left = list.get(i - 1);
  T right = list.get(i);
  // do something useful
}

But where's the fun in that? Fortunately, there is Guava. It doesn't have the zip or init functions, but it provides a tool to write them yourself - the AbstractIterator. Tl;dr of the documentation: override computeNext() to return the next element, or return the special marker produced by endOfData() when there are no more elements.
The zip implementation is pretty straightforward:
public static <TLeft, TRight> Iterable<Pair<TLeft, TRight>> zip(final Iterable<TLeft> left, final Iterable<TRight> right) {
  return new Iterable<Pair<TLeft, TRight>>() {
    @Override
    public Iterator<Pair<TLeft, TRight>> iterator() {
      final Iterator<TLeft> leftIterator = left.iterator();
      final Iterator<TRight> rightIterator = right.iterator();

      return new AbstractIterator<Pair<TLeft, TRight>>() {
        @Override
        protected Pair<TLeft, TRight> computeNext() {
          if (leftIterator.hasNext() && rightIterator.hasNext()) {
            return Pair.create(leftIterator.next(), rightIterator.next());
          } else {
            return endOfData();
          }
        }
      };
    }
  };
}

The tail can be achieved simply by calling Iterables.skip:
public static <T> Iterable<T> getTail(Iterable<T> iterable) {
  Preconditions.checkArgument(iterable.iterator().hasNext(), "Iterable cannot be empty");
  return Iterables.skip(iterable, 1);
}

For init you could write a similar function:
public static <T> Iterable<T> getInit(final Iterable<T> iterable) {
  Preconditions.checkArgument(iterable.iterator().hasNext(), "Iterable cannot be empty");
  return Iterables.limit(iterable, Iterables.size(iterable) - 1);
}

But this will iterate through the entire iterable just to count its size. We don't need the count, however - we only need to know if there is another element in the iterable. Here is a more efficient solution:
public static <T> Iterable<T> getInit(final Iterable<T> iterable) {
  Preconditions.checkArgument(iterable.iterator().hasNext(), "Iterable cannot be empty");
  return new Iterable<T>() {
    @Override
    public Iterator<T> iterator() {
      final Iterator<T> iterator = iterable.iterator();

      return new AbstractIterator<T>() {
        @Override
        protected T computeNext() {
          if (iterator.hasNext()) {
            T t = iterator.next();
            if (iterator.hasNext()) {
              return t;
            }
          }
          return endOfData();
        }
      };
    }
  };
}

All methods used together look like this:
List<T> list; for (Pair<T, T> zipped : zip(getInit(list), getTail(list))) { // do something useful }
Monday, September 30, 2013
Mobilization 2013 and Android Tech Talks meetup
I'll give a presentation at this year's Mobilization conference in Łódź on October 26th:
I'll talk about the challenges related to ContentProvider, and the data model in general, that we faced during 2 years of development of Base CRM for Android. Even if this particular topic does not concern you, the agenda is ripe with other interesting Android topics: dependency injection with Dagger, Gradle, unit testing, continuous integration. It's not an Android-specific event - there are also several presentations about other mobile platforms.
If you already have other plans for October 26th, want to share some war stories related to the data model on Android, or just want to talk about Android with fellow geeks, I recommend a meetup happening next week in Kraków: Android Tech Talks #3. I'll give a short topic intro, which (I hope) will be followed by a deep, insightful discussion.
Saturday, September 21, 2013
Guava goodies
This is a long overdue follow-up to my Guava on Android post from February. Since then I've been using Guava in pretty much every Java project I've been involved in, and I still find new stuff that makes my code both shorter and clearer. Some random examples:
Objects.equal()
// instead of:
boolean equal = one == null ? other == null : one.equals(other);

// Guava style:
boolean equal = Objects.equal(one, other);

Objects.hashCode()

// instead of:
@Override
public int hashCode() {
  int result = x;
  result = 31 * result + (y != null ? Arrays.hashCode(y) : 0);
  result = 31 * result + (z != null ? z.hashCode() : 0);
  return result;
}

// Guava style:
@Override
public int hashCode() {
  return Objects.hashCode(x, y, z);
}

Joiner

// instead of:
StringBuilder b = new StringBuilder();
for (int i = 0; i != a.length; ++i) {
  b.append(a[i]);
  if (i != a.length - 1) {
    b.append(", ");
  }
}
return b.toString();

// Guava style:
Joiner.on(", ").join(a);

ComparisonChain

// instead of:
@Override
public int compareTo(Person other) {
  int cmp = lastName.compareTo(other.lastName);
  if (cmp != 0) {
    return cmp;
  }
  cmp = firstName.compareTo(other.firstName);
  if (cmp != 0) {
    return cmp;
  }
  return Integer.compare(zipCode, other.zipCode);
}

// Guava style:
@Override
public int compareTo(Person other) {
  return ComparisonChain.start()
      .compare(lastName, other.lastName)
      .compare(firstName, other.firstName)
      .compare(zipCode, other.zipCode)
      .result();
}
The Lists, Maps and Sets classes contain a bunch of newFooCollection() factory methods, which effectively replace the diamond operator from JDK 7, but also allow you to initialize the collection from varargs.
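For example (plain Guava APIs, nothing project-specific):

List<String> names = Lists.newArrayList("Alice", "Bob", "Carol");
Map<String, Integer> ages = Maps.newHashMap();
Set<Integer> primes = Sets.newHashSet(2, 3, 5, 7);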
Sets also contains the difference, intersection, etc. methods for common operations on sets, which a) have sane names, unlike some stuff from the JDK's Collections, and b) don't modify their operands, so you don't have to make a defensive copy if you want to use the same set in two operations.
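For example:

Set<Integer> a = Sets.newHashSet(1, 2, 3);
Set<Integer> b = Sets.newHashSet(2, 3, 4);

Sets.SetView<Integer> onlyInA = Sets.difference(a, b);   // [1]
Sets.SetView<Integer> inBoth = Sets.intersection(a, b);  // [2, 3]
// a and b are left untouched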
Speaking of defensive copying: Guava has a set of Immutable collections, which were created just for this purpose. There are a few other very useful collections: LoadingCache, which you can think of as a lazy map with a specified generator for new items; Multiset, very handy if you need to build something like a histogram; and Table, if you need to look up a value using two keys.
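Two quick sketches of the first two (the contents are made up for illustration):

LoadingCache<String, Integer> wordLengths = CacheBuilder.newBuilder()
    .maximumSize(1000)
    .build(new CacheLoader<String, Integer>() {
      @Override
      public Integer load(String word) {
        return word.length(); // called only for keys not in the cache yet
      }
    });
int length = wordLengths.getUnchecked("guava"); // 5

Multiset<String> tags = HashMultiset.create();
tags.add("android");
tags.add("android");
tags.add("java");
int androidCount = tags.count("android"); // 2 - a histogram for free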
The other stuff I use very often are the Preconditions. It's just syntactic sugar for some sanity checks in your code, but it makes them more obvious, especially when you skim through some unfamiliar code. Bonus points: if you don't use the return values from checkNotNull and checkPositionIndex, you can remove those checks from performance critical sections using Proguard.
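For example (the Person constructor and its fields are made up for illustration):

public Person(String name, int age) {
  this.name = Preconditions.checkNotNull(name, "name cannot be null");
  Preconditions.checkArgument(age >= 0, "age must be non-negative, got %s", age);
  this.age = age;
}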
On Android you have the Log.getStackTraceString() helper method, but in plain Java you'd have to build one from Throwable.getStackTrace(). Only you don't have to, since Guava has the Throwables.getStackTraceAsString() utility method.
Guava also introduces some functional idioms in the form of Collections2.transform and Collections2.filter, but I have mixed feelings about them. On one hand they are sometimes life savers, but usually they make the code much uglier than the good ol' imperative for loop, so use them with caution. They get especially ugly when you need to chain multiple transformations and filters, but for this case Guava provides FluentIterable.
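A chained sketch - the Person class and its getters are made up for illustration:

List<String> adultNames = FluentIterable.from(people)
    .filter(new Predicate<Person>() {
      @Override
      public boolean apply(Person person) {
        return person.getAge() >= 18;
      }
    })
    .transform(new Function<Person, String>() {
      @Override
      public String apply(Person person) {
        return person.getName();
      }
    })
    .toList();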
None of the APIs listed above is absolutely necessary, but seriously, you want to use Guava (though sometimes not the latest version). Each part of it raises the abstraction level of your code a tiny bit, improving it one line at a time.
Thursday, September 12, 2013
Forger library
Sometimes the code you write is hard to test, and the most likely reason for this is that you wrote shitty code. Other times the code is quite easy to test, but setting up the test fixture is extremely tedious. This may also mean that you wrote shitty code, but it may just as well mean that you don't have the right tools.
For me the most painful part of writing tests was filling the data model with some fake data. The most straightforward thing to do is to write helper methods for creating this data, but this means you'll have two pieces of code to maintain side by side: the data model and the helper methods. The problem gets even more complicated when you need to create a whole hierarchy of objects.
The first step is generating valid ContentValues for your data model. You need to know the column names and the type of data that should be generated for a given column. Note that for the column data type you cannot really use the database table definitions - for example SQLite doesn't have a boolean column type, so you'd define your column as an integer, but the valid values for this column are only 1 and 0.
This is not enough though, because you'd generate random values for the foreign keys, which might crash the app (if you enforce the foreign key constraints) or break some other invariants in your code. You might work around this by creating the objects in the right order and overriding the generated data for foreign key columns, but that would be a tedious and error-prone solution.
I have recently posted about my two side-projects: MicroOrm and Thneed. The former lets you annotate fields in a POJO and handles the conversion from POJO to ContentValues and from Cursor to POJO:
public class Customer {
  @Column("id")
  public long id;

  @Column("name")
  public String name;
}

public class Order {
  @Column("id")
  public long id;

  @Column("amount")
  public int amount;

  @Column("customer_id")
  public long customerId;
}

The latter allows you to define the relationships between entities in your data model:
ModelGraph<ModelInterface> modelGraph = ModelGraph.of(ModelInterface.class)
    .identifiedByDefault().by("id")
    .where()
    .the(ORDER).references(CUSTOMER).by("customer_id")
    .build();

See what I'm getting at?
The returned ModelGraph object is a data structure that can be processed by independently written processors, i.e. they are the Visitable and Visitor parts of the visitor design pattern. The entities in the relationship definitions are not plain marker Objects - the first builder call specifies the interface they have to implement. This interface can be used by Visitors to get useful information about the connected models and, as a type parameter of ModelGraph, it ensures that you are using the correct Visitors for your ModelGraph. See my last post about Visitors for more information about generifying the visitor pattern.
In our case the interface should declare which POJO contains the MicroOrm annotations and where the generated ContentValues should be inserted:
public interface MicroOrmModel {
  public Class<?> getModelClass();
}

public interface ContentResolverModel {
  public Uri getUri();
}

interface ModelInterface extends ContentResolverModel, MicroOrmModel {
}

public static final ModelInterface CUSTOMER = new ModelInterface() {
  @Override
  public Uri getUri() {
    return Customers.CONTENT_URI;
  }

  @Override
  public Class<?> getModelClass() {
    return Customer.class;
  }
};

The final step is to wrap everything in a fluent API:
Forger<ModelInterface> forger = new Forger(modelGraph, new MicroOrm());

Order order = forger.iNeed(Order.class).in(contentResolver);
// note: we didn't create the Customer dependency of Order, but:
assertThat(order.customer_id).isNotEqualTo(0);

// of course we can create Customer first and then create Order for it:
Customer customer = forger.iNeed(Customer.class).in(contentResolver);
Order anotherOrder = forger.iNeed(Order.class).relatedTo(customer).in(contentResolver);
assertThat(anotherOrder.customer_id).isEqualTo(customer.id);

// or if we need multiple orders for the same customer:
Customer anotherCustomer = forger.iNeed(Customer.class).in(contentResolver);
Forger<ModelInterface> forgerWithContext = forger.inContextOf(anotherCustomer);
Order orderA = forgerWithContext.iNeed(Order.class).in(contentResolver);
Order orderB = forgerWithContext.iNeed(Order.class).in(contentResolver);
assertThat(orderA.customer_id).isEqualTo(anotherCustomer.id);
assertThat(orderB.customer_id).isEqualTo(anotherCustomer.id);

The most pathological case in our code base was a test with 10 lines of code calling over 100 lines of helper methods and 6 lines of actual test logic. The Forger library allowed us to get rid of all the helper methods and reduce the 10 lines of setup to a single fluent API call (it's quite a long call split into a few lines, but it's much prettier than the original code).
Check out the code on github and don't forget to star this project if you find it interesting.
The funny thing about this project is that it's a byproduct of Thneed, which I originally wrote to solve another problem. It makes me think that the whole idea of defining the relationships as a visitable structure is more flexible than I originally anticipated, and it might become the cornerstone of a whole set of useful tools.
Sunday, August 25, 2013
Random musings on Visitor design pattern in Java
First, let's have a quick refresher on what the visitor design pattern is. This pattern consists of two elements: the Visitor, which in the Gang of Four book is defined as an "operation to be performed on the elements of an object structure"; the second element is the structure itself.
public interface Visitable {
  void accept(Visitor visitor);
}

public interface Visitor {
}

The Visitor interface is empty for now, because we haven't declared any Visitable types. In every class implementing the Visitable interface we'll call a different method in the Visitor:
public interface Visitable {
  void accept(Visitor visitor);
}

public static class VisitableFoo implements Visitable {
  @Override
  public void accept(Visitor visitor) {
    visitor.visit(this);
  }
}

public static class VisitableBar implements Visitable {
  @Override
  public void accept(Visitor visitor) {
    visitor.visit(this);
  }
}

public interface Visitor {
  void visit(VisitableBar visitableBar);
  void visit(VisitableFoo visitableFoo);
}

Sounds like a lot of work, but there is a reason for it. You could achieve something similar by simply adding another method to the Visitable interface, but this means you'd have to be able to modify the Visitable classes. The visit/accept double dispatch allows you to write a library like Thneed, which defines the data structure but leaves the implementation of the operations to the library users.
The classic visitor pattern requires you to keep some kind of state and to write a method for getting and clearing this state. This might be what you want, but if you simply want to process your Visitable objects one by one and return independent computations, you might want to just return a value from the visit() method. So the first twist you can add to the classic Visitor pattern is returning a value from the visit/accept methods:
public interface Visitable {
  <TReturn> TReturn accept(Visitor<TReturn> visitor);
}

public static class VisitableFoo implements Visitable {
  @Override
  public <TReturn> TReturn accept(Visitor<TReturn> visitor) {
    return visitor.visit(this);
  }
}

public static class VisitableBar implements Visitable {
  @Override
  public <TReturn> TReturn accept(Visitor<TReturn> visitor) {
    return visitor.visit(this);
  }
}

public interface Visitor<TReturn> {
  TReturn visit(VisitableBar visitableBar);
  TReturn visit(VisitableFoo visitableFoo);
}

Note that only the Visitor interface is parametrized with the return type. The only thing Visitable.accept() does is dispatch the call to the Visitor, so there is no point in generifying the whole interface - it's sufficient to make the accept method generic. In fact, making TReturn a type parameter of the Visitable interface would be a design mistake, because you wouldn't be able to create a Visitable that could be accepted by Visitors with different return types:
public static class MyVisitable implements Visitable<String>, Visitable<Integer> {
  // Invalid! "Duplicate class Visitable" compilation error.
}

Another thing you can do is generify the whole pattern. The use case for this is when your Visitables are some kind of containers or wrappers over objects (again, see the Thneed library, where the Visitable subclasses are the different kinds of relationships between data models and are parametrized with the type representing the data models). The naive way to do this is just adding the type parameters:
public interface Visitable<T> {
  void accept(Visitor<T> visitor);
}

public static class VisitableFoo<T> implements Visitable<T> {
  @Override
  public void accept(Visitor<T> visitor) {
    visitor.visit(this);
  }
}

public static class VisitableBar<T> implements Visitable<T> {
  @Override
  public void accept(Visitor<T> visitor) {
    visitor.visit(this);
  }
}

public interface Visitor<T> {
  void visit(VisitableBar<T> visitableBar);
  void visit(VisitableFoo<T> visitableFoo);
}

There is a problem with the signatures of those interfaces. Let's say that we want our Visitor to operate on Visitables containing Numbers:
Visitor<Number> visitor = new Visitor<Number>() {
  @Override
  public void visit(VisitableBar<Number> visitableBar) {
  }

  @Override
  public void visit(VisitableFoo<Number> visitableFoo) {
  }
};

You should think about the Visitor as the method accepting the Visitable. If our Visitor can handle something that contains Number, it should also handle something that contains any Number subclass - it's a classic example of the "producer extends, consumer super" rule, or covariance and contravariance in general. In the implementation above, however, the strict generic types are causing compilation errors. Generics wildcards to the rescue:
public interface Visitable<T> {
  void accept(Visitor<? super T> visitor);
}

public static class VisitableFoo<T> implements Visitable<T> {
  @Override
  public void accept(Visitor<? super T> visitor) {
    visitor.visit(this);
  }
}

public static class VisitableBar<T> implements Visitable<T> {
  @Override
  public void accept(Visitor<? super T> visitor) {
    visitor.visit(this);
  }
}

public interface Visitor<T> {
  void visit(VisitableBar<? extends T> visitableBar);
  void visit(VisitableFoo<? extends T> visitableFoo);
}

Note that the change has to be symmetric, i.e. both the accept() and visit() signatures have to include the bounds. Now we can safely call:
VisitableBar<Integer> visitableBar = new VisitableBar<Integer>();
Visitor<Number> visitor = new Visitor<Number>() {
  // visit() implementations
};
visitableBar.accept(visitor);
Tuesday, August 20, 2013
Proguard gotcha
A while ago I wrote about removing the logs from release builds using Proguard. As usual, I've found a gotcha that might cost you a couple of hours of head scratching.
Let's say that we have a code like this somewhere:
package com.porcupineprogrammer.proguardgotcha;

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

public class MainActivity extends Activity {
  static final String TAG = "ProguardGotcha";

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    Log.d(TAG, doNotRunOnProduction());
  }

  private String doNotRunOnProduction() {
    Log.e(TAG, "FIRE ZE MISSILES!");
    return "Harmless log message";
  }
}

The doNotRunOnProduction() method might perform some expensive database query, send some data over the network or launch intercontinental missiles - anyway, do something that you don't want to happen in the production app. If you run the code in a debug build you'll of course get the following logs:
08-20 19:31:34.183 1819-1819/com.porcupineprogrammer.proguardgotcha E/ProguardGotcha: FIRE ZE MISSILES!
08-20 19:31:34.183 1819-1819/com.porcupineprogrammer.proguardgotcha D/ProguardGotcha: Harmless log message

Now, let's add a Proguard config that removes all the Log.d() calls:
-assumenosideeffects class android.util.Log {
    public static *** d(...);
}

We might expect the Log.e() call to be gone as well, but alas, here is what we get:
08-20 19:34:45.733 2078-2078/com.porcupineprogrammer.proguardgotcha E/ProguardGotcha: FIRE ZE MISSILES!The key point to understanding what is happening here is the fact that the Proguard does not operate on the source code, but on the compiled bytecode. In this case, what the Proguard processes is more like this code:
@Override
protected void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);

  String tmp = doNotRunOnProduction();
  Log.d(TAG, tmp);
}

One might argue that the temporary variable is not used and Proguard should remove it as well. That might actually happen if you add some other Proguard configuration settings, but the point of this blog post is that when you specify that you want to remove calls to Log.d(), you shouldn't expect any other calls to be affected. They might be, but if your code really launches the missiles (or does something with a similarly serious effect for your users), you don't want to bet on it.
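If a call really must not happen in production, it's safer to guard it explicitly than to rely on Proguard's dead code removal. A sketch of such a guard (BuildConfig.DEBUG is generated by the Android build tools; any constant you control would do just as well):

@Override
protected void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);

  if (BuildConfig.DEBUG) {
    // doNotRunOnProduction() is never evaluated in release builds
    Log.d(TAG, doNotRunOnProduction());
  }
}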
Monday, August 19, 2013
Introducing: Merry Cook
After a long hiatus since November 2011 I have released another clone of a classic Russian handheld game from the '80s - Merry Cook. I knew that the "Nu, Pogodi!" code wasn't my top achievement and I had to force myself to dive into it, but I feel it was worth it. A few things I think I did right this time:
- Do not keep *any* game logic in QML. Qt has an excellent state machine framework, which makes writing the game logic in C++ relatively easy.
- Keep the QML/C++ interface as simple as possible. Send signals from QML to C++ when user takes some action and update the QML UI from the C++ side by changing QProperties on some context property object. I've actually used two objects for that, because it made testing a bit easier.
- Unit tests. I've set up the testing harness using gmock/gtest and I've used it to unit test some things. I probably would have been fine without them, since Merry Cook is a very simple game, but a) it forced me to divide stuff into more manageable classes and b) it gave me a sense of accomplishing something early. It's funny, because even though I'm absolutely conscious of the latter fact, I think it gave me enough boost to get to the point where I had moved forward with the implementation and polishing, because I really wanted to publish this game.
- QProperty helper. I wrote an abominable macro for reducing the QProperty boilerplate:
Things still on my TODO list:
- More tests. Besides unit tests I'd also like to write some integration tests for the state machine setup and connections, but I didn't have time to think how this should be done without making too much state public just for testing. Maybe next time.
- Refactor "Nu, Pogodi!". I jumped straight into new project, but I should have started with refactoring the old crap. On the other hand, it might have sucked out all the motivation out of me, and had I done it, I wouldn't have been writing this post right now. So, maybe next time.
- Passing enums to QML. I have no idea what I did wrong, but I couldn't get the QML to see my C++ enums. I've resorted to passing them as simple ints and using magic numbers on QML side, but it's definitely something I should fix. Obviously not now, but next time.
Anyways, I'm really happy with the final results, especially with the gameplay experience, which I think mimics the original game very well. Try it yourself!
Sunday, August 18, 2013
Gradle - first impressions
Android Studio kept nagging me about the make implementation deprecation, so I decided to try the new build system based on Gradle. At first I obviously hit the missing Android Support Repository issue, but after installing the missing component in the Android SDK Manager everything was created correctly (AFAIK v0.2.3 of Android Studio doesn't have this issue anymore). On Mac I also had to set the ANDROID_HOME env variable to be able to build stuff from the command line.
The app templates are a bit outdated; for example, you can get rid of libs/android-support-v4.jar, because Gradle will use the jar from the aforementioned Android Support Repository anyway. The build.gradle also references an older support lib and build tools, so you should probably bump them to the latest versions.
Adding a dependency on a local jar is trivially easy - we need just one line in the dependencies section:
dependencies {
  compile files("libs/gson-2.2.4.jar")
}

You can also define a dependency on every jar in the libs directory:
dependencies {
  compile fileTree(dir: 'libs', include: '*.jar')
}

Using code annotation processors (like butterknife) is also trivial:
repositories {
  mavenCentral()
}

dependencies {
  compile 'com.jakewharton:butterknife:2.0.1'
}

The first of Gradle's ugly warts is related to the native libs support. You can add the directory with *.so files and the build will succeed, but you'll get runtime errors when your app tries to call a native method. The workaround found on teh interwebs is to copy your native libs into the following directory structure:
lib
lib/mips/*.so
lib/other_architectures/*.so
lib/x86/*.so

NOTE: there is no typo - the top level directory should be a singular "lib". Then you have to zip the whole thing, rename it to *.jar and include it as a regular jar library. Lame, but it does the trick.
Let's get back to the good parts. The list of tasks returned by the "gradlew tasks" command contains the installDebug task, but not the installRelease one. This happens because there is no default apk signing configuration for release builds. The simplest workaround is to use the same configuration as the debug builds:
android {
    buildTypes {
        release {
            signingConfig signingConfigs.debug
        }
    }
}

But in a real project you should of course define a real signing configuration, along these lines:
android {
    signingConfigs {
        release {
            storeFile file("release.keystore")
            storePassword "XXX"
            keyAlias "XXX"
            keyPassword "XXX"
        }
    }
    buildTypes {
        release {
            signingConfig signingConfigs.release
        }
    }
}

The other useful setting that goes into the buildTypes section is the Proguard configuration. Proguard is disabled by default in gradle builds so we need to turn it on for release builds; we also need to specify the rules to be used by Proguard:
android {
    buildTypes {
        release {
            runProguard true
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), file('proguard').listFiles()
            signingConfig signingConfigs.release
        }
    }
}

There are two nice things about this configuration: we can easily specify that we want to use the default rules defined in the Android SDK, and we can specify multiple additional files. In the configuration above I use all files from the 'proguard' directory, but you can define a simple list of files as well. It allows you to create reusable Proguard config files for commonly used libraries like ActionBarSherlock or google-gson.
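For example, a per-library rules file for google-gson could contain something along these lines (an illustrative sketch based on gson's commonly published ProGuard guidance; the file path and the model package are placeholders, and each library's own documentation remains the authoritative source of rules):

# proguard/gson.txt
-keepattributes Signature
-keep class sun.misc.Unsafe { *; }
# keep your own model classes if they are (de)serialized via reflection
-keep class com.example.myapp.model.** { *; }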
So far so good. Let's declare the dependency on another project (a.k.a. module):
dependencies {
    compile project(':submoduleA')
}

Note that this is also declared in the app project's build.gradle. It's perfectly fine to include this kind of dependency in your app project, but I'm not happy with this solution for declaring dependencies between subprojects, because we're introducing dependencies into the main project's structure.
// in build.gradle in main project
dependencies {
    compile project(':submoduleA')
    compile project(':submoduleB')
}

// in build.gradle of submoduleB, which depends on submoduleA
dependencies {
    compile project(':submoduleA')
}

It's especially bad when those subprojects are reusable libraries which should be completely separate from your main project. The workaround I read about, but haven't tested myself, is creating a local Maven repository and publishing the artifacts from subprojects. AFAIK you still have to publish the artifacts in the right order, so you still have to kind of manually manage your dependencies, which IMO defeats the purpose.
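For reference, a rough sketch of that untested workaround using the old maven plugin might look like this in a subproject's build.gradle (the repository path is an arbitrary assumption):

// in submoduleA/build.gradle
apply plugin: 'maven'

group = 'com.example'      // placeholder coordinates
version = '0.1'

uploadArchives {
    repositories {
        mavenDeployer {
            repository(url: "file://${rootProject.projectDir}/local-repo")
        }
    }
}

Consumers would then add that directory as a maven repository and depend on the published coordinates instead of project(':submoduleA').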
I feel I'm missing something elementary. The way I expect it to work is to define in each project what kind of artifacts it creates and which artifacts it depends on, and let Gradle figure out the order of building subprojects. Please drop me a line if what I just wrote doesn't make any sense, I expect too much from the build system, or I missed some basic stuff.
Another thing that's not so great is the long startup time. Even getting the list of available tasks for a simple project takes between 5 and 8 seconds on a 2012 MBP, every single time. I understand why this happens - build configs theoretically can check the weather forecast and use a different configuration on rainy days - and that this overhead is negligible when compared to the actual build time, but every time I stare at this "Loading" prompt I think that this should be somehow cached.
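One partial remedy worth noting here (a side note of mine, not part of the original setup) is keeping a warm Gradle daemon around, which at least avoids paying the JVM startup cost on every invocation:

# in gradle.properties (per project) or ~/.gradle/gradle.properties (globally)
org.gradle.daemon=true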
It's time to wrap this blog post up. The main question I asked myself was: is it worth moving to Gradle? I'd say that if you have a manageable Maven build, then you shouldn't bother (yet), but it's a huge step forward when compared to ant builds.
Saturday, August 17, 2013
MicroOrm and Thneed available on Maven Central
I've uploaded my two experimental projects, MicroOrm and Thneed, to Maven Central. If you want to try them out, just add the following lines to your pom.xml:
<dependency>
  <groupId>org.chalup.thneed</groupId>
  <artifactId>thneed</artifactId>
  <version>0.3</version>
</dependency>
<dependency>
  <groupId>org.chalup.microorm</groupId>
  <artifactId>microorm</artifactId>
  <version>0.2</version>
</dependency>

Don't hesitate to email me, create an issue on github or propose a pull request. Any form of feedback is welcome!
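If you build with Gradle instead of Maven, the same coordinates should translate to the usual one-liners (assuming the artifacts are resolved from Maven Central):

dependencies {
    compile 'org.chalup.thneed:thneed:0.3'
    compile 'org.chalup.microorm:microorm:0.2'
}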
Thursday, August 15, 2013
Thneed library
The MicroOrm library I started a while ago solves only a tiny part of data model related problems - conversion between strongly typed objects and the storage classes specific to Android. We discussed a few existing libraries for data model implementation we might use at Base CRM, but we were not fully satisfied with any of them. There are two approaches to this problem:
The first approach is to define the Data Access Objects / entity objects and create SQLite tables using this data. Almost every ORM solution for Android works this way. The deal breaker for those solutions is the complete disregard for data migrations. The ORMLite docs suggest that you should just fall back to the raw queries, but this means that you need to know the schema generated from DAOs, which is a classic case of leaky abstraction.
The completely opposite approach is used in the Mechanoid library. You define the database schema as a sequence of migrations and the library generates the DAOs and some other stuff. I was initially very excited about this project, but it's in a very early state of development and the commit activity is not very high. The main problem with this concept is extensibility and customization. For both you probably have to change the way the code is generated from the parsed SQLite schema. We also have some project specific issues that would make this project hard to use.
In the end we didn't find an acceptable solution among existing libraries and frameworks, but something good came out of our discussions. The sentence which came up again and again was "It wouldn't be too hard to implement if we knew the relationships between our models". Wait a minute, we do know these relationships! We just need a way to represent them in the Java code!
And so, the Thneed was born.
By itself Thneed doesn't do anything useful - it just lets you state that one X has many Ys and so on, building a relationship graph of your data models. The trick is that this graph is the Visitable part of the Visitor design pattern, which means that you can write any number of Visitors to do something useful with the information about declared relationships (see the project's readme for some ideas). Think about it as a tool for creating other tools.
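To make the idea concrete, here is a tiny, hypothetical sketch of the pattern - the class and method names are made up for illustration and are not Thneed's actual API:

import java.util.ArrayList;
import java.util.List;

class RelationshipGraph {
  // a "one parent has many children" relationship between two model classes
  static class OneToMany {
    final Class<?> parent;
    final Class<?> child;

    OneToMany(Class<?> parent, Class<?> child) {
      this.parent = parent;
      this.child = child;
    }
  }

  interface Visitor {
    void visit(OneToMany relationship);
  }

  private final List<OneToMany> relationships = new ArrayList<OneToMany>();

  RelationshipGraph oneToMany(Class<?> parent, Class<?> child) {
    relationships.add(new OneToMany(parent, child));
    return this;
  }

  // the graph stays dumb: every consumer is just another Visitor,
  // e.g. a schema generator, a cascading-delete helper or a sync planner
  void accept(Visitor visitor) {
    for (OneToMany relationship : relationships) {
      visitor.visit(relationship);
    }
  }
}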
The project is in a very early stage, but I've already started another project on top of Thneed and at this point the general idea seems sound. I've also learned a few tricks I'll write about in a little while. As usual, feedback is welcome, and if you find this idea interesting, do not hesitate to star the project on Github.
Wednesday, June 26, 2013
Guava and minSdkVersion
A while ago I wrote about the pre-dexing feature introduced at the end of 2012, which facilitates using large Java libraries like Guava for developing Android apps. A few months later, I'm still discovering stuff in Guava that makes my life easier (BTW: I still owe you a blog post with a list of Guava goodies). But this week, for a change, I've managed to make my life harder with Guava.
I wanted to include the javadocs and source jars for Guava, and when I opened maven central I saw the new version and decided to upgrade from 13.0.1 to 14.0.1. Everything went smoothly except for the minor Proguard hiccup: you have to include the jar with the @Inject annotation. At least it went smoothly on the first few phones I've tested our app on, but on some ancient crap with Android 2.2 the app crashed with NoClassDefFoundError.
The usual suspect in this case is, of course, Proguard. I also suspected an issue similar to the libphonenumber crash I wrote about in March. When both leads turned out to be a dead end, I decided to run the debug build and to my surprise it crashed as well. And there was a logcat message which pinpointed the issue: the ImmutableSet in Guava 14.0.1 somehow depends on the NavigableSet interface, which is available from API level 9. Sad as I was, I downgraded Guava back to 13.0.1 and everything started to work again.
So what have I learned today?
- Upgrading the libraries for the sake of upgrading is bad (m'kay).
- Before you start wrestling with Proguard, test the debug build.
- Android 2.2 doesn't support all Java 1.6 classes.
The scary thing is, a similar thing might happen again if some other class in Guava depends on APIs unavailable on older versions of Android. Sounds like a good idea for a weekend hack project: some way to mark or check the minSdkVersion needed to use a given class or method.
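In the meantime, the crudest possible check is a runtime probe on the oldest device you support - a minimal sketch, assuming you just want to confirm whether a JDK class your dependency needs exists at all on a given API level:

public class ApiProbe {
  public static boolean isClassAvailable(String className) {
    try {
      Class.forName(className);
      return true;
    } catch (ClassNotFoundException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    // java.util.NavigableSet appeared in Android at API level 9,
    // so this prints "false" on a Froyo (API 8) device
    System.out.println(isClassAvailable("java.util.NavigableSet"));
  }
}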
Thursday, June 6, 2013
SQLite type affinity strikes back
About a year ago I wrote about a certain SQLite gotcha on Android. tl;dr: in some cases when you create a view with unions, SQLite cannot determine the type of a column, and since Android binds all selection arguments as strings, SQLite ends up comparing X with "X", concludes those are not the same thing and returns fewer rows than you'd expect.
Recently the same problem reared its ugly head. It turns out that it's very easy to create a column with an undefined type in a view. It might happen in case of joins, aggregation functions, subqueries, etc. - pretty much anything more fancy than a simple select. Therefore I recommend checking the column types using the pragma table_info(table) command for every view:
sqlite> .head on
sqlite> .mode column
sqlite> pragma table_info (v);
cid         name        type        notnull     dflt_value  pk
----------  ----------  ----------  ----------  ----------  ----------
0           test                    0                       0
If the type of a column is undefined and you need to use this column in your selection arguments, you should add the UNION with an empty row with well defined column types:
sqlite> CREATE TABLE types (i INTEGER, t TEXT);
sqlite> CREATE VIEW vfix AS SELECT i AS test FROM types WHERE 1=0 UNION SELECT * FROM v;
sqlite> pragma table_info (vfix);
cid         name        type        notnull     dflt_value  pk
----------  ----------  ----------  ----------  ----------  ----------
0           test        INTEGER     0                       0
Tuesday, June 4, 2013
MicroOrm API established
Last weekend I found some time again to work on MicroOrm. Basically it's something like google-gson for Android database types - Cursors and ContentValues.
With help from +Mateusz Herych and +Bartek Filipowicz I have, hopefully, finalized the API for v1.0. The initial draft of the library supported only basic field types: primitives, their boxed equivalents and of course strings. The current version allows registering adapters for any non-generic types.
+Mateusz Herych also added the @Embedded annotation, which allows easy nesting of POJOs represented by multiple columns.
Those two mechanisms should allow you to write the entity objects for almost any data structure you have.
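To give a rough feel for how this combines in an entity class, here is a hypothetical sketch - @Embedded comes straight from the paragraph above, while the @Column annotation and the way plain fields are mapped are my assumptions, so consult the project's readme for the actual annotations and imports:

// Hypothetical sketch only: a flat Contact row with an embedded Address.
public class Contact {
  @Column("name")          // assumed column-mapping annotation
  String name;

  @Embedded                // Address fields become additional columns of the same row
  Address address;
}

public class Address {
  @Column("street")
  String street;

  @Column("city")
  String city;
}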
The only unsupported cases are generic entities and generic fields in entities. I decided to leave them out of the first release, because due to type erasure in Java the implementation is not straightforward, and I don't have such cases anywhere in my code anyway.
The next step is using the library in an existing project. I intend to use it in Base CRM, which should be a sufficiently large project to reveal any MicroOrm shortcomings.
Wednesday, May 29, 2013
Android stuff you probably want to know about
About once a month I interview potential employees at Base CRM. The nice thing about this is that I usually learn a thing or two. The not-so-nice thing about it is that sometimes you have to tell someone that there is much they have to learn.
At this point most of the candidates ask "OK, so what else should I know?". I used to give some ad-hoc answer to this question, but it's not the best idea, because I tend to forget to mention some stuff; and even if I don't miss anything, the candidate probably won't remember half of what I said because of the stress accompanying the job interview.
Anyway, I decided to write down the list of Android learning materials, blogs, libraries, etc. that I recommend reading about.
Android basics
Some people's Android knowledge can be summed up as "Activities + AsyncTasks". That's not enough to write anything more complex than Yet Another Twitter Feed app, so if you seriously think of being an Android developer, go to http://developer.android.com/guide/components/index.html and fill the gaps in your education.
At the very least you should also know about Fragments and Loaders. If you want to persist the data, I recommend using a ContentProvider. It looks like a hassle to implement at first, but it solves all the issues with communication between Services and the UI. While we're at Services: you should know the difference between a bound Service and a started Service, and you should know that most likely all you need is the IntentService. You should also know about BroadcastReceivers, and what ordered broadcasts and sticky broadcasts are. Pay attention to which thread the different components operate on.
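Since the IntentService comes up so often, a minimal skeleton for reference (the class name and the work inside onHandleIntent are placeholders):

import android.app.IntentService;
import android.content.Intent;

public class SyncService extends IntentService {

  public SyncService() {
    super("SyncService"); // name used for the worker thread
  }

  @Override
  protected void onHandleIntent(Intent intent) {
    // runs on a background thread, one Intent at a time;
    // the service stops itself once the queue is drained
  }
}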
Libraries
Support library
Guava
ActionBarSherlock
JodaTime
Commons IO
Dagger
Otto
Gson
HoloEverywhere
Blogs
Mark Murphy
Cyril Mottier
Romain Guy
Roman Nurik
Github
Jake Wharton
Square
Design / UI
Android Views
Android UI Patterns blog
Android Asset Studio
Android cheatsheet for graphic designers
Miscellaneous
Google I/O app sources
Grepcode
AndroidXRef
I probably forgot about something very important, so please leave a comment if you think anything is missing.
Tuesday, May 28, 2013
Weekend hack: MicroOrm library
Last week I had to write some fromCursor() and getContentValues() boilerplate. Again. I finally got fed up and decided to write a library to replace all the hand rolled crap.
You may ask, why not use some existing ORM solution? There are plenty, five minutes with Google yielded these results:
- http://www.mobeelizer.com/
- http://ormlite.com/
- http://greendao-orm.com/
- https://github.com/ahmetalpbalkan/orman
- http://hadi.sourceforge.net/
- https://www.activeandroid.com/
- https://github.com/roscopeco/ormdroid
- http://droidparts.org/
- http://robotoworks.com/mechanoid-plugin/
The problem is, all those solutions are all-or-nothing, full-blown ORMs, and all I need is a sane way to convert a Cursor to a POJO and a POJO to ContentValues.
And thus, the MicroOrm project was born. The public API was inspired by google-gson project and is dead simple:
public class MicroOrm {
  public <T> T fromCursor(Cursor c, Class<T> klass);
  public <T> T fromCursor(Cursor c, T object);
  public <T> ContentValues toContentValues(T object);
  public <T> Collection<T> collectionFromCursor(Cursor c, Class<T> klass);
}

I'd like to keep this library as simple as possible, so this is more or less the final API. I intend to add the MicroOrm.Builder which would allow registering adapters for custom types, but I haven't decided yet to what extent the conversion process should be customisable.
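Based on the API above, typical usage should boil down to something like this (Person is a hypothetical annotated entity class, and cursor is a Cursor positioned on a row):

MicroOrm orm = new MicroOrm();

// Cursor -> POJO
Person person = orm.fromCursor(cursor, Person.class);

// POJO -> ContentValues, e.g. for ContentResolver.insert()
ContentValues values = orm.toContentValues(person);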
The elephant in the room is obviously the performance. The current implementation is reflection-based, which incurs a significant overhead. I did some quick benchmarking and it seems that MicroOrm is about 250% slower than the typical boilerplate code. Sounds appalling, but it's not that bad if you consider that a) the elapsed time of a single fromCursor call is still measured in 100s of microseconds and b) if you really need to process a lot of data you can fall back to manual Cursor iteration. I'm also considering changing the implementation to use code generation instead of reflection, similarly to Jake Wharton's butterknife, which should solve the performance problems.
In the following weeks I'll try to adapt the Base CRM code I'm working on to use MicroOrm, and I expect this project to evolve as I face real-life issues and requirements. All feedback, comments, ideas and pull requests are more than welcome. You can also show your support by starring the project on Github.
Wednesday, May 22, 2013
Upgrading to Android SDK Tools revision 22
The Google I/O 2013 has come and gone, and one of the many things left in its wake is the new revision of Android SDK Tools and the ADT plugin for Eclipse. If you haven't let Eclipse go in favor of the new hot Android Studio (which is what Mark Murphy, a.k.a. commons guy, recommends BTW) and you upgraded to the latest Android SDK Tools, you'll probably have some issues with building your old projects.
After installing all the updates from the Android SDK Manager and updating the ADT Eclipse plugin your projects will simply fail to build, with the errors pointing to the R class in the gen folder. If you try to build the project with ant you'll get a more meaningful "no build tools installed" message. After re-running the Android SDK Manager, you should see an additional item in the Tools section called Build-tools. Go ahead and install it.
Now your project will build (you might have to restart Eclipse), but if you use any external libraries from your project's libs directory, your app will crash on the first call into those libs. To fix this you have to go to the project Properties, Java Build Path, Order and Export tab and check the "Android Private Libraries" item. The previous name for this item was "Android Dependencies" and apparently the build rules for those two are not updated correctly.
Of course new projects created with revision 22 of the Android tools don't require jumping through all those hoops.
Wednesday, April 24, 2013
Android gotcha: CursorAdapter constructors
I just spent a few hours analyzing and fixing a memory leak in an Android application. With every orientation change the full Context, including the whole Activity, was leaked. Long story short, the problem was caused by misuse of CursorAdapter: in a subclass constructor we called CursorAdapter(context, null, false) instead of CursorAdapter(context, null, 0).
The difference is quite subtle. If you use the second constructor, you have to take care of handling content updates yourself. If you use the first constructor, the CursorAdapter will register an additional ContentObserver for you, but you need to manually reset the Cursor.
The funny thing is, this behavior is described in the javadocs, but the documentation is spread between the constructor and the FLAG_REGISTER_CONTENT_OBSERVER flag documentation. The second part contains the most crucial information: you don't need to use this flag when you intend to use your adapter with CursorLoader.
If for some reason you want to use the adapter without CursorLoader, you should use the CursorAdapter(context, null, false) constructor, and call swapCursor(null) when leaving the Activity or Fragment.
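For completeness, a minimal sketch of the CursorLoader-driven setup, where the flags argument stays 0 and the loader callbacks handle the cursor swapping (the content Uri and column names are placeholders):

import android.database.Cursor;
import android.net.Uri;
import android.os.Bundle;
import android.support.v4.app.ListFragment;
import android.support.v4.app.LoaderManager;
import android.support.v4.content.CursorLoader;
import android.support.v4.content.Loader;
import android.support.v4.widget.SimpleCursorAdapter;

public class ItemsFragment extends ListFragment
    implements LoaderManager.LoaderCallbacks<Cursor> {

  private static final Uri ITEMS_URI = Uri.parse("content://com.example/items"); // placeholder

  private SimpleCursorAdapter adapter;

  @Override
  public void onActivityCreated(Bundle savedInstanceState) {
    super.onActivityCreated(savedInstanceState);
    // flags == 0: the adapter does not register its own ContentObserver
    adapter = new SimpleCursorAdapter(getActivity(),
        android.R.layout.simple_list_item_1, null,
        new String[] { "title" }, new int[] { android.R.id.text1 }, 0);
    setListAdapter(adapter);
    getLoaderManager().initLoader(0, null, this);
  }

  @Override
  public Loader<Cursor> onCreateLoader(int id, Bundle args) {
    return new CursorLoader(getActivity(), ITEMS_URI, null, null, null, null);
  }

  @Override
  public void onLoadFinished(Loader<Cursor> loader, Cursor data) {
    adapter.swapCursor(data); // the loader owns the Cursor lifecycle
  }

  @Override
  public void onLoaderReset(Loader<Cursor> loader) {
    adapter.swapCursor(null); // no dangling Cursor, no leaked Context
  }
}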
Saturday, March 23, 2013
libphonenumber crash on Android 3.2
Few days ago I saw the most peculiar crash on a tablet with Android 3.2:
java.lang.NoSuchFieldError: com.google.i18n.phonenumbers.PhoneNumberUtil$PhoneNumberFormat.RFC3966

Of course there is such a field in this class, and it works on every other Android device I have access to. This was a debug build, so it couldn't have been a Proguard issue. The other functionality from the com.google.i18n.phonenumbers package worked fine; the issue only appeared when I wanted to format a phone number using this specific format.
Long story short, it turns out that some old version of libphonenumber, which doesn't support this particular phone number format, is included in the Android build on my 3.2 device. You can verify such a thing by calling:
Class.forName("com.google.i18n.phonenumbers.PhoneNumberUtil");inside a project without any libs - on this single 3.2 device it will return the valid Class object, on every other device I tried it throws ClassNotFoundException.
I started to wonder if any other libraries are affected, so I picked some class names from the most popular libraries (as reported by AppBrain) and it seems there might be a similar issue with the Apache Commons Codec jar. Fortunately there are no issues with stuff like Guava, GSON or the support lib.
What's the workaround for this issue? Fork the library and change the package name.
Android UI struggles: making a button with centered text and icon
Every time I work on the UI of an Android app I get the feeling that there is either something terribly wrong with the Android UI framework or with my understanding of how it works. I can reason about how the app works on the higher level, but I cannot apply the same methodology to Android UI, except for the simplest designs. I have read a lot of Android source code, I have written a few dozen sample-like apps, but I still cannot just think of the view structure, type it in and be done - for complicated layouts with some optional elements (i.e. which are sometimes visible and sometimes gone) I need at least a few attempts and, I confess, sometimes I'm desperate enough to do the "let's change this and see what happens" coding. Extremely frustrating.
I'm going to describe my struggles with Android UI on this blog, so if I'm doing something terribly wrong, hopefully someone will enlighten me by posting a comment; and in case something is terribly wrong with Android UI framework, I might be able to help other programmers in distress.
Today I have a simple task for you: create a button with some text and icon to the left of the text. The contents (both icon and text) should be centered inside the button.
That's simple, right? Here's the XML layout which comes to mind first:
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="vertical" >

    <Button
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:drawableLeft="@android:drawable/ic_delete"
        android:gravity="center"
        android:text="Button Challenge" />

</LinearLayout>
Unfortunately, no cookie for you:
Someone decided that compound drawables should always be drawn next to the View's padding, so we have to try something else. For example a TextView centered inside a FrameLayout:
<FrameLayout style="?android:attr/buttonStyle" android:layout_width="match_parent" android:layout_height="wrap_content" > <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:drawableLeft="@android:drawable/ic_delete" android:gravity="center" android:text="Button Challenge" /> </FrameLayout>
Almost there, but the text has a wrong size and color. There is something called "textAppearanceButton", but apparently it's not what the Buttons use:
OK, so let's use the buttonStyle again, this time on TextView:
Now we need to get rid of the extra background, reset minimum height and width and make it not focusable and not clickable (otherwise tapping the caption won't have any effect):
<FrameLayout style="?android:attr/buttonStyle" android:layout_width="match_parent" android:layout_height="wrap_content" > <TextView style="?android:attr/buttonStyle" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:background="@null" android:clickable="false" android:drawableLeft="@android:drawable/ic_delete" android:focusable="false" android:gravity="center" android:minHeight="0dp" android:minWidth="0dp" android:text="Button Challenge" /> </FrameLayout>
Lo and behold, it works!
What we'd really like to use is something like textAppearance="?android:attr/buttonStyle.textAppearance", but there is no such syntax. How about extracting all the attributes from the TextView into some "buttonCaption" style with "?android:attr/buttonStyle" as a parent? No can do either: you can only inherit your style from a concrete @style, not from a styleable attribute.
But what we can do is to use Button and create a style with no parent: Android will use the default button style and apply our captionOnly style:
<style name="captionOnly"> <item name="android:background">@null</item> <item name="android:clickable">false</item> <item name="android:focusable">false</item> <item name="android:minHeight">0dp</item> <item name="android:minWidth">0dp</item> </style> <FrameLayout style="?android:attr/buttonStyle" android:layout_width="match_parent" android:layout_height="wrap_content" > <Button style="@style/captionOnly" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:drawableLeft="@android:drawable/ic_delete" android:gravity="center" android:text="Button Challenge" /> </FrameLayout>
Saturday, March 16, 2013
Android nested Fragments in practice
Last November I wrote about the new feature in rev11 of Android support package - Fragments nesting. Recently I had an opportunity to use this feature in practice and I'd like to share my experience with it.
The basics are simple: each FragmentActivity and each Fragment has its own FragmentManager. Inside a Fragment you may call getFragmentManager() to get the FragmentManager this Fragment was added to, or getChildFragmentManager() to get the FragmentManager which can be used to nest Fragments inside this Fragment. This basic flow works fine, but I have found two issues.
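A minimal sketch of that basic flow, from inside the outer Fragment (DetailsFragment and R.id.child_container are placeholder names for this example):

import android.os.Bundle;
import android.support.v4.app.Fragment;
import android.view.View;

public class OuterFragment extends Fragment {
  @Override
  public void onViewCreated(View view, Bundle savedInstanceState) {
    super.onViewCreated(view, savedInstanceState);
    if (savedInstanceState == null) { // add the nested Fragment only once
      getChildFragmentManager()
          .beginTransaction()
          .add(R.id.child_container, new DetailsFragment())
          .commit();
    }
  }
}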
If you have a Fragment with nested Fragments and you save its state with saveFragmentInstanceState() and try to use it in setInitialSavedState() on another instance of this Fragment, you'll get the BadParcelableException from onCreate. Fortunately it's an obvious bug which is easy to fix: you just need to set the correct ClassLoader for a Bundle containing this Fragment's state. There is a patch for it in support library project Gerrit, and if you need this fix ASAP you may use this fork of support lib on Github.
The second issue is related to the Fragment backstack. Inside each FragmentManager you may build a stack of Fragments with FragmentTransaction.addToBackStack() and later on use popBackStack() to go back to the previous state. Pressing the hardware back key is also supposed to pop Fragments from the back stack, but it doesn't take into account any nested Fragments, only Fragments added to the Activity's FragmentManager. This is not so easy to fix, but you may use the following workaround:
String FAKE_BACKSTACK_ENTRY = "fakeBackstackEntry";

getFragmentManager()
    .beginTransaction()
    .addToBackStack(null)
    // call replace/add
    .setTransition(FragmentTransaction.TRANSIT_FRAGMENT_OPEN)
    .commit();

final FragmentManager rootFragmentManager = getActivity().getSupportFragmentManager();
rootFragmentManager
    .beginTransaction()
    .addToBackStack(null)
    .add(new Fragment(), FAKE_BACKSTACK_ENTRY)
    .commit();

rootFragmentManager.addOnBackStackChangedListener(new OnBackStackChangedListener() {
  @Override
  public void onBackStackChanged() {
    if (rootFragmentManager.findFragmentByTag(FAKE_BACKSTACK_ENTRY) == null) {
      getFragmentManager().popBackStack();
      rootFragmentManager.removeOnBackStackChangedListener(this);
    }
  }
});

Quick explanation: together with the actual backstack entry we want to add, we also add a fake backstack entry with an empty Fragment to the top-level FragmentManager and set up an OnBackStackChangedListener. When the user presses the hardware back button, the fake backstack entry is popped, the backstack listener is triggered and our implementation pops the backstack inside our Fragment. The backstack listeners are not persisted through an orientation change, so we need to set this up again inside onCreate().
Note that there are two issues with this workaround: it allows adding only one backstack entry, and this setup won't be automatically recreated from state saved by saveFragmentInstanceState() (fortunately it does work with orientation change). Both issues probably can be solved by some additional hacks, but writing workarounds for workarounds is not something I do unless I really have to, and in this case neither issue affected me.
Besides those bumps the nested Fragments are a real blessing which allows much cleaner and more reusable code.
Tuesday, March 5, 2013
Weekend hack: viewing markdown attachments in GMail on Android
Recently I wanted to open a markdown email attachment on my Nexus 4, but after clicking "readme.md" instead of seeing the file contents I saw this message:
I downloaded a few apps from Google Play, but the message was still appearing. The same applications could open a local markdown file, so I went back to the GMail app to download the attachment, but another unpleasant surprise awaited me:
There is no "overflow" menu on the attachment (see the screenshot below), which means I couldn't access the "Save" option, so I could open it as a local file.
At this point I was:
- Pissed off, because, cmon, GMail is probably the most used app working on the mature operating system and I can't download a fucking file with it.
- Curious, because it looked like an interesting issue with the GMail app.
03-04 21:12:50.477: W/Gmail(13823): Unable to find supporting activity. mime-type: application/octet-stream, uri: content://gmail-ls/jerzy.chalupski@gmail.com/messages/121/attachments/0.1/BEST/false, normalized mime-type: application/octet-stream normalized uri: content://gmail-ls/jerzy.chalupski@gmail.com/messages/121/attachments/0.1/BEST/false

Note the Uri: there is no file name and no file extension, and the mime-type is a generic application/octet-stream (most likely because the "md" extension is not present in libcore.net.MimeUtils). The markdown viewers/editors I downloaded probably register intent filters for specific file extensions, so they don't know they could handle this file. It sucks big time, because it means that applications for viewing files with non-standard extensions would have to register for the application/octet-stream mime-type, and even though they handle very specific file types they all appear in the app chooser dialog for many different file types, which defeats the whole purpose of the Android Intent system and hurts the UX.
My first idea was to create a "GMail Attachment Forwarder" app which registers for any content from GMail, gets the attachment name by querying the DISPLAY_NAME column on the Uri supplied by GMail, saves this information along with the original GMail Uri in a public ContentProvider, and starts the activity using a Uri exposed by my ContentProvider which does contain the attachment name. This ContentProvider should also forward any action to the original GMail Uri.
Unfortunately I was foiled by the ContentProvider permission system: the Activity in my app was temporarily granted the read permission for GMail's ContentProvider, but this permission did not extend to my ContentProvider, and the app I was forwarding the attachment to failed because of insufficient permissions.
This approach didn't work, but having a catch-all handler for GMail attachments unlocked the attachment actions. I also noticed that when the attachment is downloaded, the GMail uses a slightly different intent:
03-04 23:05:34.005: I/ActivityManager(526): START u0 {act=android.intent.action.VIEW dat=file:///storage/emulated/0/Download/readme-1.md typ=application/octet-stream flg=0x80001 cmp=com.chalup.markdownviewer/.MainActivity} from pid 3063
This led me to plan B: have an app which enables the attachment download and use other apps to open downloaded attachments. I renamed my app to GMail Attachment Unlocker, cleared the manifest and source folder leaving only a single, automatically closing activity:
<application
    android:allowBackup="true"
    android:label="@string/app_name"
    android:theme="@android:style/Theme.NoDisplay" >
    <activity
        android:name="com.chalup.gmailattachmentunlocker.MainActivity"
        android:label="@string/app_name" >
        <intent-filter>
            <action android:name="android.intent.action.VIEW" />
            <category android:name="android.intent.category.DEFAULT" />
            <category android:name="android.intent.category.BROWSABLE" />
            <data
                android:host="gmail-ls"
                android:mimeType="*/*"
                android:scheme="content" />
        </intent-filter>
    </activity>
</application>
public class MainActivity extends Activity {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    finish();
  }
}
The full source code is available on my Github (although there really isn't much more than what is posted here). In the end I also ended up writing my own markdown viewer (source code in another repo on my Github), because none of the apps I have downloaded properly rendered <pre> tags (hint: you have to use WebView.loadDataWithBaseUrl instead of WebView.loadData).