Adding shared libraries to your Android NDK Build: Insanity

Stressful Guy

The Android NDK build system may make you want to choke something. (source: the eating disorder blog)

I’ve been pulling my hair out for the past couple of days dealing with Android NDK build sensitivities.  I hope this blog posting saves you some time and stress.  If it does, you can send me flowers … or money.  Or both.  It’s all good.

So, let’s say you have an existing Android NDK project and it works fine.  It’s complete.  You want to add a shared library to the application, but it’s not required to build your application.  That is, you need it at runtime, but you don’t need it to build your C/C++ code using the NDK.

It should be simple, right?  But the documentation for the Android NDK seems to suggest that you have to use LOCAL_MODULE and friends to accomplish this.  You’ll find that can be a recipe for disaster.  I sure did.  It’s been tormenting me for the past several days and I think I finally solved the problem.

This blog presents one way to accomplish this.  It’s something I just discovered (by spending hours ruling out everything that didn’t work – modifying module definitions in files, using APP_MODULES, setting up module dependencies – you name it, I’ve tried it) and which I have not seen explicitly discussed in the existing Android documentation.

The Goal

The goal is simple. You have an existing NDK project and you want to:

  1. add a shared library to your Android NDK build that gets packed up into your .apk file. The library is one that is not needed to compile any other C/C++ module in your project. But it’s one that your application will rely upon indirectly.  You just need it to be packed with the build.
  2. Load that library at runtime with a call to System.loadLibrary().

The Problem

If you read the documentation, you’re going to realize two things: a) the Android NDK build system is built on top of gmake, and b) it relies very heavily on makefile fragments.  You’ll see things like this, from PREBUILTS.html (in the Android NDK docs directory):

II. Referencing the prebuilt library in other modules:

Simply list your prebuilt module's name in the LOCAL_SHARED_LIBRARIES
declaration in the Android.mk of any module that depends on them.

For example, a naive example of a module using foo-prebuilt would be:

    include $(CLEAR_VARS)
    LOCAL_MODULE := foo-user
    LOCAL_SRC_FILES := foo-user.c
    LOCAL_SHARED_LIBRARIES := foo-prebuilt

You might be thinking that your shared library will just be pulled into the build.  But then when you run the build you suddenly get linking errors for your application that was working properly before.

You find that not only is your build failing due to linking errors, your shared library is not getting incorporated into the .apk file.  That’s what happened in my case.

Here’s Something That Works

I was able to accomplish this without build errors, after several days of fighting with the build, by using the following approach:

1. Do NOT add a module for the shared library to your makefiles.  Leave that out.  Apparently (and this is just a guess), you should only define prebuilt shared modules in your build files when your native code actually needs to link against them.

2. Build your shared library in its own project.  Make sure the code is cross-compiled for the Android platform you want the library to run on.  At this point you have two separate, independent projects: your Android project with native code, and your shared library somewhere on the filesystem.

3. With your native project built, you’re going to see a libs directory at the top level of your project.  In that directory, you’ll see a subdirectory for each architecture you’ve built for:
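For example (the module name mynative is made up for illustration), a project built for armeabi would show:

```
libs/
    armeabi/
        libmynative.so
```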


4. Copy your shared library into the directory that has the name of the architecture you’ve compiled it for.  So, if you’ve built for armeabi, you’ll want something like this:
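For instance, assuming a prebuilt library named libfoo.so (a placeholder name), the armeabi directory would end up holding both your NDK-built output and the copied-in library:

```
libs/
    armeabi/
        libfoo.so        (the prebuilt library you copied in)
        libmynative.so   (built by ndk-build; name is illustrative)
```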


With that setup, you should see your shared library inside the .apk file the build produces.  You can use

jar xvf YourApplication.apk

to verify it’s there.

There should be no mysteries, unanswered questions, or undocumented use cases when it comes to something as critical as a build system. Without that, your project can come to an untimely screeching halt.

Time Machine + My Book Live = COLD molasses

In my previous post I talked about how slow My Book Live is when backing up with Time Machine on Mac OS X.  I started my latest backup at about 11 PM on March 10.

It’s taken Time Machine about 46 hours to back up about 225 GB of data to My Book Live.

That’s so slow I could have worked enough to earn the money to buy a new Time Capsule by now.  Geez.

Time Machine + My Book Live = molasses

In today’s posting, I present some simple performance data for My Book Live when used to back up Mac OS X using Time Machine.  Spoiler alert: your initial backup might take a day or three – and that’s no exaggeration.

But if you have the patience to wait, I’ve found so far that it’s a good substitute for the $500 Time Capsule that comes with the same drive space (3 terabytes).

My Book Live (MBL) is a great product if you just back up files here and there.  But power users need to have some patience.  While it’s reliable Network Attached Storage (NAS), it’s not made to chew through mounds of data quickly.  If you keep your backups to under a couple of gigabytes a day (max), you won’t be disappointed with its performance.  But any more than that, and it might be worth looking for a more enterprise-grade NAS solution if you want to see your backups finish quickly.

Now if only Western Digital had a My Book Live version that came with 2GB of memory, they’d have a surefire winner of a product on their hands … I’m convinced.

I have been noticing my Time Machine backups for my Mac have gotten considerably slower when backing up to My Book Live (MBL).  I’m guessing … only guessing … that it’s because the MBL comes with only 256 megabytes of memory.  That makes the MBL quite a bit underpowered for its advertised task – especially when you’re me and you need to back up several gigabytes a day.

At any rate I thought I would share some simple benchmarking results.  I calculated these numbers manually – by timing backup rates with a stopwatch.

The Time Machine backup is very slow.  I’ve isolated the problem to the Time Machine server running on the My Book Live.  Here’s how:

  1. My mac and the MBL NAS are connected to a switch using hardwire … no WiFi involved.
  2. I transferred a 1 gigabyte file manually from the Mac (Mountain Lion 10.8.2) to the Public shared drive on the MBL.  It took 54 seconds.  That’s pretty good: 8 billion bits in 54 seconds, or about 148 megabits per second.
  3. I can get similar rates when I transfer from the Mac to a Time Capsule that is hardwired to the same switch.
  4. I have iTunes and the other media servers turned off on the MBL.
  5. The MBL is set to not sleep.
  6. When backing up with Time Machine, the rates drop significantly:
    • It took about 7.25 minutes for the drive to allow the Mac to connect to it and prepare the backup.
    • The backup numbers in Time Machine seem to stall … then, just when you think the service on the MBL has died, the numbers tick up rapidly for a time, then stall again.  When backing up to a Time Capsule, the numbers don’t stall nearly as much.
    • I timed a Time Machine backup and watched it for six minutes: 714 megabytes were backed up.  That’s about 5.7 gigabits in 360 seconds, or roughly 16 megabits per second.

The overhead the Time Machine server adds when running on My Book Live is pretty big: a 148 megabit-per-second straight file copy drops to 16 megabits per second under Time Machine.  I haven’t run the same overhead comparison for an Apple Time Capsule yet.

Running Hudson on My Book Live

The Western Digital My Book Live is a product that is optimized to do one thing: serve files.  And for that, I love it.  What it does, it does fairly well.  The setup is super easy too.  The best $150 I’ve spent on computer hardware in a long time.

When I realized the device was actually a simplified Linux server, I had to see if Hudson would run on My Book Live.  If Hudson ran well, I reasoned, I’d be able to use it as my build machine.  If not, then that’s O.K. too.  At least I found out.

So, I installed Hudson using apt-get, just like you would on a regular Linux server.

I even got the web page to come up:

Although My Book Live will run Hudson, the scaled down Linux server is best at serving up files.  Not much else.

That’s pretty cool.  The app ran, not surprisingly, very slowly.  Apparently, running Hudson (or any other Linux application not installed on My Book Live at the factory) runs the risk of interrupting your backup experience.  After all, the device is built just right for serving up files and its simple web interface.  And for that, Western Digital gets 10 awesomeness points.

And at that, it does its task very well.

Now if only Western Digital made a beefier machine – even if they just added 16GB of memory to the drive.  That would open up all kinds of possibilities for the My Book product line.

Facebook Patch for Dalvik

If you can solve a problem by re-engineering your app or modifying Android, why would you modify Android?  That’s the burning question at the heart of this blog entry.

Picard to Worf: “Android 2.3 was written that way for a reason, Mr. Worf.” (photo credit: CBS/; blog source: Paramount)

Engineering mobile applications is embedded engineering.  It’s the art of using less code to accomplish something rather than more code.  And in this case, it’s also the art of avoiding a change that risks impacting every Android user that uses Android 2.3, and every carrier with Android 2.3 phones.

Recently, David Reiss, a Facebook engineer, posted information on the Facebook blog about a Dalvik “patch” they were working on.  Interesting.  My Android Spidey Sense tells me that the problem is more of an application issue than an issue within the Android OS itself.  So, I decided to find out with some quick analysis.

I’ve been working on Dalvik for the past year, so you could say I’m a Dalvik “virtual machinist.”  This engineering problem is pretty juicy, and I couldn’t pass up looking at it.  Here’s the Facebook blog posting, along with a quick summary of the issue David is seeing.

Under the Hood: Dalvik patch for Facebook for Android

The suggested fix does not consider that if Android 2.3’s buffer is bumped from 5MB to 8MB, it may force all mobile phones running Android 2.3 through a round of regression testing.  That’s a lot of work for the carriers.  Depending on the hardware configuration of the Android phones that run 2.3, the potential side effects far outweigh a more straightforward solution: Facebook simply re-engineers its application.  But that aside …

The issue reported by Facebook and others is not a “bug” in the sense that dexopt is crashing.  Nor is the failure created by “deep” interface hierarchies, as the Google bug suggests.  In this case, dexopt is simply complaining that the application Android is trying to load is too big: the code size is too large for the class loader to handle.  This kind of limit is nothing new to Android or any other mobile platform.  It’s been there ever since mobile phones could run applications.  So, it could be said that this is an application-level problem, as opposed to an OS problem.  It could be resolved using one of two approaches:

  1. re-engineer the application, especially in cases where you are trying to port code from one form (JavaScript) to another (Java).
  2. modify the operating system to utilize more resources – which will affect all applications.

For reasons I explain below, option #2 is a slippery slope.  It’s important to remember that 5MB is a lot of memory to chew on for a device that was designed primarily to answer phone calls.  And, as you’ll see, it’s not just 5MB versus 8MB anyway.  The difference is potentially 10MB versus 16MB.  And that, my coder roadies, is a whole other ballgame.

Other Factors

The Facebook blog mentions that this limit was hit because code was ported from JavaScript to Java.  So, this occurred when Facebook decided to add more features to their native Android application.

The Quick Conclusion/Upshot

The basic problem being reported is that when an Android phone loads your program to run, there are limits placed on the amount of code you can have in your application.  That’s a good thing.  If you exceed those limits, it’s much wiser to re-think your application design than to tweak the OS.

First of all, let me disabuse you of the notion that if dexopt fails in this way, it must be a “bug” in the Android platform.  Android was pretty thoroughly tested, as far as mobile operating systems go, so it’s important to consider that your application design might just need to be re-worked.  In this case, while it appears that a Google engineer has signed on to address this “issue,” it doesn’t mean that Google thinks it’s a bug either.

The limit that is relevant here is a 5MB limit placed on all Android applications for Android 2.3.  These kinds of limits are there for a reason.  This is especially important to respect in the case of Android 2.3 because devices that run that version of the OS tend to be more limited on resources than Android 4.0 devices.

Bumping up the amount of memory that LinearAlloc uses will increase the amount of memory that ALL Android applications consume at the outset, when loading classes.  Each application that starts will have this amount of memory allocated to the class loader.

So, if you have 20 apps on your phone, each of those apps is going to allocate that amount of memory (5MB on Android 2.3, 8MB on Android 4.0).  This is very important to consider because Android allows any of its applications to start background services.  For planning purposes, you must take into account that every one of the apps on your phone may be running simultaneously, because they may be running services.

More Details and Analysis

Here are the basics of how LinearAllocHdr (LinearAlloc) is managed by Android:

LinearAllocHdr is a data structure in Android that so far appears to be used only by the class loader.  But, nothing says it can’t be used for other things in the future.  Here’s the structure:

/*
 * Linear allocation state.  We could tuck this into the start of the
 * allocated region, but that would prevent us from sharing the rest of
 * that first page.
 */
typedef struct LinearAllocHdr {
    int     curOffset;          /* offset where next data goes */
    pthread_mutex_t lock;       /* controls updates to this struct */
    char*   mapAddr;            /* start of mmap()ed region */
    int     mapLength;          /* length of region */
    int     firstOffset;        /* for chasing through */
    short*  writeRefCount;      /* for ENFORCE_READ_ONLY */
} LinearAllocHdr;

mapAddr points to the block of memory that is allocated.

This structure is instantiated by the function dvmLinearAllocCreate.  The 5MB that David’s post talks about is actually a 5MB file that gets memory-mapped.  The length of the file is defined by DEFAULT_MAX_LENGTH:

/* default length of memory segment (worst case is probably "dexopt") */
#define DEFAULT_MAX_LENGTH  (5*1024*1024)

Memory-mapping files is commonly done in Android.  In fact, Dalvik maps your entire application’s code into memory this way.  This means two things for the problem at hand:

  1. 5MB of disk space is used to store the underlying data, and
  2. 5MB of memory is taken up to store the file’s contents in active memory.

So, the impact on the operating system is actually 10MB (worst case).  We’re not just talking about 5MB here; we’re talking about potentially using twice that.  If you increase that to 8MB, you’re impacting the OS with potentially a 16MB allocation.  Now we’re getting into some serious memory for a mobile device to manage.  Remember, it’s an embedded system on a slow processor – especially in the case of Android 2.3 and OMAP.

dvmLinearAllocCreate is called by dvmClassStartup (Class.c), which confirms that the only place this is used (for now) is the class loader.  But this is a very critical piece of memory.  The more memory used by the class loader, the more overhead you create for Linux, and the slower your applications might boot.  Again, that’s perhaps not noticeable on Android 4.0, but it might be noticed on an Android 2.3 device – especially a cheap one that uses lower-end hardware, and especially when the cost applies to every application that runs on the device.

Show iTunes in the Cloud Disabled in iTunes 11

I’ve been going nuts wondering why my iCloud purchases in iTunes 11 won’t show up.  I had the checkbox set to make iTunes show my iCloud purchases, and they were being shown, but they suddenly disappeared.

And so did the option in iTunes preferences called “Show iTunes in the Cloud Purchases.”  It’s not there!

iTunes 11: iTunes in the cloud … gone!

I’m going to go kick the neighbor’s dog now, and pretend that dog is the product management at Apple.

Update: Others have reported that the fix for this is to log out of iTunes and log back in.  That turn-your-head-and-cough style of workaround probably isn’t what Apple intended, but it does work.

The REAL Apple iTunes 11 Release Notes

It’s 3:48 AM, and I’ve spent the last hour trying to get a simple thing done in iTunes.  It’s not working.  Reading the release notes didn’t help either, but I thought I would at least re-write them for people who are faced with the same thing:

About iTunes 11.0.2

The new iTunes includes a dramatically simplified player, a completely redesigned Store, and iCloud features you’ll love (if you can actually use the product due to us leaving out the obvious usability use cases) — despite all our hard work and stunning design, this is the most useless best iTunes yet.

  • Completely Redesigned. We left out the part where we don’t make it obvious how to download all your iCloud purchases at once. Some of you can expect to spend a good hour trying to find a simple “Download all purchases stored in the iCloud” function.  You’ll probably give up in complete frustration.
  • A new opportunity to rant on the Apple forums, and A New Store.
  • Play purchases from iCloud, but just don’t download them all at once to your new overpriced Mac you spent good money on. We would prefer you click on the little cloud icon on EACH and EVERY song you’ve purchased. Your music, movie, and TV show purchases in iCloud now appear inside your library. Just sign-in with your Apple ID to see them. Double-click to play them directly from iCloud or download a copy you can sync to a device or play while offline.
  • Up Next. It’s now simple to see which songs are playing next, but it doesn’t really matter because that’s useless until you go through hundreds, perhaps thousands of songs to click that cute little cloud button to download each one.
  • New MiniPlayer. You can now do a whole lot more with a lot less space, except download all your iCloud purchases at once.
  • Improved search. That little cloud next to each song is a great reminder that you get to click on it hundreds or perhaps thousands of times to download each and every song – one at a time.  We hope you don’t go crazy in the process, but it’s never been easier to find what you’re looking for in iTunes. Just type in the search field and you’ll instantly see results from across your entire library. Select any result and iTunes takes you right to it.
  • Playback syncing. iCloud now remembers your place in a movie or TV show for you. Whenever you play the same movie or episode from your iPhone, iPad, iPod touch, or Apple TV, it will continue right where you left off.

This update adds a new Composers view for music, improves responsiveness when syncing playlists with a large number of songs, and fixes an issue where purchases may not show up in your iTunes library. This update also includes other stability and performance improvements.

For information on the security content of this update, please visit:

Windows SHA1= c247ece76d06101867ec11191aead1cebc46ea32

Windows 64 SHA1= 14ccca67b9ba181bfb126de028d3e6aa4df3b684

Mac SHA1= e8eba6c2b83b9e24116a9944c808525bed260aa0