Patent Announcement: Segment Based Cellular Network Performance Monitoring

An approach to monitoring cellular networks by specific segments.

I’m happy to be able to finally share a patent of mine that is in process:

Mechanism for facilitating dynamic and segment-based monitoring of cellular network performance in an on-demand services environment

As always feel free to comment and ask questions.

Government To People: You’re Fired.

Donald Trump.  source: nbc.com

October 1st of 2013 marked the start of a nationwide U.S. Government shutdown of many services.  It's like the entire nation was called in to see Donald Trump on a bizarre episode of "The Apprentice".

It’s a very serious thing.  Lots of people have gone home and I feel for them.

But at the same time it’s worth seeing what can be done with mobile technology to save costs. Forbes reports how Missouri is doing just that to cut costs by about $500 million.

It raises the question: when the shutdown ends, will it be necessary to restore offices to their old funding levels, or can this kind of approach make that unnecessary?

Facebook Patch for Dalvik

If you can solve a problem by re-engineering your app or modifying Android, why would you modify Android?  That’s the burning question at the heart of this blog entry.

Picard to Worf: "Android 2.3 was written that way for a reason, Mr. Worf." (photo credit: CBS/startrek.com; blog source: Paramount)

Engineering mobile applications is embedded engineering.  It's the art of using less code to accomplish something rather than more code.  And in this case, it's also the art of avoiding a change that risks impacting every user running Android 2.3, and every carrier shipping Android 2.3 phones.

Recently, David Reiss, a Facebook engineer, posted information on the Facebook blog about a Dalvik "patch" that they were working on.  Interesting. My Android Spidey Sense tells me that the problem is more of an application issue, as opposed to an issue within the Android OS itself.  So, I decided to find out with some quick analysis.

I’ve been working on Dalvik the past year, so you could say I’m a Dalvik “virtual machinist.”  This engineering problem is pretty juicy, and I couldn’t pass up looking at it.  Here’s the Facebook blog posting, and a quick summary of the issue David is seeing.

Under the Hood: Dalvik patch for Facebook for Android

The suggested fix does not consider that if Android 2.3's class-loading buffer is bumped from 5MB to 8MB, it may force all mobile phones using Android 2.3 to go through a round of regression testing.  That's a lot of work for the carriers.  Depending on the hardware configuration of the Android phones that run 2.3, the potential side effects far outweigh a more straightforward solution: Facebook simply re-engineers its application.  But that aside …

The issue that is reported by Facebook and others is not a “bug” in the sense that dexopt is crashing.  Nor is it the case that the failure is created by “deep” interface hierarchies, as the Google bug suggests.  In this case, dexopt is simply complaining that the application that Android is trying to load is too big: the code size is too big for the class loader to handle.  This kind of limit is nothing new to Android or any other mobile app.  It’s been there ever since mobile phones could run applications.  So, it could be said that this is an application level problem, as opposed to an OS problem.  This could be resolved using two approaches:

  1. Re-engineer the application, especially in cases where you are trying to port code from one form (Javascript) to another (Java).
  2. Modify the operating system to utilize more resources – which will affect all applications.

For reasons I explain below, using #2 is a slippery slope.  It's important to remember that 5MB is a lot of memory to chew on for a device that was designed primarily to answer phone calls. And, as you'll see, it's not just 5MB versus 8MB anyway. The difference is potentially 10MB versus 16MB.  And that, my coder roadies, is a whole other ballgame.

Other Factors

The Facebook blog mentions that this limit was bumped into because code was ported from Javascript to Java.  So, this occurred when Facebook decided to add more features to their native Android application.

The Quick Conclusion/Upshot

The basic problem being reported is that when the Android phone loads your program to run, there are limits placed on the amount of code you can have in your application.  That’s a good thing.  If you exceed your limits, it’s much wiser to re-think your application design as opposed to tweaking the OS.

First of all, let me disabuse you of the notion that if dexopt fails in this way, it must be a "bug" in the Android platform.  Android was pretty thoroughly tested, as far as mobile operating systems go, so it's important to consider that your application design might just need to be re-worked.  In this case, while it appears that a Google engineer has signed on to address this "issue", it doesn't mean that Google thinks it's a bug either.

The limit that is relevant here is a 5MB limit placed on all Android applications for Android 2.3.  These kinds of limits are there for a reason.  This is especially important to respect in the case of Android 2.3 because devices that run that version of the OS tend to be more limited on resources than Android 4.0 devices.

Bumping up the amount of memory that LinearAlloc uses will increase the amount of memory that ALL Android applications consume at the outset, when loading their classes. Each application that starts will have this amount of memory allocated to the class loader.

So, if you have 20 apps on your phone, each of those apps is going to allocate that amount of memory (5MB on Android 2.3, and 8MB on Android 4.0).  This is very important to consider because Android allows any of its applications to start background services.  For planning purposes, you must take into account the fact that every one of the apps on your phone may be running simultaneously because they may be running services.
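As a rough back-of-the-envelope illustration using the numbers above (and assuming, worst case, that every app maps its full region): 20 apps x 5MB is roughly 100MB of address space reserved for class loading on Android 2.3, while 20 apps x 8MB is roughly 160MB.  And that is before any of those apps has allocated a single object of its own.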

More Details and Analysis

Here are the basic workings of how LinearAllocHdr (LinearAlloc) is managed by Android:

LinearAllocHdr is a data structure in Android that so far appears to be used only by the class loader.  But, nothing says it can’t be used for other things in the future.  Here’s the structure:

/*
 * Linear allocation state.  We could tuck this into the start of the
 * allocated region, but that would prevent us from sharing the rest of
 * that first page.
 */
typedef struct LinearAllocHdr {
    int     curOffset;          /* offset where next data goes */
    pthread_mutex_t lock;       /* controls updates to this struct */

    char*   mapAddr;            /* start of mmap()ed region */
    int     mapLength;          /* length of region */
    int     firstOffset;        /* for chasing through */

    short*  writeRefCount;      /* for ENFORCE_READ_ONLY */
} LinearAllocHdr;

mapAddr points to the block of memory that is allocated.

This structure is instantiated by the function dvmLinearAllocCreate.   The 5MB that David’s post talks about is actually a 5MB file that gets memory mapped. The length of the file is defined by DEFAULT_MAX_LENGTH:

/* default length of memory segment (worst case is probably "dexopt") */
#define DEFAULT_MAX_LENGTH  (5*1024*1024)
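To make that concrete, here is a minimal, hypothetical sketch of the pattern being described: size a backing file up front to DEFAULT_MAX_LENGTH, then mmap() it into the process.  This is not the actual dvmLinearAllocCreate code (the helper name and the idea of passing in a file path are mine), just an illustration of how a fixed-length, file-backed region gets set up in C.

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define DEFAULT_MAX_LENGTH  (5*1024*1024)   /* same value as the define above */

/* Hypothetical helper, for illustration only. */
static char* mapLinearRegion(const char* path)
{
    int fd = open(path, O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return NULL;

    /* Grow the backing file to the full region size up front. */
    if (ftruncate(fd, DEFAULT_MAX_LENGTH) != 0) {
        close(fd);
        return NULL;
    }

    /* The file and the mapping each account for roughly 5MB, which is where
     * the "10MB worst case" discussed below comes from. */
    char* addr = mmap(NULL, DEFAULT_MAX_LENGTH, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE, fd, 0);
    close(fd);   /* the mapping remains valid after the fd is closed */
    return (addr == MAP_FAILED) ? NULL : addr;
}

The real Dalvik code differs in its details, but the key point is the same: the region's length is fixed when it is created.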

Memory mapping files is commonly done in Android.  In fact Dalvik maps your entire application code into memory this way.  This means two things for the problem at hand:

  1. 5MB of disk space is used to store the underlying data, and
  2. 5MB of memory is taken up to store the file’s contents in active memory.

So, the impact on the operating system is actually 10MB (worst case): we're not just talking about 5MB here, we're talking about potentially using twice that.  If you increase that to 8MB, you're potentially impacting the OS with a 16MB allocation.  Now we're getting into some serious memory for a mobile device to manage.  Remember, it's an embedded system on a slow processor – especially in the case of Android 2.3 and OMAP.

dvmLinearAllocCreate is called by dvmClassStartup (Class.c), which confirms that the class loader is the only place this is used (for now).  But this is a very critical piece of memory: the more memory used by the class loader, the more overhead you create for Linux, and the slower your applications may start up.  Again, perhaps not noticeable on Android 4.0, but it might be noticed on an Android 2.3 device – especially a cheap one with lower-end hardware, and especially when that cost is applied to every application that runs on the device.
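To make the failure mode concrete, here is a small, self-contained sketch of a fixed-capacity linear ("bump") allocator.  It is an illustration, not the real dvmLinearAlloc: allocations simply advance an offset through a pre-sized region, and once a request would run past the end of that region there is nothing left to do but reject it, which is essentially the condition dexopt is complaining about.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative bump allocator over a fixed region; not the real dvmLinearAlloc. */
typedef struct {
    char*  base;       /* start of the pre-sized region (e.g. the mmap()ed block) */
    size_t length;     /* total capacity, e.g. 5MB on Android 2.3 */
    size_t offset;     /* where the next allocation goes */
} LinearRegion;

static void* linearAlloc(LinearRegion* r, size_t size)
{
    /* Round up to 8-byte alignment so later allocations stay aligned. */
    size_t aligned = (size + 7u) & ~(size_t)7u;

    if (r->offset + aligned > r->length) {
        /* The region is exhausted: there is no growing and no freeing here.
         * This is the point at which a too-large app gets turned away. */
        fprintf(stderr, "linear region exhausted (%zu of %zu bytes used)\n",
                r->offset, r->length);
        return NULL;
    }

    void* ptr = r->base + r->offset;
    r->offset += aligned;
    return ptr;
}

int main(void)
{
    LinearRegion region = { malloc(5u * 1024 * 1024), 5u * 1024 * 1024, 0 };
    if (region.base == NULL)
        return 1;

    /* Keep asking for 64KB chunks; after 80 of them the 5MB region is full. */
    int count = 0;
    while (linearAlloc(&region, 64 * 1024) != NULL)
        count++;
    printf("handed out %d chunks before running out\n", count);

    free(region.base);
    return 0;
}

Dalvik's real allocator adds more machinery (the mutex, the optional write protection behind writeRefCount, and so on), but the limit behaves the same way: once the offset reaches mapLength, the class loader has no room left for additional code.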

Sensor Watch 2012

I love sensors.

Once they’re developed and mass produced, they can be put into things like mobile phones.  Here’s a quick survey

And, in the ultra cool category we have:

“Use the (Sales)force, Richard …”

For those of you who don’t already know, I have an announcement.  On Monday, July 18, 2011, I officially started my new job at Salesforce.com.  I’m working in Salesforce’s mobile containers group.  And, I’m totally stoked.

One of the reasons I only post to my professional blog every so often is simply that, day to day, I'm already making an impact on mobile software products you see in the market.  The marketing departments of the companies I work for have the job of talking about the work, and they do a great job.  You can actually see my work in a lot of places already.  It's just released under brands like Cloudtix, Amazon, Photobucket, and now Salesforce.

Some projects I can talk about, and others not.  But I assure you I'm out there, out in the wild, burning my soul into some pretty big national products you are most likely already using.  As I get permission to talk about those projects, which often have a shroud of secrecy that would make the U.S. National Security Agency envious, I'll blog about them here when I can.  If it makes sense to use some other forum to talk about it, like a corporate blog, I'll at least post the link.

Salesforce will give me the opportunity to continue my tradition of working on great products.  And to work on them in so many great ways that I can't even talk about it. Not just because it's Salesforce (and not me) that will decide what information about my projects goes public (those decisions are above my pay grade). I just flat out get too excited to put the projects into words sometimes.

Did I mention how stoked I am about the whole transition? Oh, yeah I guess I did….

You Are Wary With Native Troll (or not)

It seems that some HTML5 people have developed a serious case of "Mobile Envy."  Some medieval-looking evangelists from endofnative.com have even recently circled Moscone with signs:

Monks descend on Moscone.

Sasha Aickin, the Search Team Lead at redfin.com, has pictures of these creative cats in the presentation "HTML 5 vs. Native" on the Redfin corporate blog.

Web developers want mobile devices to handle HTML5 applications as if they were native apps: access to mobile markets, in-app payments, the file system, etc.  But that can't happen without native code until magical HTML5 fairies can fly through the air and control the hardware on your phone.

But that aside, this debate is a good opportunity to look at mobile architecture and see how to work with HTML5.

“Speed Kills.  Performance is where abstractions go to die.”

That's a phrase Sasha Aickin wrote in that presentation, and it's very true.  The bridge between mobile hardware and HTML5 web apps can be built, but not without some serious engineering and attention to the limitations of the hardware.  If you do a quick survey of what's already being done in this area, you'll see some pretty cool options.

How Could HTML5 be Handled?

HTML5 is a standard designed for the display of information.  Javascript is a browser scripting language.  The contents of a web app are usually delivered across the network, but can also come from other places (e.g. the file system).  Still, giving the arbitrary web developer the ability to use HTML5 and Javascript to interact with hardware is a non-trivial task.  And there are a few places in the hardware/OS/application stack where this can be supported:

In the hardware: there’s nothing preventing the design of a microchip that could support HTML5 at the hardware level, but I haven’t seen anyone working on it.

Pros: fast processing; Cons: no upgrade path, and ever more complex hardware.

In the OS: the operating system can be designed to handle HTML5 applications in an optimized way and not just in the browser.  WebOS is a good example: https://developer.palm.com/ .

Pros: faster than the web browser and easier access to the entire phone.  Cons: the hardware is limited in what it can do – many mobile phones might not handle the functionality well.

In a 3rd party library: libraries are how mobile developers give web developers access to a mobile device's hardware.  They can be designed around a standardized API specification.  PhoneGap is a good example of this approach: http://www.phonegap.com/

Pros: gives developers the ability to optimize the code that web apps would rely on. Cons: each phone model is different, so it's not possible to write a single API spec to build a library around that would apply equally to all phones.

With a code generator: a good, generalized design tool can generate source code for multiple devices that can be compiled using each mobile device's native toolkit.  Check out ReadWriteWeb's blog for a list of some of these applications.

Pros: gives non-developers like graphic designers the ability to generate useful code for a mobile developer.  Cons: non-developers have to become comfortable using a developer tool.


With a development toolkit/framework: tools exist that generate applications and/or code that run on multiple phones.  There are a number of ways to do this.  Appcelerator Titanium is an example of this approach: http://www.appcelerator.com/

Pros: ease of use and lots of hand-holding for non-developers; Cons: a lowest common denominator of phone and user interface must be relied upon, which is often not as attractive as a straight web experience, and the resulting applications tend to be way too big.

On the server side: there's nothing preventing someone from building a server-side pre-processing scheme that compiles all your web code into something secure and safe that a mobile device can utilize.  Nothing would change for the web developers.  I haven't seen any projects that are tackling this.

Pros: would help improve the performance of mobile applications, especially with respect to networking, and would allow web developers to still write mobile applications.  Cons: increases the complexity of web services and adds to the time required to manage a web server.

The difference between these kinds of tools is the audience they are written for and the quality of the application that they produce.

So, what’s best?

Well, that's up to you to decide.  If I had my choice, I would explore integrating HTML5 into the native phone stack in all those areas if I could.  Advances in any one of those areas have the potential to make one of the others obsolete, so you have to cover your bets.

One thing is for sure in the meantime, mobile phones handle HTML5 apps in the way they do simply because that’s the best the mobile devices can do.  And it won’t change until the hardware and/or application stack on mobile devices itself changes.

Android apps running on RIM’s Playbook?

The RIM press release says it all … developers will be able to repackage and re-sign existing Android apps to run on the PlayBook.

Yeah … That’s what the developer community needs … more undocumented RIM SDKs to add to the existing, poorly documented and cumbersome family of RIM development SDKs!

The cost of writing mobile apps for RIM devices is already (I estimate) about 30% higher than for any other device, just due to the challenges of dealing with the RIM SDKs and hardware.

Is adding another build step really going to help?