Should Facebook Transfer Money?

Facebook just announced that it is allowing people to send money to each other:

Facebook’s Post

News Article

I’m wondering whether this is a good idea.  Not whether a company should transfer money between people; just whether it’s a good idea that Facebook does it.

Facebook Messenger’s money transfer feature is another reason to uninstall the Facebook App from your phone.  Here’s why.

The Spy Who Facebooked Me

If a phone is programmed to call people without your knowledge or consent, it’s because somebody added the code to do that.  It’s not an accident the phone was programmed to make phone calls and listen to your room without your knowledge.

Well, that’s what Facebook did.  Facebook can call it a “bug” all it wants, but the fact that the feature was added in the first place is a prima facie admission by Facebook that the company was intent on turning your phone into a surreptitious listening device.  There’s no other conclusion you can come to.

An obsession with protecting people’s privacy is baked into my DNA somehow.  Working in healthcare cemented my belief that handling third-party data is inherently risky, and that companies have an uncommon responsibility to protect private information.

Facebook trying to use your phone without your knowledge is simply reprehensible.  Honestly, I feel bad for unsuspecting users and wish that adding secret phone calling as a “feature” were outlawed.

My Facebook Interview

Prior to Facebook’s Spy-Phone-Gate, I had interviewed at Facebook.

In preparation for my interview, I wrote a blog article about one of Facebook’s slip-ups in the privacy department, and about how I would advocate for changes to prevent that kind of thing from happening.

Long story short, I showed up for an interview.  Got a tour … the place looked like it did in The Social Network and everything.  I was excited.  And it was obvious that they knew how I felt about privacy, because I was there after posting my blog on privacy.

The whiteboarding portion went great, and things seemed fine.  The guy interviewing me seemed excited I was there.  Then it came my turn to ask questions.  This one seemed to stun the interviewer:

Q. So, who owns the product specification and standards?  Who signs off on the mobile software before release?

And I got a surprising answer (paraphrased):

A. Nobody “owns” them.  The developers just work on something they think is ‘cool’, and if the group likes it that feature is released.

I took that to mean that the only oversight of mobile applications was the engineers themselves.  Executive oversight, in particular, didn’t seem to be very important to Facebook employees.  I also took it to mean there were no independent code audits of Facebook’s mobile applications.

Judging by the phone calling issue that popped up in 2014, I can’t say I was wrong in thinking Facebook had a massive problem.

And it seems to have gotten worse.  Nothing Facebook has said about its money transfer feature conveys any sense of security for financial transactions.  That’s bad.

Facebook’s Code Release Oversight …

You can also read how Facebook describes its process, which covers only about half the work that should be done to ensure a mobile app even functions, let alone respects user privacy:

How Facebook Ships Code

Facebook’s Ryan McElroy on Code Reviews

More Questions Than Answers

I’m left to wonder:

Who reviewed the code for the payment feature? Nobody except the developer, apparently, and that’s not good enough for a public company releasing a public app.

What security standards were used? Like the ones credit card processors rely on? None, apparently. That’s bad. Very bad.

Is Facebook security good enough to handle financial transactions through messenger?  The magic eight-ball says no.

No other conclusion is possible, because these questions are ignored in Facebook’s press release.

Perhaps if you’re lucky, when the money transfer feature screws up, the Facebook app will dial Zuckerberg so he can listen in.  Not that you’d know the phone dialed of course.

References

I’ll let you read these links to decide whether or not you want to trust Facebook with your credit card data:

Facebook data exposed.

Facebook app blacklisted.

Some issues with Facebook’s app.

Seven Annoying Attacks Facebook Misses

Facebook Phone-Gate

Will Swatch Corner the Wearables Market?


Bacon Watch (heart monitor not included). Source: walkyou.com

Up until February 5, 2015 it seemed like every company that could make a wearable computer had “put its cards on the wearables table” except Swatch Group.  Swatch had everything needed to do it, but chose not to until its February 5 announcement: a watch with an NFC chip that can make payments.

And now Swatch’s competitors may be screwed.

The money other companies have spent designing, marketing and producing smart watches runs into the many millions … perhaps billions of dollars.  Apple, Microsoft, Samsung, Motorola, Nike, and others are at risk: they won’t get their money back if they can’t sell enough product.

Those companies’  sales plans can be destroyed by a competitor who finds a way to sell a cheaper watch that does enough of what people want.

And that might easily happen with a company like Swatch Group doing the disrupting; this company has already mastered market penetration for all demographics in wearable devices.

While other companies have to ramp up production lines and find shelf space, Swatch’s changes to watches will be incremental to its existing product line.  Much easier and potentially less costly by comparison.

And now, after all those other companies committed vast treasure to bringing smart watches to market, they have to compete in a space that is new to them and old hat to Swatch.  I think last month’s announcement will do the trick for Swatch.

Are you a watch snob who thinks Swatch Group’s watches will fail because they’re inferior to Apple’s?  It doesn’t matter.  It’s not about what you value.  It’s about how many watches a company can sell.  Swatch’s success will be Apple’s torment.  Simple.

Swatch is in such a sweet position, market-wise, it’s not even funny.

It’s choosing to limit its wearable tech to a few core functions.  The other companies didn’t seem to understand the importance of that.  Swatch Group seems to want watches to remain watches, but with a few basic functions that make sense – like payments, NFC and Bluetooth.  Nike took a similar approach with its Fuel Band, which is a basic smart watch.  But the Nike Fuel Band lacks NFC capability so far, and I’m guessing that Nike wants to focus on licensing Nike Fuel to other companies … as opposed to helping people buy Big Macs.

Swatch Group’s idea is simple.  Turn watches into payment machines.  Literally cash generating devices.  And that’s all.  Simple.  Genius.

People are going to ask themselves,

Should I spend $400 on an Apple watch, or ..

$150 on a Swatch and put $250 on the payment chip so I can buy stuff?

At the end of the day, any wristband device is limited in what it can do.  It doesn’t make sense to add bulky batteries or huge screens.  Add too much functionality, and you risk making users feel like they’re doing surgery on their wrist just to pull up stock quotes.

The more complicated watches like Apple’s watch and Microsoft’s band pose that risk.

But it’s also helpful to see other technologies that have been successful with the minimalist approach.  Take Disney’s Magic Band for example.


Disney Magic Band. Source: disney.com


Disney’s Magic Band is an incredible device.  It’s a proven device that gives consumers what they want … an easy, less involved shopping experience.  It cuts down on waiting in lines and talking to cashiers.

And it helps Disney transfer money from consumer to Disney.  So, it’s a win-win for everyone.

And now with Swatch entering the arena, we could see something like the Disney experience expand throughout the planet.

Know what Oso Needs? BEER!

The mudslide disaster in Oso, WA is tragic.  Upstanding citizens have responded by giving what they can, and now it looks like the effort has more shovels, pick-axes, food and fuel than people know what to do with.  Snohomish County has also been slow on the uptake in asking for help, according to The Seattle Times.

So, what are we told will help? Cash, according to mynorthwest.com.


Jesus. Patron saint of beer. Source: FreeWilliamsburg.com

The people who perished in that tragedy have already passed on.  What’s left is supporting those who are trying to save anyone still miraculously alive.  So, I have a great idea.

Send BEER!

Those hard-working people deserve it. Take it from me, a guy who grew up in the outback of Washington’s Olympic Peninsula.  Beer is worth a lot.

Dial ‘M’ for Millisecond

Grace Kelly starred in "Dial M For Murder," a 1954 Hitchcock film.  (source: imdb.com)


There are times when, if you look closely enough, milliseconds matter.  Functions in your mobile app that don’t run as fast as they could will incrementally steal processor cycles … harass the garbage collector … like some shadowy process stalker on the other end of the phone.

Given a specific algorithm you often have a choice.  One choice will keep runtime overhead in check as much as possible.  Another may add incremental amounts of runtime and memory usage with each use.  Next thing you know  …

You’ve dialed ‘M’ for murderous milliseconds.  Just don’t be surprised when the monster comes calling.

The problem is – how do you know that’s what you’ve done in the first place? How do you get to the processor-cycle-killing beast before it shows up during the QA cycle … laughing at you as it adds three, four, five … no, six … seconds to the time it takes your app to do something like de-serialize binary data from a network connection?

One place to start is the functions in your app that get run repeatedly throughout its life.  Functions that marshal Java primitives to and from byte arrays are pretty obvious candidates.  I recently had some time to take a deep dive into a couple of ways of doing this on Android, and I thought I would share some of the hard data.

So, here’s the problem I was looking at: reading and writing Java primitives to/from byte arrays.  These kinds of functions are relied upon a LOT in many mobile applications, making them good candidates for optimization.  Shaving even a few hundred milliseconds off a function that, say, saves Double values to a byte array could mean the user waits several seconds less for the app to complete a task.  In other words, the faster you can get your primitives into and out of a byte array, the more responsive your application is going to be.

So, I wrote a bit of code to compare two different ways of saving double values into a byte array.  In one version I used DataOutputStream:

    // Serialize an array of Doubles to a big-endian byte array via DataOutputStream.
    // (PersistException is an application-specific unchecked exception.)
    public static byte[] toByteArray(Double[] array) {
        ByteArrayOutputStream os = new ByteArrayOutputStream(array.length * 8);
        DataOutputStream dos = new DataOutputStream(os);

        try {
            for (int i = 0; i < array.length; i++) {
                if (array[i] == null)
                    dos.writeDouble(0);   // treat null entries as 0.0
                else
                    dos.writeDouble(array[i]);
            }
            dos.close();
        } catch (IOException e) {
            throw new PersistException(e);
        }

        return os.toByteArray();
    }

In another version I relied on straight byte manipulation using bitwise operators:

 
    // Hand-rolled version: write each Double's raw bits straight into the array.
    public static final byte[] toBytes(Double[] array) {

        byte[] result = new byte[array.length * 8];
        for (int i = 0; i < array.length; i++) {
            if (array[i] != null) {
                toBytes(array[i], result, i * 8);   // null entries remain zeroed
            }
        }

        return result;
    }

    // Convert a double to its raw IEEE 754 bits, then store those big-endian.
    public static final void toBytes(double value, byte[] dest, int start) {

        toBytes(Double.doubleToLongBits(value), dest, start);

    }

    // Store a long as 8 big-endian bytes using shift operators.
    public static final void toBytes(long value, byte[] dest, int start) {

        dest[start] = (byte) (value >>> 56);
        dest[start + 1] = (byte) (value >>> 48);
        dest[start + 2] = (byte) (value >>> 40);
        dest[start + 3] = (byte) (value >>> 32);
        dest[start + 4] = (byte) (value >>> 24);
        dest[start + 5] = (byte) (value >>> 16);
        dest[start + 6] = (byte) (value >>> 8);
        dest[start + 7] = (byte) value;

    }
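Of course, data has to come back out of the byte array too.  My benchmark only measures the write path, but the read path mirrors it exactly; the helper below is a sketch of my own (these names aren’t from the measured code), reassembling a big-endian long and converting it back into a double:

```java
public class ByteUnmarshal {
    // Reassemble a long from 8 big-endian bytes, mirroring the toBytes layout above.
    public static long longFromBytes(byte[] src, int start) {
        long value = 0;
        for (int i = 0; i < 8; i++) {
            // Mask with 0xFF so sign extension of negative bytes doesn't corrupt the result.
            value = (value << 8) | (src[start + i] & 0xFF);
        }
        return value;
    }

    // Recover the double whose raw IEEE 754 bits were written big-endian.
    public static double doubleFromBytes(byte[] src, int start) {
        return Double.longBitsToDouble(longFromBytes(src, start));
    }
}
```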

The interesting thing about these two examples is this: if you look at the code for DataOutputStream, you will notice that at its core it does essentially the same thing as the function I wrote by hand.  You will also notice that using DataOutputStream potentially requires more memory overall, because the DataOutputStream and ByteArrayOutputStream classes both need to be loaded into memory, and they obviously have internal state that needs to be stored.
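To see what I mean, here is a rough paraphrase of DataOutputStream’s approach to writing a long (this is my own simplified sketch of the idea, not a copy of the JDK source): fill a small buffer with the 8 big-endian bytes via shifts, then hand that buffer to the underlying stream.

```java
public class WriteLongSketch {
    // Sketch of the buffer-filling step: same shift-and-cast work as the
    // hand-rolled toBytes, just routed through an intermediate byte[] buffer.
    public static byte[] longToBigEndian(long v) {
        byte[] writeBuffer = new byte[8];
        for (int i = 0; i < 8; i++) {
            writeBuffer[i] = (byte) (v >>> (56 - 8 * i));
        }
        return writeBuffer;   // a real stream would now write this buffer out
    }
}
```

The extra layers (the stream objects, the intermediate buffer, the eventual `toByteArray()` copy) are exactly where the overhead comes from.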

So, I calculated the time it took for each function to run:

 
        Double[] doubles = new Double[1000];
        for (int i = 0; i < 1000; i++) {
            doubles[i] = (double) i;
        }
        Debug.startMethodTracing("trace");

        long start = System.currentTimeMillis();
        byte[] resultBytes1 = toByteArray(doubles);
        long elapsed = System.currentTimeMillis() - start;
        Log.i("TRACE", "elapsed using buffers : " + elapsed + " ms to generate "
                + resultBytes1.length
                + " bytes");

        start = System.currentTimeMillis();
        byte[] resultBytes2 = toBytes(doubles);
        elapsed = System.currentTimeMillis() - start;
        Log.i("TRACE", "elapsed using my code : " + elapsed + " ms to generate "
                + resultBytes2.length + " bytes");

        Debug.stopMethodTracing();

I ran this several times to make sure the Dalvik optimizer had a chance to do its job:

 
09-19 11:02:07.308: I/TRACE(11146): elapsed using buffers : 328 ms to generate 8000 bytes
09-19 11:02:07.498: I/TRACE(11146): elapsed using my code : 189 ms to generate 8000 bytes
09-19 11:02:34.038: I/TRACE(11196): elapsed using buffers : 310 ms to generate 8000 bytes
09-19 11:02:34.238: I/TRACE(11196): elapsed using my code : 196 ms to generate 8000 bytes
09-19 11:02:59.778: I/TRACE(11226): elapsed using buffers : 289 ms to generate 8000 bytes
09-19 11:02:59.948: I/TRACE(11226): elapsed using my code : 166 ms to generate 8000 bytes
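One caveat on methodology: System.currentTimeMillis has coarse resolution, and single runs are noisy, which is why I ran the comparison several times.  Averaging many iterations after a warm-up pass gives steadier numbers; something like this generic harness sketch (names are mine, not part of the original test) would tighten things up:

```java
public class Bench {
    // Average elapsed nanoseconds per run, after letting the JIT/optimizer settle.
    public static long averageNanos(Runnable task, int warmupRuns, int timedRuns) {
        for (int i = 0; i < warmupRuns; i++) {
            task.run();  // warm-up iterations are deliberately not timed
        }
        long start = System.nanoTime();
        for (int i = 0; i < timedRuns; i++) {
            task.run();
        }
        return (System.nanoTime() - start) / timedRuns;
    }
}
```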

Conclusion

It could be said that if you take the lazy way out and stick with DataOutputStream, you’re choosing to build a user-experience delay right into your application.  A delay that is visible to the user.

Using DataOutputStream to serialize 1,000 doubles ran consistently about 120 milliseconds slower than using shift operators directly.  120 milliseconds isn’t a lot of time until you consider its compounding effect.  1,000 doubles isn’t a lot of data – only 8,000 bytes, to be exact.  Serialize that array just three times (24,000 bytes of data) and the overhead grows to roughly 360 milliseconds – enough to be noticeable.
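To put that compounding in concrete numbers: the measured penalty was roughly 120 ms per 8,000-byte batch, and the cost scales linearly with payload size.  A back-of-the-envelope helper (my own extrapolation from the measurements above, not part of the benchmark) makes the point:

```java
public class OverheadEstimate {
    static final long PER_BATCH_MS = 120;   // measured DataOutputStream penalty per batch
    static final long BATCH_BYTES = 8_000;  // bytes per measured batch (1,000 doubles)

    // Linearly extrapolate the measured per-batch penalty to a larger payload.
    public static long extraMillis(long payloadBytes) {
        return payloadBytes * PER_BATCH_MS / BATCH_BYTES;
    }
}
```

By this naive linear estimate, a 1 MB payload would cost an extra 15 seconds – exactly the kind of delay users notice.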

Introducing delays into your application that serve no purpose is generally a bad idea.

Take into account that your application will be serializing and de-serializing megabytes – perhaps even gigabytes – of data while it runs, and the additional runtime overhead of DataOutputStream becomes that lurking monster in your app that will, pardon the phrase, byte you in the end.

I also haven’t taken into account the extra memory DataOutputStream needs, nor the additional work you’re setting up the garbage collector to do.  In short, the number of objects instantiated and destroyed by DataOutputStream undoubtedly adds stress on the garbage collector, which in turn could begin to slow down your application compared to the hand-rolled approach.