Now I’ve been playing around with Java for a few weeks. The last time I looked at Java, it was version 1.1 (I still have a book from that time). It has changed since then – some features, notably generics (Java’s answer to C++ templates), have been added. Back then I tried to use it for interface purposes: having one solution for both Windows and Unix. But there was a lot of overhead from the runtime environments, and I didn’t like Swing very much, so for projects we preferred Qt at the time. There were some other experiments by colleagues to use Java, but if you have huge amounts of data and/or demanding algorithms – as in optimization or hardware verification – Java (with or without JIT) didn’t quite scale up to the task.
I can understand the Android approach: you are basically hardware-independent, and most of the computation needed is interface-specific (graphical) and can be delegated to builtins. Other demanding algorithms can be offloaded to the cloud, while controlling these graphical actions is done in Java. Should essential bottlenecks appear, new classes backed by builtins could be added in the next firmware edition.
With my two toy apps, I didn’t expect any performance issues, as scaling bitmaps and decoding codecs is done by the firmware and everything else is trivial. Nevertheless, I looked at the Performance section of the Android Dev Guide. It wasn’t really a surprise to read that, in contrast to C++, you have to compromise between a clean textbook software design and a performant program. The worst offender is probably creating temporary structures (new), which translates directly into long GC runs, followed by using accessor functions (get/set) instead of direct field access (the JIT currently even widens the difference!).
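To make those two points concrete, here is a small sketch of the pattern the guide warns about – the class and method names are my own invention, not from the guide. The "slow" variant calls an accessor and autoboxes a temporary on every loop iteration (allocation pressure for the GC); the "fast" variant reads the field directly and allocates nothing inside the loop:

```java
// Illustrative only: names (PixelBuffer etc.) are hypothetical.
public class PixelBuffer {
    public int width;   // public field: direct access, no call overhead
    private int height; // private field hidden behind an accessor

    public PixelBuffer(int width, int height) {
        this.width = width;
        this.height = height;
    }

    public int getHeight() { return height; }

    // Fast variant: direct field access, no temporaries created in the loop.
    public long sumAreaDirect(int iterations) {
        long sum = 0;
        for (int i = 0; i < iterations; i++) {
            sum += (long) width * height; // plain field reads
        }
        return sum;
    }

    // Slow variant: an accessor call per iteration plus a boxed Long
    // temporary per iteration - exactly the GC pressure the guide describes.
    public long sumAreaViaAccessor(int iterations) {
        long sum = 0;
        for (int i = 0; i < iterations; i++) {
            Long area = (long) width * getHeight(); // autoboxing allocates
            sum += area;
        }
        return sum;
    }

    public static void main(String[] args) {
        PixelBuffer buf = new PixelBuffer(320, 480);
        System.out.println(buf.sumAreaDirect(1000));
        System.out.println(buf.sumAreaViaAccessor(1000));
    }
}
```

Both variants compute the same result, of course; the difference only shows up in allocation counts and GC pauses on a hot path. On the desktop JVM the JIT would likely optimize much of this away, which is precisely why the Android advice felt unusual to me.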
The lesson? Think about which projects are reasonable for an Android phone. If you have a lot of computation to do, delegate it to the cloud!
For my projects there are different problems… fix all the FCs (force closes) that result from casual coding.