Archive for August 2011
Let me start out by saying that the weekend at the first CocoaConf was thoroughly inspiring and well worth my time. It lived up to its reputation as a technical conference: hands-on tutorials, code walkthroughs of various concepts, and even the keynote address by Daniel Steinberg, which was a very entertaining spin on Objective-C. It was a community where people were eager to share their knowledge and to learn from each other. Bill Dudney spent time reviewing a pesky bug that I was facing and gave me some useful pointers – how great is that!
And now to the curious subject of this blog:
In this congregation of approximately 80 techies, I noticed that there were only four women (including myself) attending the conference. Of course, this wouldn’t be the first time that there was a stark disparity between the number of women and men at a technical conference. Attendees of the recent Apple WWDC 2011 mentioned to me that there were probably a dozen or so women among several thousand attendees! Conferences aside, the number of men in development-related roles has significantly outnumbered women in all the companies that I’ve worked for. I’m generally the lone woman in a technical meeting. To be clear, I’m talking specifically about roles that involve programming and building software. I’m not talking about management-related roles in technology organizations, where there are many women making significant inroads. I’m not including designers (web, graphics, etc.) or technical writers either. I’m talking about folks who develop. So why is there a significant shortage of women in this area? It definitely isn’t true that women can’t program, and I don’t believe it has anything to do with our genetic makeup, so what is the reason?
I’ve pondered over this topic many times in the past, but it was rekindled over the weekend at CocoaConf when I noticed that nothing much has changed over the years in this regard. I chatted about this with some of the men who attended the conference, and it was interesting to get their perspective as well. Here is a summary of my thoughts:
It is no secret that the number of women pursuing a Computer Science/Engineering degree is still very small (and some reports indicate that it’s dwindling). So why aren’t women inclined to pursue this degree in the first place? Is it because (judging by the fact that there aren’t many women in this area) they perceive this as a “guy thing” and automatically assume that it would be something they wouldn’t enjoy? Or does this start much earlier, during elementary and middle school, where summer camps on topics like “building robots” and “programming” are mostly attended by boys, which dissuades even the girls who are interested in such topics (or the over-protective parents of such girls) from participating in those camps?
Even the women who enter the industry as developers tend to switch to non-development or non-technical roles/jobs within a few years. Is it because women inherently prefer to work in roles which involve more “socializing”, such as management, marketing or sales? As a developer, one does not have to interact a whole lot with others – you are pretty much in your office/cube for the most part (well, there is “pair programming”, but that’s not for everyone). Some women have indicated that they switched careers because “it was hard to keep up”. Like many other jobs these days, programming is not a traditional “9 to 5” job. Technology changes at a rapid pace, and in order to do well, one needs to constantly upgrade one’s skills. There is continuous learning required. This may mean spending evenings and weekends learning a new skill. Depending on the demands of the family, that is not always feasible, especially for women. Some women have indicated that switching to management-related roles was essential to climb the corporate ladder within their company. I disagree with that. Most companies offer separate management and technical tracks. One can be in a position of significant influence even while serving in a purely technical role.
A very interesting point that was raised by someone, and with which I agree, is that men in development roles tend to possess traits which may not appeal to many women. They are generally “arrogant”, “brash” or “over-confident”. Hold on – let me explain this a bit. As developers, we can’t just “give up” on bugs. We are adamant about fighting them and proving that our code works. Developers are not necessarily the most social people, and many tend to lack “people skills” – we are not shy about calling a spade a spade. Very often, developers tend to trivialize a complex task. For instance, even if fixing a problem took a lot of time and effort, men tend to downplay the effort and claim “Oh, that was so easy. Figured it out in no time.” This may be intimidating to women who may have lower self-confidence in their programming abilities to begin with.
Women tend to discuss family matters more often than, for instance, the latest gadget or the cool stuff they discovered while hacking on their new gadget. Sure, men probably discuss sports a lot as well, but they also talk a lot more about technology, even if they are not directly working in that space. There is nothing wrong with discussing family, but my point here is that we are influenced a lot by the company we keep.
One of the pioneering programmers was Lady Ada Lovelace. It is sad not to see more women following her legacy.
The list of mobile-specific security exploits that were discussed at this week’s Black Hat technical security conference got me paranoid again. I did a bit of security-related work a while ago. I didn’t attend the conference, so no – this isn’t a blog about the conference, sorry!
Security has always been an afterthought. Back in the day when I did some Internet related standards work, the section on “Security Considerations” was typically the most sparse chapter in the specification.
With the proliferation of connected devices ranging from smart phones, tablets, TVs, STBs, game consoles to cars, toasters, washing machines, refrigerators, we are susceptible to security threats more than ever. But are we taking it seriously enough?
There is no denying that mobile computing is the present and the future, so I’d like to specifically discuss mobile devices and in particular, smart phones and tablets in this context.
Wireless networks are ubiquitous – homes, coffee shops, airports, airplanes, trains, maybe your entire city. Of course, this was true even in the pre-smartphone, laptop-only era. But now, there is a huge difference in terms of the number of actively connected devices. For anything you want to do, “there’s an app for that”. A lot more people are performing a whole lot more sensitive transactions (banking, ticketing, shopping) from their mobile devices.
It’s no secret that wireless networks are not very secure. Sure, with WPA2 we have come a long way from the days of vulnerable WEP (and early WPA), but there is no guarantee that all the wireless networks we traverse have been upgraded to the latest and greatest, and besides, many folks who set up their home wireless networks may not take the necessary precautions to secure them. In places where there is insufficient monitoring of the wireless networks, it wouldn’t be hard for someone to set up a rogue Access Point that unsuspecting users would connect to, or for an attacker to launch a Denial of Service attack by exploiting RF interference.
If you are thinking that the issues I mentioned thus far are old-school, then you may be interested in the more sophisticated “baseband attacks”. In this case, an attacker could potentially gain control of the device’s memory through malicious code installed on the baseband processor (the chip that handles radio signal transmission/reception) by posing as a legitimate cell tower!
I’d like to draw some comparisons between the iOS and Android platforms.
Both the Android and iOS platforms have a sandbox model for running applications, which limits the extent of damage a malicious app can do.
Apple to its credit has a rigorous code signing process that ensures that certificates issued by Apple are used to sign apps. Android on the other hand allows for self-signed certificates and so there is no guarantee of the identity of the signer of the app.
The approval process by Apple, while by no means intended to scrutinize app code for security breaches, at least provides some level of assurance about the quality of the application. There is no comparable Android Marketplace approval process.
Apple disallows installation of any app that is not downloaded through its App Store (and consequently signed by Apple); to install such an app, one would have to jailbreak the device. On Android, it’s very easy to install apps that are not available on the Android Marketplace – just check the “Unknown Sources” box under Settings to allow installation of any app, and you are done.
Of course, there are security holes in both the iOS and Android kernels that can be exploited quite easily on jailbroken phones with root-level access. Attackers can then use many freely available tools to disable kernel-level security patches on jailbroken phones in order to launch their attacks.
Another point to note is that mobile devices are often connected to laptops (and desktops) for backup/restore/sync services. This makes the mobile devices as vulnerable as the platforms they are hooked up to.
Furthermore, the growing relevance of cloud based services for mobile devices poses significant security risks. What prevents an attacker from harnessing the “infinite” resources on the cloud for launching DDoS attacks?
There is significant variation in the demographics of mobile device users, ranging from the tech-savvy geek to the grandmother who has never used a computer to the teenager who is always online. Educating such a diverse population about the security risks involved is a daunting task. This implies that security has to be integrated into the platform – the device, the infrastructure/networks and the services. The end-user is an integral part of the solution, but the hardest to manage.

In addition to the consumer space, many businesses allow access to corporate services from (personal) mobile devices, making corporate resources susceptible to security attacks by compromised devices. Security is an expensive investment for both individuals and enterprises. It’s similar to insurance: you never realize how absolutely important it is until your systems are compromised. Now that I’ve shared my thoughts, I think I will relax a little!
“Web app or native app?” This is probably one of the most commonly asked questions by folks looking to mobilize their business (probably right after which mobile platform to target).
While I have discovered a lot along the way from my own experience as a (native) mobile developer, I must add the disclaimer that this blog is also a result of discussions with a lot of smart people in this space. I thought it would be worthwhile sharing it with a potentially larger audience.
The list below is by no means exhaustive. It is an attempt to highlight the major advantages of the two approaches.
Why a Web App?
1) “Develop once, Run everywhere”
This implies that the web app is for the most part platform agnostic. This has been widely touted as the selling point for web apps; however, the statement is not entirely true. HTML5 is still in the process of standardization (http://dev.w3.org/html5/spec/Overview.html) and, as we know all too well, despite standardization efforts there will undoubtedly be variations in browser implementations across platforms. These variations will impact the behavior, performance and appearance of your web app on the various platforms. Still, while I cannot quantify this statement, one can infer that the development effort/cost will be lower than building native applications independently for each of the platforms of interest.
2) App is primarily “network data driven”
By this, I mean that the app communicates heavily with backend data servers for its various functions. Communicating large volumes of data across a bandwidth-constrained wireless network is not practical (unless you have a cheap, unlimited data plan – if there is such a thing!). In this case, hosting the app in the network, in close proximity to the data servers, will alleviate the problem.
3) Application developers want to “be in greater control”
Today, the fate of apps is dictated (to a large extent) by the terms and conditions imposed by the Apple App Store, the Android Marketplace or any of the other app storefronts. As an example, Apple’s “subscription model” imposed major restrictions on in-app purchases made from native apps, which led many app developers to remove the “Buy” button from their apps. Web apps allow you to bypass those restrictions, letting consumers make purchases directly from within the web app. This gives developers the flexibility they need to deliver the desired service to their consumers without being encumbered by policies set forth by the application store owners. That said, while there isn’t a concept of a “web app storefront” today, one can envision that something like that would be in place once web apps become more ubiquitous. So it remains to be seen whether there would be any restrictions that could impact the services rendered by web apps.
Why a native app?
1) Performance, Performance, Performance!
This statement probably needs no further explanation. If performance is an important criterion, as is typical of games, then a native app is the way to go.
2) Superior User Experience/ Interface
Native apps leverage the hardware acceleration support for graphics available through specialized GPUs and use customized/optimized platform-specific graphics libraries, resulting in a vastly superior UI experience that is hard for web apps using JS/HTML/CSS to match. I earlier mentioned “Develop once, Run everywhere” as an advantage for web apps. However, if UI is an important consideration for your app, then note that the same “one size fits all” model will result in a sub-standard user experience on certain platforms, and this would be unacceptable to users who are used to a particular level of polish on a given platform.
3) Support for Remote Notifications
Most platforms provide some sort of remote notifications framework (e.g. Apple’s Push Notification service, Android’s C2DM) that allows registered apps to receive asynchronous notifications from their application servers via centralized notification servers. Only native apps can register for push notifications. If this is a requirement for your app, then native apps are definitely a better fit. Alternatives for web apps, like SMS or email, are not as seamless or compelling.
4) No Network Connectivity
If your app does not require network connectivity for its various functions, then offering it as a web app would impose that unnecessary requirement for it to run. Of course, HTML5 supports Application Caching, which can be used to locally cache web apps and run them even without a network connection, so this is probably not an issue – but the level of support may vary depending on the browser’s HTML5 implementation.
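For illustration, a minimal cache manifest of the kind HTML5 Application Caching uses might look like the sketch below (the file names are hypothetical); the page opts in by referencing it, e.g. `<html manifest="app.manifest">`:

```
CACHE MANIFEST
# v1 - hypothetical manifest for an offline-capable web app

CACHE:
index.html
app.js
style.css

NETWORK:
# resources that always require a live connection
/api/
```

Resources listed under CACHE are stored locally and served even when offline, while those under NETWORK are always fetched over the network.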
5) Better Hardware Access & Control
Native apps can access and control platform hardware resources like the camera and accelerometer through native APIs exposed by the platform. This can be leveraged to build unique, compelling features into the apps. Although HTML5 is aimed at standardizing access to various hardware resources on the platform, the level of support is likely to be “inferior” compared to the options available to native apps – and by “inferior” I mean that the platform vendors are more than likely to support access to a particular hardware resource natively before supporting it in their browsers. In some cases, the vendor may choose not to provide access to certain hardware resources via their browsers at all (for security reasons). Besides, the level of hardware access support can vary across browser implementations.
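As a sketch of what hardware access looks like from a web app, here is a small wrapper around the HTML5 Geolocation API (`navigator.geolocation.getCurrentPosition`) that degrades gracefully when the browser does not expose it. The wrapper name is hypothetical, and `nav` stands in for the browser’s `navigator` so the logic can be exercised outside a browser:

```javascript
// Request the device's location through the Geolocation API,
// reporting "unsupported" when the browser does not expose it.
function requestLocation(nav, onResult) {
  if (!nav.geolocation) {
    onResult({ supported: false });
    return;
  }
  nav.geolocation.getCurrentPosition(
    function (pos) {
      // Success: hand back the coordinates.
      onResult({ supported: true, lat: pos.coords.latitude, lon: pos.coords.longitude });
    },
    function (err) {
      // Failure (e.g. the user denied the permission prompt).
      onResult({ supported: true, error: err.message });
    }
  );
}
```

In a real page this would be called as `requestLocation(navigator, cb)`; a native app, by contrast, talks to the location hardware through platform APIs directly, without depending on what the browser chooses to expose.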
There is a place for both types of apps out there. The choice greatly depends on the objective of the application and the targeted audience.