Tuesday, October 20, 2015

My (Short) Wish List For Windows Azure

These are some of the features that I want(ed) to see in Windows Azure.

Ditch Web Role and Worker Role: These two services were a half-hearted attempt at Platform as a Service (PaaS) while trying to retain the control afforded by Virtual Machines (VMs). The debugging and deployment of these services were also complex and time-consuming. Microsoft has finally rectified this with Web Sites and Scheduler (previously called Web Jobs; I can't believe that name is dead already!) as replacements. Hopefully the web and worker roles will be sunset soon.

I told a friend a few years ago that PaaS would succeed only if there is mobility, i.e., a developer is able to write code once, then take it to any cloud provider and run it without modification. That day seems to have arrived with the new technology called "containers". As usual, the hype is deafening. Never mind that containers promise the same portability that Java did circa 1995; they are all the rage now. Docker is the front-runner in the container space. This is what the Docker web site states: "With Docker, developers can build any app in any language using any toolchain. “Dockerized” apps are completely portable and can run anywhere". Sound familiar? If developers code their application to a container, then, theoretically, they can move that application to any cloud provider, as long as that provider supports the container.

If I'm allowed to oversimplify, containers provide their child processes (the applications) with, ahem, a "container" or "virtual shell" that captures their standard input (stdin), standard output (stdout) and standard error (stderr). Containers can run only command-line programs, no GUI, at least not yet. That is how they remain lightweight. (Of course, don't quote me on this one, I'm no container expert.)
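My oversimplification can be illustrated with plain process plumbing. This sketch only mimics the "virtual shell" idea described above; real containers isolate far more than stdio (filesystem, network, process IDs, via kernel namespaces and cgroups):

```python
import subprocess
import sys

# Run a child process with its stdout/stderr captured by the parent,
# loosely mimicking the "virtual shell" described above.
# (Real containers isolate much more than standard streams.)
result = subprocess.run(
    [sys.executable, "-c", "print('hello from the child process')"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())   # hello from the child process
print(result.returncode)       # 0
```

The parent sees everything the child writes, and the child never touches the parent's terminal directly; that is the flavor of the "capture" being described.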

Docker (in fact, every container technology) targets only the Linux operating system. This could be a disadvantage for Docker, or a sinister implication for Microsoft. I'm worried that the latter is the case and the world is ignoring Windows. Even so, if containers live up to their promise of "write once, run anywhere", the industry would be well served.

I digressed. Let me continue with my list.

Rationalize Compute Power: Remember the days when computer manufacturers used to one-up each other with megaFLOPS, gigaFLOPS and teraFLOPS? Today nobody cares; even a wrist watch can provide a few hundred megaFLOPS of compute power. FLOPS could be a good way of rationalizing the compute power of the hundreds of CPUs available in the market. Of course, the net compute power would vary based on the amount of memory, the address and data bus sizes, solid state disks, etc., but a formula could be devised to take them into account.
In any case, CPU power has to be rationalized, somehow, for better CPU utilization and better profits! Once that is done, we can allocate a guaranteed maximum CPU percentage to a process.
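Here is one hypothetical shape such a formula could take. The formula and all of its weights are my own invention, purely to illustrate the idea of collapsing heterogeneous specs into one comparable number:

```python
def compute_score(gflops, ram_gb, bus_bits, has_ssd):
    """Hypothetical normalized compute score: raw FLOPS scaled by
    the other factors mentioned above. The weights are arbitrary,
    chosen only to illustrate rationalizing CPUs into one number."""
    score = gflops
    score *= 1 + 0.1 * ram_gb          # more memory, higher score
    score *= bus_bits / 64.0           # normalize to a 64-bit bus
    score *= 1.5 if has_ssd else 1.0   # SSDs reduce I/O stalls
    return score

# Two very different machines become directly comparable:
a = compute_score(gflops=200, ram_gb=8, bus_bits=64, has_ssd=True)
b = compute_score(gflops=300, ram_gb=4, bus_bits=32, has_ssd=False)
print(a > b)  # True: the "slower" CPU wins on the overall package
```

Once every machine has a score like this, a provider could price and allocate "compute units" instead of named CPU models.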

Containerize IIS: I believe this should be the next step in creating containers for Windows. Given that the vast majority of applications are going to be web (and mobile, of course!) applications in the future (yes, I get it, the future is already here), instead of porting Docker, IIS should be converted into a container. The IIS Application Pool might already offer this functionality to some extent. This container should be configurable with a maximum CPU percentage (as discussed above), and could be sold instead of web sites. This is essentially process virtualization.

Once process virtualization succeeds, maybe VMs will become redundant and everyone will start using physical machines again?

Let me know your thoughts. Thanks for reading.

(The Unpredictable) Continuous Evaluation System In US High Schools

It is that phase of my life: my kids are in high school. You might have guessed it from the title. Being an immigrant to the US (from India), I constantly compare my life in high school with my kids', and they could not be more different.

The education system in India operated differently. Each grade had one final exam at the end of the year, and that's what counted! Every other test that we took and all the homework that we did, such as class tests, mid-term tests, monthly tests, quarterly tests, etc., didn't count. They were just practice for the final exam. This was true from first grade to twelfth grade. Colleges granted admission based on the final exam scores of the twelfth grade (plus, for certain courses such as engineering and medicine, a nationwide test conducted by the government). It was a one-time evaluation. We crammed for the final exam, but that's about it. In fact, the same methodology was followed in many colleges as well!

I was introduced to the continuous evaluation system only when I entered a post-graduate degree course. Our university followed the semester system. In each semester, for each course, we had three assignments (or projects), three tests, one lab and one final exam. The schedule was given to us ahead of time. We had seven courses each semester. It was hard, but the hard life was at least predictable. We knew ahead of time when an assignment was due or when a test was looming. We were able to prioritize and plan. The number of tests, assignments and labs was standardized across the university, for all courses. All colleges in that university followed the same continuous evaluation methodology.

Now, let me compare that with what's happening in my kids' lives. They are introduced to continuous evaluation in middle school, and it gets really serious in high school. They have six courses per semester, and each teacher hands out varying numbers of assignments, projects and tests. When I look at the "StandardScores Progress Report" of the Lake Washington School District, I do not see much standardization. The grade for each course is made up of a different number of line items. Some courses have only five to ten line items, while some have more than fifty! Each grade is a weighted average of homework, assignments, projects, class participation, labs, tests and the final exam. Each teacher has his or her own weightage system. Some teachers club homework and assignments together and give them a weightage of 10%, while others combine assignments and projects and give them a weightage of 20%. Some teachers conduct pop quizzes, some don't. Some teachers provide extra credit or retries, some don't. Imagine six different courses with varying numbers of tasks to complete each day! Yes, I know, this keeps the kids on their toes, but believe me, the amount of hard work a kid has to put in just to maintain a reasonable grade is huge.
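To make the weighted average concrete, here is what one teacher's grading might look like. The category weights and scores below are entirely hypothetical; the fact that every teacher picks different ones is exactly the complaint:

```python
def course_grade(scores, weights):
    """Weighted average of per-category scores (each 0-100).
    Weights must sum to 1.0."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[c] * w for c, w in weights.items())

# One teacher's (hypothetical) weightage system:
weights = {"homework": 0.10, "projects": 0.20, "labs": 0.10,
           "tests": 0.40, "final_exam": 0.20}
scores = {"homework": 95, "projects": 88, "labs": 90,
          "tests": 82, "final_exam": 85}
print(round(course_grade(scores, weights), 1))  # 85.9
```

Now imagine six of these running in parallel, each with different categories, weights and line-item counts, and you have a picture of a semester.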

Sometimes I wonder if this is making them better citizens; maybe, or maybe not. Maybe such a grueling schedule is required to make them knowledgeable and prepare them to compete in this dog-eat-dog world, or maybe not. I don't know. Can this be standardized, with set schedules, to make everyone's lives easier? Yes, I believe so, but I'll leave that one for the experts to answer.

Of course, India doesn't want to be left behind. India's Central Board of Secondary Education has introduced a Continuous and Comprehensive Evaluation system. See here: http://en.wikipedia.org/wiki/Continuous_and_Comprehensive_Evaluation Based on Wikipedia's information, I see that it is more standardized with respect to the number of assignments, quizzes and tests.

Feel free to give me feedback. Am I getting this high school system wrong?

Windows 10 is a free upgrade, but why?

Microsoft announced last month that Windows 10 is a free upgrade for Windows 7 and Windows 8/8.1 consumers (enterprises still have to pay). This is a tectonic shift that the industry either did not notice or has decided to ignore.

There could be two reasons. First, Microsoft has started believing its bashers, who say that Windows doesn't matter any more. This is patently false. Windows still powers a billion-plus devices, and, more importantly, nearly all productivity work is being done on a Windows laptop or tablet. The non-Microsoft platforms have a long way to go when it comes to productivity applications. If Microsoft starts believing this, then it might become a self-fulfilling prophecy.

Or Microsoft wants everyone to upgrade to Windows 10, the first release of Windows after Satya's cloud-first, mobile-first strategy. Windows 10 is a unified platform across all form factors. The quicker users upgrade to Windows 10, the better it is for Microsoft to forge ahead without worrying about incompatibilities. Windows 10 is going to sport a new browser, codenamed Spartan. A web browser is already a platform, and will continue to acquire platform capabilities in the future. The new browser is a fresh start, a modern, standards-compliant browser, which is required if Windows is to run on other hardware architectures, such as ARM. Continuing to invest in the current Internet Explorer (IE) would only hold Microsoft back. I believe this is the most likely reason for offering Windows 10 as a free upgrade.

What do you think?

Visual Studio Community Edition, .NET Core for Linux, Mac OSX

Everyone covering the Microsoft Connect event (held on Nov 12th and 13th, 2014; videos are available here: http://channel9.msdn.com/Events/Visual-Studio/Connect-event-2014) leads with the headline "Open-Sourcing .NET", but I think two other announcements are the real deal. They were overshadowed by the phrase "open source". Here is what I think is important.

Visual Studio Community Edition: This should have been celebrated with fireworks. Until now, small businesses and startups were using the various "Express" editions of Visual Studio. Developers had to use one edition for web development, a different edition for desktop development, and yet another edition for Windows 8 development. This was really silly. Finally someone at Microsoft realized it and came out with a single, full-featured Community edition that can be used for any type of development. Take a look at the features here: http://www.visualstudio.com/en-us/products/visual-studio-community-vs It includes support for Python, Node.js, and Apache Cordova (for writing cross-platform mobile applications). What more can we ask for? This free edition will surely bring joy to millions of Microsoft developers, especially the ones living outside the US, where paying money for software is considered anathema. Now everyone can enjoy the world's best Integrated Development Environment (IDE).

.NET Core for Linux, Mac OS X: This is yet another underplayed but significant announcement. Of course, Java did this in 1995! Though .NET is late by 20 years, I'm sure it will beat every other technology stack on Linux to a pulp. What exactly constitutes Core, only Microsoft knows. For example, the ASP.NET MVC framework is NOT part of Core on Windows; does that mean it won't be available for Linux? That would be a deal breaker. The vast majority of .NET applications are based on frameworks and components that are available through the NuGet package manager. The frameworks and components that are not part of Core should be made available on Linux/Mac OS X as well. I'm sure there will be a lot of two-way traffic between http://www.mono-project.com/ and the Microsoft .NET team, portending good times ahead.

The only thing left to do is to come up with a Visual Studio Community Edition for Linux and Mac OS X, which would complete the ecosystem. If there can be an Office for iPad, why not a Visual Studio for Linux?

Open-Sourcing the .NET Source Code: This is the least impressive of all, but it garnered the most attention. The only business value is the marketing hype generated by the phrase "open source". No one can deny it; that hype is actually huge. Non-Microsoft developers get a sense of security, the warm fuzzies, when they hear the phrase "open source", and Microsoft Marketing knows it. As long as the required tools and the runtime are provided, the availability of source code will have little effect on the development of applications. Anyway, that's a debate for another day.

Let me know your thoughts. Thanks for reading!

Saturday, November 9, 2013

Hype, thy name is "Cloud"

To be honest, I'm tired of the word "cloud". The hyperbole about cloud computing has now reached the stratosphere. When a speaker mentions cloud computing, the first question in my mind is "what now?" I lose interest in what the speaker has to say, because I have already heard it all before! What is worse is that cloud computing pundits are so wrapped up in themselves, they don't seem to notice the "audience fatigue". Maybe they don't need an audience; maybe they only want to pat each other's backs.

What is cloud computing? The simplest definition of cloud computing is "it is a bunch of computers available for rent, on demand, over the internet". There are a few intricate features built on top of that, but this definition will suffice for all practical purposes.

Cloud computing is indeed useful under certain circumstances, but what riles me is that it is being prescribed as the single solution to all problems. I have come up with a few scenarios where it can be put to good use:

- For small companies (or departments) working on short-term IT projects, cloud computing can provide temporary hardware (and software) for development and testing purposes. Previously, companies had to buy hardware, especially for web applications, rack and stack it, install the requisite software, and then use the servers for testing. Today they can just rent the servers from one of the providers, use them for the duration of the project (usually a few months) and then shut them down. They can even shut down their servers during weekends and nights, when no one is working, to save some more money. This will definitely make small companies more productive and free up their time to concentrate on the problem at hand. They can always include the cost of renting the servers in the contract with their clients; this cost is usually very small compared to the overall cost of project execution, so the clients would gladly agree.

- For 24x7 companies, to augment their existing infrastructure. The keywords here are 24x7 and augment. Let me explain. The 24x7 companies are businesses that need to run their servers around the clock, for example, Netflix. These companies should have a base level of infrastructure, i.e., servers running in their own data centers. During nights, weekends, and holidays, the number of people accessing Netflix increases. At those times, Netflix can temporarily rent servers from the providers and join them to its network. This augments its capacity and enables it to service the increased demand. Once the demand goes down, the rented servers can be shut down.

What is cloud computing not good for? It is not good for 24x7 companies to run their base infrastructure on cloud providers. For example, Netflix, unfortunately, rents all of its computers from a cloud provider; it has no base infrastructure of its own. Renting computers is far more expensive than buying and installing servers and paying for the electricity, bandwidth and other utilities in a data center. Read Jeff Atwood's calculation here: http://www.codinghorror.com/blog/2012/10/building-servers-for-fun-and-prof-ok-maybe-just-for-fun.html Cloud computing prices are falling, and one day it might be cost-effective to rent all the computers all the time, but we haven't reached that point yet.

Remember, cloud computing does not eliminate the need for system administrators and engineers, who are required to install and configure the servers and maintain the systems. Companies would still have to employ the same number of people; there are no savings there. Hence the 24x7 companies should always have their own base infrastructure in their own data centers, and use cloud providers only to augment capacity during times of increased demand.

To counter the human resource cost, the industry has created a new role called "devops", in which developers double as system engineers, DBAs and network engineers, but it is too early to tell whether it will save money or introduce its own set of complexities. (Note: Microsoft has already eliminated the role of testers; developers do the testing as well.)

- 24x7 small companies and startups: The same cost argument is even more applicable to smaller companies and startups running server applications. The cost of renting servers takes the lion's share of the operational expenses of a smaller company. Hence smaller companies should host their own servers, and slowly move to the hybrid model described above in the Netflix example.

Comments are welcome. Please let me know your thoughts.

Friday, September 7, 2012

Apache contributes to reduction of Consumer Privacy in Do Not Track (DNT) debate

First things first, a definition of Do Not Track (aka DNT): Let's say that you went to a retailer's web site and searched for an LCD television. A few minutes later, let's say that you are on a different web site. Have you noticed that "LCD TV" advertisements magically come up on that site? This is because you are being tracked across web sites! What you do in one place is now visible to many other web sites, and they can tailor their offerings to your taste. This is called "personalization", and it happens without the user's consent. To prevent it, a Do Not Track (DNT) option is available in most web browsers today. Once the user sets it, the browser tells the web site(s) that the user doesn't want to be tracked. The beauty of the Web is that just as users have a choice, so do the web sites, in that they are free to ignore it! There is no law forcing web sites to obey the DNT choice of the users. (Official standards are published by the Tracking Protection Working Group.) Only a few web sites honor this user choice.

For the technically oriented, this is an HTTP header named DNT, sent by the browser when it accesses a web site. If DNT = 1, the user has opted out of tracking, i.e., doesn't want to be tracked; if DNT = 0, the user has opted in, i.e., wants to be tracked; and if the header is not sent at all, the user hasn't expressed a preference. The default behavior of the browsers is to not send the header. From this behavior we can see that the user not expressing a preference has the same effect on the user's privacy as the user opting in: the web sites will track the user in both of those cases. They will not track only if the user has opted out. Of course, this conforms to the standards published by the Tracking Protection Working Group.
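The three cases can be sketched as a tiny server-side check (a sketch of the semantics just described, not any real site's code):

```python
def tracking_allowed(dnt_header):
    """Interpret the DNT request header per the three cases above.
    dnt_header is the raw header value, or None if it was not sent."""
    if dnt_header == "1":
        return False  # user opted out: do not track
    # DNT == "0" (opted in) and header absent (no preference)
    # have the same practical result: the site tracks the user.
    return True

print(tracking_allowed("1"))   # False
print(tracking_allowed("0"))   # True
print(tracking_allowed(None))  # True: "neutral" equals opting in
```

The last line is the whole point: the absent-header case collapses onto the opt-in case, which is why "remaining neutral" is not actually neutral.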

Now comes Microsoft with their release of Windows 8 and Internet Explorer 10 (IE10). They did an awesome thing: they turned DNT on (DNT=1), i.e., opt-out by default. The user has the option of turning off DNT (i.e., opting in) as part of Windows 8 Setup. Technically, this is a violation of the published standard, because per the standard, the browser is supposed to remain neutral and not send any header. But we have already seen that remaining neutral is not actually neutral; it is equivalent to opting in!

Let's digress. Have you ever received in the mail a 10-page booklet explaining the privacy policy of your credit card company? The privacy policy would be published in a microscopic font size, and if you manage to read it, you will find something akin to this: "we will share your information with our business partners and affiliated companies for business purposes". No one will tell you what these business purposes are, but the behavior of all these financial institutions is "opt in" by default. This is wrong. The behavior should be "opt out" by default, just like IE10 from Microsoft. Today, you have to specifically send them a signed letter in the mail asking them not to share your information. Most of us don't do it, and hence our data is very easily discoverable. If a business has enough money, say a few tens of thousands of dollars, it can buy data on the entire US consumer population. And all this is legal. Believe me folks, this is true.

Then comes Roy Fielding, scientist par excellence. I looked at Mr. Fielding's bio, and I respect him for what he is. He is one of the architects of the HTTP protocol, one of the founders of the Apache Web Server (aka HTTP Server) project and one of the proponents of the DNT standard itself! But guess what, just like many luminaries, he also has a holier-than-thou attitude. He has come up with a patch for the Apache Web Server (note: the Apache Web Server is the most widely used web server in the world) that will ignore the DNT option if the browser is IE10. He wants to do this because Microsoft has violated the DNT standard by not being neutral. His argument is that the DNT option does not protect anyone's privacy unless the web sites respect it (as I said at the beginning of this post). That is correct, but spending time and energy coming up with a software patch to defeat one particular browser's setting? I call this crazy! He is probably one of the many people who hate Microsoft for no reason. IE10's default setting of opt-out is a small step toward increasing consumer privacy, and we should all support it (even though many web sites don't respect that option). The right course of action for Mr. Fielding (and his esteemed colleagues at the W3C) would be to change the DNT standard to a default setting of opt-out. Do what is good for the consumers; don't let that chip on your shoulder come in the way.
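The effect of the patch, reduced to pseudologic (this is a sketch of the behavior described above, not the actual Apache code; the real patch manipulates the header in the server's config layer, and I'm assuming it matches on the "MSIE 10.0" user-agent token):

```python
def effective_dnt(user_agent, dnt_header):
    """Sketch of the patch's effect: if the browser identifies
    itself as IE10, the DNT header is dropped before the site
    ever sees it, as if the user had expressed no preference."""
    if "MSIE 10.0" in user_agent:
        return None  # header discarded: IE10's opt-out is ignored
    return dnt_header

ie10 = "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2)"
print(effective_dnt(ie10, "1"))            # None: opt-out discarded
print(effective_dnt("Firefox/15.0", "1"))  # 1
```

Combined with the earlier observation that an absent header equals opting in, the net effect is that an IE10 user who opted out gets tracked anyway.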

The argument from internet advertisers and the web sites is this: they are providing a service free of charge (like most of our email, photo storage, blogs, etc.), hence they are entitled to track the users' behavior, sell it and make money off of it. OK, I agree that a business should make a profit, but users should have the option of protecting their privacy and paying for the services if they so wish. Not giving users a choice, ignoring their choice, or disabling their choice by creating ingenious software patches is reprehensible.

I implore the Apache Foundation to reject Mr. Fielding's patch. I implore Microsoft to not budge and to continue with the current default setting of opt-out.

Thursday, May 17, 2012

The morphing of Facebook

I don't use Facebook often, because of privacy concerns, but whenever I do, I find that Facebook is slowly morphing into a group or family discussion forum. The reason I say this is that nowadays Facebook contains (or shows) only the posts and photos of my closest family members and friends.

I'm sure Facebook has a complex algorithm to figure out whose status updates to show when I log in. I'm assuming it is calculated from the things I "liked" and the posts I "responded" to. If the algorithm is right, then it gives us a startling conclusion: after fervently adding a gazillion friends, once the novelty died down, we are capable of interacting only with family members and a few friends on a daily basis. And status updates and photos from only those people appear when we log in to Facebook.
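A toy version of the affinity scoring I am guessing at might look like this. Everything here, including the weights and the idea that comments count more than likes, is entirely hypothetical; Facebook's real ranking is far more complex and not public:

```python
def affinity(likes, responses):
    """Toy affinity score: weight responses (comments) more than
    likes, since they take more effort. The weights are invented."""
    return likes + 3 * responses

# Friends whose posts I interacted with most float to the top:
interactions = {"cousin": (12, 5), "coworker": (2, 0), "stranger": (0, 0)}
ranked = sorted(interactions,
                key=lambda f: affinity(*interactions[f]),
                reverse=True)
print(ranked)  # ['cousin', 'coworker', 'stranger']
```

If something like this is running, the feedback loop is obvious: the people you already interact with are shown more, so you interact with them even more.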

Our family (the extended family, including my cousins, nephews, in-laws, etc.) has always shared information through email, using Yahoo Groups. Now I get all that family information when I log in to Facebook. Of course, this conclusion applies only to my demographic: the middle-aged, middle-class, middle-income voter!

To prevent this automatic coalescing into a small group, I see that many people always click the "like" button on almost every status update, hoping that this will trick the Facebook algorithm into showing more variety on their home page when they log in.

At least for me personally, Facebook has replaced Yahoo Groups. Is Facebook any more useful than this? I don't think so, but only time will tell; maybe it will morph into something else in the future.

NodeJS vs IIS : IIS is faster at dishing out static HTML

I wanted to check if NodeJS would be the correct technology for one of my upcoming projects, so I did a rudimentary benchmark of NodeJS against IIS on Windows for dishing out static HTML. IIS does come out ahead: it is about 2.5 times faster than NodeJS on Windows.

Details of my benchmark can be found here, on one of my answers at stackoverflow: http://stackoverflow.com/questions/9290160/node-js-vs-net-performance/10641377#10641377

Updated: Tomcat appears to be the fastest server dishing out STATIC HTML on WINDOWS. Tomcat is about 3 times faster than IIS in responding to the same request.

Updated (5/18/2012): Previously I had 100,000 total requests with 10,000 concurrent requests. I increased it to 1,000,000 total requests and 100,000 concurrent requests. IIS comes out the screaming winner, with NodeJS faring the worst. I have tabulated the results below:

Friday, October 28, 2011

Will the Desktop ever be dead?

Though the pundits have been proclaiming the death of the desktop for quite a few years now (virtualization), the desktop has continued to survive and shows no sign of weakness; if anything, it is gaining strength. If a quad-core processor with 8 GB of RAM, a top-of-the-line video chip and a 1920 x 1080 HD screen can be made available in a 5-pound laptop, why would people not use it? Why would people abandon such a rich user interface and awesome processing power? It would be naive to expect people to give it up, be it Linux or Windows.

The topic "death of the desktop" has been given a new lease of life, thanks to some of the cloud computing gurus, who predict that after "everything" moves to the cloud, the user needs only a browser, and hence doesn't need a powerful desktop. They seem to confuse the browser and the desktop.

If the browser becomes the all-in-one program where the user edits all documents and presentations, works with audio and video files, writes code and performs the myriad other tasks that are today performed by separate applications, then that browser would be a humongous amalgamation of all those applications and would require all the processing power in the world to run. Performing all tasks inside a browser simply means that the user has only one application to deal with; it doesn't mean the need for processing power will go down.

The question "will the desktop ever be dead" is itself silly. It's like asking "will the computer ever be dead". All this debate about the desktop is simply fueled by unjustified hatred of Microsoft. In an effort to unseat Microsoft's dominant position, more and more features are being crammed into the browser so that it becomes the de facto operating system. Many people, including those who work in the technology industry, have the mistaken notion that if software is accessible on the internet (for example, Microsoft Office 365 or Google Docs), then it must be running on the server (or the cloud!). No, such software is written in JavaScript, downloaded by the browser and executed on the local machine. The software is skinny, but so are the features! If the day ever comes when software downloaded from the internet has the same features as desktop-installed software, the browser will have bulked up to be as heavy as the operating system.

The "death of the desktop" philosophy runs counter to the evolution of the tech industry and consumer behavior. Hardware capacity (CPUs, storage, network) has increased manifold over the years, and so have the software's complexity and the users' hunger to do more and more things sitting on the couch.

Of course, the hardware will change its form factor. The processing power of the mainframes became available in desktops, which is now available in a laptop or a netbook, and very soon it will be available in the pads and the slates. It doesn't mean the desktop is dead; it just means those devices are the new desktops. Our future is filled with increasingly powerful devices of all shapes and sizes. If you find evidence to the contrary, let me know.

Saturday, May 14, 2011

Facebook: few observations

I registered this blog quite a while ago, but never really found the time to post regularly. (Of course, the lack of talent to write interesting and useful information is the real reason why thousands of blogs, including this one, have gone stale on the Internet. Let's not publicize that.) I created my Facebook account last year. I know I'm late to the party, but hey, better late than never, right?

With great frenzy, I added as many friends as possible. The reason was twofold. One, it was a popularity contest. I wanted to have more friends than my friends! It was the Web 2.0 (or is it 3.0 now?) version of "mine is bigger than yours". I added everyone in my family, close friends, distant friends, faint acquaintances from my childhood and complete strangers who smiled at me while passing in the office hallway. Two, I didn't want to miss out on a networking opportunity. I didn't want to be less connected than my friends! What if I lose my job? These friends would help me get one. What if I wanted to start my own business? These friends would provide me with leads! Friends would provide me with handyman advice and travel tips. Friends would give me this and tell me that. They would help me get ahead!

The time-tested axiom that friendship is more about quality than quantity was lost on me. At first, I was hooked on a bunch of games. I logged in to Facebook whenever time permitted and raced around, hunted treasures and fought the mafia until my fingers ached. But over a period of time, I slowly started disengaging from Facebook, not only because I didn't have the time to read the deluge of status updates (not to mention the symptoms of carpal tunnel), but because most of the status updates were useless and some of them were actually irritating.

Many of the people I see on Facebook are like me, not very creative. Because they are on Facebook, they want to write something clever. I routinely see status updates like "sleepless in seattle", "is it friday yet?", "someone has got the case of mondays" or "back to back meetings". Seriously, people, if you cannot come up with a pithy quote or a witty sentence, it is perfectly alright to not write anything!

One disadvantage I noticed is that we have all of our friends and relatives in there: work friends, school friends, neighbors, close relatives, distant relatives and everyone else in one big group. Facebook does allow the creation of private groups, but I highly doubt that people are using them. The result is that almost all of the messages posted by my friends are lowest-common-denominator, i.e., people post only those messages that are acceptable to every one of their friends. Sometimes the office politics spills over, and on other occasions I have seen people express their personal problems or insecurities.

Facebook is definitely a great way to keep in touch with one's "social circle"; even if there is nothing to say, saying something commonplace like "great weather today" will keep us in our friends' memories. Probably that is why people indulge in meaningless smalltalk, knowing full well it has become a necessary evil. One good thing I noticed is that Facebook is a convenient way to share photos and videos with a large group of people.

Feature request to Facebook: Is there a way to separate messages based on the usefulness of the information? For example, if my friend says "i'm at the xyz restaurant", I don't want to see it, but if he says "I went to xyz restaurant, but it was closed. what a bummer", I want to see it, because the second message has more useful information than the first. I believe we will eventually get there.
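Even a crude heuristic could take a first stab at this feature request. The sketch below is purely illustrative (the word-count threshold is an arbitrary assumption; a real system would need actual language understanding, not a word count):

```python
def seems_informative(message, min_words=8):
    """Crude heuristic for the feature requested above: longer
    messages tend to carry more information than terse check-ins.
    The threshold is arbitrary and would misfire constantly."""
    return len(message.split()) >= min_words

print(seems_informative("i'm at the xyz restaurant"))  # False
print(seems_informative(
    "I went to xyz restaurant, but it was closed. what a bummer"))  # True
```

It happens to sort my two examples correctly, which is about all a word count can be trusted to do.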

Update May 26th, 2011: I came across this blog post by Dave Pell; it is very similar, but from a publisher's viewpoint: http://www.npr.org/blogs/alltechconsidered/2011/05/26/136654846/i-don-t-care-if-you-read-this-article (Note: I removed the "visitors" counter and "followers" gadget from the blog after reading this post!)

Update June 28th, 2011: The Google+ project has been released in limited beta; it contains the concept of "Circles" to alleviate the "common denominator" problem I mention above.