The friendship between humans and computing devices is a few decades old. From the flow of endless text on black-and-green screens to today's amazing designs spread across every screen size you can think of, graphical user interfaces have come a long way on the only available canvas: the screen.
The Genesis of Graphics
The evolution of the graphical user interface has deep roots, reaching back to the late '60s, and is not limited to a change of color palette in your favorite website or a new theme in the latest version of your operating system of choice. The command line never attracted the masses, and that was the single biggest hindrance to bringing computers to consumers at large. Initial attempts at pointing-device-based interfaces at the Stanford Research Institute led to further great inventions at Xerox PARC (Palo Alto Research Center), which were later famously adopted by Apple (and copied by Microsoft) and became a revolution.
Design, and Iterate, Iterate, Iterate.
Looking closely at the avalanche of changes that interfaces and device form factors have gone through, progress has been both fast and slow. We were not used to looking at screens for hours, and we did not adapt easily to working with computers. Though GUIs eliminated the need for "science know-how" to use a computer and added a human touch, the transition was not smooth and the learning curve was steep; that is why we have a dedicated discipline of study in Human-Computer Interaction.
To ease and comfort users, designers started incorporating real-world objects into the interface: a floppy disk icon for Save, a trash can for Delete, a magnifying glass for Search, and the list goes on. People could associate an action with its design at a glance, without having to read what it actually does.
From lending a sense of familiarity to being a pleasure to look at, graphical interfaces went through a lot of research and innovation over the years. Spend an hour on Dribbble or Behance and return to the real world, and everything starts to look ugly.
Loved, Hated Skeuomorphism
Defining skeuomorphism in the simplest words: a faux design that resembles real-world objects. Be it on-screen buttons that look like the light switches on the walls of your home, or the paper-tear edges of the Notepad app on your phone; the examples are many. In the beginning it added to accessibility, as users could easily associate interface elements with real-world objects and thus already knew how to use them.
But over the last two years, this design language got exaggerated on popular platforms (particularly iOS), and unnecessary elements became part of the interface, adding absolutely nothing to an application's usability, not to mention the stitched leather and fine-grain wooden veneers. While many designers were obsessed with this design language (Apple played a key role in promoting skeuomorphic design), a small set of designers started hating it.
There's a thin line between the smart and the dumb use of this design pattern, and it lies in the app idea itself. Certain apps still need those knobs and switches to give the user a greater sense of familiarity rather than distracting them. Virtual DJ for iPad is an example that makes efficient use of skeuomorphic design: a DJ professional knows exactly how turntables work in the real world, so operating them on a touch screen feels similar. Unfortunately, not all apps in the App Store follow the same idea, and you may not love what you see there.
Meet the Flat
Yes, the hatred is a result of our twenty years of experience with computing devices. We are already familiar with how a button works; we've been seeing buttons since childhood, so we don't need a resemblance of a real-world switch. The hatred has now become mainstream, and we call it Flat Design (or maybe, almost-Flat Design). Digging into the origins of so-called flat design, it turns out to be as old as the GUI itself: the first GUIs were very close to what flat designs are today, just not as pleasant, and without today's smart choice of colors.
Microsoft was actually the first in the industry to reintroduce this design language to the consumer market. It started with Windows Media Center, which Microsoft first released back in 2001 with Windows XP Media Center Edition (final release in 2002), to make way for computers into your drawing room from your boring office desk.
But for almost a decade, this design language (dubbed Metro, later Modern UI) remained boxed inside Media Center. Though Microsoft implemented it again on the Xbox gaming console and the Zune HD music player (sounds familiar?), it didn't make it into Microsoft's flagship products. But Metro UI had potential, huge potential, or at least Microsoft believed it did.
With the launch of Windows Phone (or rather a rename/rebrand of Windows Mobile) back in 2010, Microsoft was not only betting on its own smartphone platform again, it was betting the company's whole future on it, understanding that Apple had taken the industry by storm with the iPhone and later the iPad, while Google was steadily gaining ground with Android as we entered the post-PC era. Metro UI finally made it into Microsoft's flagship products, Windows and Office (2013). And what does all this mean for designers? Infinite inspiration for creating beautiful interfaces, and a few more reasons to hate skeuomorphism.
Solid colors are back in the picture; the Nokia-Microsoft offspring are a great example. And surprisingly, this breakthrough in great design did not come from design demigod Apple; what's more, it is so exactly opposite to Apple's design philosophy that Apple had to rethink and adopt it, if not create it: the famous Scott Forstall to Jonathan Ive change.
While Microsoft both created and mastered this new way of interface design, Apple has somewhat overdone it, and Android remains somewhere in the middle (the almost-flat segment), playing it safe.
Leather-Stitch Faux, to Almost Flat, to Flat
What this faux design is should be clear by now, as made clear through this long article. But what is "almost flat"? You know how Gmail, Google Docs, and the newer Google Maps look? That's "almost flat".
There is no artificial resemblance anywhere, and yet you can figure out what an icon is for: shadows are present yet subtle, edges are slightly beveled, solid color divisions replace gradients, and attention and respect are paid to geometry. This method of design looks appealing while being both simple and complex in its own way.
Flat design, on the other end, doesn't even involve shadows and bevels; it focuses purely on shape and color, and that holds true for both typography and iconography. Windows Phone is built entirely around this idea. And surprisingly, this design is not just "visual masturbation" for designers but carries greater accessibility and usability of its own. Text is pure white on a deep, solid-colored background; to some it looks cool, while for users with poor eyesight it is immensely readable, hitting two birds with one stone.
Gone are the days when a functional application that serves its purpose was enough to sell and earn millions; a pleasurable user experience is equally important to keep users engaged. No matter how great your algorithm for processing that large chunk of data is, if it has a shitty UI, nobody's going to use it; remember, we're dealing with those humans. From startups to big established companies, hiring gives equal (or sometimes even primary) importance to UI/UX designers alongside software engineers; Pinterest, Path, and the like are great examples of the same.
It's been a year since I started using GitHub, and all I have to say is: how the hell was I able to code before! Too loud? You may reconsider after reading the entire article. Unlike my earlier posts, this one is for programmers (and is also my first brief tutorial), so if you're NOT one of us (developers/programmers/coders/nerds/geeks), thanks for visiting my blog; you may want to read my previous articles instead. :)
DISCLAIMER: This article is not a tutorial on the Git version control system (or VCS in general); there are many places on the web where you can learn it from the beginning. Instead, I'll focus here on GitHub (with some parts of Git, obviously) and how to make it work for your next project.
Git and GitHub - A Primer
Git is a distributed version control system, distributed in the sense that many developers (irrespective of their location) can work on the same code, since there is no single central server that developers must push changes to or pull them from. It was originally created by Linus Torvalds (yes! the same awesome guy who created Linux), primarily as the VCS for Linux kernel development. Git is open source and fairly easy to learn and use, so it became popular over time, and a large number of projects now use it (including Android, Chromium, and many more).
Now, GitHub is a service built on top of Git. It adds social capabilities to Git, making it easy for many developers to work on a common project. It hosts your code in a public repository which can then be shared with the world. Developers can collaborate on a project, or they can fork a repository and contribute back. You get the point: GitHub is about "social coding", though of course you can keep your code private as per your needs.
You can sign up on GitHub for free and get a virtually unlimited number of public repositories with unlimited collaborators. While Git itself is all command-line, there are nice Windows and Mac apps (Linux users, you're already awesome!) that take care of most of the work for you via a decent GUI. But we'll do our work here on the command line, as it's fast and gives you more control.
Creating a Repository
Creating a repo in Git is a matter of running the following commands in a terminal: git init followed by git add . within the project folder you want to track. But creating a repository on GitHub is not a command-line affair. There are two ways: first, create a repository locally using standard Git commands and link it to an online GitHub repository; or do it in reverse, creating the online repo first, then cloning it to your computer and adding your files. Either way, you need to create the repo on GitHub via its web interface. I'll cover the second way, which I believe is the ideal approach.
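For reference, the first approach (local repo first, then linking it to GitHub) looks roughly like this; the remote URL below is only a placeholder, as the real one is shown on your new empty repo's page on GitHub:

```shell
# inside the project folder you want to track:
git init                        # start a fresh local repository
git add .                       # stage every file in the folder
git commit -m "Initial commit"  # record the first snapshot
# link the local repo to the empty repo you created on GitHub
# (placeholder URL; copy the real one from the repo's page):
git remote add origin https://github.com/<username>/MyProject.git
git push -u origin master       # upload master and set it as the tracked branch
```

The -u flag makes future git push and git pull calls default to this remote branch.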
Visit your GitHub homepage (the http://github.com/&lt;your user name&gt; URL) and create a repo from the "Repositories" section with the "New" button. Give your repo a unique name and hit "Create Repository"; in my case I've named it "MyProject".
There's an option called "Initialize this repository with a README". README files in a Git repository are usually Markdown-formatted files (with the .md extension) which are rendered when your repo's page is visited on GitHub. You may want to check this option while creating the repository and add a description for your project in the README later (by default it will contain your repo's name and the description you added).
Once the repository is created, you'll be presented with a page showing your code and the README file's description.
On this page you get an HTTP and an SSH URL for your repository, which you can use to manage your files on the GitHub repo. If your network allows SSH access (and if you prefer more raw access to your repo online), you can click the SSH button and get that URL; otherwise the HTTP URL is enough to get things done. In our case it is https://github.com/kushalpandya/MyProject.git. Copy the URL, open the terminal (if you're on Windows you can use the GitHub app to clone the repo; it'll show up automatically) and run git clone followed by the URL to clone the entire repo onto your hard drive.
It'll clone your repo into your current working directory, copying all its files and folders (for MyProject, there'll be only the README.md file). Once all the files are cloned, you should see them under the "MyProject" folder in your current working directory.
As you can see, we have only "README.md" here. Now we want to add something to this repo, and here's a lesson: the true power of Git comes from the concept of branches. Think of a branch as a shadow copy of the entire repo; you can make changes in one branch without affecting the others.
Every Git repo has a main branch known as "master". This branch is the highway: only stable, final code should live there. In our example we have only the master branch so far, containing the single file "README.md". We want to add a new file to the repo, but we don't want to make the change directly on master. Remember: every time you start working on a new part or feature of your project, DO NOT make those changes on the master branch; always, always create a new branch first and then proceed. So let's create a new branch using the following command.
git branch feature/HelloWorldProg
The above command creates a new branch called "feature/HelloWorldProg" containing everything the repo has in its master branch (it doesn't actually copy all the files; remember, Git doesn't duplicate real files, it tracks content within them). The new branch was made from master because we're currently working on the master branch. So what's the catch here? Git always creates the new branch from the branch you're on when the command is run.
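You can see the effect for yourself: git branch with no arguments lists your local branches, with an asterisk marking the one you are on. A quick sketch of the sequence:

```shell
git branch                          # lists local branches; * marks the current one
git branch feature/HelloWorldProg   # creates the branch, but does not switch to it
git branch                          # the new branch appears; * is still on master
```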
Now the question arises: how do you name a branch? Short answer: be concise, clear, and short. Long answer: naming a branch properly is extremely important, because once a project is in full-course development and you have created many branches in your repo, a branch without a concise name will leave you wondering what code lives where. What do I follow? If I'm working on a new feature for a project, I prefix the branch name with "feature/" followed by the actual feature name (slashes are allowed in branch names). If I'm working on a bug fix, I prefix the branch name with "bug&lt;id&gt;/", where the ID is the issue number I'm fixing (more on bug tracking on GitHub later in this article). But you can have your own naming conventions for branches; in fact, there's a great discussion on StackOverflow about best practices for naming Git branches, and the Git documentation describes what's syntactically allowed in a branch name. Also, the concept of branching and merging (yes, you can merge branches in Git, more on that later) is vast and beyond the scope of this article, but there are excellent guides on the subject that will help you understand just that.
Now back to our example. The command just created the new branch "feature/HelloWorldProg", but we're still on master. To start working in the new branch, run git checkout feature/HelloWorldProg, which switches to the newly created branch. You can also create a branch and immediately switch to it with git checkout -b feature/HelloWorldProg, instead of running two commands as we did. Since we created the branch from master, there's no difference yet; it has the same README file. Now let's add a new file called "HelloWorld.c" to our project.
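The two variants side by side (the -b form is commented out here because our branch already exists):

```shell
git checkout feature/HelloWorldProg   # switch to the branch we just created
# or, create a branch and switch to it in one step:
# git checkout -b feature/HelloWorldProg
```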
With the file added, go back to the terminal and run git status; it will show that you've added a new file in your feature/HelloWorldProg branch which Git is not yet tracking.
Before Git can track a file, it has to be added to the repo; just keeping it in the folder is not enough. So let's add it by running git add HelloWorld.c. This stages the file in our newly created branch. Now let's commit the change using the following command.
git commit "HelloWorld.c" -m "Hello World program added for C Language"
The above command commits the newly added file to the branch with a message (called the commit log). Another lesson: be wise with your commit messages; don't be overly verbose or overly brusque. You can write longer commit messages, but instead of describing everything in detail, be concise and include what's essential about the commit. After committing, running git log shows the list of all commits made so far in the branch (note that we're still working in feature/HelloWorldProg).
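The whole cycle for the new file, condensed into the equivalent and slightly more common git commit -m form (since the file is already staged, naming it in the commit command is optional):

```shell
git status              # shows HelloWorld.c as an untracked file
git add HelloWorld.c    # stage it so Git starts tracking it
git commit -m "Hello World program added for C Language"
git log --oneline       # compact history: one line per commit, newest first
```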
Observe that there's another commit with the message "Initial commit"; this was made by GitHub when the repo was created and the README file added, being the very first commit in the repo. And remember, you cannot switch branches until your current branch has all its changes committed. Now that we've completed the feature we were working on, it's time to push the changes back to our online repo on GitHub.
Before I proceed, let us understand the concept of a "remote" in Git. Since Git is a distributed VCS, there's no central server where all the code lives; rather, we can have remote repositories which are "linked" to our local ones. They can act as backups, with all branches and commits intact. A remote repository is linked to a local one by a URL, and that URL is identified by an alias. Since we cloned MyProject from GitHub onto our hard drive, our local copy is already linked to its GitHub URL (remember https://github.com/kushalpandya/MyProject.git?), because git clone sets that up while cloning. And that URL already has the alias "origin". You can check by running git remote -v to see the remotes available for this repo.
It shows the same URL for origin with both "fetch" and "push". These are the actions available for that remote, meaning we can push (upload) the changes we make and also fetch (download) changes from the same online repo. You can add as many remotes as you want, each with a unique alias, using git remote add followed by the alias and its URL. Note that while you can fetch from any Git remote (if it is accessible to you), you may not have rights to push to it; having a remote's URL does not mean you can write to its branches.
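For instance, adding a second remote looks like this; the alias "backup" and its URL are purely hypothetical:

```shell
git remote -v                        # each alias with its URL and allowed actions
# add a second remote (alias and URL here are only examples):
git remote add backup https://example.com/MyProject.git
git remote -v                        # now lists both origin and backup
```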
Coming back to our example, let us now push our changes (the new branch that we created) back to our online repo at GitHub using following command.
git push origin feature/HelloWorldProg
It tells Git: push the branch "feature/HelloWorldProg" to the URL with the alias "origin". Once the upload completes, you can go to your repo's page and see in the branch dropdown that the new branch has been created.
Selecting this branch will show the file we added. Remember, we didn't make any changes on the master branch, so the file only appears in the branch we created.
So, we've successfully created a branch and synchronized it online. But GitHub is all about working in teams, and so far we've only seen how to work individually. So let's jump into the most powerful feature of GitHub: forking.
When a repo is created on GitHub, it is public by default; anyone can view its code and all the commit activity that has taken place. And anyone can fork such a public repo into their own GitHub account using the "Fork" button on the repo's page. What happens is that the entire repo is cloned into the other account under the same name. Once forked, the repo appears on your own GitHub homepage, with the same name and a similar URL, which you can use to clone it onto your hard drive just as we did in our example earlier. You can then start working on your own fork of the project.
But how do you contribute back to the original author of the repo? Well, that's what forking is intended for. When you fork a repo, you get your own master branch of the same repo (with everything that existed in the original's master branch at the time of forking). Whenever you make changes to your own fork and commit them, an option to open a pull request appears on your repo's page. Pull requests are the real magic behind all collaborative projects on GitHub.
So, you've forked a repo, you have all its code, and now you want to add your own code so that it becomes part of the original repo you forked from. That's what a pull request does. Whenever your own repo has (committed, of course) changes that differ from the code in the original repo, GitHub will show you the option to open a pull request. Upon clicking it, you'll see the following page.
Note that for demo purposes I'm showing the pull request dialog on the same repo; otherwise, on the left you'd see a branch of the original repo, and on the right your repo's branch. In the pull request dialog, you need to describe the changes this pull request brings. Again, be clear and concise here too. When you click "Send pull request", the original repo's author is notified that you wish to merge some new code into the repo.
The author can then see the difference between their master branch and the code your pull request carries.
If they think your code is fine, they can merge it, and all your code is added to the original repo's master branch. Cool, right? But what if your pull request gets rejected? That will happen often on large projects. So let's learn some basics and best practices for forking and making pull requests.
As mentioned earlier, your fork is an exact copy of the master branch of the original repo (generally called the upstream) at the time of forking. But when changes are made to the original repo's master branch after you fork, your forked repo becomes outdated; it is not synced automatically. It's important to keep your fork up to date with upstream, and you can do that with remotes, which we learned about earlier. As we already know, GitHub adds a remote named "origin" by default, and our fork has one too, but it points to our own fork, not to the URL of the original repo. So we need to add the original repo's URL manually with the following command.
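The command itself, where &lt;original-author&gt; is a placeholder for the account you forked from:

```shell
# <original-author> stands in for the original repo's owner
git remote add upstream https://github.com/<original-author>/MyProject.git
git remote -v    # upstream is now listed alongside origin
```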
Here we created a new remote named "upstream" with the URL of the original repo (the term upstream is conventionally used for the remote of the original repo when it is forked). Once added, run git remote -v and you'll see upstream and its URL listed for both push and fetch. The catch is that you effectively cannot push to upstream, since you don't have write access to the original repo. You can, however, fetch from upstream using the following command.
git fetch upstream
It fetches whatever has changed in the master branch of the original repo, but remember, it does not yet apply those changes to your branches; you have to run the following command right after to merge in the changes made upstream.
git merge upstream/master
What this command does is merge whatever is in upstream's master branch into your current working branch (note the phrase "current working branch"). But what if you want to fetch and merge in one go? You can, with a single command (although it's not what I recommend).
git pull upstream [branch name you want to merge with]
What the above command does is first run git fetch on upstream, then merge the named upstream branch into your current working branch. So it does fetching and merging in a single step. It's more convenient than our previous flow but may not always be what you want; I prefer running git fetch rather than git pull, especially on forked projects.
Another important thing to keep in mind while working with forked projects: never make changes on your own master branch, since it has to stay in sync with upstream's master. If your pull request gets rejected for one reason or another, your master branch will contain changes that don't exist upstream, which will cause conflicts (merge conflicts, in official terms), and you'll no longer be able to cleanly sync your repo with upstream. So the following is the ideal flow:
Run git fetch followed by git merge to keep your repo up to date with upstream.
Create a new branch in your own repo and work on it.
Push your new branch to GitHub using git push origin followed by the branch name.
Make a pull request to original repo.
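Put together, the steps above can be sketched as one contribution cycle; the branch name feature/my-change is just an illustration:

```shell
git checkout master                # start from your (clean) master branch
git fetch upstream                 # download new commits from the original repo
git merge upstream/master          # bring your master up to date with upstream
git checkout -b feature/my-change  # all new work happens on a branch
# ...edit files, git add, git commit...
git push origin feature/my-change  # publish the branch to your fork on GitHub
# finally, open a pull request from feature/my-change on the GitHub page
```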
This ensures that your repo's master branch never conflicts with upstream. But what if you didn't follow the flow and things got screwed up? You can't re-fork the repo and start all over again, so what do you do? In such horrendous situations, you need to reset your repo (discard your changes and revert to upstream's state). Run the following to reset your repo to what was fetched from upstream.
git reset --hard upstream/master
Note that this only resets your local repo; we also have to fix our broken fork online. Do that with the following command.
git push --force
But why do such dirty work when you can always work with branches, right?
In the ideal circumstance where your pull request is accepted and merged upstream, you'll not only need to update your fork again using git fetch and git merge (which only updates the local repo on your hard drive), but also update your fork on GitHub by running git push origin followed by the branch name ("master" in this case, since you pulled changes from upstream's master branch).
Creating and Fixing Issues
GitHub is not only about sharing your code; it has an excellent bug-tracking system. Users can create an issue and assign it to fellow developers, fix an existing issue by going through its details, and create "milestones" comprising one or more issues (a milestone being a set of goals to be achieved within a certain timeline). Issues can be viewed from the "Issues" tab on your repo's page.
You can add labels to issues depending on their category, see who is assigned to an issue (the person supposed to fix it), or close the issue. You can create a new issue simply by clicking its button.
Note that each issue has a unique ID, which can be put to clever use. Not long ago, GitHub added the ability to close an issue just by including its ID in a commit message, as in "fix #12" or "fixes #12". When such a commit is pushed to the repo on GitHub, the issue with ID 12 is automatically closed. Pretty handy.
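In practice it's just an ordinary commit; the message below is only an example, and the issue number must of course exist in your repo:

```shell
# once this commit is pushed to GitHub, issue #12 is closed automatically
git commit -m "Handle empty input in HelloWorld.c, fixes #12"
```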
Other Powerful Features of GitHub
What if you're working on a project where every developer needs full access to one common repository? You can add "collaborators" to a repository from the Settings tab, and they'll get full access to your repo (unlike forking, where a user only has read access).
Branches are great, and you can view all the branches created in a particular repo from its "Network" tab, which shows where each branch was created from and who committed what in each branch (hover over the dots on the lines).
You can even see statistics on how developers are working on the project: commit frequency, top contributors, and so on.
You can also create Wiki pages for a repo (from the Wiki tab), which provide rich documentation for the project (in Markdown-formatted text). If you're a large team working on multiple projects, you can create "Organizations" on GitHub and add members so that each member has access to all the repos created within the organization. And apart from these great built-in features, GitHub has many apps (called "Service Hooks") available which are tailored for certain tasks. Also, be sure to check out GitHub Help, which covers the wide range of topics you need to know.
Remember I mentioned earlier that by default all repositories you create on GitHub are public, and that you need to buy private repositories if you want to keep your code private? Well, if you're a student, you can claim 5 free private repositories for 2 years by requesting a GitHub Educational Account. Don't worry, you won't have to create a new account; just visit the link and verify that you're a student. Happy coding. :)
Let's set some ground before I proceed, and shift ourselves back to the early, early nineties of technology. The WWW was the next big thing, and getting "online" was cool. More than 50% of the first-world nations (as they like to call themselves) had internet access. People started spending more time in chat rooms without leaving their own rooms, thanks to AIM and ICQ. IRC (Internet Relay Chat) was every nerd's underground for dark discussions. Computers left laboratories and entered living rooms.
Open Source Arena
Not long before (in the early '80s), Richard Stallman began a first-of-its-kind movement, creating open source software, free for everyone to use and modify, under the project GNU (a recursive acronym for GNU's Not Unix), which included a handful of applications and an operating system. Meanwhile, a champ from Finland, Linus Torvalds, was hacking around with computers during a college project and created a portable kernel that initially supported only the hardware he owned. The result was Linux. He actually created it after facing licensing issues with the then-popular OS MINIX, and he wanted to call it Freax (the letter "x" was bizarrely present in the name of every Unix-like OS), but thanks to the Internet age, the OS ended up titled Linux (from his first name Linus, and the ubiquitous letter "x").
Since then, thousands of developers have joined in, and Linux has become one hell of an OS that can run on virtually any hardware. With Ubuntu, Fedora, Red Hat, and others taking over desktops, Linux was no longer a terminal-only thing. Here's how an early GUI variant of Linux looked (Slackware 1.01, from 1993) running XFree86, an implementation of the X Window System.
Over the last couple of years Linux has been in constant evolution, adding support for numerous hardware combinations. Desktop Linux is taking on mainstream computing through variants like Ubuntu and its derivatives (I'll be focusing on Ubuntu specifically). We used to hear that Linux was meant for servers, but that is no longer the case.
Back in 2004, when Ubuntu (version 4.10) was released, it was little more than yet another Linux: the usual GNOME desktop and its set of default apps, just with a different theme and wallpaper in every release. Until 2010, Ubuntu remained much the same, shipping the latest version of the GNOME desktop with a different skin. And let's face it: GNOME copied many, many of its UI elements directly from Mac OS X.
So what was stopping Ubuntu, or any desktop Linux variant for that matter, from mass adoption in a personal computing market ruled by Microsoft Windows and Mac OS X? The answer: lack of originality. Yes, having something original and unique is difficult these days. While GNOME was already heavily inspired by Mac and Windows in many ways, hundreds of Linux distributions (or distros) were shipping it as-is with just a different set of icons and color schemes. And above all, any instability in a core GNOME app existed by default on every distro that shipped it, which is annoying, and Ubuntu was no exception.
But then came Unity, the desktop shell from Canonical (the company behind the Ubuntu project), built on top of GNOME and Qt. It originally targeted netbooks (yes, those cheap, small laptops which are dead now) but eventually made it into the standard desktop starting with Ubuntu 11.04. It was hated and rejected; many loyal Ubuntu users (including me) either switched the default desktop back to GNOME or jumped to another distro. But Unity had a future, a bright future. Canonical continued to support its development, and it became mature, more polished, and moreover, something unique in its own right. People could distinguish Ubuntu running Unity from other popular distros, which all looked the same. And it wasn't only about how Unity looked: with each version since 11.04, Ubuntu became more stable rather than just shipping buggy new features. Again, this approach matches Mac OS X in many ways, but Ubuntu releases are frequent (every six months) and free (community-driven open source development), which makes them special.
Above all, Canonical announced Ubuntu for phones at Mobile World Congress this year, which aims not only to get the OS onto our smartphones but to make Ubuntu a unified OS that runs across different hardware and form factors. This surely is a move to keep up in the handheld-device race, which is dragging down PC sales to a great extent in the Post-PC era (as Apple calls it).
On the other hand, Canonical has already partnered with big names to bring Ubuntu-ready devices to market, a bold move in a space where proprietary software vendors (Microsoft, Apple, and BlackBerry, to name a few) hold a large share of consumers.
Where is everything going?
The above was all about Ubuntu, but if we look at Linux at large, then Android needs no introduction here. It is open and growing really fast: 1.5 million daily activations as of April 2013, as Eric Schmidt pointed out at Dive Into Mobile 2013. Any other Linux-based counterpart will face tough competition merely surviving; ask Nokia, which killed all of its open source efforts to create a smartphone OS: Maemo, Moblin, and MeeGo (different names for essentially the same effort), none of which ever found commercial success. Even a mature platform like Symbian saw its end of life when Nokia finally adopted Windows Phone (which is already gaining a fair share of success). But that is not holding the Linux and OSS movements back; projects like Ubuntu, Tizen, Sailfish, and others have their own share of believers and users.
The press has given mixed responses to Facebook Home: some say it's an incredible leap for Facebook into the mobile space, while others call it a disaster. Some users complained that Facebook should stabilize its existing apps for Android and iOS first rather than working on newer products. But Facebook Home is indeed more than just an app, a launcher, or a "Facebook Phone," as some call it.
Back in 2007, when the iPhone launched, so did iOS (formerly iPhone OS). Apple redefined the way people see smartphones: a smartphone doesn't have to be a device with obscurely confusing keys, barely readable text, and a cluttered screen. Two years earlier, in 2005, Google had acquired Android Inc., and Andy Rubin joined Google. The acquisition was part of Google's mobile strategy, and the result was the Android OS and the Open Handset Alliance in 2007, a bugle call of war not only against Apple's iOS but against the whole "proprietary" software dynasty. People argue that Google copied iOS thoroughly, which, up to a certain extent, is true, but Google did some things (read: notifications, app development for beginners, etc.) better with Android than Apple did. Over the years, both giants behind these platforms have copied the best of each other and also put the worst of their own into the product. But the good part is that Google allowed others to dig into Android and fix what's not right, while Apple thinks it's always right!
Why explain so much about the origins of Android and iOS in an article about Facebook Home? Well, it's important, since both platforms are moving in the same constant direction: an icon-driven, monotonous interface with identical design and user experience in one way or the other. Facebook Home is not only an app but an inspiration for how one can push any platform to its limits and create something wonderful in terms of functional design. If not the vendors, then developers can take over the platform and set new trends in app design.
Cover Feed, Chat Heads, and the Launcher can make you forget for a moment that you're on good ol' Android, and Home makes you use Facebook more than you do on a regular basis. Apart from some unexpected behaviors and limitations in Chat Heads and the Launcher (Chat Heads simply don't appear if you're in a full-screen app that hides the notification bar, e.g. games, Flipboard, and the like, and the Launcher may not always work the way you expect it to), the overall experience of Home is stupendous. And this is just a starting point for how Android can be made more usable, more rich. While iOS users are left in the dark as far as Facebook Home is concerned, for its own reasons, apps like Path, Passbook, and Haze are a step toward a new horizon of mobile app design.
Apple deeply integrated Facebook into iOS starting with version 6, so a Home-like experience there might still be a dream. Still, this could be a reason for Apple to think again about the restrictions it puts on developers creating apps for iOS, and a reason for more people to adopt open software for their own good.
Just two days ago, Google announced that it is forking WebKit for use in its Chromium open source web browser (and thus everything that follows from it: Chrome, Chrome OS, the Android Browser, and others), naming the fork Blink. This initiated a great round of talk all over the web about how it's going to change Chrome, and browsers in general. So I decided to write a blog post (probably my first ever) on it.
KHTML, started by the KDE project back in the late '90s, led to a great deal of development and, with the support of many giants, turned into WebKit, which now has a new fork called Blink, as Google calls it. But WebKit was already doing so well, so why did Google feel the need for a separate fork and to take its development its own way? The first and foremost reason is that while Chromium uses WebKit, it is essentially not the exact same WebKit as we know it. In its simplest form, WebKit consists of WebCore (the parsing, layout, and rendering engine) and JavaScriptCore (the JavaScript engine); Chromium had already swapped JavaScriptCore for its own V8 engine and used its own multi-process architecture rather than WebKit's, so the two codebases had diverged well before the fork was announced.
But does this mean that Google and WebKit are no longer friends? What will happen to browsers that are already using WebKit or have plans to switch to it? Well, Google said that developers working on Blink can contribute the same features upstream if they wish to, and sharing features (and fixes) between both streams may be easy in the early stages, but as Blink moves forward, the situation may not stay the same. As far as Opera is concerned, it has said that it will be using Chromium as its base, not WebKit itself, so this isn't going to affect Opera, since it will be using Blink anyway. However, WebKit's development might be affected, since Google was a major contributor to the project, judging by the commits made to WebKit over the last couple of years.
And this is exactly what happened to the original KHTML when WebKit was forked from it, and it happened for the better, so this is a good sign for innovation. I won't be generous here: WebKit was so prevalent, with every other browser using it, that it had somehow monopolized the browser market. After Opera killed its Presto, Trident (IE's layout engine) and Gecko (Mozilla's) were the only contenders WebKit had. But this decision reignites the browser wars.
And what about us web developers? I'll get back to that one at the end of this article.
Not as popular as Blink became even before its birth, Servo is a brand new browser engine that Mozilla and Samsung have been working on for a while. It isn't a fork of Gecko or any other engine; it is written from scratch in a programming language called Rust (an open source programming language from Mozilla itself). We know that in its early days Firefox was an amazing browser (and it is still amazing for a large number of users), but let's face it: over the last couple of years it has become a huge bloat. Cold-starting Firefox took forever, extensions still need a restart to complete installation, and it is still not a true multi-process browser (referring to the stable channel of Fx). The situation started to improve with Fx 4.0, but Mozilla realized it very late; Chrome came along and took a huge toll on Firefox's global usage share.
Servo is being developed with modern hardware in mind: multiple processors with multiple computing cores, high-end GPUs to render HD graphics, and screens with millions of pixels, none of which existed in the early days of browsers.
So will Gecko die too? No. Browsers are nearly as old as the internet, and they were built around the same old conventions and hardware/software limitations, legacy that Servo gets to leave behind. But Servo is still in its very early stages (the GitHub repo is available here), and so is Rust (v0.6 as of this writing). So, technically, Gecko and Servo will not compete with each other early on, since Gecko is itself a mature and stable engine giving acute competition to its counterparts, but this will obviously change once Servo goes mainstream.
Also, Mozilla has no plans to integrate Servo into Firefox anytime soon. But Servo is a direct competitor to Blink in the future, even if it's still nowhere close to it now.
I’m a Web Developer, is this a storm coming?
It sounds like a nightmare when you learn that new browser engines are coming and you'll have to support them too: vendor prefixes, version checking, graceful degradation, the list of todos goes on. But this will not be the case. As far as Blink is concerned, vendor prefixes will be a thing of the past; instead of shipping prefixed properties, experimental features will sit behind a setting you enable, and your code will work with the standard syntax. Also keep in mind that while browser vendors are different, they work closely together when it comes to web standards, so the life of web developers is going to be easier anyway. The only thing we need to worry about for now is the growing number of device form factors, not browsers.
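In the meantime, the sane way to cope with engines old and new is feature detection rather than browser sniffing. Here's a minimal sketch: given an element's style object, probe for the standard property name first and fall back to prefixed variants. The function name and the simulated style objects are illustrative, not any real API; in a real page you would pass `element.style`.

```typescript
// Minimal feature-detection sketch (illustrative, not a real library API).
// Given a style object, find which variant of a CSS property the engine
// actually supports: the standard name first, then prefixed fallbacks.
const PREFIXES = ["", "webkit", "moz", "ms", "o"];

function supportedProperty(
  style: Record<string, unknown>,
  prop: string // camelCase, e.g. "transform"
): string | null {
  const capitalized = prop[0].toUpperCase() + prop.slice(1);
  for (const prefix of PREFIXES) {
    const candidate = prefix === "" ? prop : prefix + capitalized;
    if (candidate in style) return candidate;
  }
  return null; // not supported at all: degrade gracefully
}

// Simulated style objects standing in for element.style in two engines:
const blinkLike = { transform: "" };           // standard, unprefixed
const oldWebKitLike = { webkitTransform: "" }; // prefixed only

console.log(supportedProperty(blinkLike, "transform"));     // "transform"
console.log(supportedProperty(oldWebKitLike, "transform")); // "webkitTransform"
```

The point is that this code never asks "which browser am I in?"; it asks "does this property exist?", so it keeps working unchanged whether the engine is WebKit, Blink, Gecko, or something yet to be forked.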