Sunday, October 16, 2016

Google Penguin 4.0 Updates!

Latest Updates about Google Penguin 4.0 Released on Sep 23, 2016 

Latest Google Penguin 4.0 UPDATE:

The Google Penguin real-time algorithm, which began rolling out on September 23, 2016, has now fully rolled out to all Google data centers.

FACTS About Google Penguin 4.0 Algorithm:

Penguin is officially called a "web spam" algorithm, but in practice it focuses mostly on link spam. Google has continually told webmasters that it is a web spam algorithm, yet every webmaster and SEO concentrates mostly on links, and Google's Gary Illyes has said that focus is right: you should be mostly concerned with links when tackling Penguin issues.

What Google is saying here is that the Penguin algorithm evaluates the source site, not the target site. You must pay attention to where your links come from: links from quality sites count in your favor, while links from low-quality sites do not.

Gary says Google now simply devalues those links: instead of demoting the site, Google just ignores the bad links and doesn't count them at all.

However, Gary explains further:

Gary Illyes: It's not just links. It looks at a bunch of different things related to the source site. Links are just the most visible thing, and the one we decided to talk most about, because we already talked about links in general.

But it looks at different things on the source site, not on the target site, and then makes decisions based on those special signals.

But he also mentions that if you overdo it, your site can receive a manual penalty, so it is best to focus on good link sources and to list bad links via the disavow tool, as sketched below.
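
For reference, a disavow file is a plain text file uploaded through the disavow tool in Google Search Console: lines beginning with # are comments, domain: entries disavow every link from a domain, and bare URLs disavow individual pages. A minimal sketch, with hypothetical placeholder domains and URLs:

    # Spammy directory linking to us sitewide
    domain:spammy-directory-example.com
    # A single paid link we could not get removed
    http://link-seller-example.net/paid-links.html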

Google Penguin Algorithm in Real Time:

Penguin becomes more page-specific, not sitewide only.
Google has said "Penguin is now more granular. Penguin now devalues spam by adjusting ranking based on spam signals, rather than affecting ranking of the whole site"

As per Search Engine Land:

Penguin might impact specific pages on a site, or it might impact sections or wide swaths of a site, while other pages are fine.

Google Penguin Algorithm Full Updates List:

  • Penguin 1.0 on April 24, 2012 (impacting ~3.1% of queries)
  • Penguin 1.1 on May 26, 2012 (impacting less than 0.1%)
  • Penguin 1.2 on October 5, 2012 (impacting ~0.3% of queries)
  • Penguin 2.0 on May 22, 2013 (impacting 2.3% of queries)
  • Penguin 2.1 on October 4, 2013 (impacting around 1% of queries)
  • Penguin 3.0 on October 17, 2014 (impacting around 1% of queries)
  • Penguin 3.1 on November 27, 2014 (confirmed by Google, no impact given, Google considers part of Penguin 3.0)
  • Penguin 3.2 on December 2, 2014 (not confirmed by Google but based on publisher reports)
  • Penguin 3.3 on December 5, 2014 (not confirmed by Google but based on publisher reports)
  • Penguin 3.4 on December 6, 2014 (not confirmed by Google but based on publisher reports)
  • Penguin 4.0 & real-time on September 23, 2016 (since Penguin 4.0 runs in real time and updates constantly, the impact percentage changes accordingly)

List of Google Algorithm Updates:

Here is a list of Google algorithm update links:
  • Google Hummingbird
  • Google Mobile Friendly Update
  • Google Panda Update
  • Google Penguin Update
  • Google Pigeon Update
  • Google Payday Update
  • Google Pirate Update
  • Google EMD (Exact Match Domain) Update
  • Google Top Heavy Update

Source: Read more here: "Google updates Penguin, says it now runs in real time within the core search algorithm" (Search Engine Land).

Sunday, March 27, 2016

What is Google AMP?

Google AMP aims at making your internet life easier with faster browsing!

We all use the internet more and more in our daily lives. Now, with smartphones, we need the internet almost every hour, and most notably, we go back to certain pages more often than others. A few webpages are thus opened repeatedly on a particular device, and quite naturally, the user wants these webpages to open almost instantaneously.

What is AMP?

Generally, how fast these pages open depends on the internet connectivity, the efficiency of the webpages themselves, the browser quality and so on. So there are several external factors at play, and since there is hardly any way to optimise all or even many of them, ultimately the user suffers.

Google AMP

That is why AMP is here. It is the Accelerated Mobile Pages project, undertaken by many worldwide leaders of the internet, notably Google. The Google AMP project is similar in spirit to the Facebook Instant Articles we heard of a while ago. The FB initiative was meant to make web content accessible to a user much more quickly than reaching it through a browser. Although the FB initiative stalled for various reasons, Google and a few other major players have taken the idea up to extend similar services to users, as the intention behind the initiative was very positive!

Accelerated Mobile Pages Project

AMP as developed so far is a restricted subset of HTML with a limited JavaScript library and only a few specific components available. It is meant for read-only content rather than rich interactivity, so the full power of HTML and JavaScript is not required; only the components necessary for reading have been chosen. This keeps the project lean, so it consumes much less memory on any device where it is loaded, operates faster, and goes a long way toward the main objective: making webpages open faster than usual for users.

How do Accelerated Mobile Pages (AMP) from Google Work?

Google AMP is here with the sole objective of caching, pre-loading and pre-rendering the webpages frequently visited by a user. It has three essential components: AMP HTML, AMP JS and the Google AMP Cache.

Without going deep into the technicalities, it can be summed up like this: AMP HTML optimises the pre-loading and pre-rendering of frequently used webpages, and AMP JS makes sure they open faster than they would through any other browser. The Google AMP Cache, for its part, fetches the webpages and temporarily stores them in a cache so they can be served whenever the user wants to open those pages again. The whole exercise is not only to render the webpages faster but also to do so using the least amount of memory.

The Google AMP Cache is a proxy-based content delivery network that serves all images and JS files from the same origin when opening a webpage. As of now it uses HTTP 2.0, so pre-loading and pre-rendering are as fast as possible.
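
To make these components concrete, here is a minimal sketch of an AMP HTML page, assuming the standard AMP markup conventions: the ⚡ attribute marks the document as an AMP page, the script tag loads AMP JS asynchronously from the AMP CDN, and amp-img is used in place of the regular img tag. The canonical URL and image file below are hypothetical placeholders, and the required amp-boilerplate CSS is abbreviated for space:

    <!doctype html>
    <html ⚡>
    <head>
      <meta charset="utf-8">
      <title>Hello AMP</title>
      <!-- Points to the regular (non-AMP) version of this page -->
      <link rel="canonical" href="https://example.com/hello.html">
      <meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1">
      <!-- Required AMP boilerplate CSS, abbreviated here -->
      <style amp-boilerplate>/* ...required boilerplate... */</style>
      <!-- AMP JS, loaded asynchronously from the AMP CDN -->
      <script async src="https://cdn.ampproject.org/v0.js"></script>
    </head>
    <body>
      <h1>Hello, AMP world!</h1>
      <!-- amp-img replaces img so AMP JS can control loading and layout -->
      <amp-img src="photo.jpg" width="600" height="400" layout="responsive"></amp-img>
    </body>
    </html>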

Advantages and Examples of AMP Pages

The simplest advantage AMP pages offer is making frequently used webpages run faster than usual on your device. Different AMP projects try to achieve this in different ways. Besides the main intention, a few other things matter too, like consuming the least memory and having a leaner, faster operating procedure.

All of the projects are still going through validation stages, and continual R&D is improving performance day by day. But Google AMP may soon be ahead of all the others owing to its faster pace of development, technical expertise and understanding of user psychology. The user benefits the most, as s/he can now open frequently used webpages faster, adding to her/his convenience. A leaner AMP that consumes the least space in phone memory will help a lot, and that is exactly where the Google AMP projects of the future are headed.

Thursday, February 19, 2015

Rise of WhatsApp on Your Computer!

WhatsApp Web

WOW! Enjoy WhatsApp on your computer! Share pictures, videos and text, and type directly from your desktop!
Connect your WhatsApp to your PC NOW!

WhatsApp is now accessible both on your phone and your computer. 
At this time, WhatsApp Web is available only for Android, Windows Phone 8.0 and 8.1, Nokia S60, Nokia S40 Single SIM EVO, BlackBerry and BB10 smartphones.
When you use WhatsApp on your computer and your phone, you are simply accessing the same account on these two devices.

There are a few minimum requirements to enjoy WhatsApp Web:
  • You need to have an active WhatsApp account on your phone.
  • You need to have a stable internet connection on both your phone and your computer.
  • You need to use Google Chrome as your web browser (other browsers are not currently supported).
To get started with WhatsApp Web you must first pair your phone and computer:

  • Visit web.whatsapp.com on your computer.
  • Open WhatsApp on your phone and go to Menu > WhatsApp Web.
  • Scan the QR Code on your computer.

From your phone, navigate to WhatsApp Web to view your logged-in computers or to log out of an active WhatsApp Web session.
NOTE: To avoid data usage charges on your phone, we recommend that you are always connected to Wi-Fi when using WhatsApp Web.

I love WhatsApp on the desktop. If you have a laptop or PC, you can now type directly, send messages instantly and even copy-paste messages, which is quite easy on a desktop PC.

Best feature: you can send files from your PC, sharing directly from your PC to your contacts on WhatsApp. This is awesome because many times our phones don't have enough capacity to store images or videos, so instead we can now use this web feature of WhatsApp directly! It's fast, and no time is wasted!

I love multitasking; many times when I am working on the desktop, I can use Twitter, Facebook and Google+ alongside WhatsApp easily, as all are open in separate Chrome tabs. It's fast and easy.
Suppose I want to paste the same message into Twitter, FB, Google+ and WhatsApp; I can do it in seconds. You don't need to check WhatsApp messages on your phone when you are working on your PC, which is a cool feature.

You can even record your voice directly from the desktop and send it via WhatsApp. That's cool!


Thursday, May 22, 2014

ROBOTS.TXT - How to Block Pages, Directories?


Here are the robots.txt directives you can use: create a plain text file (in Notepad, for example) and place these rules in it if you would like to block a page, file, directory or the whole site. Note that each group of rules must be preceded by a User-agent line (for example, User-agent: * to address all crawlers); the first few examples below show only the Disallow line.


  • To block the entire site, use a forward slash.
    Disallow: /
  • To block a directory and everything in it, follow the directory name with a forward slash.
    Disallow: /junk-directory/
  • To block a page, list the page.
    Disallow: /private_file.html
  • To remove a specific image from Google Images, add the following:
    User-agent: Googlebot-Image
    Disallow: /images/dogs.jpg 
  • To remove all images on your site from Google Images:
    User-agent: Googlebot-Image
    Disallow: / 
  • To block files of a specific file type (for example, .gif), use the following:
    User-agent: Googlebot
    Disallow: /*.gif$
  • To prevent pages on your site from being crawled, while still displaying AdSense ads on those pages, disallow all bots other than Mediapartners-Google. This keeps the pages from appearing in search results, but allows the Mediapartners-Google robot to analyze the pages to determine the ads to show. The Mediapartners-Google robot doesn't share pages with the other Google user-agents. For example:
    User-agent: *
    Disallow: /
    User-agent: Mediapartners-Google
    Allow: /
Note that directives are case-sensitive. For instance, Disallow: /junk_file.asp would block /junk_file.asp but would allow /Junk_file.asp. Googlebot will ignore white-space (in particular, empty lines) and unknown directives in the robots.txt.

Pattern matching

Googlebot (but not all search engines) respects some pattern matching.
  • To match a sequence of characters, use an asterisk (*). For instance, to block access to all subdirectories that begin with private:
    User-agent: Googlebot
    Disallow: /private*/
  • To block access to all URLs that include a question mark (?) (more specifically, any URL that begins with your domain name, followed by any string, followed by a question mark, followed by any string):
    User-agent: Googlebot
    Disallow: /*?
  • To specify matching the end of a URL, use $. For instance, to block any URLs that end with .xls:
    User-agent: Googlebot 
    Disallow: /*.xls$
    You can use this pattern matching in combination with the Allow directive. For instance, if a ? indicates a session ID, you may want to exclude all URLs that contain them to ensure Googlebot doesn't crawl duplicate pages. But URLs that end with a ? may be the version of the page that you do want included. For this situation, you can set your robots.txt file as follows:
    User-agent: *
    Allow: /*?$
    Disallow: /*?
    The Disallow: /*? directive will block any URL that includes a ? (more specifically, it will block any URL that begins with your domain name, followed by any string, followed by a question mark, followed by any string).
    The Allow: /*?$ directive will allow any URL that ends in a ? (more specifically, it will allow any URL that begins with your domain name, followed by a string, followed by a ?, with no characters after the ?).
Save your robots.txt file by downloading the file or copying the contents to a text file and saving it as robots.txt. Save the file to the highest-level directory of your site: the robots.txt file must reside in the root of the domain and must be named "robots.txt". A robots.txt file located in a subdirectory isn't valid, as bots only check for this file in the root of the domain. For instance, http://www.example.com/robots.txt is a valid location, but http://www.example.com/mysite/robots.txt is not.
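
Putting the pieces together, a complete robots.txt combining several of the rules above might look like the following sketch (the directory, file and image paths are hypothetical placeholders):

    # Keep all crawlers out of the junk directory and one private page
    User-agent: *
    Disallow: /junk-directory/
    Disallow: /private_file.html

    # Keep Google Images from indexing anything under /images/
    User-agent: Googlebot-Image
    Disallow: /images/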


Monday, December 16, 2013

Latest SEO Updates December 2013

Some of the latest updates and information about SEO

1. Google is Tough on Repeat Offenders

One of the most fascinating and little-known areas of SEO news is Google penalties. When websites break Google's webmaster guidelines with outdated tactics like buying links, they're typically caught.
Google's head spam fighter, Matt Cutts, recently revealed in a Q&A session that it's much harder to come back and rank well after a second or third penalty. In fact, his recommendation for websites trying to improve their SEO after past use of purchased links is to use the disavow tool to wipe those bad backlinks completely.

2. Rich Snippets Could Be Rolled Out Soon

The author photos which appear next to search results once you've earned Google authorship are a form of rich snippet. However, it appears the world's biggest search engine is considering leveling the playing field.

3. In-Depth Results aren’t Going Anywhere

Back when Google's Hummingbird algorithm rewrite was launched, the search engine announced it would be showing in-depth search results for around 10% of queries that may require more complex answers. Recent SEO news has indicated that the in-depth articles initiative continues, and that every webmaster has an opportunity to have their content featured among these top results.

While many of the websites featured in in-depth results have extraordinarily high site authority, the Google webmaster blog announced the SEO news that you can improve your chances of being featured here by following the tactics it describes.

4. Google to Index App Content

If you haven’t yet built an app for your business, there may be even more incentive to get started now. Some of Google’s latest SEO news is that content from Android apps would soon be indexed like regular web pages.

5. Google PageRank is Now Up-to-Date

In perhaps the most shocking piece of SEO news in months, Google PageRank was quietly updated on December 6, 2013. Prior to 2012, the toolbar was updated on a quarterly basis, giving SEOs and content marketers regular access to their site's authority in the eyes of Google.

6. Semantic Search is on the Rise

Anyone who's been using web technologies for more than a decade remembers Boolean search operators. Early search engines weren't smart enough to pick up on things like plural words, and you had to join your queries together with operators like AND, OR and NOT. However, the deployment of Google Hummingbird, Bing's Satori and Facebook's Graph Search heading into 2014 is clear evidence that search engines are getting much smarter at picking up on the variation behind phrasing choices, a concept known as semantic search.

7. Google Penalizes Excessive Linking

Cutts recently addressed the long-standing belief that you should never exceed 100 links per page on your website. Turns out, it’s still a wise best practice. In the early days of search, major engines had trouble indexing content with more than 100 links. While the capability is now there, excessive linking can be a pretty serious red flag that someone’s being spammy. Cutts recommends that content marketers stick to a “reasonable number” of links.

