
Tuesday, January 10, 2012

Internet Marketing Statistics for 2012


The Internet in 2012

1) In 2012, the audience of internet users in the U.S. will expand by 3.1% to 239 million, representing 75.6% of the total population.
In other words...
2) More than 3/4 of the total population will be online in 2012. 

Mobile in 2012

3) Mobile internet users will reach 113.9 million in 2012, up 17.1% from 97.3 million in 2011. 
4) Smartphone users will reach 106.7 million in 2012, up 18.4% from 2011. 
5) In 2012, 94% of smartphone users will be mobile internet users.
6) All mobile phone users will reach 242.6 million in 2012, up 2.3% from 2011.
7) Mobile shoppers will reach 72.8 million in 2012. 
8) Mobile buyers will reach 37.5 million in 2012. 
9) Smartphone shoppers will reach 68.6 million in 2012.
10) Smartphone buyers will reach 36.4 million in 2012. 
11) Tablet users will reach 54.8 million in 2012, up 62.8% from 33.7 million in 2011. 
12) iPad users will reach 41.9 million in 2012. 
13) In 2012, 76.4% of tablet users will be iPad users. 
14) Adult-aged eReader users will reach 45.6 million in 2012, up from 33.3 million in 2011. 

Social Media in 2012

15) Facebook will reach 143.4 million US users in 2012, up 8.2% from 132.5 million in 2011.
16) About 2/3 of web users will use social networks in 2012. 
17) More than 90% of social network users will be on Facebook in 2012. 

Online Video in 2012

18) Online video viewers will reach 169.3 million in 2012. 
19) 53.5% of the population and 70.8% of internet users (up 7.1% from 2011) will watch online video in 2012.
20) Mobile video viewers will reach 54.6 million in 2012. 
21) Smartphone video viewers will reach 51.2 million in 2012. 

Ecommerce in 2012

22) 88.1% of US internet users ages 14+ will browse or research products online in 2012. 
23) 83.9% of internet researchers will make at least one purchase via the web during 2012. 
24) Online shoppers will reach 184.3 million in 2012, up 3.3% from 2011.
25) Online buyers will reach 154.6 million in 2012, up 4.4% from 2011. 

Source: HubSpot blog

Sunday, December 4, 2011

Google Algorithmic Updates


We've seen plenty of algorithm announcements from Google over the course of the year. In November, the company discussed ten recent changes it had made. Here's a recap of those:

Cross-language information retrieval updates: For queries in languages where limited web content is available (Afrikaans, Malay, Slovak, Swahili, Hindi, Norwegian, Serbian, Catalan, Maltese, Macedonian, Albanian, Slovenian, Welsh, Icelandic), we will now translate relevant English web pages and display the translated titles directly below the English titles in the search results. This feature was available previously in Korean, but only at the bottom of the page. Clicking on the translated titles will take you to pages translated from English into the query language.

Snippets with more page content and less header/menu content: This change helps us choose more relevant text to use in snippets. As we improve our understanding of web page structure, we are now more likely to pick text from the actual page content, and less likely to use text that is part of a header or menu.

Better page titles in search results by de-duplicating boilerplate anchors: We look at a number of signals when generating a page's title. One signal is the anchor text in links pointing to the page. We found that boilerplate links with duplicated anchor text are not as relevant, so we are putting less emphasis on these. The result is more relevant titles that are specific to the page's content.

Length-based autocomplete predictions in Russian: This improvement reduces the number of long, sometimes arbitrary query predictions in Russian. We will not make predictions that are very long in comparison either to the partial query or to the other predictions for that partial query. This is already our practice in English.

Extending application rich snippets: We recently announced rich snippets for applications. This enables people who are searching for software applications to see details, like cost and user reviews, within their search results. This change extends the coverage of application rich snippets, so they will be available more often.

Retiring a signal in Image search: As the web evolves, we often revisit signals that we launched in the past that no longer appear to have a significant impact. In this case, we decided to retire a signal in Image Search related to images that had references from multiple documents on the web.

Fresher, more recent results: As we announced just over a week ago, we've made a significant improvement to how we rank fresh content. This change impacts roughly 35 percent of total searches (around 6-10% of search results to a noticeable degree) and better determines the appropriate level of freshness for a given query.

Refining official page detection: We try hard to give our users the most relevant and authoritative results. With this change, we adjusted how we attempt to determine which pages are official. This will tend to rank official websites even higher in our ranking.

Improvements to date-restricted queries: We changed how we handle result freshness for queries where a user has chosen a specific date range. This helps ensure that users get the results that are most relevant for the date range that they specify.

Prediction fix for IME queries: This change improves how Autocomplete handles IME queries (queries which contain non-Latin characters). Autocomplete was previously storing the intermediate keystrokes needed to type each character, which would sometimes result in gibberish predictions for Hebrew, Russian and Arabic.


Source: WebProNews

Thursday, December 1, 2011

On Page SEO Factors

One of the most basic things a webmaster should understand is how to do on page SEO and which on page SEO factors to consider, both when creating a new website and when adding new content and pages to an existing one.

On page SEO predominantly pertains to tweaking your site's code, choosing the right keywords, and optimizing your content overall to make it more attractive and identifiable to Google and other search engines' web crawling bots so that they will in turn rank your site higher in the SERPs.

Here I've expanded on the major techniques you should implement on your site if you want to rank, and rank well at that. Keep in mind that the weight search engines place on these techniques is subjective, as no one knows for sure the algorithms behind Google's and other engines' ranking practices. No one factor has ever been unanimously considered more important than all others, so don't overlook any of these techniques. Most of them are quick and simple to implement, so there's really no reason not to, either.

Keyword On Page SEO Factors

I've said it before and I'll say it again: keywords are the gateways to your site and they play a huge role in on page SEO.

Once you know what makes a good keyword and how to do keyword research, you can find the keywords you should be using for your on page SEO. WordPress combined with a good free SEO plugin like All in One SEO Pack makes implementing most of the following keyword-related on page SEO factors as simple as can be, but let's take a look at where we need to use our keywords effectively.

Keyword(s) In Title Tags - The title tag is the line of clickable text which appears in Google's SERPs for each ranking page. It is also the text which appears at the top of your browser's window, naming or identifying the page you are currently browsing. It's important to work your best (most relevant, highest searched, lowest competition) keywords into your title tags, as Google weighs this over most other factors when indexing and ranking your page and, in general, when determining what it's about.
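
As a quick sketch, a title tag built around a target keyword might look like this (the site name and keyword are hypothetical):

```html
<head>
  <!-- Hypothetical example: the target keyword sits at the front of the title -->
  <title>Gibson Guitars for Sale | Example Music Shop</title>
</head>
```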

Keywords In Heading Tags - Effective use of heading tags helps Google know what is most important on your site, and what text it should pick out over the rest. It's been estimated that most people, on opening a new page, will instinctively read the top left of the page before other elements. Consequently, the most important message you want to get across, using your most important keyword, should be there, ideally in an H1 tag.

From here, less important keywords should be put in H2, and even less important but still noteworthy keywords should be in H3, and so on. An effective use of heading tags is a valuable skill to have in on page SEO.
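
A hypothetical heading hierarchy following this descending-importance rule (the keywords are made up for illustration):

```html
<!-- Most important keyword in H1, then progressively less important ones below -->
<h1>Gibson Guitars</h1>
<h2>Gibson Les Paul Models</h2>
<h3>Les Paul Accessories</h3>
```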

Keywords In the Body - There is no substitute for good content, and in SEO a large part of good content is effectively including keywords where applicable. This means no oversaturation, because, as I mentioned in the last chapter, that will get you penalized and possibly de-indexed. SEOers have argued for years over just the right saturation that will get you ranking well but not penalized, throwing out different figures to try to pin down this magical ratio.

My advice is not to waste your time trying to crunch ratios, as there is no perfect ratio. Besides, you have better things to do with your time and other ways to optimize your site. Just make it look natural, don't overthink it, and you'll be fine.

Words Surrounding Keywords - This all may seem a bit redundant, but the words around the keywords are just as important as the keywords which you include to begin with.

Google not only looks at the keywords you are targeting; it also looks at the words surrounding them to get a better idea of what your site is about. This also lets it check you for keyword stuffing and make sure you're not doing anything "black hat" which could get you in trouble.

Keyword In Domain Name - This takes some careful planning, but many SEOers agree that it has a decent amount of bearing on how Google ranks a site for a keyword. Continuing with this point, you can set your site up to be more SEO friendly as it expands by naming subdomains after keywords as well. So be as specific as possible when naming, if you can. For example, name a subdomain of a music site "Gibson-guitars" rather than just "guitars".

Note the use of the hyphen in the example. Hyphens represent spaces, so if you are using keywords in your URLs, break them up. While keywords in the subdomains don't carry quite as much weight as the top level of the domain, they're nonetheless important and helpful to Google in identifying what your site is about.

Source: SEO News

Friday, May 6, 2011

Indexing A Site

Before a site appears in search results, a search engine must index it. An indexed site will have been visited and analyzed by a search robot, with relevant information saved in the search engine database. If a page is present in the search engine index, it can be displayed in search results; otherwise, the search engine knows nothing about it and cannot display information from the page.

There are two ways to allow a search engine to learn about a new site:

   - Submit the address of the site manually using a form associated with the search engine, if available. In this case, you are the one who informs the search engine about the new site, and its address goes into the queue for indexing. Only the main page of the site needs to be added; the search robot will find the rest of the pages by following links.

   - Let the search robot find the site on its own. If there is at least one inbound link to your resource from other indexed resources, the search robot will soon visit and index your site. In most cases, this method is recommended. This may actually be quicker than manually adding it to the submission queue. The Google search engine is the quickest of the bunch.

   Try to make your site friendly to search robots by following these rules:

   - Try to make every page of your site reachable from the main page in no more than three mouse clicks. If the structure of the site does not allow this, create a so-called site map that lets this rule be observed.

   - Remember that search engines index no more than the first 100-200 KB of text on a page. Hence the following rule: do not use pages with more than 100 KB of text if you want them to be indexed completely.

   You can manage the behavior of search robots using the file robots.txt. This file allows you to explicitly permit or forbid them to index particular pages on your site.
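
A minimal robots.txt sketch along these lines (the directory and robot name are made up for illustration):

```text
# Allow all robots everywhere except the /private/ directory
User-agent: *
Disallow: /private/

# Forbid one specific robot from the whole site (name is hypothetical)
User-agent: BadBot
Disallow: /
```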

   The databases of search engines are constantly being updated; records in them may change, disappear and reappear. That is why the number of indexed pages on your site may sometimes vary. One of the most common reasons for a page to disappear from indexes is server unavailability. This means that the search robot could not access it at the time it was attempting to index the site. After the server is restarted, the site should eventually reappear in the index.

   You should note that the more inbound links your site has, the more quickly it gets re-indexed. You can track the indexing of your site by analyzing server log files, where all visits by search robots are logged.
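
As a rough illustration of that kind of log analysis, here is a small Python sketch that counts search-robot visits by user agent. The log lines and the list of robot signatures are made up for the example; a real script would read the server's access log from disk.

```python
import re
from collections import Counter

# Sample access-log lines in Apache combined format (contents are invented)
LOG_LINES = [
    '66.249.66.1 - - [06/May/2011:10:00:00 +0000] "GET / HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '157.55.39.5 - - [06/May/2011:10:01:00 +0000] "GET /about HTTP/1.1" 200 2048 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"',
    '10.0.0.7 - - [06/May/2011:10:02:00 +0000] "GET / HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (Windows NT 6.1)"',
]

# Substrings that identify search robots in the user-agent field
ROBOT_SIGNATURES = ("Googlebot", "bingbot")

def count_robot_visits(lines):
    """Count log lines whose user-agent field mentions a known search robot."""
    counts = Counter()
    for line in lines:
        # The user agent is the last quoted field of a combined-format line
        match = re.search(r'"([^"]*)"\s*$', line)
        if not match:
            continue
        agent = match.group(1)
        for robot in ROBOT_SIGNATURES:
            if robot in agent:
                counts[robot] += 1
    return counts

print(count_robot_visits(LOG_LINES))
```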

Thursday, April 14, 2011

Search Engine Ideology


   To understand the SEO process, you must know about the architecture of search engines. A search engine contains the following components:

   Spider. This program downloads web pages just like a web browser. The difference is that a browser displays the information presented on each page (text, graphics, etc.) while a spider does not have any visual components and works directly with the underlying HTML code of the page.

   Crawler. This program finds all links on each page. Its task is to determine where the spider should go either by evaluating the links or according to a predefined list of addresses. The crawler follows these links and tries to find documents not already known to the search engine.

   Indexer. This component parses each page and analyzes the various elements, such as text, headers, structural or stylistic features, special HTML tags, etc.

   Database. This is the storage area for the data that the search engine downloads and analyzes. Sometimes it is called the index of the search engine.

   Results Engine. The results engine ranks pages. It determines which pages best match a user's query and in what order the pages should be listed. This is done according to the ranking algorithms of the search engine.

   Web server. The search engine web server usually contains an HTML page with an input field where the user can specify the search query he or she is interested in. The web server is also responsible for displaying search results to the user in the form of an HTML page.

Spider+Crawler+Indexer component group might be implemented as a single program that downloads web pages, analyzes them and then uses their links to find new resources.
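
The component breakdown above can be sketched in a few lines of Python. This is only a toy illustration under simplifying assumptions, not how a real engine is built: a dictionary of made-up pages stands in for the web, one parser class plays the spider/crawler/indexer roles, and a plain dict serves as the database.

```python
from html.parser import HTMLParser

# Tiny in-memory "web": URL -> HTML, standing in for the spider's downloads
PAGES = {
    "http://example.com/": '<html><body><h1>Home</h1><a href="http://example.com/a">A</a></body></html>',
    "http://example.com/a": '<html><body><p>page a text</p><a href="http://example.com/">home</a></body></html>',
}

class PageParser(HTMLParser):
    """Indexer and crawler in one pass: collects page text and outgoing links."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.text = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)
    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

def crawl(start_url):
    """Follow links breadth-first, building an index of page text."""
    index, queue, seen = {}, [start_url], set()
    while queue:
        url = queue.pop(0)
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        parser = PageParser()
        parser.feed(PAGES[url])             # "spider": fetch and parse the page
        index[url] = " ".join(parser.text)  # "indexer": store the analyzed text
        queue.extend(parser.links)          # "crawler": queue newly found links
    return index

index = crawl("http://example.com/")
print(index)
```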

Wednesday, April 6, 2011

Some Common Mistakes In SEO


There are some common mistakes that creep into the SEO process; avoid them to run your SEO campaign effectively.

Graphic header

   Very often sites are designed with a graphic header. Often, we see an image of the company logo occupying the full page width. Do not do it! The upper part of a page is a very valuable place where you should insert your most important keywords for best SEO. With a graphic image, that prime position is wasted, since search engines cannot make use of images. If you must present a logo, the best way is to use a hybrid approach: place the graphic logo at the top of each page and size it so that it does not occupy the entire width, then use a text header to make up the rest of the width.
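
One hypothetical way to lay out such a hybrid header (file names, dimensions, and keywords are illustrative):

```html
<!-- Logo image kept narrow; keyword-bearing text header fills the rest -->
<div id="header">
  <img src="logo.png" alt="Example Music Shop logo" width="200" height="60">
  <h1>Gibson Guitars and Accessories - Example Music Shop</h1>
</div>
```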

Script navigation

   Sometimes scripts are used for site navigation. As an SEO, you should understand that search engines cannot read or execute scripts. Thus, a link specified with the help of a script will not be available to the search engine; the search robot will not follow it, and so parts of your site will not be indexed. If you use scripts for site navigation, you must provide regular HTML duplicates to make the links visible to everyone: your human visitors and the search robots.
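
A sketch of the idea, with made-up page names: a script-driven menu paired with plain HTML duplicates of the same links that robots can follow.

```html
<!-- Script-driven menu: invisible to the search robot in this scenario -->
<script>
  function go(url) { window.location = url; }
</script>
<select onchange="go(this.value)">
  <option value="/guitars.html">Guitars</option>
  <option value="/amps.html">Amps</option>
</select>

<!-- Plain HTML duplicates of the same links, which robots can follow -->
<p>
  <a href="/guitars.html">Guitars</a> |
  <a href="/amps.html">Amps</a>
</p>
```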

Session identifier

   Some sites use session identifiers. This means that each visitor gets a unique parameter (&session_id=) when he or she arrives at the site. This ID is added to the address of each page visited on the site. Session IDs help site owners to collect useful statistics, including information about visitors' behavior. However, from the point of view of a search robot, a page with a new address is a brand new page. This means that, each time the search robot comes to such a site, it will get a new session identifier and will consider the pages as new ones whenever it visits them. If you are interested in SEO for your site, I recommend that you avoid session identifiers if possible.
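
If session IDs cannot be avoided entirely, one common mitigation (not mentioned in the original text, and assuming the search engine honors it) is a canonical link element that points every session-specific URL back to a single address. The URL below is hypothetical:

```html
<!-- Placed in the <head> of every session-specific copy of the page -->
<link rel="canonical" href="http://example.com/catalog.html">
```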

Redirects
   Redirects make site analysis more difficult for search robots, with resulting adverse effects on SEO. Do not use redirects unless there is a clear reason for doing so.

Hidden text, a deceptive SEO method

   The last two issues are not really mistakes but deliberate attempts to deceive search engines using illicit SEO methods. Hidden text (where the text color coincides with the background color, for example) allows site owners to cram a page with their desired keywords without affecting page logic or visual layout. Such text is invisible to human visitors but will be seen by search robots. The use of such deceptive optimization methods may result in the site being banned, that is, excluded from the index (database) of the search engine.

One-pixel links, another deceptive SEO method

   Search engines consider the use of tiny, almost invisible graphic image links, just one pixel wide and high, as an attempt at deception, which may lead to a site ban.