
SEO Part 13 – ‘Linkbait’ Content

In the ‘Incoming Links’ section earlier in this series it is described how incoming links to a site can greatly improve PageRank and therefore help pages rank higher in the search engines. That section describes how to attract links to your site and how to aim for keywords in the link text. One of the best ways to attract incoming links is to publish content commonly referred to as ‘link bait’. This content has two principal aims:

1. To provide informative content for the site visitor
2. To attract external sources to reference or link to it

With this in mind it can pay to encourage articles to be created and published on your site, even by external authors. If you run a site on organic fertilisers then write some articles about the field and link to them from your front page. Over time, similarly themed sites will start to link to them if they contain relevant and useful information. Encourage your business partners to link to them. Blog sites such as SEOSolutions.co.uk are an excellent way to provide fresh new content which will ultimately attract external links – you could therefore look at adding a blog to your site, even if it’s just to discuss your latest product...

SEO Part 12 – Valid HTML and Accessibility

The W3C (World Wide Web Consortium) publishes specifications for web-based content and how it should be structured for consistent presentation across different platforms and browsers. The W3C also develops guidelines widely regarded as the international standard for web accessibility – the Web Accessibility Initiative (WAI). If web-based content conforms to W3C standards for valid HTML, XHTML, CSS and WAI then that content can be expected to be accessible to a wider audience. For this reason search engines are likely to give more weighting to ‘valid’ sites than to sites that do not conform. Google is beginning to move in the direction of favouring valid sites with ‘Accessible Search’ – http://labs.google.com/accessible/. This search ranks sites for the search string based on how accessible Google judges each site to be. Although this search is still in beta – or ‘Google Labs’ – it is widely expected that such technology will be used in the main Google search. You should begin to work towards making more of your content valid and accessible by following the specifications and guidelines on the W3C website and by using CSS and HTML...
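As a rough illustration (a minimal sketch rather than an official W3C template, with hypothetical page content and stylesheet name), a valid, accessible page declares a doctype and document language, keeps presentation in a separate CSS file and gives images descriptive ALT text:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
  <title>Organic Fertilisers – Product Guide</title>
  <link rel="stylesheet" type="text/css" href="styles.css" />
</head>
<body>
  <h1>Organic Fertilisers</h1>
  <p>Our guide to choosing an organic fertiliser.</p>
  <img src="fertiliser.jpg" alt="Bag of organic fertiliser" />
</body>
</html>

Markup like this can be checked against the specifications with the W3C validator at http://validator.w3.org/.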

SEO Part 11 – TITLE and ALT tags

The HTML tags TITLE and ALT are used on text links and images respectively to give a description to the link or image. ALT tags are also used to describe an image, while the TITLE tag suggests a title for the link’s destination. Both tags are displayed by different browsers in different ways, and auditory browsers can read the text aloud.

TITLE Tags

It has long been known that some of the search engines associate ranking and relevance to the target page based on the TITLE tags on the links that point to that page. This is therefore tied in with the text that is used to link to pages. The TITLE is meant to be a more descriptive passage on the link target and the text used to link to that page, so here again we should try to incorporate keywords and key phrases for the target page in the TITLE tag. To add text into the USC TITLE tags for a given link, the text needs to be added into the Tooltip field of the link.

ALT Tags

ALT tags have a similar effect to TITLE tags but apply only to the images they relate to. The text entered in an image’s ALT tag can be used by search engines for image indexing, but is also an indication of what the image content on that page relates...
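As a simple sketch (with hypothetical file names and text), the TITLE and ALT text are carried as attributes on the link and image markup:

<a href="organic-fertilisers.html" title="Guide to choosing an organic fertiliser">Organic fertilisers</a>
<img src="spreader.jpg" alt="Tractor spreading organic fertiliser on a field" />

Auditory browsers and search engines read these attribute values, so they are a natural place for the keywords and key phrases that describe the target page or image.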

SEO Part 10 – Internal Linking

As discussed in the external links section above, links are one of the main mechanisms by which search engines find new pages and determine the relevance and importance of pages. As with external links to your site, the internal links within a site play an important role in passing on PageRank and relevance. Firstly, ensure you have a very clear structure on your site such as the following:

It is important to consider that pages linked to from the homepage will inherit some of the page rank (PR) importance – as can be seen with the Google PR shown on the Google Toolbar. What you will commonly notice is that the further you traverse down the navigation tree, the more the PR degrades on a site. Pages that are linked to directly from the homepage will commonly have a PR just under the homepage PR, while pages that are only linked to from lower-level pages will have a low PR. Why does this matter, you might ask? Well, the higher the page rank of those underlying pages, the more chance they have of coming top for a given search that matches the TITLE, META and content of that page. The PR degradation discussed here can be altered by external high-PR pages linking to a page somewhere other than the homepage of your site. These in effect pass PR onto your pages, which can in some circumstances raise a page low in the navigation tree to a PR level the same as your...
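As a small sketch of this idea (with hypothetical page names), keyword-rich internal links in the homepage navigation might look like this:

<ul>
  <li><a href="/fertilisers/organic.html">Organic fertilisers</a></li>
  <li><a href="/fertilisers/liquid.html">Liquid fertilisers</a></li>
  <li><a href="/advice/soil-testing.html">Soil testing advice</a></li>
</ul>

Because these pages are linked directly from the homepage, they sit only one level down the navigation tree and so inherit a PR close to the homepage’s own rather than having it diluted across several levels.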

SEO Part 9 – Google Webmaster Tools (Sitemaps)...

Google Webmaster Tools (formerly Google Sitemaps) is a development by Google allowing webmasters to easily provide Google with more information on which pages to index and when they have been updated, generally making a site more Google-friendly for indexing and listing within Google search results. A ‘Google sitemap’ is an XML-based file which Google’s crawlers access on a frequent basis to check for new pages and updates to existing pages. Ideally you want sitemaps created on the fly or by overnight tasks – this ensures Google is kept up to date on a daily basis. All published pages are listed within the sitemap file with their last modified time. The SEOSolutions sitemap can be viewed here: http://www.seosolutions.co.uk/sitemap.xml

In addition to sitemaps, Google Webmaster Tools makes other interesting data available:

1. Data relating to top search queries for your site
2. Information on the last website crawl by Google
3. Website errors experienced by Google when traversing your site
4. Web page analysis
5. Index stats

More information on Sitemaps and Google Webmaster Tools can be found here:...
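As a brief sketch of the format (the URL and date here are illustrative only – see the Sitemaps protocol documentation for the full schema), a sitemap file lists each published page along with its last modified time:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.seosolutions.co.uk/</loc>
    <lastmod>2007-06-01</lastmod>
  </url>
</urlset>

A file like this can be regenerated by an overnight task whenever pages are added or changed, and is then picked up by Google’s crawlers or submitted through Google Webmaster Tools.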

SEO Part 8 – Robots.txt

A robots.txt is a text file that resides in the root directory of the web server. The purpose of this file is to indicate to visiting robots and crawlers which parts of the underlying website should be visited for the purpose of inclusion in a search engine. More information can be obtained on the following site: http://www.robotstxt.org/

In most cases you will want the search engines to access and crawl all parts of the site. An example robots.txt file is below which indicates to ALL crawlers and robots (indicated by ‘User-agent: *’) to index the whole site apart from content under the folder ‘mySecretFolder’:

User-agent: *
Disallow:...
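Based on that description (the exact path depends on your own folder layout), the complete example would look something like this:

User-agent: *
Disallow: /mySecretFolder/

Crawlers that honour robots.txt will then index every page except those under /mySecretFolder/; note that the file is advisory only and does not block access by badly behaved robots.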