Re-designing or redeveloping a website has become a major undertaking for companies as they seek to refine their online presence and invest in their websites. Yet, more than ever, the search engines themselves are the biggest obstacle to website improvement.
Planning for a Re-Design
In our development and Information Architecture consulting, one of the largest hurdles we encounter is managing the transition from the old website to a new architecture. For larger sites, planning that transition to preserve the links and rankings held by thousands of pages that will no longer exist is quickly becoming one of the most time-consuming tasks.
Surprisingly, the main obstacle to developing improved websites (both in architecture and in usability) is the search engines themselves. The method of retrieving pages into a central index for an algorithm is antiquated, as it does not account for improvements and changes to a website. In short, companies are being penalized for not being aware of the limitations of search engines, Google in particular.
What goes wrong:
1. In a new website project, the architecture of the site typically changes. Companies are becoming more aware of search-friendly programming and are implementing it in their development. However, when the new architecture goes live, it is the old architecture and the old page addresses (URLs) that hold the rankings.
Results: Rankings are lost as old pages are no longer available.
2. Incoming links to the website and to the deep pages within it no longer have a destination (page names usually change with a new architecture). This reduces the “link juice” carried to the website, as the destination of the link no longer exists.
Results: Decreased rankings and value based on incomplete (broken) incoming links.
Redirects
To remedy these situations, the old formula of URL rewrites and 301 redirects is employed to map the old pages to their newer counterparts. However, this takes server resources to accomplish. In a redirect, the old page is requested, and the server scans through its rules to see whether there is a new page to deliver in place of the old one. In doing this, rankings can usually be maintained.
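As a rough illustration, here is a minimal sketch in Python of what that redirect layer does; the URLs are hypothetical and the mapping would normally live in your web server configuration rather than in application code. The server looks up the requested old address and, if a mapping exists, answers with a permanent (301) redirect to the new one.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mapping of retired URLs to their new counterparts.
REDIRECTS = {
    "/products.asp?id=42": "/products/blue-widget",
    "/about_us.html": "/about",
}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        new_path = REDIRECTS.get(self.path)
        if new_path:
            # 301 tells crawlers the move is permanent, so rankings and
            # link value can (mostly) transfer to the new URL.
            self.send_response(301)
            self.send_header("Location", new_path)
            self.end_headers()
        else:
            # No mapping: the old page is simply gone.
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), RedirectHandler).serve_forever()
```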
Redirected Links
Links are also maintained, but they lose their value over time. A redirected link is not a direct link; the new destination may not be the page the original link intended, so some value is lost. A direct incoming link always carries the most link value. Site owners with hundreds or thousands of links, however, now have to go back and ask other webmasters, site owners and companies to edit the links on their sites to point to the new URLs in order to receive the full value. Is that really necessary? Is search engine technology so lacking in foresight that this will be the bane of webmasters and marketers for the next decade?
The issue with redirects is that every redirect consumes a fraction of the server's resources. A few redirects are fine; however, on sites carrying 8-10 years of history and thousands of pages, the redirects become a significant drag on the server.
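One way to keep that overhead in check is to load the accumulated old-to-new mappings into memory once at startup, so each request costs a single lookup rather than a scan through thousands of rewrite rules. The sketch below assumes a hypothetical redirect_map.csv file with one "old_url,new_url" pair per line; it is an illustration of the idea, not a drop-in replacement for server-level rewrites.

```python
import csv

# Hypothetical file accumulated over years of re-designs:
# each line is "old_url,new_url".
REDIRECT_FILE = "redirect_map.csv"

def load_redirects(path):
    """Load the full redirect map into memory once at startup."""
    table = {}
    with open(path, newline="") as f:
        for old_url, new_url in csv.reader(f):
            table[old_url] = new_url
    return table

REDIRECTS = load_redirects(REDIRECT_FILE)

def resolve(path):
    # A dictionary lookup costs the same no matter how many legacy URLs
    # exist, unlike scanning a long rule list top to bottom on every hit.
    return REDIRECTS.get(path)
```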
Duplicate Content
All of this also assumes that the redirects are written and applied properly. I am amazed at the canonicalization issues that still hinder websites and at the amount of work a webmaster is expected to perform in order to “help” the search engines.
I have worked with many programmers who do fantastic, innovative work and develop amazing applications within websites, only to have duplicate content issues hinder the site. What would otherwise be considered effective, user-focused programming has to be tossed out the development window in order to accommodate search engine crawlers. How many companies are even aware of duplicate content and of how it can hinder their rankings in search engines? How many websites are being penalized without ever knowing it?
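To make the problem concrete, here is a minimal sketch of the kind of URL canonicalization webmasters end up doing by hand; the domain and parameter names are assumptions, not a prescription. Several addresses that serve identical content are collapsed onto one canonical form so crawlers stop treating them as duplicates.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical query parameters that create duplicate URLs for the
# same content (session IDs, tracking tags, and so on).
IGNORED_PARAMS = {"sessionid", "utm_source", "utm_medium", "ref"}

def canonicalize(url):
    """Collapse common URL variants onto a single canonical form."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    netloc = netloc.lower()
    if netloc.startswith("www."):          # www.example.com == example.com
        netloc = netloc[4:]
    path = path.rstrip("/") or "/"         # /page/ == /page
    kept = [(k, v) for k, v in parse_qsl(query)
            if k.lower() not in IGNORED_PARAMS]
    return urlunsplit((scheme, netloc, path, urlencode(sorted(kept)), ""))

# All three variants map to the same canonical address.
print(canonicalize("http://www.example.com/widgets/?utm_source=feed"))
print(canonicalize("http://example.com/widgets"))
print(canonicalize("http://EXAMPLE.com/widgets/?sessionid=abc123"))
```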
Google asks: “Would I do this if search engines didn’t exist?”
Companies are developing websites smarter than ever, using search-friendly architecture, AJAX, CSS and other technologies in an attempt to make the experience better for their users. However, because the method of information retrieval is so outdated, these same companies are penalized for changing a site that, as far as rankings are concerned, would have been better left alone.
Essentially, Google has written the rules of website development, re-development and innovation. If a company is not aware of those rules, or does not invest the time and money to reverse-engineer its new website to accommodate outdated technology, then it is effectively penalized.
In short, the rule of “Would I do this if search engines didn’t exist?” (Google Webmaster Guidelines: Quality Guidelines – basic principles) is nonsensical, especially when paired with the latest news of Google’s attempt to solve the AJAX issue. Developers are left to struggle with increasingly outdated search engine technology in an attempt to have a new website (one that is hopefully better for their users) maintain its rankings.
I enjoy a good challenge, but this challenge is starting to come at the expense of innovation for developers and for the companies that want to improve their online presence and user experience. Rather than innovating in tools and applications, it’s time for the search engines to step up and improve the methods behind their core service – search.
Otherwise, true innovators are the ones who are penalized.