Last week, a gent by the name of Ruslan Abuzant got a rare peek at a portion of Google's algorithm, stumbling across it while looking at the cached version of a multi-language page. He was kind enough to post his findings on the Digital Point forums, which I found via Threadwatch.
Perhaps it's because it happened over the holiday weekend, but I thought it was a bit odd that more SEOs weren't as excited by this as I was. No, there's probably not A LOT that can be learned from this, but there is some, and it finally felt like being "through the looking glass" to get a rare glimpse of how Google really ranks pages.
e_supplemental=150000 --pagerank_cutoff_decrease_per_round=100 --pagerank_cutoff_increase_per_round=500 --parents=12,13,14,15,16,17,18,19,20,21,22,23 --pass_country_to_leaves --phil_max_doc_activation=0.5 --port_base=32311 --production --rewrite_noncompositional_compounds --rpc_resolve_unreachable_servers --scale_prvec4_to_prvec --sections_to_retrieve=body+url+compactanchors --servlets=ascorer --supplemental_tier_section=body+url+compactanchors --threaded_logging --nouse_compressed_urls --use_domain_match --nouse_experimental_indyrank --use_experimental_spamscore --use_gwd --use_query_classifier --use_spamscore --using_borg
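For what it's worth, these read like standard command-line flags: --name=value for valued flags, bare --name for booleans, and a "no" prefix (as in --nouse_compressed_urls) that turns a boolean off. Here's a minimal Python sketch of that convention, just to make the dump easier to pick apart; the parsing rules are my reading of the syntax, not anything confirmed:

```python
# Minimal sketch: split the leaked dump into flag name/value pairs.
# Assumes gflags-style syntax: --name=value for valued flags, bare
# --name for booleans, and a "no" prefix that negates a boolean
# (e.g. --nouse_compressed_urls). My reading of the convention only.
def parse_flags(dump: str) -> dict:
    flags = {}
    for token in dump.split():
        if not token.startswith("--"):
            continue  # skip anything that isn't a flag (e.g. truncated text)
        body = token[2:]
        if "=" in body:
            name, value = body.split("=", 1)
            flags[name] = value
        elif body.startswith("no"):
            flags[body[2:]] = False  # negated boolean
        else:
            flags[body] = True       # plain boolean
    return flags

sample = "--pagerank_cutoff_increase_per_round=500 --nouse_compressed_urls --use_spamscore"
print(parse_flags(sample))
# {'pagerank_cutoff_increase_per_round': '500', 'use_compressed_urls': False, 'use_spamscore': True}
```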
While this isn't EXTREMELY telling, there are some things we can take a look at here that are potentially useful. Perhaps the other reason SEOs weren't too excited is that, as you break this down, you tend to see a lot of the variables we often speculate about anyhow. TallTroll (hey Brendon – I'd link to ya if I knew any of your sites;)) mentioned on Threadwatch a while back:
The joke is that even if they published a definitive version of the algo, the kind of people who moan about Google still wouldn’t be any better off, since they STILL wouldn’t have any clue what to do with the information. Those who do know what to do with it already have a good idea of what the algo looks like, at least in broad terms, and so will gain little themselves.
I guess most SEOs don't NEED to know the algorithm, because they have largely adapted best practices to suit their process. They might adapt that process a bit if they knew the EXACT algo, and many folks have a pretty good guess of where the knobs are dialed to, although I'm certain it's far from a comprehensive understanding of exactly what the mountain of Ph.D.s at G, Y, and MSN have up their sleeves.
So without further ado, here's a bit of my speculation on what I thought was one of the coolest developments in a long time. It's only a piece of a much bigger thing, but I thought it was definitely worth a look once Matt confirmed it was real (and also that we will most likely NEVER see something like this again).
**Note:** This is pure speculation, and 99% of it may be pure trash.
Best guess: Could be about anything I suppose – potentially a metric for spidering frequency to the specific page
Best guess: spidering frequency to entire site?
Best guess: Metric for how CPU intensive site spidering was
Best guess: Perhaps how fast to spider the website based on server performance
Best guess: How many times the website has timed out on requests over time
Best guess: File size of the document – last time requested
Best guess: Latency of the webserver serving the requested document
Best guess: File size of the document – current request
Best guess: Total stored site size
Best guess: Total queries for the site category, or perhaps the specific site. Perhaps "navigational" queries are used to measure the popularity of a site?
**e_supplemental=150000** (the flag name is truncated in the dump)
Best guess: Threshold for placing results into the supplemental index
**--pagerank_cutoff_decrease_per_round=100**
Best guess: Some cutoff point for figuring link popularity – perhaps an incorporated trust filter to decrease link popularity by several multiples until it's found trustworthy
**--pagerank_cutoff_increase_per_round=500**
Best guess: Some cutoff point for figuring link popularity – see above (a toy sketch of the per-round mechanic follows below)
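If that guess is anywhere near right, the increase/decrease pair suggests a cutoff that gets ratcheted up quickly and relaxed slowly between rounds of some selection process. Here's a toy Python sketch of that mechanic; the capacity target, the scores, and the very idea of a "round" are entirely my invention:

```python
# Toy illustration of a per-round cutoff: tighten fast (+500) when too
# many documents clear the bar, relax slowly (-100) when too few do.
# Capacity target and score scale are invented for illustration.
def adjust_cutoff(cutoff, qualifying_docs, capacity,
                  increase_per_round=500, decrease_per_round=100):
    if qualifying_docs > capacity:
        return cutoff + increase_per_round       # over capacity: raise the bar
    return max(0, cutoff - decrease_per_round)   # under capacity: lower it

cutoff = 150000
for qualifying in (120000, 95000, 80000, 105000):
    cutoff = adjust_cutoff(cutoff, qualifying, capacity=100000)
    print(cutoff)  # 150500, 150400, 150300, 150800
```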
**--parents=12,13,14,15,16,17,18,19,20,21,22,23**
Best guess: Parent topical categories (think DMOZ) – or parent pages within the site (think SE theme pyramids or virtual site hierarchy)
**--pass_country_to_leaves**
Best guess: Choose primary country of origin for the website or page
**--phil_max_doc_activation=0.5**
Best guess: Threshold for maximum spidering of a website
**--port_base=32311**
Best guess: An indicator of filetype, or which datacenters the data is distributed across
**--production**
Not much to go on here.
**--rewrite_noncompositional_compounds**
From "Automatic Discovery of Non-Compositional Compounds" (Melamed, 1997):
Spaces in texts of languages like English offer an easy first approximation to minimal content-bearing units. However, this approximation mis-analyzes non-compositional compounds (NCCs) such as “kick the bucket” and “hot dog.” NCCs are compound words whose meanings are a matter of convention and cannot be synthesized from the meanings of their space-delimited components.
Best guess: Sounds like some implementation of LSA/LSI to create meaning from non-standard language. Perhaps some type of language AI.
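The paper's actual method is built on translation models over parallel text, but the cheap cousin of the idea is easy to show: bigrams whose words co-occur far more often than chance (high pointwise mutual information) are candidate compounds like "hot dog". A toy sketch of the intuition, not the paper's algorithm:

```python
import math
from collections import Counter

# Cheap proxy for candidate non-compositional compounds: bigrams whose
# words co-occur far more often than chance, i.e. high pointwise mutual
# information. The paper's real method uses translation models over
# parallel text; this only sketches the underlying intuition.
def pmi_bigrams(tokens, min_count=2):
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scores = {}
    for (w1, w2), count in bigrams.items():
        if count < min_count:
            continue
        p_joint = count / (n - 1)
        p_independent = (unigrams[w1] / n) * (unigrams[w2] / n)
        scores[(w1, w2)] = math.log2(p_joint / p_independent)
    return sorted(scores.items(), key=lambda kv: -kv[1])

text = "he ate a hot dog and she ate a hot dog while the hot day went on"
print(pmi_bigrams(text.split()))  # ("hot", "dog") scores near the top
```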
**--rpc_resolve_unreachable_servers**
Best guess: Have Googlebot revisit unreachable servers
**--scale_prvec4_to_prvec**
Best guess: Adjustments to the PR algo (a minimal PageRank refresher follows below)
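Whatever the scaling is for, the "prvec" is almost certainly a PageRank vector, and the published PageRank recurrence is no secret (Brin & Page, 1998). A minimal power-iteration sketch over a toy three-page graph of my own, just for reference:

```python
# Minimal PageRank power iteration over a toy three-page link graph.
# PR(p) = (1 - d)/N + d * sum(PR(q) / outdegree(q)) over pages q linking to p.
# Assumes every page has at least one outlink (no dangling nodes).
def pagerank(links, d=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        pr = {
            p: (1 - d) / n
               + d * sum(pr[q] / len(links[q]) for q in pages if p in links[q])
            for p in pages
        }
    return pr

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(links))  # "c" collects the most link popularity
```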
**--sections_to_retrieve=body+url+compactanchors**
Best guess: Disregard navigation that is consistent throughout the website – some type of block-level analysis
**--servlets=ascorer**
Best guess: Who the hell knows…not much to go on here…I'm grasping at straws already, if you got this far and didn't realize it;)
**--supplemental_tier_section=body+url+compactanchors**
Best guess: Additional block-level analysis, perhaps some duplicate content detection (see the sketch below)
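If duplicate-content detection is the right read, the textbook technique of that era is Broder-style shingling with Jaccard similarity. A toy sketch; shingle size and documents are my own:

```python
# Minimal duplicate-content check: Broder-style w-shingles plus Jaccard
# similarity. Shingle size and the example documents are arbitrary here.
def shingles(text, w=4):
    words = text.lower().split()
    return {tuple(words[i:i + w]) for i in range(len(words) - w + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

doc1 = "new york real estate agents help you buy a home in new york"
doc2 = "new york real estate agents help you sell a home in new york"
print(jaccard(shingles(doc1), shingles(doc2)))  # ~0.43: substantial overlap
```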
**--threaded_logging**
Best guess: Log more in-depth information (links, clickthrough rates, etc.) for this page
**--nouse_compressed_urls**
Best guess: Perhaps a fix for SIDs in URLs, or disregarding other types of URLs that create infinite loops – ignoring any variables after the question mark in a URL (see the sketch below)
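Whatever this flag actually toggles, the cleanup the guess describes – dropping session IDs and similar query-string noise before comparing URLs – is easy to picture. A sketch, with an invented parameter blocklist:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Sketch: drop session-ID-style parameters before treating two URLs as
# the same document. The parameter blocklist is invented for illustration.
SESSION_PARAMS = {"sid", "sessionid", "phpsessid", "jsessionid"}

def strip_session_ids(url):
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k.lower() not in SESSION_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_session_ids("http://example.com/page?id=7&PHPSESSID=abc123"))
# http://example.com/page?id=7
```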
**--use_domain_match**
Best guess: Some type of canonicalization fixes
**--nouse_experimental_indyrank**
Best guess: Dunno, but it sounds like a good thing to start tryin' to figure out – perhaps they finally ARE going to roll toolbar or user data into the algo. Perhaps personalization finally making its way in.
**--use_experimental_spamscore**
Best guess: Newer version of the below spamscore – number filters that give an indicator of how likely a page is to be spam.
**--use_gwd**
Best guess: Not much to go on here – I'll go with "Google word database"
Other guesses have included "Google web directory" or "Google world domination"
**--use_query_classifier**
Best guess: Something as simple as classifying queries by intent
Similar to Yahoo Mindset
More likely a deeper extension of the above.
Query-specific variables for certain verticals –
Think “transactional real estate” – new york real estate agent
“informational real estate” – new york real estate news
These criteria would also help to decipher which queries to serve "onebox results" for: Froogle, Google Base, Google Local, Google Maps, etc. (see the toy classifier below).
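Just to make the idea concrete, here's a toy rule-based classifier along those lines. The cue words are my guesses, obviously not Google's:

```python
# Toy rule-based query-intent classifier along the lines sketched above.
# The cue-word lists are invented for illustration.
TRANSACTIONAL_CUES = {"buy", "agent", "hire", "price", "cheap"}
INFORMATIONAL_CUES = {"news", "how", "what", "guide", "history"}

def classify_query(query: str) -> str:
    words = set(query.lower().split())
    if words & TRANSACTIONAL_CUES:
        return "transactional"
    if words & INFORMATIONAL_CUES:
        return "informational"
    return "navigational"  # fallback: likely looking for a specific site

print(classify_query("new york real estate agent"))  # transactional
print(classify_query("new york real estate news"))   # informational
print(classify_query("threadwatch"))                 # navigational
```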
**--use_spamscore**
Best guess: The "non-beta" or working version of the above-mentioned spam score, which is a constant work in progress. Things like multiple dashes in a domain are good indicators of a high likelihood of a page being spam. Domain names over a certain length, and probably many other things, would fall into what could be used to evaluate a site's "spamscore" (see the sketch below).
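The two signals named above – dashes in the domain and overall domain length – make for an easy sketch. The weights and length cutoff are invented:

```python
# Sketch of a heuristic domain spam score using the two signals named
# above: dashes in the domain and overall domain length. The weights
# and length cutoff are invented for illustration.
def domain_spam_score(domain, dash_weight=2.0, length_weight=0.5, max_len=20):
    score = domain.count("-") * dash_weight
    score += max(0, len(domain) - max_len) * length_weight
    return score

for d in ("example.com", "buy-cheap-new-york-real-estate-now.info"):
    print(d, domain_spam_score(d))  # higher score -> more spam-like
```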
**--using_borg**
Best guess: A. Some technology or systems developed by Anita Borg (time for some homework) – or B. Google really *IS* trying to take over the world, and we're all being added to a massive database – I'm going with A as my best guess though;)
People sometimes have a hard time understanding that algorithm variables are not necessarily good or bad, fair or unfair…they are only effective or ineffective at judging quality. People evaluate search results subjectively, but a search algo applies objective criteria of many different kinds to produce the final result. A webmaster may think that tracking the number of times a site goes down is "unfair", but on a massive scale it is an accurate indication of the quality of a website.
I'm sure the boys at the 'plex are getting a nice chuckle out of some of my wild speculation, so I'd like to be my normal Google-nitpicking self and add my own two cents to Matt's super beta-algo (I like where it's going:):
--initial_time_travel_wormhole="Wednesday, December 31 1969 11:11 pm"
You may be better off with:
--initialize_flux_capacitor="November 5, 1955, 0600 AM" (stop Doc Brown!)
Hope this helps spice things up a bit:)
We know there are hundreds if not thousands of variables and combinations, so you have pretty good odds that you can pick SOMETHING that is in the secret sauce SOMEWHERE. This could of course be just another ploy to keep SEOs busy and wondering rather than actually WORKING on creating more websites;) Anyone else care to toss out their best guesses on what some of this stuff may or may not mean? Wasn't anyone else excited to get a brief little peek at the code we all so diligently try to reverse engineer?