Blue Hat Technique #18 – Link Saturation w/ Log Link Matching

Alrighty I’m moving this post up a bit to answer a few questions. In my Real Life SEO Example post I talked a bit about the technique of Log Link Matching. It’s an awesome technique that deserves a bit of attention. So here we go.

Description
The reality of the search engines is that they only have a certain percentage of the documents on the web indexed. This is apparent from looking at the saturation levels of your own sites. You're often very lucky if you get 80% of a large site indexed. Unfortunately this means that tons upon tons of the natural links out there aren't counting and giving proper credit to you and their respective targets. This is a double-edged sword: your competitors actually have quite a bit more links than it appears, and more than likely so do you. Naturally you can guess what has to be done.

Objective
Saturation usually refers to how many pages you have in the index compared to the total number of actual pages on your site. For instance, if you have a 100 page site and 44 pages are indexed, then you have 44% saturation. Since this is a topic that never really gets talked about, for the sake of making it easy on ourselves I'm going to refer to our goal as "link saturation": the number of links you have showing in the index compared to your total actual inbound links. So if you have 100 links in the index but 200 identifiable actual links, then you have 50% link saturation. That aside, our objective is to use methods of early detection to quickly identify inbound links to our sites, get them indexed, and if possible give them a bit of link power so the link to our site will count for more. The end result is a huge gain in the efficiency of our link building campaign. It will also more than likely stir up a large percentage of long-dormant links on our older sites that have yet to benefit from the Log Link Matching technique. First let's focus on links we've already missed by taking a look at our log files.

Methodology #1 – The Log Files
Our site's common log files are a great indicator of new and old inbound links that the search engines may have missed. Most log files are located below the root of the public_html folder. If you're on a standard CPanel setup, the path to the log file can be easily found by downloading and viewing your Awstats config file, which is usually located in /tmp/awstats/awstats.domain.com.conf. Around line 35 it'll tell you the path of the log file: LogFile="/usr/local/apache/domlogs/domain.com". Typically your site, as a Linux user, has access to this file and can read it through a script. If not, then contact your hosting provider and ask for read access to the log.

1) Open up the log file in a text editor, identify where all the referrers are, and parse them out so you have a nice list of all the sites that link to you. If you use Textpad you can click Tools – Sort – Delete Duplicate Lines – OK. That will clean up the huge list and organize it into a manageable size.
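If you'd rather script it than eyeball it in Textpad, here's a minimal sketch (Python, though any language works) that pulls the referrer field out of an Apache combined-format log and de-duplicates it. The log format and the domain filter are assumptions, so adjust them for your own server:

```python
import re

def unique_referrers(log_lines, own_domain="domain.com"):
    """Return a sorted, de-duplicated list of external pages that sent traffic.

    Assumes Apache combined log format, where the referrer is the
    second-to-last quoted field on each line.
    """
    refs = set()
    for line in log_lines:
        fields = re.findall(r'"([^"]*)"', line)
        if len(fields) < 2:
            continue
        ref = fields[-2]
        # Skip empty referrers ("-") and hits from our own pages.
        if ref.startswith("http") and own_domain not in ref:
            refs.add(ref)
    return sorted(refs)
```

Feed it the lines of the log file named in your Awstats config and you get the same cleaned-up list the Textpad sort gives you.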

2) Once you have your list of links there are several routes you can take to get them indexed. These include, but are not limited to, creating a third party rolling site map, roll over sites, or even distributing the links through blogrolls within your network. Those of course are the more complicated ways of doing it and also the most work intensive, but they're by far the most effective, simply because they involve using direct static links. The simplest, of course, would be to simply ping Blog Aggregators like the ones listed on Pingomatic or Pingoat. My recommendation is, if you are only getting a couple dozen links/day or are getting a huge volume of links (200+/day), then use the static link methods because they are more efficient and can be monitored more closely. If you're somewhere in between, then there's no reason you can't just keep it simple and continuously ping Blog Aggregators and hope a high percentage eventually get indexed. After enough pings they will all eventually get in anyway. It may just take awhile and is harder to monitor (one of the biggest hatreds in my life..hehe).

There are several Windows applications that can help you mass ping this list of referral URLs. Since I use custom scripts instead of a single Windows app myself I have no strong recommendations for one, but feel free to browse around and find one you like. Another suggestion to help clean up your list a bit is to strip out any common referrers such as Google, MSN, and Yahoo referrals. That'll at least save you a ton of wasted CPU time. Once you've gotten this taken care of you'll want to start considering an automated way of doing this for any new links as they come in. I've got a few suggestions for this as well.
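If you end up scripting the pinging yourself, the standard interface is the weblogUpdates.ping XML-RPC call that the services listed on Pingomatic speak. Here's a rough Python sketch; the rpc.pingomatic.com endpoint and the filter list are my assumptions, so double check them before you run it:

```python
import xmlrpc.client

# Assumed endpoint -- most ping services expose the same weblogUpdates API.
PING_ENDPOINT = "http://rpc.pingomatic.com/"
SEARCH_ENGINES = ("google.", "yahoo.", "msn.")

def clean_referrers(referrers):
    """Strip common search engine referrals so we don't waste CPU time."""
    return [r for r in referrers if not any(se in r for se in SEARCH_ENGINES)]

def ping_all(referrers):
    """Send a weblogUpdates.ping for each referring URL."""
    server = xmlrpc.client.ServerProxy(PING_ENDPOINT)
    for url in clean_referrers(referrers):
        try:
            result = server.weblogUpdates.ping("inbound link", url)
            print(url, "->", result)
        except Exception as exc:  # dead endpoint, malformed URL, etc.
            print(url, "failed:", exc)
```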

Methodology #2 – Direct Referrals
Of course you can continue to use the method above to monitor for new referrals, as long as you keep the list clean of duplicates. However, it doesn't hurt to consider accomplishing the task upon arrival. I talked a little bit about this last year in my Blog Ping Hack post, and the same principle applies, except instead of pinging the current page we'll ping the referral if it exists.

1) First check to see if a referral exists when the user displays the page. If it does exist (and isn't from your own domain), then have it open up the form submit for a place such as Pingomatic to automatically ping all the services using the user's browser. Here's a few examples of how to do it in various languages.

CGI CODE
if(($ENV{'HTTP_REFERER'} ne "") && ($ENV{'HTTP_REFERER'} !~ m{^https?://(www\.)?\Q$mydomain\E}i)) { print qq~<iframe src="http://pingomatic.com/ping/?title=link&blogurl=$ENV{'HTTP_REFERER'}" width="1" height="1" frameborder="0"></iframe>~; }
(The Pingomatic query string there is from memory, so double check it against their current ping form.)

PHP CODE
if($_SERVER['HTTP_REFERER'] != "" && !preg_match("~^https?://(www\.)?" . preg_quote($mydomain, "~") . "~i", $_SERVER['HTTP_REFERER'])) { echo '<iframe src="http://pingomatic.com/ping/?title=link&blogurl=' . urlencode($_SERVER['HTTP_REFERER']) . '" width="1" height="1" frameborder="0"></iframe>'; }

JAVASCRIPT CODE
if (document.referrer != "" && !/^https?:\/\/(www\.)?mydomain\.com\//i.test(document.referrer)) { document.write('<iframe src="http://pingomatic.com/ping/?title=link&blogurl=' + encodeURIComponent(document.referrer) + '" width="1" height="1" frameborder="0"></iframe>'); }
(Swap mydomain.com for your own domain.)

This will check to see if the referrer exists. If it does and it's not a referrer from within your domain, then it'll display an invisible IFRAME that automatically submits the referrer to PingOMatic. If you wanted to get a bit advanced with it, you could also filter out Google, MSN, and Yahoo referrers or any other unclean referrers you get on a regular basis.

If you have an older site and you use this technique, you'll probably be shocked as hell at how many actual links you already had. Like I mentioned in the other post, at first you'll start seeing your links tripling and even quadrupling, but as also mentioned, it's just an illusion. You've had those links all along; they just didn't count since they weren't indexed in the engines. After that starts to plateau, as long as you keep it up you'll notice a considerable difference in the efficiency and accuracy of your link saturation campaigns. I really believe this technique should be done on almost every site you use to target search traffic. Link saturation is just too damn fundamental to be ignored. Yet, at the same time, it's very good for those of us who are aware that it is not a common practice. Just the difference between your link saturation percentage and your competitor's could be the difference between who outranks whom.

Any other ideas for methods of early detection you can use to identify new inbound links? Technorati perhaps? How about ideas for ways to not only get the inbound links indexed but boost their credibility in an automated and efficient way? I didn't mention this, but when you're pinging or rolling the pages through your indexing sites it doesn't hurt to use YOUR anchor text. It won't help much, but it never hurts to push the relevancy factor of your own site to their pages while you're at it.


Real SEO Example

Every so often I get an email from someone who, instead of having a question or comment, gives me a URL and asks me to give them SEO advice on it. I normally don't respond to these emails other than maybe a quick "did you have a specific question" type response. It's not because I don't want to or take any offense to it; in fact it's the complete opposite. Sorry to say I just don't have the time to do full blown SEO analysis jobs for people. It's not that I don't want to, trust me I do, it's just that I'm not one of those SEO bloggers whose career is blogging about SEO. I'm in the thick of it just like you guys are, day in and day out. I'm out there doing the techniques I talk about on this site every day. That's my job, and there's no shortage of it. However, every once in awhile I'll hit a gem: one of those unique situations that I can't help but mull around in my mind. One such example came to me last week from a reader here who was referred by Jon Waraas (excellent blog; check it out), and I think it could apply to a lot of other people in the same situation. He was kind and patient enough to allow me to publicize my response in an effort to help out others. So out of the usual context I'm going to take this example and show you all exactly what I would do in his situation.

The Situation
His site is going for the term "Myspace Layouts." He used to rank #1 for a solid amount of time. However, after some time he admittedly got lazy and lost his ranking. He now still ranks steadily in the top 10, but not quite top 5. He wants to rank number one again and was wondering if implementing my Keyword Fluffing and SERP Domination strategies would help.

I consider "Myspace Layouts" to be a highly competitive term, so this will be a great example. It also has the added difficulty of being an extremely fast growing niche, especially in just the last year. However, our mission isn't just to take the number one rank, but to make it intimidating for the others thinking about trying to take it themselves. So first we'll look at what we've got, and then we'll analyze what we're up against and see if we can spot any weaknesses we can use to our advantage.

What We Got
Without even having to look at his site I can assume his on-site optimization is near perfect. He used to rank #1 steadily, and where he's at now he's holding firm. Obviously, if he ranked at one point there's no reason why he can't rank again. So my first suggestion would be to not do a damn thing to the site itself. In my SERP Domination post I talked about how to break into the top 10 of a highly competitive niche by splitting up the site into a network of smaller content sites. In this situation he's already in and has everything he needs to rank in the top position, so I would definitely advise against breaking up the site or making any drastic on-site SEO changes. Instead we'll focus on off-site optimization and getting the links we need to earn that coveted spot again. Right now he has about 320,000 inbound links according to the Yahoo linkdomain: command. About 303,000 go directly to the main page. ~86,000 of his links come directly from Myspace profiles. He also has a PR6.

What They Got
His top competitor has about 551,000 inbound links. 344,000 come directly from Myspace profiles. Taking a quick look at the second and third placed competitors, they are slightly below the number one and there's nothing too notable about them. So for now we're going to say fuck 'em and not concern ourselves with what they're doing, because we know if we take the #1 site we'll beat them as well. That, after all, is our main target, and without extenuating circumstances they can be ignored in this case. So with that information out of the way for now, let's look at some strong determining factors and some weaknesses of both sites in respect to their rankings.

Spotting The Weaknesses and Loopholes
Let's analyze the math real quick. Without Myspace profile links he has 234,000 links on his own from other sites. His competitor has about 207,000. He is the clear winner in this instance, which gives us a huge advantage. However, his 86,000 links directly from people's Myspace profiles account for only about 27% of his inbound links, while his competitor clearly dominates by having his 344,000 Myspace profile links account for a whopping 62% of his total inbound links. We now have both a strength and a weakness for each of the sites. More importantly, we now know why we're losing. Thus, we know what loopholes we'll need to exploit in order to win.

What Do We Know?
By looking at both our site and the competition, we know that if we can increase the Myspace profile links we have in the index to at least 62% of our total, we will win. 62% of his current 320,000 total is about 198,000, and since he already has 86,000, that means we'll have to gain about 112,000 links on Myspace profiles. We also know that our competitor is getting these profile links the same way we are: by giving out free Myspace layouts that include a link back to our sites. There is a big problem standing in our way though. Since they are ranked #1 and we're ranked lower, we can assume they are getting more traffic than we are. Therefore, they are gaining these profile links faster than we are by giving out more layouts. It's quite the pickle. We'll have to do one of two things: either increase the value of each profile link, or increase our volume at a much, much faster rate. Neither sounds like a fun solution, so we're going to have to take a few shortcuts. One of them involves a twist on a technique I've already talked about; the other is an upcoming Blue Hat Technique I have yet to discuss. First and foremost we need the proper mindset. We can't just try to beat this guy, we need to brutally destroy him and get him so far below us that he'll be stuck in the same situation we are in now. It's the only way to maintain our competitive edge. So get any thoughts of playing nice out of your head right now. They aren't going to do you any good.

Establishing A Plan Of Attack
First we'll need to do an interesting twist on my Link Laundering technique and merge it with our SERP Domination tactic. Remember when I mentioned in one of the comments that all the techniques I've talked about on this site fit together perfectly like a puzzle to create an ultimate ranking strategy? I wasn't fucking around. There's no reason why you can't create a network of sites within subniches that launder targeted links to your main site. He already mentioned in his original email that he wanted to create a few sites based on Myspace subniches to help build link popularity to his main site. Except, knowing what we know now, he doesn't need more links from outside sites. He needs more links from Myspace profiles. So I'd recommend doing just that. Build several sites on Myspace subniches such as image uploaders, profile editors, web proxies, etc. However, anytime they have an opportunity to get a link on the person's profile, put in the URL to your main site instead. Looking at the current scenario, he's going to need quite a few of these sites in order to catch up; I'd say about 15 that perform on the average or even a little below average. It also wouldn't hurt to sponsor a few of these types of sites in exchange for them laundering out links to you. I know places like Myspace web proxies have a hard time finding and keeping converting ads on their sites; they could be a very cheap solution. Just as long as they are consistently getting your main site's links on people's profiles, that's our ultimate goal.

You got to understand, this guy only has 344,000 links on profiles. That's not a big deal. If his site has been up for the last two years, that's an average of only about 500 new profile links a day, not counting a sharp increase from when he started ranking of course. Getting 15 subniche sites within your network doing about 1,000 uses/day each (15,000 profile links a day between them) isn't too incredibly hard. You can get that within a month or two. Once you accomplish that, you are more than matching him, which is progress in the right direction.

More than matching him? I'm sorry, I fucked up. At that rate you're nowhere NEAR where you need to be. There is the inevitable law of diminishing returns standing in your way. There are over 100 million Myspace profiles, and only about 10 million exist in Google right now. Therefore less than 10% actually get indexed. Since it's only the indexed ones that matter to your rank, even if you generate 15,000 profile links a day, you're really only getting about 1,500 that count. That's not nearly enough to close a 112,000-link gap while he keeps growing. Consider your ass kicked. Or is it? We may need the help of a good Ol' Blue Hat Technique to bridge the gap and turn that 10% into nearly 100%. The technique is called Log Link Matching. I'm not going to go into intricate detail about it until the post comes out of course, but I will hint at it and explain how it applies to this case.

Blue Hat Technique #XX – Log Link Matching
Anyone with experience can attest that not nearly all of the true links to your sites actually get indexed and counted in the search engines. It's unfortunate but true. Not every page is indexed by engines such as Google, and therefore even if those pages link to you, they can't pass any value. This is especially true for social sites like Myspace, where only about 10% of the profiles get indexed. Log Link Matching is a technique used to ensure that nearly 100% of your real inbound links actually get indexed by the search engines, thus dramatically increasing your visible inbound links and bumping your rank in an indescribable way. Here's how it works.

First create an automated Roll Over Site using my Abandoned WordPress Accounts Part 2 post. Did I mention all my techniques merge together? Well they do. Do this by first building up several WordPress blogs into high PR blogs using the Digg.com method as described. This will get plenty of search engine spiders to the site in a steady and fast stream, as well as give plenty of authority to the site (to boost each site). Lastly, make a system that allows you to quickly add links to your blogrolls within these accounts (quit being lazy and learn how to code!).

After you've got your WordPress.com blogs set up, start parsing through your site's log files. The log files are the first indicator of someone linking to your site. Every time someone pulls an image on their Myspace account from your server, it will show up in there, regardless of whether or not their profile actually shows up in the engines. Where there is a profile pulling your images, there is more than likely a link to your site. So you must get that profile page indexed in the search engines in order for the link to count. So parse through your log file for any referrers coming from profile.myspace.com or www.myspace.com/*. Remember, profile.myspace.com/viewprofile.blahblahblah?friendid=blahblah is a page on its own just like www.myspace.com/myusername. Therefore BOTH count. So there's no reason why our site can't instantly almost double all of his links from Myspace profiles at the exact same time as getting nearly 100% of them indexed. All he has to do is start adding each of these referrers to his blogrolls. After a certain number, WordPress.com will automatically limit how many links actually show up and start randomizing which ones are shown on each and every pageview (or spider view). I think that limit is currently 15, but I'm not entirely sure; it seems to change every so often. Either way, every time Googlebot visits the rollover sites it'll be greeted by XX amount of Myspace profiles to index. Eventually it'll get almost all of them. Thus the illusion of our inbound links tripling and even quadrupling daily will start happening. We won't actually be performing nearly to the degree it appears, but we definitely won't be short any needed momentum either. It'll just appear that way because many links we've had for a long time will finally start showing up and giving us proper credit.
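As a rough sketch of the parsing step (Python, with the combined log format as an assumption), here's how you might pull the unique Myspace profile pages out of your access log before feeding them into your blogrolls:

```python
import re

# Matches both profile.myspace.com/viewprofile...?friendid=... and
# www.myspace.com/username style referrers -- both count as pages.
PROFILE_RE = re.compile(
    r'https?://(?:profile\.myspace\.com|www\.myspace\.com)/[^"\s]+',
    re.IGNORECASE,
)

def profile_referrers(log_lines):
    """Return the unique Myspace profile pages found in the referrer field."""
    found = set()
    for line in log_lines:
        fields = re.findall(r'"([^"]*)"', line)  # combined log format
        if len(fields) >= 2 and PROFILE_RE.match(fields[-2]):
            found.add(fields[-2])
    return sorted(found)
```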

If we keep parsing the log files, checking for new people using our templates, and ensuring they all get indexed, eventually we will catch up to that #1 site. Combine that with our link laundering sites within our network and we should have no problem overtaking him and holding our ground. Once we are at that point, it's checkmate for the time being. He's either going to have to top that performance or back off and accept his second place trophy. If he does manage to pull something sneaky and come back, no big deal; persistence is worth more than gold.

I hope this little real life example helped a few people in the same situation, and I'll move up the date on that Blue Hat Log Link Matching technique post so you can get the details of it and really learn how to utilize it in some very powerful ways. It never hurts to have more in our arsenal. Just remember the fact that you already have more inbound links from other Myspace layout related sites than he does. If you can just match him on his strong points, you can beat him.

Go Get ‘Em Tiger!


Follow Up To 100’s Of Automated Links/Hour Post

This is a follow-up post to my 100's Of Links/Hour Automated – Black Hole SEO post.

I'm going to concede on this one. I admittedly missed a few explanations of some fundamentals that I think left a lot of people out of all the fun. After reading a few comments, emails and inbound links (thanks Cucirca & YellowHouse for good measure), I realize that unless you already have adequate experience building RSS scraper sites, it's very tough to fully understand my explanation of how to exploit them. So I'm going to do a complete re-explanation and keep it completely nontechnical. This post will become the post that explains how it works, and the other will be the one that explains how to do it. Fair enough? Good, let's get started with an explanation of exactly what an RSS scraper site is. So once again, this time with an MGD in my hand, cheers to Seis De Mayo!

Fundamentals Of Scraping RSS
Most blogs automatically publish an RSS feed in either an XML or Atom format. Here's mine for an example. These feeds basically consist of a small snippet of your post (usually the first 500 or so characters) as well as the title of the post and the source URL. This is so people can add your blog to their feed readers and be updated when new posts arrive. Sometimes people like to be notified on a global scale of posts related to a specific topic. So there are blog search engines that are a compilation of all the RSS feeds they know about, through either their own scrapings of the web or people submitting them through their submission forms. They allow you to search through millions of RSS feeds by simply entering a keyword or two. An example of this might be to use Google Blog Search's direct XML search for the word puppy. Here's the link. See how it resulted in a bunch of recent posts that included the word puppy in either the title or the post content snippet (description). These are known as RSS Aggregators. The most popular of these would be Google Blog Search, Yahoo News Search, & Daypop.

So when a black hatter, in an attempt to create a massive site based on a set of keywords, needs lots and lots of content, one of the easiest ways to get it is to scrape these RSS Aggregators and use the post titles and descriptions as actual pages of content. This however is a de facto form of copyright infringement, since they are taking little bits of random people's posts. The post titles don't matter because they can't be copyrighted, but the actual written text can be if the person chose to have the description within their feed include the entire post rather than just a snippet of it. I know it's bullshit how Google is allowed to republish the information but no one else is, but like I said, it's de facto. It only matters to the beholder (which is usually a bunch of idiotic bloggers who don't know better). So to keep on the up and up, the black hatters always make sure to include proper credit to the original source of the post by linking to the original post as indicated in the RSS feed they grabbed. This backlink slows down the amount of complaints they have to deal with and makes their operation legitimate enough to continue stress free. At this point they are actually helping the original bloggers by not only driving traffic to their sites but giving them a free backlink; Google becomes the only real victim (boohoo). So when the many, many people who use public RSS scraper scripts such as Blog Solution and RSSGM on a mass scale start producing these sites, they mass scrape thousands of posts from, typically, the three major RSS Aggregators listed above. They just insert their keywords in place of my "puppy" and automatically publish all the posts that result.

After that they need to get those individual pages indexed by the search engines. This is important because they want to start ranking for all the subkeywords that result from the post titles and post content. This results in huge traffic. Well, not huge, but a small amount per RSS scraper site they put up. This is usually done at mass scale over thousands of sites (also known as Splogs, spam blogs), which results in lots and lots of search engine traffic. They fill each page with ads (MFA, Made For Adsense sites) and convert the click through rate on that traffic into money in their pockets. Some black hatters make this their entire profession. Some even create upwards of 5 figures worth of sites, each targeting different niches and keywords. One of the techniques they use to get these pages indexed quickly is to "ping" Blog Aggregators. Blog Aggregators are nothing more than a rolling list of "recently updated blogs." So they send a quick notification to these places by automatically filling out and submitting a form with the post title and URL of their new scraped page. A good example of the most common places they ping can be found in mass ping programs such as Ping-O-Matic. The biggest of those would probably be Weblogs.com. They also do things such as comment spam on blogs and other link bombing techniques to generate lots of deep inbound links to these sites so they can outrank all the other sites going for the niche the original posts targeted. This is a good explanation of why Weblogs.com is so worthless now. Black hatters can supply these sites and generate thousands of RSS scraped posts daily, while legitimate bloggers can only do about one post every day or so. So these Blog Aggregator sites quickly get overrun, and it can easily be assumed that about 90% of the posts that show up on there actually point to and from RSS scraper sites. This is known as the Blog N' Ping method.

I'm going to stop the explanation right there, because I keep saying "they" and it's starting to bug me. Fuck guys, I do this too! Haha. In fact most of the readers here do it as well. We already know tens of thousands, if not more, of these posts go up every day and give links to whatever original source is specified in the RSS Aggregators. So all we've got to do is figure out how to turn those links into OUR links. Now that you know what it is, at least, let's learn how to exploit it to gain hundreds of automated links an hour.

What Do We Know So Far?
1) We know where these Splogs (RSS scraper sites) get their content. They get it from RSS Aggregators such as Google Blog Search.

2) We know they post up the Title, Description (snippet of the original post) and a link to the Source URL on each individual page they make.

3) We know the majority of these new posts will eventually show up on popular Blog Aggregators such as Weblogs.com. We know these Blog Aggregators will post up the Title of the post and a link to the place it’s located on the Splogs.

4) We also know that somewhere within these post titles and/or descriptions are the real keywords they are targeting for their Splog.

5) Also, we know that if we republish fake posts using these titles to the same RSS Aggregators the black hatters use, these Splogs will eventually (usually within the same day) grab and republish our posts on their sites.

6) Lastly, we know that if we put in our URL as the link to the original post the Splogs, once updated, will give us a backlink and probably post up just about any text we want them to.

We now have the makings of some serious inbound link gathering.

How To Get These Links
1) First we'll go to the Blog Aggregators and make a note of all the post titles they provide us. This is done through our own little scraper.

2) We take all these post titles and store them in a database for use later.

3) Next we'll need to create our own custom XML feed. So we'll take 100 or so random post topics from our database and use a script to generate a .xml RSS or ATOM file. Inside that RSS file we'll include each individual title as our post title. We'll put in our own custom description (could be a selling point for our site). Then we'll put our actual site's address as the Source URL, so that the RSS scraper sites will link to us instead of someone else.

4) After that we'll need to let the three popular RSS Aggregators listed above (Google, Yahoo, Daypop) know that our XML file exists. So, using a third script, we'll go to their submission forms and automatically fill out and submit each form with the URL to our RSS feed file (www.mydomain.com/rss1.xml). Here are the forms:

Google Blog Search
Yahoo News Search
Daypop RSS Search

Once the form is submitted, you are done! Your fake posts will now be included in the RSS Aggregators' search results. Then all future Splog updates that use the RSS Aggregators to find their content will automatically pick up your fake posts and publish them. They will give you a link and drive traffic to whatever URL you specify. Want it to go direct to affiliate offers? Sure! Want your money making site to get tens of thousands of inbound links? Sure! It's all possible from there; it's just a matter of how you want to twist it to your advantage.
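To make steps 2 and 3 concrete, here's a rough Python sketch of generating that custom feed from stored titles. The site URL and description are placeholders for your own:

```python
import random
from email.utils import formatdate
from xml.sax.saxutils import escape

MY_URL = "http://www.mydomain.com/"  # placeholder: your site
MY_DESCRIPTION = "Your custom selling point goes here."  # placeholder

def build_feed(titles, count=100):
    """Build an RSS 2.0 feed: scraped titles, our description, our URL."""
    chosen = random.sample(titles, min(count, len(titles)))
    items = "".join(
        "<item>"
        f"<title>{escape(t)}</title>"
        f"<link>{MY_URL}</link>"
        f"<description>{escape(MY_DESCRIPTION)}</description>"
        f"<pubDate>{formatdate()}</pubDate>"
        "</item>"
        for t in chosen
    )
    return (
        '<?xml version="1.0"?><rss version="2.0"><channel>'
        f"<title>My Feed</title><link>{MY_URL}</link>"
        f"<description>{escape(MY_DESCRIPTION)}</description>"
        f"{items}</channel></rss>"
    )
```

Write the result of build_feed() out to your rss1.xml file and that's what you submit in step 4.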

I hope this cleared up the subject. Now that you know what you’re doing you are welcome to read the original post and figure out how to actually accomplish it from the technical view.

100’s Of Automated Links/Hour


100’s Of Links/Hour Automated – Introduction To Black Hole SEO

I really am holding a glass of Guinness right now, so in all the authority it holds...Cheers! I'm kind of excited about this post because frankly it's been a long time coming. For the last 7-9 months or so I've been hinting and hinting that there is more to Black Hat than people are willing to talk about. As "swell" as IP delivery and blog spam are, there's an awesome subculture of Black Hats that takes the rabbit hole quite a bit deeper than you can probably imagine. This is called Black Hole SEO. By no means am I an expert on it, but over the last few years I've been getting in quite a bit of practice and starting to really kick some ass with it. In a nutshell, Black Hole SEO is the deeper, darker version of black hat. It's the kind of stuff that makes those pioneering Black Hat bloggers who divulge secrets like parasite hosting and link injection techniques look like pussies. Without getting into straight up hacking, it's the stuff black hatters dream about pulling off, and I am strangely comfortable with kicking in some doors on the subject. However, let's start small and simple for now. Then, if it takes well, we'll work our way up to some shit that'll just make you laugh it's so off the wall. Admit it, at one point you didn't even think Advanced SEO existed.

In my White & Black Hat Parable post I subtly introduced this technique as well as the whole Black Hole SEO concept. It doesn't really have a name, but it follows all the rules of Black Hole SEO. It targets sites on a mass scale, particularly scraper sites. It tricks them into giving you legitimate and targeted links, and it grabs its content on an authoritative scale (this will be explained in a later related post). So let's begin our Black Hole SEO lesson by learning how to grab hundreds of links an hour in a completely automated and consensual way.

Objective
We will attempt to get black hat or scraper sites to mass grab our generated content and link to us. It'll target just about every RSS scraper site out there, including Blog Solution and RSSGM installs as well as many private scrapers and Splogs.

Methodology

1) First we’ll look at niche and target sources. Everyone knows the top technique for an RSS scraper is the classic Blog N’ Ping method. It’s basically where they create a scraped blog post from a search made on a popular blog aggregator like Google Blog Search or Yahoo Blog Search. Then they ping popular blog update services to get the post indexed by the engines. For a solid list of these check out PingOMatic.com. Something to chew on: how many of you actually go to Weblogs.com to look for new, interesting blog posts? Haha, yeah, that’s what I thought. 90% of the posts there are pinged from spam RSS scraper blogs. On top of that there’s hundreds going up an hour. Kinda funny, but a great place to find targets for our link injections nonetheless.

2) We’ll take Weblogs.com as an example. We know that at least 90% of those updates will be from RSS scrapers that will eventually update and grab more RSS content based upon their specified keywords. We know that the posts they make already contain the keywords they are looking for, otherwise they wouldn’t have scraped them in the first place. We also have a good idea of where they are getting their RSS content. So all we’ve got to do is find what they want, where they are getting it from, change it up to benefit us, and give it back.

3) Write a simple script to scrape all the post titles within the td class=”blogname” cells located between the <!-- START - WEBLOGS PING ROLLER --> comments in the HTML. Once you’ve got a list of all the titles, store it in a database and keep scraping indefinitely. Check for duplicates and continuously remove them.
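The scraping step might look like this rough Python sketch. The comment markers and td class come straight from the post; everything else (the function names, the in-memory set standing in for the database, and the END marker) is illustrative and would need adjusting to whatever Weblogs.com actually serves.

```python
import re

def extract_titles(html):
    """Pull post titles out of a Weblogs.com changes page (illustrative)."""
    # Keep only the region between the ping-roller comments, if present.
    m = re.search(
        r"<!-- START - WEBLOGS PING ROLLER -->(.*?)<!-- END - WEBLOGS PING ROLLER -->",
        html, re.DOTALL)
    region = m.group(1) if m else html
    # Grab the text inside every td with class="blogname",
    # skipping an optional anchor tag wrapping the title.
    titles = re.findall(r'<td class="blogname">\s*(?:<a[^>]*>)?([^<]+)', region)
    return [t.strip() for t in titles]

def dedupe(seen, titles):
    """Return only titles we haven't stored yet; `seen` is a set standing
    in for the real database of already-scraped titles."""
    fresh = []
    for t in titles:
        if t not in seen:
            seen.add(t)
            fresh.append(t)
    return fresh
```

In the real loop you would fetch the changes page every minute or so, run `extract_titles` on it, and feed `dedupe`’s output into your database.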

4) Once you’ve got all the titles steadily coming in, write a small script on your site that outputs the titles into a rolling XML feed. I know I’m going to get questions about what a “rolling XML feed” is, so I’ll just go ahead and answer them. It’s nothing more than an XML feed that updates in real time. You just keep adding posts to it as they come in and removing the previous ones. If the delay is too heavy you can always either make the feed larger (up to about 100 posts is usually fine) or create multiple XML feeds to accommodate the inevitably tremendous volume. I personally like the multiple feed idea.
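A rolling feed is easy to sketch. This is a toy version assuming RSS 2.0 output, with an in-memory deque standing in for the real publishing script; the channel details are made up.

```python
from collections import deque
from xml.sax.saxutils import escape

class RollingFeed:
    """Toy 'rolling XML feed': keeps only the newest max_items entries,
    silently dropping the oldest as new ones arrive."""

    def __init__(self, max_items=100):
        self.items = deque(maxlen=max_items)

    def add(self, title, link, description):
        # Newest items go to the front; deque(maxlen=...) evicts from the back.
        self.items.appendleft((title, link, description))

    def render(self):
        entries = "".join(
            "<item><title>%s</title><link>%s</link><description>%s</description></item>"
            % (escape(t), escape(l), escape(d))
            for t, l, d in self.items)
        return ('<?xml version="1.0"?><rss version="2.0"><channel>'
                "<title>feed</title><link>http://example.com/</link>"
                "<description>rolling feed</description>%s</channel></rss>" % entries)
```

Running several of these side by side, each serving a slice of the incoming titles, is the multiple-feed variant mentioned above.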

5) Give each post within the feed the same title as you scraped from Weblogs. Then change the URL output field to your website address. Not the original! Haha, that would do no good, obviously. Then create a nice little sales post for your site. Don’t forget to include some HTML links inside your post content, just in case their software forgets to remove them.

6) Ping a bunch of popular RSS blog search sites. The top 3 you should go for are:

Google Blog Search
Yahoo News Search
Daypop RSS Search

This will republish your changed-up content, so the RSS scrapers and all the sites you scraped the titles from will grab and republish your content once again. However, this time with your link. This won’t have any effect on legitimate sites or services, so there really are no worries. Fair warning: be sure to make the link you want to inject into all these Splogs and scraped sites an easily changed and updated variable, because this will gain you links VERY quickly. Let’s just say I wasn’t exaggerating in the title. A good idea would be to put the link in the database, and every time the XML publishing script loops through, have it query it from the database. That way you can change it on the fly as it continuously runs.
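The pinging step uses the standard weblogUpdates.ping XML-RPC call. The endpoint URLs below are illustrative guesses for the services named above, not verified addresses, so check them before relying on this sketch.

```python
import xmlrpc.client
import urllib.request

# Hypothetical ping endpoints for the three services named above.
PING_ENDPOINTS = [
    "http://blogsearch.google.com/ping/RPC2",
    "http://api.my.yahoo.com/RPC2",
    "http://www.daypop.com/RPC2",
]

def build_ping(blog_name, blog_url):
    """Serialize a standard weblogUpdates.ping XML-RPC request body."""
    return xmlrpc.client.dumps((blog_name, blog_url),
                               methodname="weblogUpdates.ping")

def send_pings(blog_name, blog_url):
    """POST the ping payload to each endpoint (network call)."""
    payload = build_ping(blog_name, blog_url).encode("utf-8")
    for endpoint in PING_ENDPOINTS:
        req = urllib.request.Request(
            endpoint, data=payload, headers={"Content-Type": "text/xml"})
        urllib.request.urlopen(req, timeout=10)  # fire and forget
```

Splitting payload construction from delivery makes it easy to swap in whatever endpoint list you actually verify.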

As you’ve probably started to realize, this technique doesn’t just stop at gaining links quickly; it’s also a VERY powerful affiliate marketing tool. I started playing around with this technique before last June and it still works amazingly. The switch to direct affiliate marketing is easy. Instead of putting in your URL, grab related affiliate offers, and once you’ve got a big enough list start matching for related keywords before you republish the XML feed. If a match is made, put in the affiliate link instead of your link, and instead of the bullshit post content put in a quick prewritten sales post for that particular offer. The black hat sites will work hard to drive the traffic to the post and rank for the terms, and you’ll be the one to benefit.
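The keyword-matching step could be as simple as this sketch; the offers dictionary and its contents are hypothetical.

```python
def pick_link(title, offers, default_link):
    """Choose what to publish for a scraped title.

    offers maps a keyword to (affiliate_url, prewritten_sales_post).
    First keyword found in the title wins; otherwise fall back to your
    own URL with no canned sales post.
    """
    lowered = title.lower()
    for keyword, (url, sales_post) in offers.items():
        if keyword in lowered:
            return url, sales_post
    return default_link, None
```

Plugged into the feed loop, each incoming title either becomes an affiliate sales post or a plain link back to your site.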

Each individual site may not give you much, but when you scale it to several thousand sites a day it starts really adding up quickly. By quickly I mean watch out. By no means is that a joke. It is quick. There are more RSS scraped pages and sites that go up every day than any of us could possibly monetize, no matter how fast you think your servers are.


The Parable Of The White Hat & The Black Hat

When linking here, for some reason people keep describing this blog’s content as grey and black hat. I’m flattered, but I actually couldn’t disagree more. If anything I’m trying to make some serious noise to the white hats and encourage them to learn some new tricks, but most seem to be covering their ears with enthusiasm. La la la la, I can’t hear you! On that note there’s an interesting long-term debate going on disputing what’s better, black hat or white hat? Did I say interesting? I meant boring and complete bullshit. Saying you like black hat better doesn’t make you a black hat. Likewise, showing your disgust for black hats without learning it doesn’t even make you a white hat. In my opinion, until you take the time to learn and develop both skills to the fullest extent there’s only one classification for you….Amateur.

So with that I’m going to attempt to make the ultimate white hat post by telling the ever famous parable of The White Hatter and The Black Hatter. You may have already heard it. If you’re not familiar, a parable is a short story, often fictional, told to illustrate a lesson or moral. If that still doesn’t ring any bells then you should read the Bible more, you heathen. There are several in there. I’m kidding of course. I respect your religion no matter what it is. And yes, there will be real techniques you can use hidden within the story.

The Black Hatter And The White Hatter – When Two Pros Meet

Once there was a white hatter. He had a great website that ranked #1 in a competitive niche. He had many fans of his site and it got lots of search engine traffic from his primary keyword. Suddenly one week, while checking up on his site, he noticed another high quality site in his niche moving up quickly in the ranks. It was also a very nice site with lots of value. The White Hatter didn’t think much of it because of his own solid rankings, but knew he had better watch this site more closely due to its upward momentum. Then one day, after a small SERP update, the site he was watching moved into the #2 slot right under him. This started to make him nervous because he knew the difference in income between the number one and number two slots. He had gotten very comfortable in the number one position and had no intentions of giving it up.

Once there was a black hatter. He created a few Made For Adsense sites across a couple hundred niches. They were fairly uniform. Some performed well, some performed poorly. By chance of fate one of these sites was in the White Hatter’s niche. Since this was a competitive niche, his site quickly caught the attention of the lower positioned sites that were still struggling amongst themselves for the #2 slot. The black hatter’s site was doing fairly well in this niche and making a couple bucks a day. It wasn’t ranking for any major terms within the niche, but since the niche was such a good one it was still bringing him some solid residual income and he was very happy. The rest of the site owners within the niche, angered by his intrusion, quickly took action against his black hat site. After a few legal threats and a bunch of complaints to everyone possible they were finally successful and got the black hatter’s site taken down and banned. Having entered the niche and realized its potential, the Black Hatter reluctantly took the site down and built a clean site to compete within the niche. There’s no point in letting such a great niche go, he thought. So he built a very clean and high quality site and aimed it directly at the most competitive keyword. He built up the site nicely and quickly. While the other lower sites within the niche fought amongst themselves and spent their time combating the endless supply of spam entering their niche, he focused on building the link count required for the number one position. It wasn’t very long before he managed to grab the number 2 slot. Things were going well, except that the number one site in his way was clearly going to be a force to be reckoned with. He was going to have to pull off something slick.

By this time the site had the full attention of the White Hatter. He started watching the inbound links and site content of the Black Hatter’s site intently. It seemed fairly even. In fact, this site had even managed to get many of the same link spots he had, as well as a few new ones. The site had lost its momentum but was still a worthy adversary for the top spot. The White Hatter watched in dismay as his site and the other bounced back and forth between the top two positions. He continued building links and working on his site. Suddenly, without notice, the other site took the top position and it stuck. The opponent kept it and wasn’t budging. The White Hatter had to figure out why, and quickly. Both sites had a solid number of links at about 45k-50k and almost all were at least relevant. He started investigating all the inbound links and finally one day found something very odd. This other site had about 10,000 new inbound links from random Blogspot accounts.

The Black Hatter knew he wasn’t going to take down this monstrous number one site by playing clean. He had to do something drastic. So he whipped up some scripts, grabbed a list of the top 1,000 or so keywords for the niche, and created some Blogspot accounts accordingly. He populated them from a popular RSS aggregator and made sure each page had a link to his main money site. With the added link popularity he easily took the number one position and wasn’t budging. He didn’t want these Splogs to get quickly banned, and he knew later on he would need their link age, so he did the responsible black hat thing: he credited each post to the original source and left all the ads out. He also knew not to continuously create too many and draw attention. He had to keep his numbers just high enough to gain and maintain his position, stop there, and just consistently update each blog once it was established. After all, the big money was in his big clean money site and he knew it.

The White Hatter definitely received a big blow to his business and had no intentions of taking it lying down. He created a crawler and, using the footprints within the Blogspot templates, started compiling a list of all the accounts made. He located a large portion of the 10,000 accounts and started investigating where all their content was coming from. They were clearly feed scrapes. So he took all the post titles and scanned common RSS aggregators. He found it! They were coming from Google Blog Search. The White Hatter was smart. He knew if he started panicking, throwing a fit, and trying to get Blogspot and the search engines to delete and ban all these accounts, it would do him no good. The other site could easily generate them faster than he could ever get them deleted. It was clearly a futile effort and he knew it. Time for a workable plan. So he had to hit the Blogspot accounts where it hurts. He used his crawler to scrape the titles of almost all the scraped posts created. The White Hatter then concocted a script to ping Google Blog Search with all the same post titles as in his list. He injected links to his site within the article content, as well as put himself as the source. Unfortunately, the scraper was smart enough to remove all the HTML from the feeds, but he still got to keep the original source link within each and every post. He knew that if each of these post titles managed to get grabbed by the Black Hatter’s scraping script, then they would surely get snagged again once all the blogs got updated and he fed them into the aggregator. It worked. All the new posts on all the spammy Blogspot accounts now had a link to his site. This evened the playing field. Whenever the Black Hatter’s site got a link, his site got a link. However this wasn’t acceptable; the black hat site still had all the previous post links and was barely beating him. Something had to be done about that. All these links had to be devalued.
So the White Hatter opened up his Akismet filter logs and started scanning for domains that were banned in the engines. Once he managed to find a couple hundred, he started slowly feeding them to the Blogspot accounts through the Google Blog Search feed. He kept the same post titles, knowing they would get scraped yet again, and he made a balanced mixture of putting his site as the source of the post as well as the banned domains, so that both would get links.

By this time the Black Hatter, with his position secure, had already moved on to his next niche. Suddenly one day he noticed a large drop in revenue. His site had lost the number one position and was back in the battle for the first two spots. “What the hell is going on?!” he thought. He looked at his Blogspot scripts to find the source of the problem. He quickly noticed that not only were his accounts giving his main competitor links, but they weren’t worth a shit, because they were also linking to a bunch of banned pharmaceutical sites, putting every Splog he created into bad neighborhoods. His efforts were worthless and he knew why, and more importantly how. It was time to face a tough decision. He could either endlessly combat this guy, who obviously knew his stuff, to keep his current revenue, or he could move on and continue to focus on new niches and creating new revenue for himself. He couldn’t help but laugh about it. So he emailed the White Hatter and expressed his respect for the competitive exchange. There was no point in furthering it, so they both decided to call it quits and just exchange links on their main pages to help lock in their positions and let the algorithms decide from there who was better. After considering it a draw they both went their separate ways.

The Moral Of The Story

Interpret it however you want, but recognize the fact that the White Hatter defended himself. He didn’t just roll over or waste his time throwing a fit trying to get the inevitably endless supply of black hat sites banned or deleted. He knew his opponent’s tricks and thus was well equipped to combat them. He stood his moral ground and did what he had to in order to protect his business. No matter where you stand on the issue, you can respect that.

If you hear anyone whining and crying about black hats or white hats, please politely explain to them that there are no black or white hats. There are only amateurs, experts, and people who are willing to learn how to protect and grow their businesses. Then send them this story.


User Contributed-Making Money With A Video Blog

Mark, a knowledgeable reader and benevolent commenter here, has branched out and started his own blog called Digerati Marketing. It’s pretty new and only has a couple posts so far, but I’ve got to say the material thus far is absolutely fantastic. I think it’s a great representation of the Blue Hat spirit. So I have absolutely no problem plugging it, because I really do think you should add it to your feed reader in case he keeps up the excellent work. He was kind enough to donate one of his posts here. It’s called Making Money With A Video Blog. I think it’s a great post, and if you’ve been paying close attention you’ll know that I also already do this idea and he is absolutely right on all counts. So enjoy.

-Eli

An Introduction to Video Websites

There’s a lot of them about, a lot. Apart from the old-timers like Ebaumsworld, Google’s acquisition of YouTube has really seen them starting to eat up this market. The great thing is, that doesn’t matter; videos are disposable media. People look at them once, show their mates, and then it’s old news. You’re only as good as your last video! What we’re going to look at doing here is setting up a video website with minimal cost & time and maximizing our profit.

Now, I’m not claiming this will make you a millionaire, but you can earn around £500 ($1000 to our American friends) per month without too much trouble. So it’s well worth it for the day or two it will take to set up!

Step #1: Monetization Strategy

Okay, for this site we are going to make our bucks from a couple of different sources. Our main income will be made from Google Adsense. If you haven’t got yourself a Google Adsense Publisher account, click the big button below and sign up. Adsense will display contextual adverts which you can neatly blend in with the design of your website, to make them non-intrusive, yet a natural click away. For this website we are aiming at generating a 25% click-through rate while staying well inside the Adsense Terms Of Service.

An important note: I’m going to give you some tips on optimizing your Adsense placement and layout later. If you want to take the optimization further, you may be pushing the limit on what Google does and doesn’t allow. The Adsense team can be merciless at times if they think you are breaking their TOS, which can be quite “grey”.

Keyword Real Estate

Hey guys and gals. In case you didn’t notice, I took a short break from posting to catch up on some work. Thanks for hanging out. Let’s review. In my SERP Domination post I spilled the beans about creating a site network by breaking down a larger site to take down the competition in a very efficient manner. This of course works beautifully in reverse, but we’ll save that for another day. In the meantime, though, I want to touch on a small portion of that post I could have easily divulged a hell of a lot more on. I’m talking about the “secondary network.” In case you have short term memory loss I’ll refresh ya with a quote from the post.

[quote]Blah blah blah, sexist joke. Blah blah snarky remark. I’ll create a much larger secondary network to help boost my blah blah inbound link authority. Blah blah ramble ramble.[/quote]

In words longer than short, I introduced the well proven concept of Keyword Real Estate. Keyword Real Estate isn’t just a concept, it’s a practice. In fact, scratch that. Keyword Real Estate is the law of high rankings, just as sure as Murphy’s law will make your servers go down between 1-3am instead of 9-5pm. Practicing good keyword real estate snatching will help boost your sites. I will go as far as to say that it is unwise to enter any niche, no matter how uncompetitive, without grabbing up as much real estate related to your terms as possible. I’ll explain.

Domains, Free Hosting, Blog Networks, Social Networks, Social Bookmarking, URL Shorteners. The list goes on and on. Anywhere there is an authoritative domain that allows you to own a static page, there is no reason why you shouldn’t register it and put up a landing page advertising your site(s), or even a simple link if that’s all that is possible. I’ll explain this in a simplistic white hat way using an analogy, because there is nothing gray or black hat about it; it’s just common sense business.

Imagine you live in a big city and you set up a small business. The local phone book directories want you to pay them to put a small business card sized ad in the yellowpages/category/keyword/page/adspot. Okay, so it fits within your ROI, so you do it. It’s almost mandatory for a small business to be found by its customers. However, at the same time, all over the city there are these giant skyscrapers with all these blank billboards on them. They are giving them out for free; all you’ve got to do is claim them first before your competitors do, and you can put up any ad for your business you want within reason (TOS). Also, in other portions of this fictional city are little chunks of land that you can grab and put up a giant sign with a big ass arrow pointing to your business. They may not be in the primo business districts but they at least get drive-by traffic. I don’t know about you, but I’m too ethical of a business owner to take advantage.

Ya know on second thought, I think I will jump in on this offer. So let’s look at a few examples of these billboards and opportunities for Keyword Real Estate and where to easily find them. They are everywhere.

A few examples might include:

keyword.wordpress.com
keyword.blogspot.com
keyword.blogger.com
del.icio.us/keyword
borntobuzz.com/keyword
digg.com/users/keyword
technorati.com/profile/keyword
bloglines.com/blog/keyword
keyword.typepad.com
astore.amazon.com/keyword-20
myspace.com/keyword
squidoo.com/keyword
keyword.spaces.live.com
someforum.com/members/keyword.html <- vbulletin with vbseo installed & your link in the signature.

I know it’s simple and not very advanced advice, but it’s solid. Look at what the experts are doing. A great example of how well this works is of course SEO Contests. How are they won? In the simple sense, they are won by Keyword Real Estate. They are also a great resource for finding some awesome opportunities. After all, it works damn well. I personally never enter a competitive niche without grabbing as much real estate as possible on that keyword or phrase and using it to boost my sites’ rankings and traffic. It’s common sense; all you’ve got to do is follow through religiously.
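If you want to generate that checklist per keyword automatically, a throwaway sketch like this works; the slug rule and the subset of services are assumptions, since each service has its own naming restrictions.

```python
def real_estate_urls(keyword):
    """Expand one keyword into candidate real-estate URLs (illustrative).

    The lowercase-no-spaces slug is a guess at the common case; real
    services each have their own rules (hyphens, length limits, etc.).
    """
    slug = keyword.lower().replace(" ", "")
    templates = [
        "http://{s}.wordpress.com",
        "http://{s}.blogspot.com",
        "http://del.icio.us/{s}",
        "http://digg.com/users/{s}",
        "http://technorati.com/profile/{s}",
        "http://{s}.typepad.com",
        "http://myspace.com/{s}",
        "http://squidoo.com/{s}",
    ]
    return [t.format(s=slug) for t in templates]
```

Feed the output into whatever you use to check availability before registering.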


Check Mate: Google Images

Perfectly Optimized Google Image Code

Top Determined Factors

1) Keyword in same table cell as image.
2) Keyword below or above image in DIV or floating DIV.
3) Keyword in ALT tag.
4) Keyword in image name and image meta file summary.
5) Keyword in same paragraph as image.
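As a rough illustration of those factors, here’s a hypothetical Python helper that emits markup with the keyword in the ALT tag and in the same table cell and paragraph as the image; the exact layout and the helper name are my own assumptions, not the post’s original sample.

```python
from xml.sax.saxutils import escape, quoteattr

def optimized_image_html(keyword, image_url, caption):
    """Build image markup hitting the factors above: keyword in the ALT
    tag, and in the same table cell and paragraph as the image. Putting
    the keyword in the image file name itself is the caller's job."""
    return (
        "<table><tr><td>"
        "<img src=%s alt=%s />"
        "<p>%s</p>"
        "</td></tr></table>"
        % (quoteattr(image_url), quoteattr(keyword), escape(caption))
    )
```

The caption you pass in should contain the keyword inside a short natural phrase, per the “Acceptable Code” note below.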

Acceptable Code

The keyword in a phrase of more than 4 words and less than 10, followed by the rest of the text.

Factors In Order Of Apparent Importance

1) Factors 2 and 5 (equal)
2) Factor 1
3) Factor 4
4) Factor 3

*Factor 3’s position is assumed because it showed no distinguishable results in controlled tests.

Automatically Remove The Google Frame

if (parent.frames.length > 0) top.location.replace(document.location);

Perfect Sizes

Small: 150 x 150 or smaller.
Medium: Larger than 150 x 150 and smaller than 500 x 500.
Large: 500 x 500 and larger.

Google Image Traffic Redirection

.HTACCESS File:

RewriteEngine on
RewriteCond %{HTTP_REFERER} ^http://images\.google\. [NC]
RewriteRule .*.$ cgi-bin/redirect.cgi [R,NC]

*If Google Images is sending you traffic you don’t want, you can use this code to redirect it to a targeted advertiser. I have an affiliate site that sells lingerie. It gets tons of porn related traffic from Google Images. Those visitors usually just realize it’s not a porn site and quickly leave. So I looked at the most popular keywords in the Google Images traffic they send me, found a related porn sponsor, and started redirecting all those users to my affiliate link. What do you know? Conversions!
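The redirect.cgi script itself isn’t shown, but a minimal CGI version could look like this sketch; the sponsor URL is a placeholder.

```python
#!/usr/bin/env python3
"""Hypothetical redirect.cgi: the .htaccess rule above sends Google Images
visitors here, and we bounce them to a sponsor URL (illustrative)."""
import os

SPONSOR_URL = "http://sponsor.example/affiliate?id=12345"  # placeholder

def redirect_headers(target):
    # A CGI redirect is just a Location header followed by a blank line.
    return "Location: %s\r\n\r\n" % target

if __name__ == "__main__":
    # The referer is available here if you want to pick a different
    # sponsor per incoming keyword.
    referer = os.environ.get("HTTP_REFERER", "")
    print(redirect_headers(SPONSOR_URL), end="")
```

Swapping the target based on the referer’s query string is how you would match the sponsor to the incoming keyword.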

*Check Mate*

Update: I didn’t even realize this, but there is a great write up by EMP over at Blind Ape SEO. Worth checking out.


SERP Domination

Let’s take a moment and chat. No secrets or techniques this time, I just want to take a post and discuss my own personal strategies for consistently dominating almost every niche I enter. Instead of creating an objective and methodologies I’m just going to casually write and talk some details, because of course my strategies require quite a bit of work and aren’t for everyone. I’m not one of those lazy ass Internet marketers that only works a few hours a week. In fact I work my fuckin ass off and no less than 60-70 hours/week. So feel free to jump in on this discussion with your own thoughts just as long as you take what I say equally as lightly as I write it. I already understand my approach isn’t for everyone. Please don’t remind me.

Anytime I approach a middle to heavyweight niche, I try to look at the current SERPs sensibly and take the idealism they present with a grain of salt. The intent is that they rank your sites upon a mixture of their quality and relevancy, when the reality is they rank the sites upon their authoritativeness and subject coverage. Tomato, tomahto, right? Well….no. That’s like saying a “person with good qualities” is a “good quality person.” Regardless of how many people it fools, it’s still false. It’s these subtle differences that give those of us without a registered 1998 domain a chance. With that, and the lack of proof that even an algorithm can count to infinity, everything becomes part of a nice little scale. I don’t have to be big, I just have to be larger than the rest.

With that new perspective I stand a smaller chance of running uphill in the mud with my work. Sometimes people rush into a niche too eagerly. I’ve done it so many times. I just create a site based on some keywords and keep building and building on it while getting nowhere. The other sites just seem to have too large of a head start and have already had time to build up momentum faster than I can catch up. It seems like an endless cycle, which often has a much easier and quicker path; I am just too bullheaded and committed to see it. So now that we’ve got the right mental approach, let’s review what we’re up against.

For this fictitious example we’re faced with a top 15 consisting of four or five very large, old community sites on the subject, plus a few informational sites with lots of pages and inbound links (a range of 7k-45k). There also seem to be a couple shithead subpages. By shithead subpages I’m of course affectionately referring to major authoritative sites that have a page on the matter, such as Wikipedia, About.com, and Amazon. Within these results the average domain age is 6 years. The domain age range is 5-10 years. Alright, so I’ve got quite the project ahead of me. Let’s analyze what I’ll need. The presence of subpages in the results tells me that “subject coverage” factors aren’t highly required, because in this case Google luckily believes that a single page written by a punk editor is more relevant than an entire site dedicated to the subject, due to domain authority. However, I’ll keep the subject coverage factors in my back pocket because they’ll help me deliver the final kill once we catch up. Catching up? Hmm, there’s an interesting dilemma. I’ve got some serious age and authority competition to deal with. Even if I create an absolutely huge site and get tons of inbound links quickly, I could still get stuck treading water with the sharks for the next 3-4 years. In the interest of getting paid I have to figure out a better way of generating huge amounts of authority.

There’s more than one way to get a job done. Sometimes you’ve got to ask yourself a rhetorical question. What’s the fastest way to move a bunch of dirt, one giant bulldozer or a half dozen or so small bulldozers? A logical person would realize the flaw in the question. The answer doesn’t depend on the bulldozers. It depends on the dirt. In this instance I obviously can’t use one giant site with lots of power; I’d just be fighting muscle with muscle and getting nowhere. I need to beat them in sheer numbers. Like a bunch of midgets on a bear. So I’ve got this awesome ideal website in my head that I would love to build for this niche. It would cover just about every aspect of the niche and absolutely rock. Unfortunately, after looking at the competition, I realize and accept the fact that it’s a bad business model. So I have to break it up and remodel.

Let the destruction begin. First thing I do is break apart the entire site into primary sections. For instance, if the site has a forum, that becomes a new site. If the site has a blog, boom, another separate site. If I’ve got an articles and informational section, or even a shop, same thing. I tear the entire site apart into separate and focused sites and bundle them together as part of a mid-sized network of sites. So for this example I ended up with between 15-20 separate mid to large sites that are all interlinked and cross promoting as part of a tight network within the niche. I build up each site with a different template and fluff it with some extra pages and content. If some of the sites are really lacking, I’ll take one section, break up all the content, and divide it amongst the sites. I think we’re just about ready to create some authority.

Let’s review the dynamics of what we’ve now got. Each site has X amount of authority. So if we have 20 sites we have X x 20 total authority. If we give Site #1 an inbound link it gains a little authority. So Site #1 is at X+1 authority. Since Site #1 now has some authority it can call out other sites, and as part of its recommendation can help them gain authority themselves. So since we gave Site #1 some authority, by proxy, sites 2-20 also gained slight amounts of authority themselves. This creates a nice little leech and donate relationship amongst the network. When one site rises in the SERPs it’ll naturally try to help pull the others up with it. Hey! Bring my buddies too! Each site is working in synergy to raise the other sites within the network’s authority faster and more efficiently. If you are under the initial impression that authority points are strictly an average amongst all the sites on the net, then that would mean the total sum of all the pages would equal a relative 0. Therefore there would be sites with negative authority of equal part to sites with positive authority. Naturally that is untrue; sites can and do help build each other’s authority rating. If you’re with me so far you can see where this is headed, and that is straight through the Relevant Link Wall.
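The leech and donate idea can be illustrated with a toy simulation. This is not any engine’s real algorithm, just a hypothetical damped-flow model: seed one site in a fully interlinked network with an external link and every site ends up with more authority than the no-link baseline.

```python
def network_authority(n_sites, external_links, rounds=20, damping=0.85):
    """Toy authority model (NOT any real engine's algorithm).

    Every site links to every other site. Each round, a damped share of
    each site's authority flows to its neighbors, and external inbound
    links keep feeding authority into whichever sites have them.
    """
    auth = [1.0] * n_sites
    for _ in range(rounds):
        auth = [
            (1 - damping)
            + external_links[i]
            + sum(damping * auth[j] / (n_sites - 1)
                  for j in range(n_sites) if j != i)
            for i in range(n_sites)
        ]
    return auth
```

With 20 sites, a single external link pointed at Site #1 leaves the linked site with the biggest gain, but every other site in the network also ends up above the baseline, which is the by-proxy effect described above.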

You’ve got to love the supposedly nonexistent brick wall of relevant links. Dipshits on newbie forums love telling people, “don’t worry about the rankings, just build some relevant links and they’ll come.” So you do just that; after all, they have over 4,000 posts on that forum, it can’t all be complete garbage advice. At first it’s totally working, you’re gaining a good 50-200 very relevant links a day. You submit to directories and score a bunch of links from your competitors. After a couple weeks you even manage to score some big authority links within your niche. Suddenly it all starts to slow down. The sites that are willing to link already have, and the rest are holding firm. You’ve just hit the Relevant Link Wall. Don’t bother going back for further advice. They don’t have any, and if they did they are too busy trying to rank for Kitty Litter Paw Prints to help. On the plus side, with the power of my network I just pushed my proverbial wall back quite a ways. Where it was previously at around 1,700-3,000 links, it’s now around 40k in total, giving me some huge authority. So now my network is scattered around the 20-50 position range in the SERPs. With the pansy sites out of the way I just need a bit more of an authoritative push to go play with the big boys. So I’ll need to create a second, much larger network to make the final shove.

Volume is my key strategy here. I don’t care about building up these other sites or even getting links to them. I just want them indexed and their links to count. So I create a nice little blog network using free blog hosts on authority domains. I may even do some clean parasite hosting and push for some authoritative small links. Any relevant links I can find to make that last push up the hill and into the top 30 positions. Unfortunately this will be a struggle and take some time I don’t feel like wasting. I’ve got to work double time and get some link power from some preexisting non-relevant sites within my arsenal.

For this I’ll use the rule of three and link to my network from a few other sites of mine that already rank high in the targeted engines. This will help the links on the individual sites within my network carry more outbound link weight, so they can give more authority to other sites within the network. By policy I never allow these other sites to join the network, so they only link to one of the sites within the network and never get a link back. Otherwise the other sites within the network would gain less overall authority. It’s better to push all its weight to one single site and let that site boost the other sites within the network. So now our network is sitting between the 7-30 positions. They may be a bit scattered, and that’s okay as long as we break into that top 10 with at least a site or two. If at this point I’m coming close but still can’t quite break through, I may have to resort to sabotage. Nothing says successful sabotage like slowly removing the outbound links on that ranking Wikipedia page and labeling them as spammy links in the edit history. I know it’s a bitch move and the link may just go back up, but nothing is greater than when the webmaster notices the traffic loss, checks to see why their Wikilink is gone, and is like, “Fuck you Wiki, you think my site is spammy? Fine, there goes your backlink ya bitch.” <– Hehe, may be in more words or less. Hey, sometimes you gotta do what you got to do, and if that means doing sneaky underhanded stuff like convincing About.com authors that the page is in the wrong category and should be relocated to a more suitable one, then so be it. Just don’t take it too far and pretend to be the owners of those sites and harass directory editors because you feel like your site should be listed above the competition’s, then call them morons when they try to explain alphabetization. Just get that booty into the top 10.

So now, like a girlfriend’s hair dryer, I’m in and under the radar. Time to prove that my sites are the most relevant by expanding the content on the 2-5 sites that made it into the top 15. Now is the perfect time to shake up the industry by releasing a new feature on your site that the others don’t have. The other webmasters and the 3-4 community forums I’m already in have taken notice of my sites by now, so I need to give them something to talk about. I’ll primarily focus on these top sites and leave the others to push the authority of those sites up. Perhaps by organizing my NOFOLLOW tags more efficiently? This is also usually when I’ll steal little portions of content from the few sites that fell through the cracks in the rankings and put them up on my top 5-10 sites. Suddenly my number one site will need a featured blog, and the blog site will quit updating so often. I may even spend a little time focusing on some Market Bait. Once I’ve got 4-7 of the sites within my network into the top 10, I’m in like Flint; I grab my cash and move on to the next niche.

All I’ve got to do from here is watch for other pros attempting to do the same thing to me. They are usually pretty easy to spot when they email you from several different email accounts begging for links in a similar way. If these strategic thoughts are starting to sound familiar but you’re not sure from where, I can explain. Have you ever had one of those sites that has always ranked well for its keywords? Then suddenly, over the course of a week or so, your site just deranks like crazy. All these new sites pop into the top 10 and you’re left like, WTF? First you glance around and wonder if something is wrong with your site. Nothing out of the usual; in fact it’s quite odd for such a consistently ranking site to drop like that. So you assume it must be some engine update. You do some research, but no one is reporting anything too strange other than the few usual paranoid one-hit-wonder eccentrics screaming “Damn Big Daddy!” So you chalk it up to a possible algorithm change. Sure, that must be it.


Free Amazon.com Link

I have a couple of posts almost ready to go, but I simply don’t have enough time to finish and make them live yet, so I’ll make a quick one just to hold you over.

Here’s How To Get A Free Link on Amazon.com

1) Create an Amazon.com affiliate account.

2) Select the aStore option.

3) Fill in the keywords you would like to use and select a product category related to your site.

4) At the bottom of that page, while still on step 1, insert your link and anchor text.

5) Complete the last 3 steps. They are very short.

6) Parse or copy the custom affiliate URL. It’ll be in this format: http://astore.amazon.com/username

7) Run the URL through the QUIT tool.

8) Do it again before they catch on that everyone is starting to do this.
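Once an aStore is live, it’s worth confirming your backlink actually shows up in the rendered page before counting it toward your link saturation. Here’s a minimal sketch in Python; the store username and target site below are hypothetical placeholders, not real values from this post:

```python
import urllib.request

def link_is_live(page_html: str, my_url: str) -> bool:
    """Check whether our backlink URL appears anywhere in the fetched page source."""
    return my_url in page_html

# Usage (hypothetical username and target site -- requires network access):
#   html = urllib.request.urlopen("http://astore.amazon.com/username").read().decode("utf-8", "replace")
#   print(link_is_live(html, "http://www.example.com"))
```

A plain substring check is crude but good enough here, since we only care whether the URL survived into the page at all, not where it sits in the markup.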
