Posts filed under 'Main'

Choosing Robots.txt or the Noindex tag

Got into a brief discussion today about whether certain pages should be linked to only with a rel=nofollow tag, excluded in the robots.txt file, and/or given a meta noindex tag on the page.

Is going with all 3 overkill? Which ones are necessary? And what’s the easiest best practice to implement for a site with over 800 pages?

First of all, we should clarify what exactly these three choices are.

rel=nofollow tells a search engine not to follow this link. It not only prevents Pagerank flow, but it also prevents this page from being indexed IF the search spider doesn’t find any other links to it.
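In HTML, that looks something like this (the URL here is just a placeholder):

<a href="http://example.com/some-page.html" rel="nofollow">some page</a>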

Robots.txt exclusions tell a search spider not to crawl or access that particular page.
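For reference, an exclusion in robots.txt looks something like this (a minimal sketch – the paths are hypothetical):

User-agent: *
Disallow: /search/
Disallow: /unsubscribed.html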

META NoIndex tells a search engine not to list that page in its search results.

These may all sound similar, but there are some very subtle differences here. To help understand these differences, it’s best to understand what type of pages we’d likely apply them to.

Examples of pages you don’t want indexed or crawled include:

  • Search results pages
  • Thank you pages
  • Error pages
  • Steps in a signup process
  • Any other page you wouldn’t want a user to start on or see out of context

Basically, if (by some odd twist of fate) a user searches for something and comes upon my “thank you, you have been unsubscribed from my newsletter” page, that user is going to be lost to me. Additionally, they’re going to be confused as hell about the content of the page. Did it really unsubscribe them from something?

The old-school way of preventing this was simply to list the page in robots.txt so that spiders couldn’t crawl it – but that alone isn’t enough. Looking at our definitions above, robots.txt only says not to crawl a page. It doesn’t say anything about listing it in the search results – and that’s exactly what happens. If somebody else links to a page that’s forbidden in your robots.txt file, search engines may still show that page’s URL in their results pages. They won’t have any information about it, but it will still be possible for users to click the link.

The other problem is that suddenly all of your form action & result pages are listed in robots.txt. This can provide valuable information to attackers and other people interested in compromising your website. For that reason, I prefer not to use robots.txt for this task.

rel=nofollow avoids publishing that list of pages the way robots.txt does, but it’s also not very effective at keeping pages out of the search results. The problem with rel=nofollow is that it just doesn’t scale. I can tell the search engines not to follow my link, but what about somebody else who links to that page? I can’t count on that not happening, and I certainly can’t count on them to put the nofollow tag in their link either.

That’s where the Meta NoIndex tag comes in. No matter how the spider ends up on the page or who linked to it, the NoIndex tag will always be there to tell the search engines not to index this page. Additionally, search spiders will still crawl the page and follow any links on it. This can be useful to people trying to manually shape their Pagerank flow.

For those of you curious, the tag looks like this:

<META NAME="ROBOTS" CONTENT="NOINDEX, FOLLOW">

So what do I do?
I use a 2-fold method. First, I make sure to put a meta noindex tag on any page I don’t want indexed. Second, I always make sure to put a rel=nofollow tag on any links to that page from my website. This way, I keep my Pagerank flow how I want it and prevent my confirmation pages from being listed in search engines.
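Put together, the pattern looks something like this (a sketch using a hypothetical confirmation page):

<!-- On the confirmation page itself -->
<META NAME="ROBOTS" CONTENT="NOINDEX, FOLLOW">

<!-- On any internal link pointing to that page -->
<a href="/unsubscribed.html" rel="nofollow">unsubscribe</a>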

April 6th, 2009

I’m Joining ZAAZ

Just a quick note to let people know that I’ve accepted a new job with ZAAZ. Starting in April I’ll be taking on the position of “search marketing strategist” at the local Dearborn ZAAZ office.

It’ll be a big change from my previous role at identity.net, but it’s something I’m excited about. I’ve been used to the start-up culture and experience, and that’s why I liked this opportunity at ZAAZ. After visiting their offices, it’s clear that they’re a big company that really values keeping the start-up atmosphere. If you haven’t had the chance to work in the type of environment I’m talking about, you’re really missing out. It’s why I like start-ups, and it’s what made the ZAAZ decision easy.

As far as Identity.net goes, I’m still friends with everybody there and will continue to talk to them regularly. I loved having the chance to meet the venture capitalists and CEOs that I worked with daily, and I’m sure we’ll still keep bouncing ideas off of each other as time goes on.

Now is also a good time to remind everybody that the views and opinions I blog about are mine, and mine only, and in no way relate to those of any company I work for. I say that because I regularly blog about SEO, start-ups, search, and marketing – and I just want to let people know that even though I may hold one opinion, that opinion isn’t necessarily the best or proper one for the situation or company I may work with. I know how boring these statements are, but I like putting them out there to avoid any confusion.

Anyway, that’s all I wanted to say. I look forward to seeing what the future holds for me at ZAAZ.

March 27th, 2009

More Newspapers Just Don’t Get It

All I’m seeing on Twitter are articles about how major news publishers are lobbying Google to give their sites more weight. The basic argument goes something like this:

As news reporters we’re doing all the work to break the story, but somebody blogging about it is ranking higher in search than us. We should be first, it’s our work and we’re more reputable.

There are a few problems with that argument though.

First off, Google search isn’t just news. In fact, there’s a whole separate section of Google that does nothing but search news – and the news agency sites come up first.

Secondly, news is just that – news. When I search for a topic I don’t want just news; I want insight. I want to go to a site with comments and see what other people are saying about the news. I want to know more than just what happened – I want to know how it impacts me and how others feel about it.

The most important factor, though, is that newspapers have spent so much time being anti-Google that they don’t deserve to rank higher. The main reason most newspapers don’t rank well is that they spent too much time doing stupid things.

They’ve spent so much effort hiding behind pay walls and suing people who link to them that they’ve UN-SEO’d themselves right out of any rankings they should have gotten. As somebody said in a comment on an earlier post of mine, most newspapers’ terms of service expressly prohibit you from even linking to their site.

It’s shocking to think that newspapers are just now seeing the value of having an online offering and getting traffic to it. Just a couple of years ago they were trying to charge Google for even including them in search results; now they want to be at the top? I don’t get it – it’s like they don’t even listen to themselves when they talk.

If you want your newspaper to rank well you need to start doing some basic SEO. Remove the pay walls and subscription only features. Make your online articles more spider friendly. You’d be surprised at how many news articles I still see that don’t have the date in them or fail to mention what city the newspaper is even from. Even more use antique content management systems that put huge undecipherable session IDs in their URLs.

If you’re serious about building online traffic you need to hire somebody who knows what they’re doing and start taking these simple steps to get your content noticed. Bitching about it to Google is just going to make you look more stupid than you actually are.

1 comment March 23rd, 2009

The FencePost Error

Some friends and I were talking about the recent Dayton police lawsuit – where people are suing the city because only a small percentage of African American applicants pass the entrance exam. In order to squash concerns that the exam was somehow racist, the city released a bunch of sample questions from it. After reading them, I can confirm that the exam isn’t really racist. It’s probably another case of correlation vs causation at work here: if fewer black applicants take the test than white applicants, fewer black applicants are going to pass it.

Looking at the test though, my friends and I kept staring at question #9, which reads:

A parade has been scheduled to run 5 miles through the city. It is desirable to have an officer stationed every 3 blocks. If there are 12 blocks in a mile, how many officers will be needed?

A) 13
B) 15
C) 18
D) 20

It’s easy to see the logic they want you to use here. 12 blocks X 5 miles = 60 blocks. Divide that by 3 and you get 20.

20 seems like the right answer, until you actually think about how you’d implement this in the real world. In the real world you’d want an officer at both the start and the finish of the parade route, so you’d need 21 officers to work it, not 20. The logic the question uses is not the logic you’d use if you were really trying to position police officers along a parade route.

It’s a classic example of what we computer scientists call the “fencepost” or “off-by-one” problem.

The problem here is that you don’t want to count the number of 3-block sections; you want to count the number of officers that have 3-block sections between them.

It’s the difference between counting sections of fence and counting fence posts. Let’s look at a graphical example.

Assume this is a fence:

|—|—|—|—|

This fence has 4 sections, but needs 5 posts to connect them. So, if I want a 4-mile fence with 1-mile-long sections, I’ll need 5 fence posts.

The same holds true for positioning police officers along the parade route.
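In code, the parade question works out like this (a quick sketch in C, using the numbers from the question):

#include <stdio.h>

int main(void) {
    int miles = 5;
    int blocks_per_mile = 12;
    int spacing = 3;                        /* one officer every 3 blocks */

    int blocks = miles * blocks_per_mile;   /* 60 blocks total */
    int sections = blocks / spacing;        /* 20 three-block sections */
    int officers = sections + 1;            /* the +1 covers both endpoints */

    printf("%d sections, %d officers\n", sections, officers);   /* 20 sections, 21 officers */
    return 0;
}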

The error most often occurs when counting starts at 1 rather than 0 – and it’s why we computer scientists always write things like:

for (i=0; i<5; i++) instead of for (i=1; i<6; i++) or the similarly confusing for (i=1; i<=5; i++)

March 19th, 2009

Newspapers Who Don’t Link

I’m starting to notice an ugly trend among online newspapers. Unlike most blogs and online only sources, traditional newspapers don’t link to websites in articles – even when the website is the focus of the article.

I must have read 4 or 5 articles earlier today about Wikileaks leaking the ACMA list of banned websites – but none of the articles included a link to said list. A quick Google search showed that no major newspaper linked to the list, yet every major blog that covered the story did. I can’t help but wonder if there is some sort of editorial policy at work here, or if the newspapers are just afraid of linking to “improper” content (which, in this case, was simply a list of links to the actual websites – many of which feature porn).

Somebody on Twitter pointed out that they could be doing so out of legal concerns – but I’m fairly sure that it’s not illegal to link to something in America. Linking is no different than giving directions – even a first year law student knows that if I tell you how to make a bomb and where to get the parts, that doesn’t make me liable for what you do with the bomb.

Ordinarily I’d be convinced that fear of legal repercussions was the motive, but I’ve seen this phenomenon many other times too.

Just yesterday I read a Times piece about an NFL mock draft that included several predictions, but not a link to the actual draft the article was about. Instead, they just listed the name of the sportswriter who wrote the mock draft and forced me to find it on Google.

Almost every day I receive a Google alert about a news source that mentioned my website, NoSlang.com, in one of its stories – and the result is always the same: a full article about internet slang, with quotes and top 10 lists pulled directly from the site, but only a text mention instead of a hyperlink. That would be fine if the story were appearing in print, but it’s as if all they did was copy and paste to put the story up on the web.

Actually, that copy and paste theory makes a lot of sense when I think of the broader picture of most newspapers. It seems to me that newspapers are still so out of touch with the internet that they don’t realize how important it is to readers to add hyperlinks to the online versions of their stories. Can that be true? If so, it’s no wonder that a lot of newspapers are failing.

1 comment March 19th, 2009

4 More Conversion Secrets

I just finished reading a blog post about trust building graphics and how they can increase conversion. While I wish it had given some hard statistics about conversion rates for the various graphics, I do understand how hard those numbers can be to generate.

I’m still curious whether McAfee Secure, Verisign, or PayPal converts better. I’d love to set up some PPC campaigns to landing pages and only alter the graphics to see what works, but I really don’t have the budget to do that. If anybody else can, let me know and I’ll design the experiment for you based around your product.

The post hits on some great points, but I believe it leaves out a few other “images” and visual techniques that can greatly increase conversion rates. If I learned anything working for an auto loan company, it’s all types of little tricks to increase conversions. Here are some of the most successful ones I’ve tried:

1. The color green. Seriously. Green means go. Green is an aphrodisiac. Green signifies life, renewal, nature, and health. Green is calming and easy on the eyes. Most importantly for the marketer though, green means money. Try it out.

2. A Login Box. I didn’t think this would work when we first tried it, but then I saw double digit increases in conversion percentages. The mere presence of a login box on a web page can make a consumer feel more comfortable and secure. Even if there’s nothing for them to log in and do, the feeling of being able to check on or fix things if there was a problem can be consoling. It’s a bit shady and unethical to put up a login box that doesn’t work just to increase conversions, and I wouldn’t recommend it – but login boxes DO build consumer confidence.

3. SSL Certificates. The reason phishing still works is that many internet users think the little lock icon means everything is safe and secure and nothing bad can happen to them. They’ll give their Paypal password to paypal.russianhookers.com as long as it’s got a little lock icon in the status bar – and they’ll be more inclined to give their credit card number to your shopping cart too.

4. Testimonials. I know, I know – when you think of testimonials you instantly think of Melissa Pierce of Tampa, Florida and how she seems to think every product out there is INCREDIBLE! We all know that most testimonials are pure garbage, but for some reason they still work. It probably has more to do with the uplifting positive attitude they convey and less to do with actual believability, but they still work. So go ahead and generate some of your own and see what kind of effect they have on your sales. Of course, it’d be better to have a product that’s so good that users actually send you real ones. You should aim for that.

Bonus 5th Tip: Google Checkout. I did a previous test with some past clients, and we found that by using the Google Checkout icon in our adwords ads, we were able to greatly increase ad clicks while also significantly increasing conversion rates. If you don’t already accept Google Checkout as a payment method, you should strongly consider it.

Again, if you’re interested in trying any A/B testing on your campaigns please drop me a line. I’d love to hear your results. What other conversion tricks have worked for you in the past?

March 10th, 2009

The Path To Startup

I can’t count how many times over the past year somebody has told me “you should start your own company.” It’s not that I don’t think I’d be great at running a company – I’m sure I would be awesome at it. I’m pretty good about making everything I do a success. The reason I haven’t started a company is because I don’t have anything worthy of a company. Yes I’m full of great ideas, several of which have been profitable, but I still don’t think any of them would have made a good company.

When I think of a startup, I think of 2 main paths that an idea takes to becoming a company, and about 20 other paths to creating a dot com failure – the latter of which I’ve seen plenty of during my time working with some Seattle area venture capitalists.

Every successful startup that I can name followed one of two main paths to success. They were either born out of academia, or they were the result of somebody solving one of their own problems at work and realizing that others might find their solution useful. Perhaps the most famous startup, Google, was born out of a college project. This is often the case with new algorithms. We computer scientists tend to let the academics come up with all the new theories, opting to just put them into practice. It’s win-win.

The more common approach to launching a start up, though, is to simply solve your own biggest problem at work. Big companies like Microsoft and Google know this, yet despite all their best efforts they still lose employees every year who turn their work-inspired projects into stand-alone companies. Friendfeed is a good example of this.

Statistically, these two methods will yield the most successful companies. I’ve seen too many failures from people who start a company first and then look for a business model. It’s really easy to just buy a domain name and start a company, but it’s much harder to find a successful business model that way. I’m a firm believer that business models must come before domain purchases and company formations.

Sure, I’ll continue to make websites that generate cash, and I’ll also continue to solve problems that I encounter. If I ever happen to come up with a problem and solution that are broad enough to apply to many others AND generate cash, then I’ll start a company.

3 comments March 10th, 2009

Utah Bans Online Competition

You probably remember Utah from their past crusade against online keyword bidding. After filing a new house bill to ban competitive keywords, though, the Utah legislature has begun to step up its efforts to protect the profitability of outdated business models.

House Bill 451 will effectively make all types of online competition illegal. Section 1 of the bill states that “it shall be unlawful to compete with a trademarked or copyrighted product; whether by selling similar, same, or better products in said trademark holder’s market.”

The bill further clarifies that “in the case of trademarked, or patent pending products, the market shall be awarded to the market leading product. Selling same, similar, or better products for less shall constitute a violation.”

It’s still uncertain how Utah will keep track of the “market leader” products, but legislators are already talking to lobbyists about solving this problem.

“This is a great law for business owners like me”, said Joe Lipstien – a local record shop owner. “It’s so hard to compete with these new-fangled online sites like iTunes and Amazon. With this law, it will be illegal for them to sell Music in Utah – allowing me to spend my money on government kickbacks instead of innovation.”

OK, OK – yes it’s satire, and yes it’s terrible – but it’s also believable; and that’s what should scare you!

March 5th, 2009

Better Solutions Than A Mileage Tax

The good news is that Obama just rejected the mileage tax, but the bad news is that politicians (especially in Oregon and Taxachusetts) keep thinking it would be a good idea.

The fact that elected officials think it would be good to track citizens’ every movement is scary. If we keep electing people like this, we’re going to head down a very slippery slope toward the type of 1984 society that Orwell imagined. In fact, it’s starting to look more and more like Orwell was just off by about 30 years.

So, instead of creating huge privacy concerns, here are some better ways to go about changing the tax.

1.) If you really want to tax by mile, why not just look at a car’s odometer? Many states (like Texas) require yearly car inspections. It would be very easy to just write down the mileage at the inspection and charge the appropriate tax. Same data, no privacy concerns. Late fees and tax repercussions would even cause more people to get their car inspected on time, and eventually reduce emissions by a little.

2.) Tax Diesel More. Heavier trucks put more wear and tear on the roads than light cars do. Heavier trucks use Diesel fuel. Why not tax it accordingly?

3.) Tax Tires. If you follow my twitter feed you probably saw this suggested by Xich. Bigger tires = more wear and tear on the roads. Tires have a set mileage to them, so tax accordingly.

4.) Just raise the gas tax. Compared to Europe, America pays pretty cheap gas prices. We’re no Venezuela, but we still have cheap gas compared to some other places in the world.

5.) Start spending responsibly. The gas tax isn’t the real issue here though. The big issue is that cities and counties and states all spent their money and are now panicking. Instead of taking the money given to them for road repair, they spent it on other stuff. In Michigan, for example, we sent it to Nigeria.

February 22nd, 2009

How To Fail At Social Networks

I’ve seen a thousand and two posts over the past few months that all say things along the lines of: “Is your company on twitter?” “do you have a facebook app?” and “why your company needs to embrace social networking.”

While that’s all well and good, most of these posts (and most of these businesses) are completely missing the point about social networks.

Yes your business needs to be on social networks – but only if you’re truly embracing the social part of them.

When I think of the word “social” I think of society. In other words, people. Social networks are all about people, and they differ greatly from traditional old school marketing. Unfortunately, a lot of businesses haven’t realized it yet.

The same mistake was made 3 years ago when everybody was screaming that companies need blogs. Since nobody knew anything about blogging, companies quickly assigned it to somebody in marketing, who filled up post after post with marketing speak. Even today most corporate blogs still read like a brochure. They don’t allow comments, they don’t talk about anything interesting, and they only talk about how great their company is. In other words, they’re boring and nobody reads them.

I’m starting to notice the same trend as I look at various Twitter, MySpace, and Facebook accounts. Blogging and social networks are about conversation, but nobody in the business world wants to converse. Everybody just wants to have a site out there to push their brochure style marketing statements.

If you’re going to use social sites like that, you’re better off diverting your resources elsewhere.

Social sites MUST be a conversation, and all conversations require 2 or more human parties. So instead of typing corporate marketing speak, you should instead focus on being an actual person. A good example of this is Google’s Matt Cutts. In addition to blogging about Google related things, Matt also blogs about geek related things. Matt’s twitter account regularly links to interesting articles from all around the web. He shows us that he actually has a personal side, allowing him to build rapport with all of his readers. Rapport is crucial.

That’s how you should use social networks. As a CEO or marketing manager you should be blogging, on twitter, and on all the sites like Facebook. You should also update these accounts regularly with both normal personal stuff and business stuff. You should be monitoring Google Alerts for your company and responding to comments on other social sites. You should even encourage your employees to do all of the same. Give them some leeway and see what they do. Most employees will do more productive than harmful things.

For more examples of people who get it, you can look at Craig Newmark, the founder of Craigslist (who I’ve seen comment right here on this blog when I reviewed job search sites), or Bob Parsons of GoDaddy. Both of their blogs regularly mention all kinds of current interesting things – not just news about their company. These are the guys who “get it.”

What about you? Does your company get it?

3 comments February 12th, 2009
