Technology Transfer by means of SEO, Web 2.0 and conventional web tools

The author is completing a methodology to be sold to Universities as part of a Technology Transfer Program.

There will be a Consulting Marketplace, with a full set of categories to classify offers and demands and to allow efficient matching of problem posers and problem solvers. Both companies looking for solutions and individuals offering services will be able to find each other and become partners.

Our automated site creation software for Google ranking

We are improving our page-generation software, obtaining better sites with each iteration and consistently good positioning results.

We recently completed a project for our Spanish SEO site, www.yo1ro.com, and got excellent results for most of the keywords. It is very interesting to analyze the nearly 80 pages created and see which ones obtained good results. Each page uses one of 5 templates, depending on its role in the site network: sitemap, 1st-level keyword, 2nd-level keyword, index to secondary domain pages, or randomly combined keywords.

Since we can change the structure of the templates to obtain different keyword densities and element combinations (images, meta tags, headings, tags), there are many experiments to be made. Those experiments tell us which factors affect the results, positively or negatively.
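As a rough illustration (this is not our actual page-generation software), here is a minimal Python sketch of how keyword density could be measured on a generated page; the parsing approach and the density definition are assumptions made for the example.

```python
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text from an HTML page, skipping scripts and styles."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip:
            self.chunks.append(data)

def keyword_density(html, keyword):
    """Fraction of the page's words equal to the (single-word) keyword."""
    parser = TextExtractor()
    parser.feed(html)
    words = re.findall(r"\w+", " ".join(parser.chunks).lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w == keyword.lower())
    return hits / len(words)

# Example: compare two template variants for the same keyword.
# print(keyword_density(open("template_a.html").read(), "posicionamiento"))
```

Running the same measurement over each template variant makes it easy to relate density differences to the ranking results they produce.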

If you understand some Spanish, ask us for the detailed results of this experiment. Otherwise, wait a couple of weeks for the results of the next English experiment.

What a web promoter learns from a Trojan site infection

I noticed yesterday that visits to one of my websites had multiplied fivefold in a day. After a moment of satisfaction about the success of my promotion efforts, I spotted something unusual: the requested pages had nothing to do with my main theme, but with food and drink. And when you visited one of those pages, you were redirected to a malware site.

I logged into my sites by FTP and found many offending pages that some bot had placed there. And when I say many, I mean about 5000 files in dozens of directories on several sites across 3 servers… Many hours were needed to clean up.

Of course I also had to clean the malware from my PC; it apparently came from a mailing program that I had downloaded 2 weeks earlier.


Results of the Antolinez Family Experiment

In only 10 days I got results for the experiment that tried to detect Google penalization. It turned out that 2 domains are penalized with a -20 fall (a drop of about 20 positions).

Another result refers to the depth of indexation. It seems that penalized sites receive only superficial indexation. For instance, if a site is well indexed, all of its word strings will be indexed, and searching for phrases within quotes will find them. If the site is poorly indexed due to penalization, only individual words will be indexed, and quoted phrases will not be found. Interesting…
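A minimal sketch of how that check can be expressed; the query strings use standard Google operators, but running them and reading the result counts is still a manual step, since no official API is assumed here.

```python
def indexation_queries(domain, phrase):
    """Build the two queries that distinguish deep from superficial indexation.

    If the quoted-phrase query returns the page, its word strings are indexed (deep).
    If only the unquoted query returns it, indexation is superficial.
    """
    return {
        "deep":        f'site:{domain} "{phrase}"',        # exact string must be indexed
        "superficial": f'site:{domain} {phrase}',           # individual words only
    }

# Example (hypothetical domain and phrase):
for label, query in indexation_queries("example.com", "red widget catalogue").items():
    print(label, "->", query)
```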

A further result is that penalization covers all subjects, even those unrelated to the main one. For instance, a domain penalized for duplicate content will also be penalized for content that has nothing to do with the offending pages.

However, I am not yet distinguishing between penalization types, although differences probably exist. In the next test I will include one domain that was penalized for duplicates and another penalized for linking to a bad neighbourhood. Let’s see if the penalizations are similar.

I am now working on a new experiment (the 3rd in my controlled series) using more domains, mixing penalized and healthy ones.

I am also analyzing the directories I use for submissions with reciprocal link exchange. It seems that some of them are considered bad neighbours, maybe because they include black-hat sites, sell links, or something similar. My analysis covers only observable factors, because I am not free to upload test pages to them.

A few factors warn you against bad directories: poor ranking in Google when searching for their own home page Title or Description, compared with Yahoo or MSN; few indexed incoming links; and other parameters. We are trying to establish which of those parameters are most reliable.

For all the Penalization Detection experiments we need to focus on keywords that return 10-200 results. Fewer results are not enough to detect a fall in rankings; more are difficult to track and count.
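As a trivial illustration of that screening step (the result counts themselves would come from manual searches or your own tooling), a sketch in Python:

```python
# Keep only keywords whose result counts fall in the usable window for the test.
MIN_RESULTS, MAX_RESULTS = 10, 200

def usable_keywords(result_counts):
    """result_counts: dict mapping keyword -> number of search results."""
    return {kw: n for kw, n in result_counts.items()
            if MIN_RESULTS <= n <= MAX_RESULTS}

# Example with made-up counts:
print(usable_keywords({"antolineck": 45, "seo": 1_500_000, "gualter antolineck": 3}))
```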

We plan to offer a Standard Penalization Detection service (reporting the exact penalty value), and a Penalization Diagnosis service that tries to find an explanation for the issue. In most cases we detect bad linking, code problems or duplicate content that explain the problem and can be corrected.

Detection of Google Domain Penalization

Sometimes the effort we make to position a website is fruitless, and the client and the SEO wonder why.
A full penalization is obvious: the website disappears from the search engine for every keyword, even for its own domain name. It won’t have a Google PageRank, not even zero. And it is common for a domain banned from Google to rank perfectly well in Yahoo and the other search engines, which are more lenient penalizers.
Nevertheless, partial penalization is hard to detect. The website has a worse ranking than it should, and there is no way to know why or for how long.
A good analysis of the website and its link strategy can suggest some of the exclusion reasons, and the measures to be taken. However, even after asking and begging the search engine for forgiveness, a response can take months. The best thing to do is to quickly create a new site with similar content.
To be sure, the best option is to run the Penalization Test. It involves selecting a group of uncommon keywords, or even made-up ones. For example, the Antolineck family: its members are Gualter Antolineck, Serap Antolineck, Rupert Antolineck, Torib Antolineck and Egbert Antolineck. The first name and last name of each brother must appear in the URL and in the Title, and once in the page body under the H1 header.

5 brothers

We create one page for each brother, hosted on each tested domain. We also create a sitemap on an independent, well-ranked domain pointing to each brother. Something like: “The Antolineck family is composed of (Gualter link), (Serap link), (Rupert link), (Egbert link), (Torib link).”
That’s it. After a couple of weeks we will see the results. We will search Google for every brother, and record the rankings. And if we search for Antolineck, every indexed page will appear, ordered by the real positioning value of each domain. This value is not necessarily the same as Google’s PageRank.
For this example I used a made-up last name, completely absent from search engines, for clearer results. However, the test can also be performed with uncommon but existing words, returning say 1000 results. In this way the search is more natural and detects the -30 or -100 penalizations (dropping 30 or 100 positions in any search).
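As a sketch only (the file names, paths and HTML layout are illustrative assumptions, not the exact pages we upload), this is roughly how the five test pages and the sitemap paragraph could be generated for each tested domain:

```python
BROTHERS = ["Gualter", "Serap", "Rupert", "Torib", "Egbert"]
LAST_NAME = "Antolineck"

PAGE_TEMPLATE = """<html>
<head><title>{first} {last}</title></head>
<body>
<h1>{first} {last}</h1>
<p>{first} {last} is one of the five {last} brothers.</p>
</body>
</html>"""

def test_pages():
    """Return (filename, html) pairs; the full name appears in URL, Title and under H1."""
    return [(f"{first.lower()}-{LAST_NAME.lower()}.html",
             PAGE_TEMPLATE.format(first=first, last=LAST_NAME))
            for first in BROTHERS]

def sitemap_paragraph(domain):
    """Text for the independent, well-ranked domain, linking to every brother's page."""
    links = ", ".join(
        f'<a href="http://{domain}/{fname}">{fname.split("-")[0].title()} {LAST_NAME}</a>'
        for fname, _ in test_pages())
    return f"<p>The {LAST_NAME} family is composed of {links}.</p>"

# Example: write the pages for one tested domain and print the sitemap snippet.
for fname, html in test_pages():
    with open(fname, "w") as f:
        f.write(html)
print(sitemap_paragraph("tested-domain.example"))
```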

Are the domains penalized? We will soon see. Will the domains be penalized because of the test? No search engine rules are broken here.

penalized brothers

I plan to repeat the test periodically with my own and my clients’ domains, because if a penalization occurs I need to be aware of it immediately and fix whatever I did wrong. The test is also open to individual SEO webmasters who want to share their data.

Ask an SEO-focused webmaster to position your site. Webmasters today are quite specialized, and the person who designs, programs, writes and hosts does not necessarily get you a good ranking…

Site-specific stop words in Google: what they tell us about indexation quality

This is the 4th article in the Googleometry Project Series.

Saturation is usually defined as the number of indexed pages in a website. However, supplemental results can be a significant part of the indexed pages, with no ranking value whatsoever. So, a deeper analysis of saturation and indexed pages is needed.

We define 3 kinds of poorly indexed pages:

– Foreign Pages: pages not assigned to any known language, so they show up only if the searcher selects “all the Web” in the Language Preferences.

– Pages not associated with keywords: they appear in the listings only when you request site:domain.com, but no keyword search returns them. So, they are useless.

– Pages in the Reduced Indexation Set: these pages are shown when a Stop Word appears in the search query. Their indexation is probably limited to the page Title alone.

We researched the effect of combined searches such as:

site:domain.com keyword1 OR keyword2
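As an illustration, a small sketch that builds these combined queries for a list of keywords; running them and recording the counts is done separately, with whatever tooling you already use:

```python
from itertools import combinations

def combined_site_queries(domain, keywords, group_size=2):
    """Build 'site:domain kw1 OR kw2 ...' queries for every keyword combination."""
    queries = []
    for combo in combinations(keywords, group_size):
        queries.append(f"site:{domain} " + " OR ".join(combo))
    return queries

# Example with hypothetical keywords:
for q in combined_site_queries("domain.com", ["seo", "ranking", "penalization"]):
    print(q)
```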

Experiments were performed over several days, but each data set needs to be obtained on the same day, because there is some day-to-day variation.

We found these consistent indicators of website indexation quality:

– the number of pages returned with the English language filter, versus pages for all the Web. This setting is changed in the Preferences section of Google. The English filter does not merely select English pages; it also applies a quality filter. Most pages found in an all-the-Web search are discarded in the English-only search, although they are in perfect English. This works equally for Spanish or French pages.

For some reason, the English-filtered searches tend to place the Supplemental results behind a link, the well-known “In order to show you the most relevant results, we have omitted some entries…”, while the all-the-Web searches simply append the Supplementals after the regular organic results.

Stop Words are specific to each site.

Ask us for specific experiments that you want us to run…

Why I have 2 accounts in each of these 80 social networks

I want to be able to test every social network for its power to promote my stories. Some of them will be more receptive than others, depending on their size, difficulty, subject matter and the importance they give to old, reliable accounts.

Social networks assign a value to each user, sometimes called ‘karma’, and that value is useful for promoting stories, whether the user’s own stories or stories from ‘friends’ and strangers.

I am sticking to 2 accounts per network because it is well known that they detect features that could point to spam, notably the IP address. Of course, IP detection can be defeated by using a browsing proxy, but that requires information, expertise and a potentially self-destructive desire to spam the sites. The second account is used if the first one loses value, or to start polemical discussions, which are often followed with more attention.

I read about a “snowball” effect when promoting stories on the Digg-like sites: you start from a minor network, where it should be easier to get noticed, and bring users/friends/voters to the other sites. It helps if the home of your stories includes links to the other social networks where visitors can vote for you. For that, I included a couple of plugins in my WordPress blog.

Stories improve with user feedback and testing as they pass through the networks.

I am starting to test the power of this promotion technique, but not too fast, because I need my accounts to be mature enough. An account is mature when it has had some time and healthy activity in the network. As in real-life networks, you cannot just arrive, post your story and expect everyone to admire you.

It also helps if your stories stick to the same subject and if you develop virtual ‘friends’ who show their trust in you. It is important to complete your profile and include a photo.

This is a partial list of the networks where I now have accounts. If this story gets enough Diggs, Propellers, Reddits, and so on, I plan to add the age, votes and karma of all the accounts, to help value them.

I want to get my ideas across the Web, and make money from them…

I have been studying how to communicate my good business ideas across the Web, and maybe find a buyer, a partner, an investor or some other kind of supporter.
I have some expertise in SEO, so I can rank my sites quite well in Google and Yahoo. However, there are social networks that are faster and probably more targeted.
So, I am experimenting with Digg, Meneame and many others.
It is not easy to get a story promoted on those sites, unless you have a lot of time to spend increasing your karma.
This is done by reading many stories every day and voting for the best ones.

The links I obtain by publishing on those sites are very helpful for my medium-term efforts to rank in Google. So, the two strategies work together.

A keyword-independent measure of search engine positioning

So far, it is not possible to compare multi-keyword positioning (ranking) results in Google or Yahoo. Positions are not additive, because every keyword has a different difficulty. Obviously, it is better to be 1st of 100 results than 1st of 10.

However, is it better to be 1st of 100 or 2nd of 1000? What about any other non-obvious pair of rankings?

Moreover, things get complicated when there are more variables: in addition to position (rank) and the total number of SERP (search engine result pages), there are the search engine and the specific keyword to consider.

The ideal measure of search engine positioning should have these properties:

– easy to calculate

– additive

– representative of the difficulty and merit of the positioning

– correlated with commercial results for the ranked site

– reflective of the habits of the search public

It is known that most people who search Google or Yahoo look only at the first 5-6 results. However, to cover the whole population, we need a mathematical model of this behavior, something like an asymmetric Poisson-style distribution curve.
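To make the idea concrete, here is a hedged sketch of such a weighting; no exact distribution is committed to above, so a simple exponential decay stands in as a placeholder with an assumed decay parameter:

```python
import math

def rank_weight(position, decay=0.5):
    """Assumed attention weight for a result at a given position.

    The exact curve is an open question; an exponential decay is used here
    purely as a stand-in for the 'asymmetric Poisson-style' behavior.
    """
    return math.exp(-decay * (position - 1))

def attention_share(position, max_position=100, decay=0.5):
    """Fraction of total searcher attention captured at this position."""
    total = sum(rank_weight(p, decay) for p in range(1, max_position + 1))
    return rank_weight(position, decay) / total

# With decay=0.5, positions 1-6 capture most of the attention:
print(sum(attention_share(p) for p in range(1, 7)))   # roughly 0.95
```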

When several SEO companies want to compete, they create a contest around a single keyword or keyphrase: ‘mangeur de cigogne’, ‘nigritude ultramarine’ and other nonsensical, unique, rarely searched phrases. This method is cumbersome, takes time, and is otherwise useless. And it is completely detached from real-life situations.

It would be much better to have a method to compare existing positioning results.

Thus, I proposed the SEPI, Search Engine Positioning Index, as:

SEPI = Total SERP × Keyword Difficulty² / Position²

Keyword Difficulty can be estimated by the PageRank of the tenth result of a Google search on the keyword.
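A minimal sketch of the index as defined above; obtaining the Total SERP (result count), the tenth result’s PageRank and your own position is left to whatever search and toolbar data you already collect, so the inputs here are plain numbers.

```python
def sepi(total_serp, keyword_difficulty, position):
    """SEPI = Total SERP x Keyword Difficulty^2 / Position^2.

    total_serp         : total number of results for the keyword
    keyword_difficulty : e.g. PageRank of the tenth result (0-10)
    position           : your page's rank for the keyword (1 = first)
    """
    return total_serp * keyword_difficulty ** 2 / position ** 2

# Two rankings that are hard to compare by position alone:
print(sepi(total_serp=100,  keyword_difficulty=3, position=1))   # 900.0
print(sepi(total_serp=1000, keyword_difficulty=3, position=2))   # 2250.0
```

Under this index, with equal keyword difficulty, being 2nd of 1000 scores higher than being 1st of 100, which answers the earlier question directly.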

You can use this online calculator

This index is useful for:

– comparing results of SEO software or SEO campaigns

– comparing SEO companies

– charging customers by SEO results

I need consensus from the SEO webmaster community, or suggestions to improve this index.