When most people think online search

When most people think online search, they think Google. That’s what’s happened to the online search market – it’s dominated by one (very good) player, with a few minor competitors (Bing and Yahoo!) picking up the scraps.

But it wasn’t always this way. There was a time when Google was just an interesting experiment, and users had a dozen or more viable search engines and directories to choose from.

So strap on your goggles (not your Googles!) and get ready to take a trip into the search engine past – way back into the mid-1990s!


Searching the Internet – Before the Web

The Internet today is primarily the World Wide Web – that “www” in so many website addresses. But the Internet was around long before web pages and hyperlinks became common; it has existed in one form or another since the late 1960s, when its predecessor ARPANET first came online.

Users back in those pre-Web days still had the same issues that users today face – namely, sorting through all the junk to find the golden nuggets of information they needed. Just as now, information back then was stored on a multitude of servers around the globe, and being able to search those servers for specific information was an essential task.

If you’ve never seen the pre-Web Internet, know that it wasn’t pretty. There weren’t any fancy graphics or clickable hyperlinks, which means that all the data back then existed in text format only. And there weren’t any search engines per se, not like Google or Bing, anyway.

So how did users search for information before there was the Web? They used one of four primary tools:

  • WAIS, which stands for Wide Area Information Server. This tool let you use the old text-based Telnet protocol to perform full-text searches of documents stored on various Internet servers.
  • Archie, which was a client for searching for files across multiple FTP sites. (The word “archie” is the word “archive” with the “v” removed.) FTP sites still exist, but Archie is long gone.
  • Gopher, which was a tool for organizing files on dedicated servers. Gopher was surprisingly popular in universities across the U.S., which is where most of the information back then was housed. (Gopher was created at the University of Minnesota – home of the Golden Gophers.) Each Gopher server contained lists of files and other information, both at that specific site and at other Gopher sites around the world. Gopher worked similarly to a hierarchical file tree like that used in Windows Explorer – you clicked folder links to see their contents and navigated up and down through various folders and subfolders.
  • Veronica (a backronym for Very Easy Rodent-Oriented Net-wide Index to Computerized Archives). When you wanted to find information on a specific Gopher server, you used a Gopher client for that server. But when you wanted to search across multiple Gopher servers, you used Veronica. Veronica was the Archie for Gopher servers. (Archie and Veronica – get it?) This software client functioned kind of like one of today’s search engines – you entered a query and clicked a Search button, which generated a list of matching documents found on various Gopher servers.
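For the curious, Gopher’s menus had a simple wire format (later written up as RFC 1436): each line carried a one-character item type, a display string, a selector, a host, and a port, separated by tabs. Here’s a minimal Python sketch that parses one such menu line – the server name and selector are made up for illustration:

```python
from typing import NamedTuple

# A small subset of the Gopher item types defined in RFC 1436
ITEM_TYPES = {
    "0": "text file",
    "1": "directory (submenu)",
    "7": "search server",  # the item type Veronica queries used
}

class GopherItem(NamedTuple):
    item_type: str
    display: str
    selector: str
    host: str
    port: int

def parse_menu_line(line: str) -> GopherItem:
    """Parse one tab-separated Gopher menu line.

    Format: <type char + display string> TAB <selector> TAB <host> TAB <port>
    """
    display, selector, host, port = line.rstrip("\r\n").split("\t")
    return GopherItem(display[0], display[1:], selector, host, int(port))

# A hypothetical menu line of the kind a mid-90s Gopher server returned:
line = "1All the Gopher Servers in the World\t1/world\tgopher.example.edu\t70"
item = parse_menu_line(line)
print(ITEM_TYPES[item.item_type], "->", item.display)
```

Notice how little is there: no markup, no styling, just a type code and four tab-separated fields – which is exactly why the Web’s richer pages displaced it so quickly.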

These tools were all rather primitive, at least by today’s standards. And after the Web came along, these old tools went the way of the horse-drawn carriage and the buggy whip.


Enter the Web – and Web Directories

With the mainstream arrival of the World Wide Web around 1993–1994 (Tim Berners-Lee had launched it in 1991, but it took the Mosaic browser to popularize it), data started migrating from Gopher and FTP servers to Web servers. Boring old text documents got dusted off and spruced up with graphics and hyperlinks, and Microsoft and Netscape started battling back and forth over who had the better Web browser. In short, the Internet was stood on its head as the Web became the dominant infrastructure – and as millions of new users flooded the Internet monthly.

As the number of individual Web pages grew from tens of thousands to hundreds of thousands to millions, it became imperative that people be able to find their way around all those pages quickly and easily. With the explosion of the Web, then, came a new industry of cataloging and indexing the Web.

The earliest attempts to catalog the Web were all done manually. That’s much different from the way today’s search engines do it, with automated web crawlers and indexing software. Instead, back in the day, real honest-to-goodness human beings looked at individual websites and pages and manually assigned each one to a hand-picked category. When enough Web pages had been collected, the result was what was called a directory.

It’s important to know that a directory (and there are still a few around today) doesn’t search the Web; it only catalogs chosen Web pages, which themselves represent a small subset of everything available. But a directory is very organized and very easy to use, and lots and lots of people back in the mid-90s used Web directories every day.

In many ways, those Web directories looked and worked like traditional print Yellow Pages. (Which are also facing extinction today, by the way.) When you wanted to find something, you clicked through the various categories and subcategories on a given directory site until you ended up with a list of pages recommended by the directory’s editors. You didn’t get the magnitude of results you get today, but what you got was choice. It’s the old quality-versus-quantity thing.
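If you want to picture the difference between a directory and a search engine, think of a directory as a tree you walk rather than an index you query. The toy Python sketch below models that click-through experience; the category names and URLs are invented for illustration:

```python
# A toy model of a mid-90s Web directory: editors filed each
# hand-reviewed page under a human-chosen category path.
# All category names and URLs here are hypothetical examples.
directory = {
    "Computers": {
        "Internet": {
            "_links": ["http://www.example.com/beginners-guide"],
            "Searching": {"_links": ["http://search.example.com/"]},
        },
    },
    "Recreation": {"_links": []},
}

def browse(tree, path):
    """Walk down a category path, like a user clicking through subcategories."""
    node = tree
    for category in path:
        node = node[category]
    return node.get("_links", [])

print(browse(directory, ["Computers", "Internet", "Searching"]))
```

The key point the sketch makes: every link in the tree was put there by a human editor, so browsing could only ever surface what the editors had already chosen to catalog.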