When people look for something online, they use a search engine; without one, a user would have to already know the address of the site. Before we can understand the Deep Web, we must first understand how search engines operate. A search engine performs three operations: web crawling, indexing, and searching.

1.1 Crawling

In this stage, the search engine deploys spiders (also known as crawlers, robots, or simply bots) that roam the Internet to gather data. These programs follow every hyperlink of a website and collect data from each page, and from the data gathered a database of indexes can be built.
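The link-following step above can be sketched in a few lines. This is a minimal, hypothetical illustration using only Python's standard library: it parses a hard-coded HTML snippet and collects the hyperlinks a spider would queue up next (a real crawler would fetch pages over the network and recurse).

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag, as a crawler would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A stand-in page; a real spider would download this from a URL.
page = '<html><body><a href="/about">About</a> <a href="http://example.com">Ext</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # the hyperlinks the spider would follow next
```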
The first thing a spider does is look for a file named "robots.txt", which contains instructions for the spider. It tells the spider what content to index and what to ignore. The "robots.txt" file is important because it is the only way to control the spider, so that it indexes the website according to the owner's preferences. When the spider "crawls" a page, it loads only the content permitted by the "robots.txt" file into the search engine's index.

1.2 Indexing

Search engine indexing begins when a spider (web crawler) returns from its crawl of a website. In this phase, the information the spider collected on each page is analyzed and a database of words and phrases is built, considering factors such as:

Number of times a word/phrase is used on the web page - Search engines determine whether a website uses a word or phrase excessively. This is done to separate websites with little content from websites with quality content, and to detect whether a website is spamming a certain word or phrase just to get indexed by the search engine.
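The robots.txt check described above is directly supported by Python's standard library. This sketch uses a made-up rule set and user-agent name; a real spider would fetch robots.txt from the site root before crawling anything.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt; real spiders download it from the site root.
rules = [
    "User-agent: *",
    "Disallow: /private/",
    "Allow: /",
]
rp = RobotFileParser()
rp.parse(rules)

# The spider consults the parsed rules before loading a page.
print(rp.can_fetch("MySpider", "http://example.com/index.html"))   # allowed
print(rp.can_fetch("MySpider", "http://example.com/private/data")) # ignored
```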
Weight of the word/phrase - The weight of a word depends on where it is located:
- Top of the document
- Subheadings
- Text lines
- Meta tags
- Title

Table 1 Factors that affect indexing (Source: http://www.administrator.ca/search-engine-indexing.html)

Results vary from engine to engine because each engine has its own algorithm for processing the data a spider has collected. A website may rank well on one search engine and poorly on another.

1.3 Searching

Once indexing is finished, all the information is stored in a database, waiting to be used whenever a person types in a search query.
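The two factors above (frequency and location weight) can be combined in a toy inverted index. The section names and weight values here are assumptions for illustration; real engines use many more signals and far more elaborate scoring.

```python
from collections import defaultdict

# Toy location weights -- invented for illustration only.
WEIGHTS = {"title": 3.0, "body": 1.0}

def index_page(index, url, sections):
    """Add one page's words to an inverted index, weighting by location."""
    for section, text in sections.items():
        for word in text.lower().split():
            # Each occurrence adds the weight of the section it appears in.
            index[word][url] = index[word].get(url, 0.0) + WEIGHTS[section]

index = defaultdict(dict)
index_page(index, "http://example.com/a",
           {"title": "deep web", "body": "the deep web is hidden"})
index_page(index, "http://example.com/b",
           {"title": "search engines", "body": "how a web spider crawls"})

# "deep" occurs in page a's title (3.0) and body (1.0): total weight 4.0.
print(index["deep"])
```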
There are four categories that cover most web search queries:
- Informational
- Navigational
- Transactional
- Connectivity

This is how search engines like Google, Bing, and Yahoo return links whenever a user types in a search query about a specific topic.

2. The Deep Web

Now that we know how search engines work and how important indexing is: are we really sure that the spiders have crawled the entire Internet and indexed every page whenever we type a search query? Are these all the results, or is there something hidden within the depths of the Internet?
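Answering a query is then just a lookup over the stored index. This minimal sketch assumes the toy inverted-index shape from before (word -> {url: score}) with invented data, and ranks pages by their summed scores; real ranking is far more involved.

```python
# A tiny prebuilt inverted index: word -> {url: score}. Data is invented.
index = {
    "silk": {"u1": 2.0},
    "road": {"u1": 2.0, "u2": 1.0},
}

def search(index, query):
    """Sum each page's scores over the query words, best match first."""
    scores = {}
    for word in query.lower().split():
        for url, s in index.get(word, {}).items():
            scores[url] = scores.get(url, 0.0) + s
    return sorted(scores, key=scores.get, reverse=True)

print(search(index, "silk road"))  # u1 scores 4.0, u2 scores 1.0
```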
Whenever we search for "illegal drug websites", are we sure that those are the only websites related to the query? The Deep Web is the part of the Internet that cannot be indexed by major search engines like Google, Bing, and Yahoo. Pages within the Deep Web contain dynamic content that changes from time to time, which makes them hard for a spider to index. The difference between dynamic content pages and others is that a dynamic page does not exist until you search for it; the page is assembled while the search runs, whereas an ordinary page can simply be fetched from the website's database.

2.1 How Do You Connect to the Deep Web?

As deep and mysterious as the Deep Web sounds, it is not that hard to connect to the other side of the Internet. First, you need to install "The Onion Router", or "Tor". It is software designed to give the user as complete anonymity as possible. It provides anonymity by encrypting your message multiple times and routing it through a series of relays, to prevent other users from tracing your location.

2.2 What's Inside the Deep Web?

If you compare the Internet to an iceberg, you can say that the "surface web" is just the tip of the iceberg.
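The layered ("onion") encryption that Tor uses, described above, can be illustrated with a toy model: the client wraps the message once per relay, and each relay peels exactly one layer. The XOR "cipher" and key names here are stand-ins for illustration only; Tor negotiates real cryptographic keys per hop.

```python
import itertools

def xor(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' -- illustration only, NOT real cryptography."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

# One key per relay in the circuit (hypothetical names).
hop_keys = [b"entry-key", b"middle-key", b"exit-key"]

message = b"hello hidden service"

# The client adds one layer of encryption per hop, innermost layer first.
wrapped = message
for key in reversed(hop_keys):
    wrapped = xor(wrapped, key)

# Each relay peels exactly one layer and forwards what remains; only the
# last relay sees the plaintext, and no relay knows the whole path.
for key in hop_keys:
    wrapped = xor(wrapped, key)

print(wrapped)  # the original message emerges after the last layer
```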
According to Michael Bergman (2012), founder of BrightPlanet and credited with coining the phrase Deep Web, searching on the Internet today can be compared to dragging a net across the surface of the ocean: a great deal may be caught in the net, but there is a wealth of information that is deep and therefore missed. In terms of size, the Deep Web contains 7,500 terabytes of information, compared with the 19 terabytes the surface web has to offer. So Facebook, Twitter, YouTube, 9gag, and many more websites are just a mere portion of what the Internet really is.
You may have used the Internet for different purposes, such as online shopping, searching for answers to your assignment, or downloading a movie or two using torrent applications, but that is just a small fraction of what you can do when you roam the Deep Web [8].

2.2.1 The Silk Road

One example of the sites that can be found in the Deep Web is the famous but now closed "Silk Road", named after the historic trade route. This site catered to almost all kinds of drugs, which could be easily bought from the site.
Think of it as an eBay of illegal drugs, where buyers simply purchase items using Bitcoin, an encrypted, decentralized, and anonymous kind of currency, and wait for the package to reach their doorstep. Below is a screenshot of what the "Silk Road" catered to.

Figure 1 Screenshot of what the Silk Road sells. (Taken from http://thenextweb.com/inside-the-deep-web-my-journey-through-the-new-underground/)

2.2.2 Hire-a-Hitman Sites

In the Deep Web, users can hire an assassin to take down targets in exchange for money or Bitcoin. The most popular hire-a-hitman sites are White Wolves and C'thulhu.
They provide a wide range of services, from finding where a specific person is, to providing intelligence from any point on the globe, to their most basic service: eliminating a target. Once a contract is made, the customer needs to deposit at least half of the fee (the price depends on where and who the target is) so that the mission can start. The other half is paid once the mission is done, and the assassins send a picture to the client to confirm that the job is done. Below is a screenshot of what these sites look like.
Figure 2 Screenshot of the White Wolves Professionals site (Taken from http://www.chirpiness.com/deep-web-guide)

2.2.3 The Hidden Wiki

If there is Wikipedia on the Surface Web, there is also the Hidden Wiki for the Deep Web. The Hidden Wiki provides a list of onion sites along with a description of each site.

Figure 3 The Hidden Wiki (Taken from http://krypt3ia.wordpress.com/2011/09/04/the-hidden-wiki-between-the-layers-of-the-onion-router-networks/)

With all of these dark things hidden in the Deep Web, there are still some good things to be found there.
One example is WikiLeaks, where people can post and access information, no matter how sensitive it may be. The Deep Web is not only used for illegal activity; even though much of its content is illegal, it is still not right to blame the Deep Web itself for this kind of thing. The anonymity that Tor provides can be used in different ways. It can be used for expression wherever the oppressed are silenced, or wherever every movement is controlled and monitored.
Just like what Joel Falconer of TheNextWeb said: "Tor is not inherently about supporting child porn or any illegal activity, just as paper is not responsible when child porn is printed on it and the postal service is not responsible when pedophiles share pictures in the mail."

CONCLUSION

Browsing the Internet has always been easy, especially with the help of search engines. However, there is a part of the Internet that cannot be seen by the usual search engines because they cannot index its sites, and this is the Deep Web. As the web continues to expand, the Deep Web and its contents will continue to grow.