User:Arefin/Internet Vs World wide web


Internet Vs World Wide Web[edit]


We have all probably seen the terms Internet and World Wide Web (WWW or Web), and may even be somewhat familiar with them. In this chapter we are going to discuss the Internet and the World Wide Web in detail, for a better understanding of the ideas that relate them.

In the previous chapters, you had an overview of Ethernet, the Internet Protocol (IP), the Transmission Control Protocol (TCP) and the Domain Name System (DNS). Section 1 of this chapter introduces the Internet and its architecture (including a brief discussion of the Internet protocol suite, commonly known as TCP/IP), routing and Internet access, as well as Internet governance. Section 3 discusses uses of the Internet step by step. After that, section 4 covers the World Wide Web: its history, the technologies behind it, and its principles. Throughout this chapter, you will learn the history, principles, technologies and interesting facts about the Internet and the World Wide Web, which define each of them as a unique platform in its own right.

Introduction to Internet[edit]

The Internet is an enormous network of interconnected networks: the physical network of computers all over the world. It is a global system that serves millions of users around the world, allowing them to communicate with each other through the Internet protocol suite (TCP/IP). A network, in turn, is a group of two or more computer systems linked together. The constituent networks can be private, public or government networks, linked by a wide array of electronic, optical and wireless networking technologies. Think of a global data network interconnecting every part of the world.

Interlinked Global Data

A wide range of interlinked documents and information is carried over the Internet: the hypertext documents of the WWW (or Web), as well as email and peer-to-peer networks. Hypertext documents are an easy and flexible way of sharing information over the Internet. The idea behind the invention of the Internet was to make the entire system decentralized, and it gave us a very fast way of communicating throughout the world. The Internet is also often referred to as the Net.


Historically, in the 1970s the technical term "internetwork" was used for interconnected networks. Later, in the early 1980s, the shortened word "internet" came to refer to such interconnected networks.

In the late 1960s and early 1970s various networks (such as the ARPANET, Tymnet and Telenet) and internetworking protocols were developed, through which multiple networks could be joined into a network of networks. The foundation of the Internet was laid in 1969, when the US Department of Defense created the ARPANET, a project to allow military personnel to communicate with each other in an emergency. The Advanced Research Projects Agency Network (ARPANET) was one of the world's first operational packet-switching networks, the first to implement TCP/IP, and the ancestor of the global Internet. The first two nodes of the ARPANET were Leonard Kleinrock's Network Measurement Center at UCLA's School of Engineering and Applied Science and Douglas Engelbart's group at SRI International. In 1982, the Internet protocol suite (TCP/IP) was standardized, and the concept of a worldwide network of fully interconnected TCP/IP networks, called the Internet, was introduced. In 1990 the ARPANET was decommissioned.

The Internet spread rapidly across the U.S., Europe and Australia in the mid to late 1980s, and to Asia in the late 1980s and early 1990s. It has grown continuously from that early age to the modern era, driven by the huge amounts of online information, knowledge, business, social networking and so on.

Principles of Internet Architecture[edit]

The foundation of the Internet was a decentralized system with an open door for communication. An Internet communication system consists of interconnected packet networks supporting communication among host computers using the Internet protocols. The networks are interconnected by packet-switching computers, called "gateways" or "IP routers" by the Internet community, or "intermediate systems". To accomplish this, an Internet architecture was needed that offers a solid platform for such a system. The Internet architecture[1] is an internetworking design based on the standard Internet protocol suite, or TCP/IP, which has four layers. Apart from that, there are other layered conceptual models, such as the seven-layer OSI model, for creating a global connection of networks. The two models work in a similar fashion but differ in several respects. Before we briefly discuss the Internet protocol suite, here is a comparison chart of the two models.[2]

  • Comparison chart:

TCP/IP model                                                            | OSI model
Implementation of the OSI model                                         | A reference model
The model around which the Internet developed                           | A theoretical model
Has only 4 layers                                                       | Has 7 layers
Considered more reliable                                                | Considered a reference tool
Protocols are not strictly defined                                      | Strict boundaries for the protocols
Horizontal approach                                                     | Vertical approach
Combines the session and presentation layers into the application layer | Has separate session and presentation layers
Protocols were developed first, then the model                          | The model was developed before the protocols
Supports only connectionless communication in the network layer         | Supports both connectionless and connection-oriented communication in the network layer
Protocol dependent standard                                             | Protocol independent standard

In addition, the Internet architecture is broadly defined by RFC 1958, RFC 3426 and RFC 3439, which give more in-depth knowledge of its design principles.

  • Internet Protocol Suite:

The four layers of the TCP/IP suite[3] are:

1. Link layer: a group of methods or protocols that operate only on a host's link. The link is the physical and logical network component used to interconnect hosts or nodes in a network. A link protocol is a suite of methods and standards that operate only between adjacent network nodes of a local area network segment or a wide area network connection. It solves problems such as specifying sender and receiver, synchronizing time intervals, detecting corrupt data, and collision detection. For example, Ethernet is a link layer protocol that operates in a local area network using MAC addresses.
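To make the link layer's addressing concrete, here is a small sketch of parsing the 14-byte Ethernet II header that precedes every frame: destination MAC, source MAC, and an EtherType identifying the payload. The byte values are made up for illustration, not taken from a real capture.

```python
import struct

# Illustrative frame bytes: broadcast destination, a made-up source MAC,
# and EtherType 0x0800, which means the payload is an IPv4 packet.
frame = bytes.fromhex(
    "ffffffffffff"      # destination MAC (broadcast)
    "001122334455"      # source MAC
    "0800"              # EtherType: 0x0800 = IPv4
) + b"...payload..."

# The Ethernet II header is 6 + 6 + 2 = 14 bytes, big-endian ("network order").
dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])

def mac(b):
    """Format 6 raw bytes as a colon-separated MAC address string."""
    return ":".join(f"{x:02x}" for x in b)

print(mac(dst))        # ff:ff:ff:ff:ff:ff
print(mac(src))        # 00:11:22:33:44:55
print(hex(ethertype))  # 0x800
```

A switch or network card works with exactly these fields: it never looks past the link layer header to decide where a frame goes.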

2. Internet layer: a group of internetworking methods, protocols and specifications used to transport datagrams (also called packets) from the originating host across network boundaries, if necessary, to a destination host specified by a network address (IP address), which is defined for this purpose by the Internet Protocol (IP). The internet layer derives its name from its function of facilitating internetworking: the concept of connecting multiple networks with each other through gateways and routers. It is not responsible for reliable transmission. The core protocols of the internet layer are IPv4, IPv6, the Internet Control Message Protocol (ICMP) and the Internet Group Management Protocol (IGMP).
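The key idea of an IP address is that it has a network part and a host part, and routers act on the network part. A short sketch using Python's standard `ipaddress` module makes this visible; the addresses are documentation examples (RFC 5737 / RFC 3849), not real hosts.

```python
import ipaddress

# A /24 network: the first 24 bits identify the network,
# the remaining 8 bits identify the host within it.
net = ipaddress.ip_network("198.51.100.0/24")
host = ipaddress.ip_address("198.51.100.42")

print(host in net)          # True: the host belongs to this network
print(net.network_address)  # 198.51.100.0
print(net.num_addresses)    # 256 (2^8 host addresses, incl. network/broadcast)

# The same module handles IPv6 addresses uniformly:
print(ipaddress.ip_address("2001:db8::1").version)  # 6
```

This membership test is, in essence, what a router performs on every packet to decide whether a destination is local or must be forwarded through a gateway.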

3. Transport layer: provides end-to-end communication services for applications within a layered architecture of network components and protocols. It also provides convenient services such as connection-oriented data streams, reliability, flow control and multiplexing. The core concepts of this layer are the end-to-end principle and port numbers. TCP uses a three-way handshake to exchange acknowledgements (ACKs) between the host computers. A TCP segment carries two 16-bit port numbers, one for the source and one for the destination. Examples are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP).
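A minimal sketch of process-to-process communication over TCP, using Python's standard `socket` module on the loopback interface: the operating system performs the three-way handshake when `connect()` meets `accept()`, and the 16-bit port number identifies the receiving process.

```python
import socket
import threading

# Server side: bind to port 0 so the OS picks a free 16-bit port for us.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]   # the chosen port, in 1..65535

def serve():
    conn, _ = server.accept()    # the three-way handshake completes here
    conn.sendall(b"hello from the transport layer")
    conn.close()

threading.Thread(target=serve).start()

# Client side: connect() triggers SYN / SYN-ACK / ACK under the hood.
client = socket.create_connection(("127.0.0.1", port))
data = client.recv(1024)
client.close()
server.close()
print(data.decode())  # hello from the transport layer
```

Note that the application never sees the handshake or the ACKs; reliability is the transport layer's job, which is precisely the layering at work.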

4. Application layer: an abstraction layer reserved for communications protocols and methods designed for process-to-process communication across an Internet Protocol computer network. Application layer protocols use the underlying transport layer protocols to establish process-to-process connections via ports. DNS plays an important role in the Internet protocol suite, since it gives us a meaningful scheme for addressing hosts on the Internet. DNS is hierarchical; it uses forward lookups to turn domain names into IP addresses and reverse lookups to turn IP addresses into domain names. Examples are the Domain Name System (DNS), DHCP, FTP, HTTP and SMTP (the Simple Mail Transfer Protocol, used for email).
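The forward and reverse DNS lookups described above can be sketched with the resolver the operating system provides. The name "localhost" is used here so the example works without network access; real lookups would go out to the hierarchical DNS.

```python
import socket

# Forward lookup: name -> IPv4 address.
ip = socket.gethostbyname("localhost")
print(ip)  # 127.0.0.1

# Reverse lookup: address -> name (plus any aliases and addresses).
name, aliases, addresses = socket.gethostbyaddr(ip)
print(name, addresses)
```

An application like a browser does this forward lookup before it can even open a transport layer connection, which is why DNS sits at the heart of almost every Internet service.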

     "Even though the layers are separated in theory there are many services and protocols that act on various layers. 
     The address resolution protocol is an example which connects the link layer and the Internet layer. 
     Network address translation (NAT) is using information from Linked Layer (like MAC addresses), 
     Internet layer (w:IP_addresses|IP addresses) and from the Transport layer (Port numbers) in order to work properly."
  • When transferring data, such as a file, from one computer to another, a protocol on the application layer provides a process-to-process communication channel.
  • The file is usually split into segments, which are encapsulated into TCP segments on the transport layer. The transport layer protocol establishes a host-to-host connection.
  • Each TCP segment is packed inside an IP packet, which can then be routed between various routers and across network boundaries. Routing happens on the internet layer, which interconnects the different networks on the packet's path.
  • To send data from one router to the next within a network, a link layer protocol that the network supports is used: the IP packets are encapsulated inside frames. Ethernet and DSL are typical link layer protocols.
  • As the data arrives, the encapsulation process is reversed in order to reassemble the original file.
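The encapsulation steps above can be sketched as nested data. Real headers are binary, of course; plain dictionaries are used here only to make the layering visible, and all addresses are illustrative.

```python
# Each layer wraps the payload of the layer above it.
segment = {"layer": "transport (TCP)", "src_port": 49152,
           "dst_port": 80, "payload": "file chunk #1"}
packet  = {"layer": "internet (IP)", "src": "198.51.100.1",
           "dst": "203.0.113.7", "payload": segment}
frame   = {"layer": "link (Ethernet)", "src_mac": "00:11:22:33:44:55",
           "dst_mac": "66:77:88:99:aa:bb", "payload": packet}

# On arrival, the receiver peels the layers off in reverse order:
data = frame["payload"]["payload"]["payload"]
print(data)  # file chunk #1
```

Each router on the path only replaces the outermost (link layer) wrapper; the inner IP and TCP layers travel end to end untouched.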

Data Flow of the Internet Protocol Suite (Web Science MOOC)

Routing and Internet access[edit]

  • Routing:

Routing is generally done by a routing device, called a router, connected to two or more computer networks. A router forwards IP packets between networks: it reads the destination address inside each IP packet, looks it up in its routing table and, using its routing protocols, sends the packet on toward the destination node. This process repeats until the packet reaches the destination given in the IP packet. Routers are also used for port forwarding to private Internet-connected servers. For example, port forwarding is often needed for an FTP server, a web server or another server-based application to operate behind a NAT gateway.
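The routing-table lookup just described can be sketched in a few lines: the router collects all table entries whose prefix contains the destination address and forwards to the next hop of the longest (most specific) match. Prefixes and next-hop names below are illustrative documentation values, not a real router's table.

```python
import ipaddress

# A toy routing table: (prefix, next hop). The /0 entry is the default route.
routing_table = [
    (ipaddress.ip_network("0.0.0.0/0"),        "default gateway"),
    (ipaddress.ip_network("203.0.113.0/24"),   "next hop A"),
    (ipaddress.ip_network("203.0.113.128/25"), "next hop B"),
]

def route(dst):
    """Return the next hop for dst using longest-prefix match."""
    dst = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routing_table if dst in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(route("203.0.113.200"))  # next hop B (the /25 is more specific)
print(route("203.0.113.5"))    # next hop A
print(route("192.0.2.9"))      # default gateway
```

Real routers implement the same rule with specialized data structures and hardware, and fill the table via routing protocols such as BGP rather than by hand.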

End nodes use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet. Large organizations may perform the same functions as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect into large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.

  • Internet access:

Internet access connects individual computers, cell phones and computer networks to the Internet, enabling users to use Internet services such as email and the World Wide Web. The most common access methods are dial-up, broadband (over coaxial cable, fiber optics or copper wires), Wi-Fi, satellite and Edge/3G/4G technologies. Nowadays the Internet can be accessed in many public places, such as airports, malls, restaurants, coffee shops and educational institutions, using Wi-Fi hotspots. Internet cafés are also a widely used way of accessing the Internet.

Internet blackouts, causing signal interruptions or slowdowns, can result from natural disasters or cable cuts. For example, the 2008 submarine cable disruption refers to three separate incidents of major damage to submarine optical cables. The first incident damaged up to five high-speed submarine communications cables in the Mediterranean Sea and the Middle East between 23 January and 4 February 2008, causing Internet disruptions and slowdowns for users in the Middle East and India, and calling into doubt the safety of the undersea portion of the Internet cable system. In the second incident, in late February, another outage affected a fiber-optic connection between Singapore and Jakarta. In the third, on 19 December, the FLAG FEA, GO-1, SEA-ME-WE 3 and SEA-ME-WE 4 cables were all cut.

Internet Governance[edit]

ICANN headquarters in the U.S.

As we know, the Internet is a distributed network that works as a decentralized system. So who owns the Internet? The answer is: nobody. Many organizations, corporations, governments, private citizens and service providers own parts of the infrastructure, but no one owns it completely. However, several organizations oversee and standardize what happens on the Internet and assign IP addresses and domain names, for example the National Science Foundation, ICANN, InterNIC, the Internet Architecture Board and the National Telecommunications and Information Administration. There are also Regional Internet Registries (RIRs) for the allocation of IP addresses.

The Internet Society (ISOC) was founded in 1992 with the mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". ISOC provides an administrative home for a number of organizations involved in developing and managing the Internet, such as the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), the Internet Research Task Force (IRTF), the Internet Research Steering Group (IRSG) and the Internet Governance Forum (IGF).

Uses of Internet[edit]

The Internet has opened up great possibilities for everyone to get involved in the virtual world of the Web. It can be accessed globally from computers, mobile phones and gaming consoles. People involved with the Internet are constantly improving its content (text, images, video) as well as its software, enriching the vast amount of information technology out there. The Internet allows computer users to remotely access other computers and information stores easily, with or without computer security, authentication and encryption technologies, depending on the requirements. This also enables new ways of working: for many companies, employees can even work from home.

Overall usage of the Internet has been growing rapidly: the total number of people connected worldwide is more than 2 billion.[4] According to a report by the International Telecommunication Union, the total number of applications downloaded over all types of devices was expected to exceed 50 billion[5] by the end of 2013.

The most common language for communication on the Internet is English. After English (27%), the most common languages on the Web are Chinese (23%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%).[6] Early computer systems were limited to the characters of the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. By region, 45% of the world's Internet users are based in Asia, 21% in Europe, 11% in North America, 11% in Latin America, 7% in Africa, 4% in the Middle East and 1% in Oceania/Australia.

According to Euromonitor, by 2020 43.7% of the world's population will be users of the Internet. Splitting by country, in 2011 Iceland, Norway and the Netherlands had the highest Internet penetration by the number of users, with more than 90% of the population with access.


Email[edit]

The at sign (Source: Email)

Email, or electronic mail, is a method of exchanging digital messages from a sender to one or more recipients. Modern email operates across the Internet or other computer networks. Early email systems had the drawback that the author and the recipient both had to be online at the same time. Today's email systems are based on a store-and-forward model: email servers accept, forward, deliver and store messages, and neither the users nor their computers are required to be online simultaneously.

Historically standards for encoding email messages were proposed as early as 1973. Conversion from ARPANET to the Internet in the early 1980s produced the core of the current services. An email sent in the early 1970s looks quite similar to a basic text message sent on the Internet today. Network-based email was initially exchanged on the ARPANET in extensions to the File Transfer Protocol (FTP), but is now carried by the Simple Mail Transfer Protocol (SMTP). In the process of transporting email messages between systems, SMTP communicates delivery parameters using a message envelope separate from the message (header and body) itself.

The main components of an email are the message envelope, the message header and the message body. The message header contains control information, including the sender's email address and one or more recipient addresses. Usually descriptive information is added as well, such as a subject header field and a message submission date/time stamp. The general format of an email address is "local-part@domain", where the local-part may be up to 64 characters long and the domain name up to 255 characters; but since a forward or reverse path is limited to 256 characters, the entire email address can be no more than 254 characters. Examples of different types of email, by technology, are:

  • Web-based email (webmail)
  • POP3 email services
  • IMAP email servers
  • MAPI email servers
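The "local-part@domain" length rules quoted above can be sketched as a small check. Note that this verifies lengths only, not the full address syntax defined in the email RFCs, which is considerably more involved.

```python
def looks_valid_length(address):
    """Check only the length rules: local-part <= 64 characters,
    domain <= 255 characters, whole address <= 254 characters."""
    if address.count("@") != 1 or len(address) > 254:
        return False
    local, domain = address.split("@")
    return 0 < len(local) <= 64 and 0 < len(domain) <= 255

print(looks_valid_length("alice@example.org"))   # True
print(looks_valid_length("a" * 65 + "@x.org"))   # False: local-part too long
print(looks_valid_length("no-at-sign.example"))  # False: no @ separator
```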

Social Networking[edit]

The Internet has created the opportunity to use social networking websites such as Facebook, Google+, Twitter, MySpace, Flickr, Foursquare, Delphi, Hi5 and so forth. These platforms have created fascinating new ways to socialize and interact over the Internet. Users of these sites are able to add a wide variety of information to pages, to pursue common interests and to connect with others. It is also possible to find existing acquaintances and to establish links between existing groups of people in order to share information.

     “The things you share are things that make you look good, things which you are happy to tie into your identity.”
                       —Hilary Mason, chief data scientist, bitly, VentureBeat, 2012

People use online chat for real-time conversation over the Internet, transmitting text or video messages using various programs and protocols, for instance Skype, Yahoo, Windows Live, WhatsApp, Google+ Hangouts, eBuddy and many more. Many of these social networking sites and programs are free to use, allowing everyone to get connected. Another interesting possibility is finding out what is happening on the Internet, where and when, or what kind of information is getting the most attention around the world, just by collecting data from social networking sites. Analyzing online crowd behavior, for example, yields statistical information about activity across the Web.


Entertainment[edit]

Entertainment is another part of the Internet, blending music, videos, lyrics, gaming, movies, sports and much more. Popular sites for accessing or downloading music and video are YouTube and Spotify. There are many forums and sites where you can get the latest news about your favorite bands or stars. Live streams of news, TV shows and sports can be watched from any part of the world with a click. Multiplayer gaming creates communities where people of all ages and origins enjoy the fast-paced world of multiplayer games; besides computers, people nowadays use gaming consoles such as the Nintendo, Xbox and PlayStation online. These games range from first-person shooters and role-playing video games to online gambling such as poker. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Free and fee-based services exist for all of these activities, using centralized servers and distributed peer-to-peer technologies. Some of these sources exercise more care with the original artists' copyrights than others.

E-Learning and E-Business[edit]

  • E-Learning:

E-learning is an educational method that works over the Internet. Information and communication technology has given us the chance to learn and gather endless knowledge from a massive virtual book of information. Electronic learning includes numerous types of media that deliver text, audio, images, animation and streaming video as learning materials. E-learning can take place inside or outside the classroom, and can be self-paced or instructor-led. It is well suited for distance education, but it can also be used alongside traditional teaching methods, which is common. E-learning is mostly used at the university level, but some countries, such as the United States, use it at the college level as well. There are still ongoing debates about online education because of the vast amount of content that needs to be produced to use it properly at every level. Content is a core component of e-learning and raises issues such as pedagogy and the reuse of learning objects. While there are a number of ways to achieve a rich and interactive e-learning platform, one option is a design architecture composed of the “Five Types of Content in eLearning” (Clark, Mayer, 2007). Content normally comes in one of five forms: (Source: Electronic performance support system)

  • Fact - unique data
  • Concept - a category that includes multiple examples, including various types of instructional design
  • Process - a flow of events or activities
  • Procedure - a step-by-step task
  • Strategic principle - a task performed by applying a framework

The Internet is the main vehicle for e-learning, and numerous sites are available. Several technologies support it: devices such as computers, tablets and mobile phones, together with audio, video, webcams, blogging, smart boards and screencasting (for live video streaming). Massive open online courses (MOOCs) may even replace parts of the traditional educational system. Universities such as the Massachusetts Institute of Technology (MIT), Stanford, Princeton, Harvard and the University of Koblenz-Landau (Department of WeST) offer courses in a wide range of disciplines at no charge, as do Google, Microsoft, wiki-based projects and private organizations such as Udacity, Khan Academy, Coursera and YouTube.

  • E-Business:

E-commerce, or electronic commerce, is a type of industry where the buying and selling of products is conducted over electronic systems such as the Internet and other computer networks. Electronic commerce draws on technologies such as mobile commerce, electronic funds transfer, supply chain management, Internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems and automated data collection systems. Modern electronic commerce typically uses the World Wide Web at least at one point in the transaction's life cycle, although it may encompass a wider range of technologies such as email, mobile devices, social media and telephones as well. Electronic commerce is generally considered the sales aspect of e-business; it also includes the exchange of data to facilitate the financing and payment aspects of business transactions. It is an effective and efficient way of communicating within an organization and one of the most effective and useful ways of conducting business. The major types of e-commerce are: (Source: Types of E-commerce)

  • Business-to-Business (B2B):

B2B e-commerce is simply defined as e-commerce between companies. About 80% of e-commerce is of this type, and most experts predict that B2B e-commerce will continue to grow faster than the B2C segment.

  • Business-to-Consumer (B2C):

Business-to-consumer e-commerce, or B2C, is commerce between companies and consumers. It involves customers gathering information and purchasing physical goods or information goods.

  • Business-to-Government (B2G):

Business-to-government e-commerce or B2G is generally defined as commerce between companies and the public sector.

  • Consumer-to-Consumer (C2C):

Consumer-to-consumer e-commerce or C2C is simply commerce between private individuals or consumers. M-commerce is the buying and selling of goods and services through wireless technology handheld devices such as cellular telephones and personal digital assistants (PDAs).


Telecommuting[edit]

Telecommuting is working from a remote location. A person who telecommutes is known as a "telecommuter", "teleworker" or "work-at-home" employee. The Internet has made it much simpler to perform such work from home or on the go. Telecommuters are equipped with web conferencing capabilities, allowing them to sit in on office meetings via webcam or to join by conference call.

People enjoy the flexibility of working their own hours, at their own pace, as long as all deadlines are met. The flip side is the lack of companionship, which can make telecommuting rather lonely, and people have to motivate themselves in order to work effectively. By missing office meetings, a worker can also miss out on vital pieces of information. Telecommuting not only gives an employee the convenience of working at home, it also allows the employer to save money on overhead expenses such as rent and utilities. In addition, since telecommuters tend to be happier, they are often more productive: they may spend more of their time actually working than office-based counterparts who spend time on lunch breaks or gossiping over coffee or cigarettes.

Political revolutions[edit]

The Internet has also become a platform for politics. Many countries have already adopted the Internet to interact with their citizens, and some nations have used it successfully as a tool. Consider e-participation, for example: e-voting, petitioning, policy making, and consultation in law making and decision making have all been made possible by the Internet. Nations such as Switzerland (from 2003) and Estonia (from 2005) have run elections using e-voting, which has had a huge impact on both the public and the government itself. Technologies and tools are still being developed to perform these acts transparently, but the revolution is nevertheless worth mentioning, all thanks to the Internet.

Freedom of Voice[edit]

Citizens had freedom of voice even before these technological developments, but social media on the Internet has added even greater value to it. Look at the political revolution at Tahrir Square, Egypt, in 2011, where social media helped protesters organize, communicate and broadcast information (Pic-1). Or consider the Shahbag protest in Dhaka, Bangladesh, in 2013, where people regardless of age, religion or race voiced their demand for the punishment of the 1971 war criminals (Pic-2). These incidents show how fast people can now communicate with each other and express their demands to their government for the betterment of their nation.

Internet censorship[edit]

Internet censorship is a control process that determines what can be viewed, published, shared or accessed on the Internet. The rules can be imposed by governments or by private organizations, and are applied to protect social, religious, moral or commercial interests against misuse of content. Governments such as those of China, Saudi Arabia, the UAE and North Korea restrict content on sensitive topics such as religion, human rights and politics. Several nations, including the United States, Australia and Denmark, have required ISPs to filter child pornography or other inappropriate content from the Internet. This is done using content-control software. Filtering can also be implemented in other ways: by software on a personal computer, or via network infrastructure such as proxy servers, DNS servers or firewalls that provide Internet access. Examples are browser-based filters, email filters, client-side filters, network-based filtering and search-engine filters. Some countries have also not yet legalized VoIP, to protect telecommunication companies from economic losses; and media sharing on YouTube, social networking on Facebook and some blog hosting sites are blocked in some countries for copyright protection.
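The simplest of the filtering approaches mentioned above, a client-side domain blocklist, can be sketched in a few lines. The domain names are placeholders, not a real blocklist, and a real filter would hook into the resolver or proxy rather than a plain function.

```python
# Hypothetical blocklist of domains that should not resolve.
BLOCKLIST = {"blocked.example", "banned.example"}

def filtered_lookup(hostname):
    """Refuse to resolve a blocked domain or any of its subdomains."""
    if hostname in BLOCKLIST or any(
        hostname.endswith("." + d) for d in BLOCKLIST
    ):
        return None  # a real filter might return a block-page address instead
    return "resolve normally"

print(filtered_lookup("news.example.org"))     # resolve normally
print(filtered_lookup("blocked.example"))      # None
print(filtered_lookup("www.blocked.example"))  # None
```

DNS-based national filtering works on the same principle, just applied at the resolver that an entire ISP's customers share.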

Introduction to World wide web(WWW or Web)[edit]

WWW logo by Robert Cailliau

The World Wide Web (WWW, or the Web) is an application, or service, of the Internet. The Web is a system of interlinked hypertext documents and information. Many people have the misconception that the Internet and the Web are the same thing; in truth the Internet is the global system that makes data accessible between hosts, while the Web is an application and a way of accessing that data over the Internet. The Web could not have been built without the invention of the Internet. Similarly, we can say that the Internet is a physical network of hosts, whereas the Web is a virtual network of hyperlinked websites. Moreover, the Web is an application with both technical and sociological aspects, interconnecting people through their computers. The major building blocks of the Web are the Hypertext Markup Language (HTML), the Uniform Resource Locator (URL) and the Hypertext Transfer Protocol (HTTP). In this section we discuss the inspiration for and evolution of the Web, as well as the technologies behind it, in detail.

History behind the Web[edit]

The World Wide Web was invented by the British scientist Tim Berners-Lee in March 1989 at CERN.[7] Even before that, in June 1980, Berners-Lee had developed a simple hypertext program at CERN, a software project called ENQUIRE. Although it shared some ideas with the World Wide Web and the later Semantic Web, it was different in many ways. The goal of ENQUIRE was compatibility across different networks, disk formats, data formats and character encoding schemes, so that information could be transferred between different systems; earlier hypertext systems (for example Memex and NLS) did not meet these requirements. ENQUIRE was a closed project, while the Web was made for open use, so that everyone could contribute to it and develop it in a simple way.

Cailliau Abramatic Berners-Lee 10 years WWW consortium

In November 1990, Tim Berners-Lee developed the prototype of the World Wide Web on a NeXT computer. With help from Robert Cailliau, he published a more formal proposal on 12 November 1990 to build a "Hypertext project" called "WorldWideWeb": a "web" of "hypertext documents" to be viewed by "browsers" using a client-server architecture. This proposal estimated that a read-only web would be developed within three months and that it would take six months to achieve "the creation of new links and new material by readers".

By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first web browser (named WorldWideWeb, which was also a web editor), the first HTTP server software (later known as CERN httpd), the first web server, and the first web pages, which described the project itself. The browser could access Usenet newsgroups and FTP files as well, but it could run only on the NeXT; Nicola Pellow therefore created the Line Mode Browser, a simple text browser that could run on almost any computer.

Internaut Day is celebrated on 23 August, the anniversary of the World Wide Web, which was developed at the CERN laboratories in Switzerland. Berners-Lee posted a short summary of the World Wide Web project on the alt.hypertext newsgroup:

  "The WorldWideWeb (WWW) project aims to allow all links to be made to any information anywhere. [...] The WWW project 
   was started to allow high energy physicists to share data, news, and documentation. We are very interested in 
   spreading the web to other areas, and having gateway servers for other data. Collaborators welcome!" 
  —from Tim Berners-Lee's first message.

Tim Berners-Lee was knighted in 2004 by Queen Elizabeth II for his contribution to the World Wide Web.

: Early Web browsers[edit]

Early websites combined links for both the HTTP web protocol and the Gopher protocol, which provided access to content through hypertext menus presented as a file system rather than through HTML files. Early Web users would navigate either by bookmarking popular directory pages, such as Berners-Lee's first site, or by consulting updated lists such as the NCSA "What's New" page. Some sites were also indexed by WAIS, enabling users to submit full-text searches similar to the capability later provided by search engines.

There was still no graphical browser available for computers besides the NeXT. This changed in April 1992 with the release of Erwise, an application developed at the Helsinki University of Technology, and in May with ViolaWWW, created by Pei-Yuan Wei, which included advanced features such as embedded graphics, scripting, and animation. ViolaWWW was originally an application for HyperCard. Both programs ran on the X Window System for Unix.

Students at the University of Kansas adapted an existing text-only hypertext browser, Lynx, to access the web. Lynx was available on Unix and DOS, and some web designers, unimpressed with glossy graphical websites, held that a website not accessible through Lynx was not worth visiting.

In 1993, Mosaic, a graphical web browser, was introduced; it was developed by a team at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UIUC), led by Marc Andreessen. Surprisingly, the first Mosaic browser lacked a "back" button, a feature proposed in 1992–93 by the same individual who invented the concept of clickable text documents. The request was emailed from the University of Texas computing facility. The browser was intended to be an editor and not simply a viewer, and it was to work with computer-generated hypertext lists called "search engines".

The first Microsoft Windows browser was Cello, written by Thomas R. Bruce for the Legal Information Institute at Cornell Law School to provide legal information, since lawyers had more access to Windows than to Unix.[8] Cello was released in June 1993. The NCSA released Mac Mosaic and WinMosaic in August 1993.

: Web Servers[edit]

The first Web server: the NeXTcube used by Tim Berners-Lee at CERN.

The primary role of a web server is to deliver web pages to clients. Communication between client and server takes place using the Hypertext Transfer Protocol. Web pages are HTML documents and may include images, videos, style sheets, scripts and text content. The web browser initiates communication by making an HTTP request, and the server responds with the requested content or with an error message. A server implementing the full HTTP specification can also receive content from the client. A web server can be implemented either in the OS kernel (like Microsoft Internet Information Services on Windows or TUX on GNU/Linux) or in user space (like regular applications).
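This request-response cycle can be sketched with Python's built-in http.server and urllib modules. This is a minimal illustration, not production server code; the page content, handler name and ephemeral-port handling are invented for the example:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal handler: it answers every GET request with a small HTML page.
class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body><p>Hello from a tiny web server</p></body></html>"
        self.send_response(200)                       # status line: 200 OK
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                        # the response body

    def log_message(self, *args):                     # silence request logging
        pass

# Bind to an ephemeral localhost port and serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The browser's role, played here by urllib: send a GET, read the response.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/index.html") as resp:
    status = resp.status
    content = resp.read()

server.shutdown()
print(status)  # 200
```

A real web server adds routing, concurrency and error handling on top of this same cycle, but the exchange itself stays exactly as shown: one HTTP request in, one HTTP response out.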

Many web servers also support server-side scripting using Active Server Pages (ASP), PHP, or other scripting languages. This means that the behavior of the web server can be scripted in separate files, while the actual server software remains unchanged.

Web servers are not always used for serving the World Wide Web. They can also be found embedded in devices such as printers, routers, webcams and serving only a local network. The web server may then be used as a part of a system for monitoring and/or administering the device in question. This usually means that no additional software has to be installed on the client computer, since only a web browser is required.

: Operative Scheme of the Web[edit]

  • Uniform Resource Identifier (URI):
Euler diagram showing the relationship between URIs, URLs and URNs

A URI is a compact string of characters that identifies an abstract or physical resource. It can be further classified as a locator, a name, or both. The URI scheme defines the namespace of the URI and may restrict the syntax and semantics of the identifier. Internet standard STD 66 (also RFC 3986) defines the generic syntax to be used by all URI schemes. Every URI consists of up to four parts, as follows:

                       <scheme name> : <hierarchical part> [ ? <query> ] [ # <fragment> ]

Examples of URIs:

  • urn:isbn:15353613
  • Uniform Resource Locator (URL):

A URL is a specific character string that constitutes a reference to a resource. In most web browsers, the URL of a web page is displayed at the top, inside an address bar. Technically, it refers to the subset of URIs that provide a primary access mechanism by means of location. URLs are commonly used for web pages, but they are also used for file transfer, email, telephone numbers and many other applications. A Uniform Resource Locator functions like a person's street address. For example, a browser typically resolves an HTTP URL by performing an HTTP request to the named host, using port number 80 by default.

  • Uniform Resource Name (URN):

The term URN refers to URIs under the "urn" scheme, which are required to remain globally unique and persistent even when the resource ceases to exist or becomes unavailable. A Uniform Resource Name functions like a person's name. For example, in urn:isbn:15353613, "isbn" can be seen as a street name and the number "15353613" as a generic street number.
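The generic URI structure above can be demonstrated with Python's urllib.parse module; the URL in this example is hypothetical:

```python
from urllib.parse import urlparse

# Split a hypothetical URL into the generic URI components.
parts = urlparse("http://www.example.com:8080/books/index.html?isbn=15353613#chapter-2")

print(parts.scheme)    # 'http'              -> the scheme name
print(parts.netloc)    # 'www.example.com:8080'  -> part of the hierarchical part
print(parts.path)      # '/books/index.html'     -> part of the hierarchical part
print(parts.query)     # 'isbn=15353613'     -> the optional query
print(parts.fragment)  # 'chapter-2'         -> the optional fragment
```

Note how the scheme, hierarchical part, query and fragment line up with the four-part template given earlier.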

  • Hypertext Transfer Protocol (HTTP):

HTTP is an application protocol for distributed, collaborative, hypermedia information systems, and it is the foundation of data communication for the World Wide Web. Hypertext is structured text that uses logical links (hyperlinks) between nodes containing text, and HTTP is the protocol used to exchange or transfer hypertext. The standards development of HTTP was coordinated by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), culminating in the publication of a series of Requests for Comments (RFCs), most notably RFC 2616 (June 1999), which defines HTTP/1.1, the version of HTTP in common use. The scheme specifier "http://" at the start of a web URI refers to the Hypertext Transfer Protocol and uses port 80 by default. Unlike the "www" hostname prefix, which has no specific technical purpose, the scheme specifies the communication protocol to be used for the request and response. The HTTP protocol is fundamental to the operation of the World Wide Web: it functions as a request-response protocol in the client-server computing model and defines several request methods, such as GET, POST, DELETE and CONNECT.

  • Hypertext Transfer Protocol Secure (HTTPS):

HTTPS is a communications protocol for secure communication over a computer network, with especially wide deployment on the Internet. Technically, it is not a protocol by itself but rather the result of layering the Hypertext Transfer Protocol (HTTP) on top of the SSL/TLS protocol, thus adding the security capabilities of SSL/TLS to standard HTTP communications. The added encryption layer in HTTPS is essential when confidential information such as passwords or banking details is exchanged over the public Internet. The HTTPS syntax is identical to the standard HTTP scheme apart from its scheme token: the scheme specifier "https://" at the start of a web URI refers to HTTP Secure and uses port 443 by default. As of 2 September 2013, 24.6% of the Internet's 168,088 most popular web sites had a secure implementation of HTTPS.

: HTML[edit]

HTML5 logo and wordmark

HTML stands for HyperText Markup Language. It is the markup language used to create the documents of the World Wide Web. HTML is an open standard and is constantly evolving. HTML documents are platform independent; they can be displayed on Windows, Mac or Linux operating systems. The most common filename extensions for files containing HTML are .html and .htm. HTML was strongly influenced by SGMLguid, an SGML-based documentation format used at CERN.

HTML is written in the form of HTML elements consisting of tags enclosed in angle brackets (like <html>), within the web page content. HTML tags most commonly come in pairs like <h1> and </h1>, although some tags represent empty elements and so are unpaired, for example <img>. The first tag in a pair is the start tag, and the second tag is the end tag (they are also called opening tags and closing tags). In between these tags web designers can add text, further tags, comments and other types of text-based content. The purpose of a web browser is to read HTML documents and compose them into visible or audible web pages. The browser does not display the HTML tags, but uses the tags to interpret the content of the page. It can embed scripts written in languages such as JavaScript which affect the behavior of HTML web pages. Web browsers can also refer to Cascading Style Sheets (CSS) to define the appearance and layout of text and other material.

HTML lacks some of the features found in earlier hypertext systems, such as source tracking and fat links. Even some hypertext features that were in early versions of HTML have been ignored by most popular web browsers until recently, such as the link element and in-browser Web page editing. Sometimes Web services or browser manufacturers remedy these shortcomings. For instance, wikis and content management systems allow users to edit the Web pages they visit.

HTML markup consists of several key components, including tags (and their attributes), character-based data types, character references and entity references. Another important component is the document type declaration, which triggers standards mode rendering. An example of the basic program written in HTML code:

<!DOCTYPE html>
<html>
  <head>
    <title>Learn HTML</title>
  </head>
  <body>
    <p>Basics of HTML</p>
  </body>
</html>

(The text between <html> and </html> describes the web page, and the text between <body> and </body> is the visible page content. The markup '<title>Learn HTML</title>' defines the browser page title.) The document type declaration <!DOCTYPE html> is the one for HTML5. If it is not included, various browsers will revert to "quirks mode" rendering.
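The browser behaviour described above, interpreting tags rather than displaying them, can be sketched with Python's html.parser. The TitleFinder class and the page string below are illustrative only:

```python
from html.parser import HTMLParser

# A small self-contained HTML document, modelled on the example above.
page = """<!DOCTYPE html>
<html>
  <head><title>Learn HTML</title></head>
  <body><p>Basics of HTML</p></body>
</html>"""

# A browser-like parser: tags steer its behaviour but are never shown as text.
class TitleFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":          # start tag switches state on
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":          # matching end tag switches it off
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:           # collect only the text between the tags
            self.title += data

finder = TitleFinder()
finder.feed(page)
print(finder.title)  # Learn HTML
```

The paired start and end tags act as switches for the parser's state, which is exactly how a browser knows that this particular run of text belongs in the title bar rather than in the page body.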

: Privacy & Security[edit]

When a web page is requested from a web server, the server needs to identify the requesting user, which is done via the IP address from which the request arrived. To make browsing faster, most web browsers also store web information in the history and keep a local cache of previously visited content. For more secure web surfing, HTTPS encryption is used. User identification on the web can also happen through various processes in which the user personally provides information in order to view or receive content, such as a name, address or e-mail address. Upon receiving such information, a connection can be made between the current web traffic and that user. If the website uses HTTP cookies, username and password authentication, or other tracking techniques, then it will be able to relate other web visits, before and after, to the identifiable information provided.

Web security has always been a concern among the people involved with the web. A huge amount of private and public data is out there on the web, and its volume keeps increasing. Cyber crime and cyber attacks, for example identity theft, fraud, and the spreading of malware and viruses, are a major concern. Although such acts are known to be unethical, some people still commit them and always will. For that reason, security products such as McAfee and AVG are developed and updated on a regular basis, and many organizations and qualified personnel work to prevent such attacks and protect the web from them. Maintaining safety and digital rights is very important for the further development of the web.

: Foundation of W3C[edit]

The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee in October 1994. It was founded at the Massachusetts Institute of Technology Laboratory for Computer Science (MIT/LCS) with support from the Defense Advanced Research Projects Agency (DARPA), which had pioneered the Internet. A year later, a second site was founded at INRIA (a French national computer research lab) with support from the European Commission DG InfSo, and in 1996, a third continental site was created in Japan at Keio University. Since then, Berners-Lee has played an active role in guiding the development of web standards and has advocated his vision of a Semantic Web. Tim Berners-Lee originally expressed the vision of the Semantic Web as follows:

 "I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, 
 links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to 
 emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by 
 machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize."
                                                                                             — Tim Berners-Lee, 1999

The World Wide Web enabled the spread of information over the Internet through an easy-to-use and flexible format, and thus played an important role in popularizing use of the Internet. Although the two terms are sometimes confused, the World Wide Web is not identical to the Internet. The web is a collection of documents and of client and server software, using Internet protocols such as TCP/IP and HTTP.

: Design principles of the Web[edit]

Speaking about the design principles of the World Wide Web: what were the motivations, policies, challenges and goals that Tim Berners-Lee put together to define the web as distinct from the internet? Tim articulated a number of design principles for the web, and in this section we are going to talk about those backbones of the web.

  • Decentralized System:

A decentralized system has no central force that controls the entire system, even though it consists of many external systems. In this way, the system is not bound to any particular section and can keep working despite partial system failure, which is remarkable. For example, think of a fishing net, where each knot represents an independent system and the lines between the knots represent the communication channels between the systems. Even if one or two knots break, the net will still be able to catch fish without too much difficulty.

  • Distributed System:

Distributed systems come in two types: centralized and decentralized. For instance, a client-server system is a centralized distributed system, as is a multi-tiered application (such as one built on the Java EE platform). Alternatively, peer-to-peer communication is a decentralized distributed system which does not contain a server.

Viewed abstractly, a distributed system gives stakeholders the power to produce knowledge, link that knowledge internally, and share those pieces of information around the world.

  • On top of the internet:

The idea of creating the web on top of the internet was to utilize existing, proven technology, although new platforms could have been developed and may yet be developed in the future. Building a completely new system instead would have imposed a huge financial burden and might not even have worked the way the web works now. Tim Berners-Lee took existing technological components such as the internet, DNS and hypertext and blended them together, which laid an open platform for the web to work on.

  • Test of independent invention (ToII):

I would like to start here with a few lines written by Sir Tim Berners-Lee:

   "Take the Web. I tried to make WWW pass the test. Suppose someone had (and it was quite likely) invented a World Wide 
    Web system somewhere else with the same principles. Suppose they called it the Multi Media Mesh and based it 
    on Media Resource Identifiers, the Multi Media Transport Protocol, and a Multi Media Markup Language(tm). 
    After a few years, the Web and the Mesh meet. What is the damage?"
    - Source: W3C

As Tim describes, a system should prove its distinctiveness before it comes into the real world. Designing such a platform takes a lot of effort and time. If two systems encompass a similar nature, they will create division and conflict between themselves, which may lead both to desertion over time, an enormous amount of lost data, and financial breakdown. So it is really important to think an idea through before implementing it, in order to make sure it stands on its own without causing any unexpected collision or downfall.

  • Simplicity:

Simplicity as a design principle means that if a system is easy to understand and easy to access, it will work in the best possible way by avoiding unnecessary complexity. Such a system invites more people to come up with new ideas, contribute spontaneously and enrich it as time passes. If we look back at the HyperText Markup Language, it actually made it easier to write and produce web content, since people were already acquainted with hypertext long before the invention of the World Wide Web. In 1960, the U.S. Navy came up with a design principle called "K-I-S-S":

                                               "keep it simple, stupid."
  • Modular:

Modular design is an approach that divides a system into smaller parts that can be created independently and then used in different systems to drive multiple functions. Modular design on the web is the same as modular design in a computer: the idea is to build a system from easily replaceable components that use standardized interfaces. This technique allows us to upgrade certain aspects of the system easily, without having to dissect the whole system. This principle is closely related to the ToII (test of independent invention).

  • Tolerance:

Yet another gallant quote from Tim-

                          "Be liberal in what you require but conservative in what you do."

In terms of defining the tolerance of the web, we can say that the technologies and the protocols have to maintain the standards for the web to work. There is some tolerance in practice; for example, web browsers are sometimes lenient in interpreting written markup, but that does not mean authors of web content should not follow best practices. For instance, consider an HTTP GET request, which by default connects using port 80: the request has to be well formed in order for the protocol to work properly. A standard HTTP GET request would be:

                                           "GET/index.html HTTP/1.0 Host: host name"

The principle of tolerance is not an excuse for violating this standard.
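As a sketch, a well-formed GET request can be exercised at the raw socket level against a throwaway local server. The OkHandler class and the loopback addresses here are invented for illustration:

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# A throwaway local server that answers every GET with an empty 200 response.
class OkHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):   # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), OkHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Send the request exactly as the standard requires: a request line,
# a Host header, then a blank line ending the header section.
with socket.create_connection(("127.0.0.1", port)) as sock:
    sock.sendall(b"GET /index.html HTTP/1.0\r\nHost: 127.0.0.1\r\n\r\n")
    chunks = []
    while True:                     # read until the server closes (HTTP/1.0)
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)

status_line = b"".join(chunks).split(b"\r\n")[0].decode()
server.shutdown()
print(status_line)  # e.g. 'HTTP/1.0 200 OK'
```

A malformed request line (say, a missing space or protocol token) would instead draw an error response from the server, which is exactly the point of the principle: tolerance in the reader, strictness in the writer.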

  • HTTP 404:

The 404 is a common error message displayed on the web, indicating that the client managed to communicate with the web server but the server could not find the requested resource. This standard HTTP response code is often received by the client because of broken or dead links on the web. It should not be confused with "server is down" or "server not found" conditions. A 404 can also occur when the client mistypes the URL in the browser.

  • Client server model (request / response):

The client-server model, in the context of communication, is simply a request-response messaging pattern in which a client sends a request and a server returns a response to it. This exchange takes place between client and server using a common language that both can interpret. Client-server protocols operate in the application layer, and servers often expose their resources, such as databases, through an API (application programming interface), an abstraction layer over those resources. For example, when a Facebook user wants to access his account with a browser, the browser initiates a request to Facebook's web server, which asks for a username and password to grant access. The login details are stored in a database, against which the web server runs a program to establish access, and the web server then sends the resulting content to the web browser to interpret. A server may in turn act as a client when submitting data to another server for processing. At every step, client and server exchange messages, with the host processing and sending the data.
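A minimal sketch of this request-response pattern, including the 404 case from the previous subsection, using Python's standard library. The /profile path and the "profile data" body are invented for illustration:

```python
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy server that knows a single page; every other path is 404 Not Found.
class OnePageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/profile":
            body = b"profile data"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)          # reachable server, missing resource

    def log_message(self, *args):         # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), OnePageHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: one request-response exchange per connection.
conn = HTTPConnection("127.0.0.1", port)
conn.request("GET", "/profile")           # the client sends a request...
resp = conn.getresponse()                 # ...the server returns a response
ok_status, ok_body = resp.status, resp.read()
conn.close()

conn = HTTPConnection("127.0.0.1", port)
conn.request("GET", "/no-such-page")      # a broken link
missing_status = conn.getresponse().status
conn.close()
server.shutdown()

print(ok_status, missing_status)  # 200 404
```

Note that the 404 comes back as an ordinary response: the server was reached and replied, it simply could not find the requested resource.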

: Web 2.0[edit]

Web 2.0 does not refer to a new version of the World Wide Web; it builds upon the core concepts of Web 1.0 and moves explicitly towards dynamic web content. Sir Tim Berners-Lee's vision behind the Web was to create a "collaborative medium" where people could meet, read, write and share information among themselves. Web 2.0 carries out that philosophy and allows users to collaborate more diversely by means of social networking, wikis, podcasts, blogs, web applications, video sharing and so forth. The term "Web 2.0" was first used in January 1999 by Darcy DiNucci, a consultant on electronic information design, in an article, "Fragmented Future":

  "The Web we know now, which loads into a browser window in essentially static screenplays, is only an embryo of the Web 
   to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo 
   might develop. The Web will be understood not as screenplays of text and graphics but as a transport mechanism, the 
   ether through which interactivity happens. It will appear on your computer screen, on your TV set, your car dashboard, 
   your cell phone, hand-held game machines, maybe even your microwave oven."
  • Technological Aspect:

Web 2.0 uses some of the same technologies as Web 1.0: languages such as PHP, Ruby, Perl and Python, as well as Enterprise Java (J2EE) and the Microsoft .NET Framework, are used to output data dynamically using information from files and databases. Web 2.0 adds Ajax and JavaScript frameworks such as the YUI Library, Dojo Toolkit, MooTools, jQuery, Ext JS and the Prototype JavaScript Framework. Ajax programming uses JavaScript to upload and download new data from the web server without undergoing a full page reload. Ajax does not replace Web 1.0 protocols such as HTTP, but adds an extra layer on top of the fundamental technologies of the web.

The Web 2.0 syndication feature uses standardized protocols to permit end-users to make use of a website's data in another context. Protocols permitting syndication include RSS (Really Simple Syndication, also known as web syndication), RDF (as in RSS 1.1), and Atom, all of which are XML-based formats. Observers have started to refer to these technologies as web feeds. Specialized protocols such as FOAF and XFN (both for social networking) extend the functionality of sites and permit end-users to interact without centralized websites.

Web 2.0 often uses machine-based interactions such as REST and SOAP. Servers often expose proprietary Application programming interfaces (API), but standard APIs (for example, for posting to a blog or notifying a blog update) have also come into use. Most communications through APIs involve XML or JSON payloads. REST APIs, through their use of self-descriptive messages and hypermedia as the engine of application state, should be self-describing once an entry URI is known. Web Services Description Language (WSDL) is the standard way of publishing a SOAP Application programming interface and there are a range of web service specifications.
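The JSON payloads mentioned above are plain structured text. A minimal sketch with Python's json module, using an invented blog-post payload:

```python
import json

# A hypothetical API payload for posting to a blog.
payload = {"title": "Hello Web 2.0", "tags": ["ajax", "rss"], "draft": False}

encoded = json.dumps(payload)   # the text that travels over HTTP
decoded = json.loads(encoded)   # the structure the receiver reconstructs

print(encoded)
print(decoded == payload)  # True
```

XML payloads play the same role with a different syntax; in both cases the point is a machine-readable format that any client or server, regardless of language, can produce and consume.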

Web 2.0 Map

Web 2.0 can be described in three parts:

  • Rich Internet application (RIA): defines the experience brought from desktop to browser whether it is from a graphical point of view or usability point of view. Some buzzwords related to RIA are Ajax and Flash.
  • Web-oriented architecture (WOA): is a key piece in Web 2.0, which defines how Web 2.0 applications expose their functionality so that other applications can leverage and integrate the functionality providing a set of much richer applications. Examples are feeds, RSS, Web Services, mash-ups.
  • Social Web: defines how Web 2.0 tends to interact much more with the end user and make the end-user an integral part.

Web 2.0 connects the dots between client and server-side software, content syndication and the use of network protocols. Web 2.0 sites provide users with information storage, creation, and distribution capabilities that were not possible in the environment now known as "Web 1.0".






  1. W3C. "Architecture of the World Wide Web, Volume One". Retrieved February 2, 2014.
  2. Vipul Lovekar. "TCP/IP Model Vs. OSI Model". Retrieved February 2, 2014.
  3. "Web Science MOOC: Foundations of the web/Internet Architecture". Retrieved February 1, 2014.
  4. Batambuze III, Ephraim. "Pctech Magazine: Worldwide Internet Users to Surpass 2.7 Billion in 2013". Retrieved February 2, 2014.
  5. ITU Newsroom. "Dramatic growth in data volumes and globalized services create new ICT regulatory challenges". Retrieved February 2, 2014.
  6. Miniwatts Marketing Group, May 31, 2011. "Internet World Stats". Retrieved February 2, 2014.
  7. CERN. "The birth of the web". Retrieved February 2, 2014.
  8. Cornell Law School. "The Cello Internet Browser". Retrieved February 2, 2014.

--Arefin003 (discusscontribs) 12:11, 4 May 2014 (UTC)