Seven Stars Group

IT Consulting, Travelling, Online Shop, Event Organizer, Etc.


Wednesday, 01 September 2010

Dynamic Host Configuration Protocol - DHCP

The Dynamic Host Configuration Protocol (DHCP) is an autoconfiguration protocol used on IP networks. Computers that are connected to IP networks must be configured before they can communicate with other computers on the network. DHCP allows a computer to be configured automatically, eliminating the need for intervention by a network administrator. It also provides a central database for keeping track of computers that have been connected to the network. This prevents two computers from accidentally being configured with the same IP address.
In the absence of DHCP, hosts may be manually configured with an IP address. Alternatively IPv6 hosts may use stateless address autoconfiguration to generate an IP address. IPv4 hosts may use link-local addressing to achieve limited local connectivity.
In addition to IP addresses, DHCP also provides other configuration information, particularly the IP addresses of local caching DNS resolvers. Hosts that do not use DHCP for address configuration may still use it to obtain other configuration information.
There are two versions of DHCP, one for IPv4 and one for IPv6. While both versions bear the same name and perform much the same purpose, the details of the protocol for IPv4 and IPv6 are sufficiently different that they can be considered separate protocols.[1]

History

DHCP was first defined as a standards track protocol in RFC 1531 in October 1993, as an extension to the Bootstrap Protocol (BOOTP). The motivation for extending BOOTP was that BOOTP required manual intervention to add configuration information for each client, and did not provide a mechanism for reclaiming disused IP addresses.
Much work was done to clarify the protocol as it gained popularity, and in 1997 RFC 2131 was released; it remains the standard for IPv4 networks. DHCPv6 is documented in RFC 3315. RFC 3633 added a DHCPv6 mechanism for prefix delegation. DHCPv6 was further extended by RFC 3736 to provide configuration information to clients configured using stateless address autoconfiguration.
The BOOTP protocol itself was first defined in RFC 951 as a replacement for the Reverse Address Resolution Protocol (RARP). The primary motivation for replacing RARP with BOOTP was that RARP was a data link layer protocol. This made implementation difficult on many server platforms, and required that a server be present on each individual network link. BOOTP introduced the innovation of a relay agent, which allowed BOOTP packets to be forwarded off the local network using standard IP routing, so that one central BOOTP server could serve hosts on many IP subnets.[2]

Technical overview

Dynamic Host Configuration Protocol automates network-parameter assignment to network devices from one or more DHCP servers. Even in small networks, DHCP is useful because it makes it easy to add new machines to the network.
When a DHCP-configured client (a computer or any other network-aware device) connects to a network, the DHCP client sends a broadcast query requesting necessary information from a DHCP server. The DHCP server manages a pool of IP addresses and information about client configuration parameters such as default gateway, domain name, the name servers, other servers such as time servers, and so forth. On receiving a valid request, the server assigns the computer an IP address, a lease (length of time the allocation is valid), and other IP configuration parameters, such as the subnet mask and the default gateway. The query is typically initiated immediately after booting, and must complete before the client can initiate IP-based communication with other hosts.
Depending on implementation, the DHCP server may support three methods of allocating IP addresses; a minimal sketch of the three policies follows the list below:
  • dynamic allocation: A network administrator assigns a range of IP addresses to DHCP, and each client computer on the LAN is configured to request an IP address from the DHCP server during network initialization. The request-and-grant process uses a lease concept with a controllable time period, allowing the DHCP server to reclaim (and then reallocate) IP addresses that are not renewed.
  • automatic allocation: The DHCP server permanently assigns a free IP address to a requesting client from the range defined by the administrator. This is like dynamic allocation, but the DHCP server keeps a table of past IP address assignments, so that it can preferentially assign to a client the same IP address that the client previously had.
  • static allocation: The DHCP server allocates an IP address based on a table with MAC address/IP address pairs, which are manually filled in (perhaps by a network administrator). Only requesting clients with a MAC address listed in this table will be allocated an IP address. This feature (which is not supported by all DHCP servers) is variously called Static DHCP Assignment (by DD-WRT), fixed-address (by the dhcpd documentation), Address Reservation (by Netgear), DHCP reservation or Static DHCP (by Cisco/Linksys), and IP reservation or MAC/IP binding (by various other router manufacturers).
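The three policies can be sketched in a few dozen lines. The following Python sketch is illustrative only: the class and method names are invented for this example, and a production server such as ISC dhcpd handles many more cases (renewals, declines, conflict probing).

import time

LEASE_SECONDS = 86400  # one day, matching the lease time in the examples below

class LeasePool:
    """Toy model of the three DHCP allocation policies (not a real server)."""

    def __init__(self, addresses, static_map=None):
        self.free = list(addresses)      # administrator-defined dynamic range
        self.leases = {}                 # MAC -> (ip, expiry): active leases
        self.history = {}                # MAC -> last ip, for automatic allocation
        self.static = static_map or {}   # MAC -> fixed ip, for static allocation

    def allocate(self, mac, now=None):
        now = now or time.time()
        # Static allocation: MAC/IP pairs filled in by the administrator.
        if mac in self.static:
            return self.static[mac]
        # Automatic allocation: prefer the address this client held before.
        ip = self.history.get(mac)
        if ip in self.free:
            self.free.remove(ip)
        elif self.free:
            ip = self.free.pop(0)
        else:
            return None                  # pool exhausted
        # Dynamic allocation: the grant is a lease that must be renewed.
        self.leases[mac] = (ip, now + LEASE_SECONDS)
        self.history[mac] = ip
        return ip

    def reclaim_expired(self, now=None):
        # Reclaim (and later reallocate) addresses whose leases were not renewed.
        now = now or time.time()
        for mac, (ip, expiry) in list(self.leases.items()):
            if expiry < now:
                del self.leases[mac]
                self.free.append(ip)

# Example use (invented data):
#   pool = LeasePool(["192.168.1.100", "192.168.1.101"],
#                    {"00:05:3c:04:8d:59": "192.168.1.10"})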

Technical details

DHCP uses the same two ports assigned by IANA for BOOTP: UDP port 67 for sending data to the server, and UDP port 68 for data to the client. DHCP communications are connectionless in nature.
DHCP operations fall into four basic phases: IP discovery, IP lease offer, IP request, and IP lease acknowledgement.
Where a DHCP client and server are on the same subnet, they communicate via UDP broadcasts. When the client and server are on different subnets, a DHCP relay agent carries the messages between them: the client's broadcast discovery and request messages are forwarded to the server as unicast, and the server's lease offer and acknowledgement messages return through the relay.

DHCP discovery

The client broadcasts messages on the physical subnet to discover available DHCP servers. Network administrators can configure a local router to forward DHCP packets to a DHCP server on a different subnet. The client creates a User Datagram Protocol (UDP) packet with the broadcast destination 255.255.255.255 or the specific subnet broadcast address.
A DHCP client can also request its last-known IP address (in the example below, 192.168.1.100). If the client remains connected to a network for which this IP is valid, the server might grant the request. Otherwise, it depends whether the server is set up as authoritative or not. An authoritative server will deny the request, making the client ask for a new IP address immediately. A non-authoritative server simply ignores the request, leading to an implementation-dependent timeout for the client to give up on the request and ask for a new IP address.
DHCPDISCOVER
UDP Src=0.0.0.0 sPort=68, Dest=255.255.255.255 dPort=67
OP=0x01  HTYPE=0x01  HLEN=0x06  HOPS=0x00
XID=0x3903F326
SECS=0x0000  FLAGS=0x0000
CIADDR=0x00000000
YIADDR=0x00000000
SIADDR=0x00000000
GIADDR=0x00000000
CHADDR=0x00053C04 0x8D590000 0x00000000 0x00000000
192 octets of 0s (BOOTP legacy)
Magic cookie: 0x63825363
DHCP options:
  option 53: DHCP Discover
  option 50: 192.168.1.100 requested
  option 55: Parameter Request List: Subnet Mask (1), Router (3), Domain Name (15), Domain Name Server (6)
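For illustration, the packet above can be assembled with Python's standard library alone. This is a hedged sketch, not a complete DHCP client: the function name is invented, and the values simply mirror the example dump (RFC 2131 field order).

import struct

def build_discover(xid=0x3903F326,
                   mac=bytes.fromhex("00053C048D59"),
                   requested_ip="192.168.1.100"):
    pkt  = struct.pack("!BBBB", 1, 1, 6, 0)      # OP, HTYPE, HLEN, HOPS
    pkt += struct.pack("!I", xid)                # XID
    pkt += struct.pack("!HH", 0, 0)              # SECS, FLAGS
    pkt += bytes(16)                             # CIADDR, YIADDR, SIADDR, GIADDR
    pkt += mac + bytes(10)                       # CHADDR is a 16-byte field
    pkt += bytes(192)                            # BOOTP legacy (sname + file)
    pkt += bytes([0x63, 0x82, 0x53, 0x63])       # magic cookie
    pkt += bytes([53, 1, 1])                     # option 53: DHCP Discover
    pkt += bytes([50, 4]) + bytes(int(o) for o in requested_ip.split("."))
    pkt += bytes([55, 4, 1, 3, 15, 6])           # option 55: mask, router, domain, DNS
    pkt += bytes([255])                          # end of options
    return pkt

# Sending requires a broadcast-enabled UDP socket bound to port 68
# (usually needs administrative rights):
#   import socket
#   s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
#   s.bind(("0.0.0.0", 68))
#   s.sendto(build_discover(), ("255.255.255.255", 67))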

DHCP offer

When a DHCP server receives an IP lease request from a client, it reserves an IP address for the client and extends an IP lease offer by sending a DHCPOFFER message to the client. This message contains the client's MAC address, the IP address that the server is offering, the subnet mask, the lease duration, and the IP address of the DHCP server making the offer.
The server determines the configuration based on the client's hardware address as specified in the CHADDR (Client Hardware Address) field. Here the server, 192.168.1.1, specifies the IP address in the YIADDR (Your IP Address) field.
DHCPOFFER
UDP Src=192.168.1.1 sPort=67, Dest=255.255.255.255 dPort=68
OP=0x02  HTYPE=0x01  HLEN=0x06  HOPS=0x00
XID=0x3903F326
SECS=0x0000  FLAGS=0x0000
CIADDR=0x00000000
YIADDR=0xC0A80164 (192.168.1.100)
SIADDR=0xC0A80101 (192.168.1.1)
GIADDR=0x00000000
CHADDR=0x00053C04 0x8D590000 0x00000000 0x00000000
192 octets of 0s (BOOTP legacy)
Magic cookie: 0x63825363
DHCP options:
  option 53: DHCP Offer
  option 1: 255.255.255.0 subnet mask
  option 3: 192.168.1.1 router
  option 51: 86400s (1 day) IP lease time
  option 54: 192.168.1.1 DHCP server
  option 6: DNS servers 9.7.10.15, 9.7.10.16, 9.7.10.18

DHCP request

A client can receive DHCP offers from multiple servers, but it will accept only one DHCP offer and broadcast a DHCP request message. Based on the Server Identifier option (option 54) in the request, servers are informed whose offer the client has accepted. When the other DHCP servers receive this message, they withdraw any offers that they might have made to the client and return the offered address to the pool of available addresses. The DHCP request message is broadcast, instead of being unicast to a particular DHCP server, because the DHCP client has still not received an IP address; broadcasting also lets one message inform all other DHCP servers that another server will supply the IP address, without a series of unicast messages.
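The options that carry this information are encoded as a simple tag-length-value list after the magic cookie. A minimal Python sketch of how a server might walk it (the function name is invented):

def parse_options(data):
    # 'data' is the options field: (tag, length, value...) triples,
    # terminated by tag 255; tag 0 is a single-byte pad.
    options, i = {}, 0
    while i < len(data) and data[i] != 255:
        if data[i] == 0:
            i += 1
            continue
        tag, length = data[i], data[i + 1]
        options[tag] = data[i + 2:i + 2 + length]
        i += 2 + length
    return options

# server_id = parse_options(opts_bytes).get(54)  # 4-byte address of the chosen
# server; servers seeing a different value here withdraw their offers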
DHCPREQUEST
UDP Src=0.0.0.0 sPort=68, Dest=255.255.255.255 dPort=67
OP=0x01  HTYPE=0x01  HLEN=0x06  HOPS=0x00
XID=0x3903F326
SECS=0x0000  FLAGS=0x0000
CIADDR=0x00000000
YIADDR=0xC0A80164 (192.168.1.100)
SIADDR=0xC0A80101 (192.168.1.1)
GIADDR=0x00000000
CHADDR=0x00053C04 0x8D590000 0x00000000 0x00000000
192 octets of 0s (BOOTP legacy)
Magic cookie: 0x63825363
DHCP options:
  option 53: DHCP Request
  option 50: 192.168.1.100 requested
  option 54: 192.168.1.1 DHCP server

DHCP acknowledgement

When the DHCP server receives the DHCPREQUEST message from the client, the configuration process enters its final phase. The acknowledgement phase involves sending a DHCPACK packet to the client. This packet includes the lease duration and any other configuration information that the client might have requested. At this point, the IP configuration process is completed.
The protocol expects the DHCP client to configure its network interface with the negotiated parameters.
DHCPACK
UDP Src=192.168.1.1 sPort=67, Dest=255.255.255.255 dPort=68
OP=0x02  HTYPE=0x01  HLEN=0x06  HOPS=0x00
XID=0x3903F326
SECS=0x0000  FLAGS=0x0000
CIADDR (Client IP Address)=0x00000000
YIADDR (Your IP Address)=0xC0A80164 (192.168.1.100)
SIADDR (Server IP Address)=0xC0A80101 (192.168.1.1)
GIADDR (Gateway IP Address, filled in by a relay)=0x00000000
CHADDR (Client Hardware Address)=0x00053C04 0x8D590000 0x00000000 0x00000000
192 octets of 0s (BOOTP legacy)
Magic cookie: 0x63825363
DHCP options:
  option 53: DHCP ACK
  option 1: 255.255.255.0 subnet mask
  option 3: 192.168.1.1 router
  option 51: 86400s (1 day) IP lease time
  option 54: 192.168.1.1 DHCP server
  option 6: DNS servers 9.7.10.15, 9.7.10.16, 9.7.10.18
After the client obtains an IP address, the client may use the Address Resolution Protocol (ARP) to prevent IP conflicts caused by overlapping address pools of DHCP servers.

DHCP information

A DHCP client may request more information than the server sent with the original DHCPOFFER. The client may also request repeat data for a particular application. For example, browsers use DHCP Inform to obtain web proxy settings via WPAD. Such queries do not cause the DHCP server to refresh the IP expiry time in its database.

DHCP releasing

The client sends a request to the DHCP server to release the DHCP information and the client deactivates its IP address. As client devices usually do not know when users may unplug them from the network, the protocol does not mandate the sending of DHCP Release.

Client configuration parameters in DHCP

A DHCP server can provide optional configuration parameters to the client. RFC 2132 describes the available DHCP options, as defined by the Internet Assigned Numbers Authority (IANA) in its list of DHCP and BOOTP parameters.
A DHCP client can select, manipulate and overwrite parameters provided by a DHCP server.[3]

Options

An option exists to identify the vendor and functionality of a DHCP client. The information is a variable-length string of characters or octets whose meaning is specified by the vendor of the DHCP client. One method that a DHCP client can use to tell the server that it is running a certain type of hardware or firmware is to set a value in its DHCP requests called the Vendor Class Identifier (VCI, option 60). This allows a DHCP server to differentiate between kinds of client machines and process their requests appropriately; some types of set-top boxes, for example, set the VCI to inform the DHCP server about the hardware type and functionality of the device. The value of this option gives the DHCP server a hint about any extra information that the client needs in a DHCP response.

DHCP Relaying

In small networks DHCP typically uses broadcasts. However, in some circumstances unicast addresses are used, for example when a network has a single DHCP server that provides IP addresses for multiple subnets. When a router for such a subnet receives a DHCP broadcast, it converts it to unicast (with the destination MAC/IP address of the configured DHCP server and the source MAC/IP address of the router itself). The GIADDR field of this modified request is populated with the IP address of the router interface on which the original DHCP request was received. The DHCP server uses the GIADDR field to identify the subnet of the originating device and to select an IP address from the correct pool. The DHCP server then sends the DHCP OFFER back to the router via unicast, and the router converts it back to a broadcast, sent out on the interface of the original device.
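A rough Python sketch of the server-side pool selection driven by GIADDR; the subnets, pools, and function name are invented example data:

import ipaddress

POOLS = {
    ipaddress.ip_network("192.168.1.0/24"): ["192.168.1.100", "192.168.1.101"],
    ipaddress.ip_network("10.0.5.0/24"):    ["10.0.5.50", "10.0.5.51"],
}
SERVER_SUBNET = ipaddress.ip_network("192.168.1.0/24")  # assumed local subnet

def pool_for(giaddr):
    # GIADDR of 0.0.0.0 means the request arrived directly (same subnet);
    # otherwise it holds the relay's interface address on the client's subnet.
    if giaddr == "0.0.0.0":
        return POOLS[SERVER_SUBNET]
    addr = ipaddress.ip_address(giaddr)
    for net, pool in POOLS.items():
        if addr in net:
            return pool
    return None  # no pool configured for the originating subnet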

Reliability

A standard for implementing fault-tolerant DHCP servers has been discussed by the Internet Engineering Task Force,[4] but the draft standard has expired. The draft standard proposed redundant servers, one primary and one backup. The backup server tracks the IP address allocations made by the primary and takes over if the primary fails.

Security

The basic DHCP protocol became a standard before network security became a significant issue: it includes no security features, and is potentially vulnerable to two types of attacks:[5]
  • Unauthorized DHCP servers: because the client has no way of validating the identity of a DHCP server, an unauthorized server can respond to client requests, sending network configuration values that are beneficial to the attacker. For example, an attacker can hijack the DHCP process to configure clients to use a malicious DNS server or router (see also DNS cache poisoning).
  • Unauthorized DHCP Clients: By masquerading as a legitimate client, an unauthorized client can gain access to network configuration and an IP address on a network it should otherwise not be allowed to use. Also, by flooding the DHCP server with requests for IP addresses, it is possible for an attacker to exhaust the pool of available IP addresses, disrupting normal network activity (a denial of service attack).
To combat these threats RFC 3118 ("Authentication for DHCP Messages") introduced authentication information into DHCP messages, allowing clients and servers to reject information from invalid sources. However, many clients and servers still do not fully support authentication, forcing servers to support clients that lack the feature. As a result, other security measures are usually implemented around the DHCP server (such as IPsec) to ensure that only authenticated clients and servers are granted access to the network.
Addresses should be dynamically linked to a secure DNS server, to allow troubleshooting by name rather than by a potentially unknown address.[citation needed] Effective DHCP-DNS linkage requires a file of MAC addresses or local names, sent to DNS, that uniquely identifies the physical hosts. The DHCP server supplies the IP addresses and other parameters such as the default gateway, subnet mask, and the IP addresses of DNS servers, and it ensures that all IP addresses are unique: no IP address is assigned to a second client while the first client's assignment is valid (its lease has not expired). IP address pool management is thus done by the server rather than by a network administrator.

References

  1. ^ Ralph Droms; Ted Lemon (2003). The DHCP Handbook. SAMS Publishing. p. 436. ISBN 0-672-32327-3. 
  2. ^ Bill Croft; John Gilmore (September 1985). "RFC 951 - Bootstrap Protocol". Network Working Group. http://tools.ietf.org/html/rfc951#section-6. 
  3. ^ In Unix-like systems this client-level refinement typically takes place according to the values in a /etc/dhclient.conf configuration file.
  4. ^ Droms, Ralph; Kinnear, Kim; Stapp, Mark; Volz, Bernie; Gonczi, Steve; Rabil, Greg; Dooley, Michael; Kapur, Arun (March 2003). DHCP Failover Protocol. IETF. I-D draft-ietf-dhc-failover-12. http://tools.ietf.org/html/draft-ietf-dhc-failover-12. Retrieved May 09, 2010. 
  5. ^ The TCP/IP Guide - Security Issues

FTP Server

File Transfer Protocol (FTP) is a standard network protocol used to copy a file from one host to another over a TCP/IP-based network, such as the Internet. FTP is built on a client-server architecture and utilizes separate control and data connections between the client and server.[1] FTP is used with user-based password authentication or with anonymous user access.
Applications were originally interactive command-line tools with a standardized command syntax, but graphical user interfaces have been developed for all desktop operating systems in use today.

History

The original specification for the File Transfer Protocol was written by Abhay Bhushan and published as RFC 114 on 16 April 1971, and later replaced by RFC 765 (June 1980) and RFC 959 (October 1985), the current specification. Several proposed standards amend RFC 959; for example, RFC 2228 (June 1997) proposes security extensions and RFC 2428 (September 1998) adds support for IPv6 and defines a new type of passive mode.[2]

Protocol overview

The protocol is specified in RFC 959, which is summarized below.[3]
A client makes a TCP connection to the server's port 21. This connection, called the control connection, remains open for the duration of the session; a second connection, called the data connection, is opened by the server from its port 20 to a client port (specified in the negotiation dialog) as required to transfer file data. The control connection is used for session administration (commands, identification, passwords)[4] exchanged between the client and server using a telnet-like protocol. For example, "RETR filename" would transfer the specified file from the server to the client. Due to this two-port structure, FTP is considered an out-of-band protocol, as opposed to an in-band protocol such as HTTP.[4]
The server responds on the control connection with three-digit status codes in ASCII, with an optional text message; for example, "200" (or "200 OK") means that the last command was successful. The numbers represent the response code and the optional text a human-readable explanation or needed parameters.[1] A file transfer in progress over the data connection can be aborted using an interrupt message sent over the control connection.
FTP can be run in active or passive mode, which determines how the data connection is established. In active mode, the client sends the server the IP address and port number on which the client will listen, and the server initiates the TCP connection. In situations where the client is behind a firewall and unable to accept incoming TCP connections, passive mode may be used: the client sends a PASV command to the server and receives an IP address and port number in return, which the client uses to open the data connection to the server.[3] Both modes were updated in September 1998 to support IPv6, with further changes to passive mode defining extended passive mode.[5]
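Python's standard ftplib module drives the control connection described above and can toggle between the two modes. A brief example; the host, credentials, and file name are placeholders:

from ftplib import FTP

ftp = FTP("ftp.example.com")               # opens the control connection to port 21
ftp.login("anonymous", "guest@example.com")
ftp.set_pasv(True)                         # passive mode (the library default)
# ftp.set_pasv(False)                      # active mode: server connects back (PORT)
ftp.retrlines("LIST")                      # listing arrives over a data connection
with open("myfile.txt", "wb") as f:
    ftp.retrbinary("RETR myfile.txt", f.write)  # "RETR" goes over the control channel
ftp.quit()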
While transferring data over the network, four data representations can be used[2]:
  • ASCII mode: used for text. Data is converted, if needed, from the sending host's character representation to "8-bit ASCII" before transmission, and (again, if necessary) to the receiving host's character representation. As a consequence, this mode is inappropriate for files that contain numeric data in binary, floating point or binary coded decimal form.
  • Image mode (commonly called Binary mode): the sending machine sends each file byte for byte and as such the recipient stores the bytestream as it receives it. (Image mode support has been recommended for all implementations of FTP).
  • EBCDIC mode: used for plain text between hosts using the EBCDIC character set. This mode is otherwise like ASCII mode.
  • Local mode: allows two computers with identical setups to send data in a proprietary format without the need to convert it to ASCII.
For text files, different format control and record structure options are provided. These features were designed to facilitate files containing Telnet or ASA formatting.
Data transfer can be done in any of three modes[1]:
  • Stream mode: Data is sent as a continuous stream, relieving FTP from doing any processing. Rather, all processing is left up to TCP. No End-of-file indicator is needed, unless the data is divided into records.
  • Block mode: FTP breaks the data into several blocks (block header, byte count, and data field) and then passes it on to TCP.[2]
  • Compressed mode: Data is compressed using a single algorithm (usually Run-length encoding).

Security

The original FTP specification has many security weaknesses; in May 1999, RFC 2577 catalogued its vulnerabilities.[6]
FTP has no encryption tools, meaning all transmissions are in clear text; user names, passwords, FTP commands and transferred files can be read by anyone sniffing on the network. This is a problem common to many Internet protocol specifications written prior to the creation of SSL, such as HTTP, SMTP and Telnet.[2] The common solution to this problem is to use either SFTP (SSH File Transfer Protocol) or FTPS (FTP over SSL), which adds SSL or TLS encryption to FTP as specified in RFC 4217.

Anonymous FTP

A host that provides an FTP service may additionally provide anonymous FTP access. Users typically log into the service with an "anonymous" account when prompted for a user name. Although users are commonly asked to send their email address in lieu of a password, no verification is actually performed on the supplied data.[7]

Remote FTP or FTPmail

Where FTP access is restricted, a remote FTP (or FTPmail) service can be used to circumvent the problem. An e-mail containing the FTP commands to be performed is sent to a remote FTP server, which is a mail server that parses the incoming e-mail, executes the FTP commands, and sends back an e-mail with any downloaded files as an attachment. Obviously this is less flexible than an FTP client, as it is not possible to view directories interactively or to modify commands, and there can also be problems with large file attachments in the response not getting through mail servers. As most internet users these days have ready access to FTP, this procedure is no longer in everyday use.

Web browser support

Most recent web browsers can retrieve files hosted on FTP servers, although they may not support protocol extensions such as FTPS.[8] When an FTP (rather than an HTTP) URL is supplied, the accessible contents of the remote server are presented in a manner similar to that used for other web content. Firefox has a full-featured FTP client in the form of an extension called FireFTP.[1]
FTP URL syntax is described in RFC 1738, taking the form:[9]
ftp://[user[:password]@]host[:port]/url-path
(The bracketed parts are optional.) For example:
ftp://public.ftp-servers.example.com/mydirectory/myfile.txt
or:
ftp://user001:secretpassword@private.ftp-servers.example.com/mydirectory/myfile.txt
More details on specifying a user name and password may be found in the browsers' documentation, such as, for example, Firefox and Internet Explorer.
By default, most web browsers use passive (PASV) mode, which more easily traverses end-user firewalls.
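Such URLs decompose mechanically into those parts; Python's standard urlparse illustrates this, using the example URL above:

from urllib.parse import urlparse

u = urlparse("ftp://user001:secretpassword"
             "@private.ftp-servers.example.com/mydirectory/myfile.txt")
print(u.scheme, u.username, u.password, u.hostname, u.path)
# ftp user001 secretpassword private.ftp-servers.example.com /mydirectory/myfile.txt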

NAT and Firewall traversal

FTP normally transfers data by having the server connect back to the client, after the PORT command is sent by the client. This is problematic for both NATs and firewalls, which do not allow connections from the Internet towards internal hosts. For NATs, an additional complication is that the IP address and port number in the PORT command refer to the internal host's IP address and port, rather than the public IP address and port of the NAT.
There are two approaches to this problem. One is that the FTP client and FTP server use the PASV command, which causes the data connection to be established from the FTP client to the server. This is widely used by modern FTP clients. Another approach is for the NAT to alter the values of the PORT command, using an application-level gateway for this purpose.

FTP over SSH (not SFTP)

FTP over SSH (not SFTP) refers to the practice of tunneling a normal FTP session over an SSH connection.
Because FTP uses multiple TCP connections (unusual for a TCP/IP protocol that is still in use), it is particularly difficult to tunnel over SSH. With many SSH clients, attempting to set up a tunnel for the control channel (the initial client-to-server connection on port 21) will protect only that channel; when data is transferred, the FTP software at either end will set up new TCP connections (data channels), which bypass the SSH connection, and thus have no confidentiality, integrity protection, etc.
Otherwise, it is necessary for the SSH client software to have specific knowledge of the FTP protocol, and monitor and rewrite FTP control channel messages and autonomously open new forwardings for FTP data channels. Version 3 of SSH Communications Security's software suite, the GPL licensed FONC, and Co:Z FTPSSH Proxy are three software packages that support this mode.
FTP over SSH is sometimes referred to as secure FTP; this should not be confused with other methods of securing FTP, such as with SSL/TLS (FTPS). Other methods of transferring files using SSH that are not related to FTP include SFTP and SCP; in each of these, the entire conversation (credentials and data) is always protected by the SSH protocol.

References

  1. ^ a b c Forouzan, B.A. (2000). TCP/IP: Protocol Suite. 1st ed. New Delhi, India: Tata McGraw-Hill Publishing Company Limited.
  2. ^ a b c d Clark, M.P. (2003). Data Networks IP and the Internet. 1st ed. West Sussex, England: John Wiley & Sons Ltd.
  3. ^ a b Postel, J., & Reynolds. J. (October 1985). RFC 959. In The Internet Engineering Task Force. Retrieved from http://www.ietf.org/rfc/rfc0959.txt
  4. ^ a b Kurose, J.F. & Ross, K.W. (2010). Computer Networking. 5th ed. Boston, MA: Pearson Education, Inc.
  5. ^ Allman, M. & Metz, C. & Ostermann, S. (September 1998). RFC 2428. In The Internet Engineering Task Force. Retrieved from http://www.ietf.org/rfc/rfc2428.txt
  6. ^ Allman, M. & Ostermann, S. (May 1999). RFC 2577. In The Internet Engineering Task Force. Retrieved from http://www.ietf.org/rfc/rfc2577.txt
  7. ^ Deutsch, P. & Emtage, A. & Marine, A. (May 1994). RFC 1635. In The Internet Engineering Task Force. Retrieved from http://www.ietf.org/rfc/rfc1635.txt
  8. ^ Matthews, J. (2005). Computer Networking: Internet Protocols in Action. 1st ed. Danvers, MA: John Wiley & Sons Inc.
  9. ^ a b Berners-Lee, T. & Masinter, L. & McCahill, M. (December 1994). RFC 1738. In The Internet Engineering Task Force. Retrieved from http://www.ietf.org/rfc/rfc1738.txt

Further reading

  • RFC 959 – (Standard) File Transfer Protocol (FTP). J. Postel, J. Reynolds. October 1985.
  • RFC 1579 – (Informational) Firewall-Friendly FTP.
  • RFC 2228 – (Proposed Standard) FTP Security Extensions.
  • RFC 2389 – (Proposed Standard) Feature negotiation mechanism for the File Transfer Protocol. August 1998.
  • RFC 2428 – (Proposed Standard) Extensions for IPv6, NAT, and Extended passive mode. September 1998.
  • RFC 2640 – (Proposed Standard) Internationalization of the File Transfer Protocol.
  • RFC 3659 – (Proposed Standard) Extensions to FTP. P.Hethmon. March 2007.
  • RFC 5797 – (Proposed Standard) FTP Command and Extension Registry. March 2010.

Web Server

A web server is a computer program that delivers (serves) content, such as web pages, using the Hypertext Transfer Protocol (HTTP), over the World Wide Web. The term web server can also refer to the computer or virtual machine running the program.
In large commercial deployments, a server computer running a web server can be rack-mounted with other servers to operate a web farm.

Overview

The primary function of a web server is to deliver web pages to clients. This means delivery of HTML documents and any additional content that may be included by a document, such as images, style sheets and scripts.
A client, commonly a web browser or web crawler, initiates communication by making a request for a specific resource using HTTP, and the server responds with the content of that resource, or an error message if unable to do so. The resource is typically a real file on the server's secondary storage, but this is not necessarily the case and depends on how the web server is implemented.
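This request/response cycle can be observed with the toy web server bundled with Python (suitable for experiments, not production use):

from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serves files from the current directory on port 8000: a request for
# /path/file.html returns ./path/file.html, or a 404 error if it is absent.
HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()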
While the primary function is to serve content, a full implementation of HTTP also includes a way of receiving content from clients. This feature is used for submitting web forms, including uploading of files.
Many generic web servers also support server-side scripting, e.g., Apache HTTP Server and PHP. This means that the behaviour of the web server can be scripted in separate files, while the actual server software remains unchanged. Usually, this functionality is used to generate HTML documents on the fly, as opposed to returning fixed documents; these are referred to as dynamic and static content respectively. The former is primarily used for retrieving and/or modifying information in databases; the latter is typically much faster and more easily cached.
Web servers are not always used for serving the World Wide Web; they can also be found embedded in devices such as printers, routers and webcams, serving only a local network. The web server may then be used as part of a system for monitoring and/or administering the device in question. This usually means that no additional software has to be installed on the client computer, since only a web browser is required (which is included with most operating systems).

History of web servers

[Image: the world's first web server.]
In 1989 Tim Berners-Lee proposed to his employer CERN (European Organization for Nuclear Research) a new project, which had the goal of easing the exchange of information between scientists by using a hypertext system. As a result of the implementation of this project, in 1990 Berners-Lee wrote two programs: a browser called WorldWideWeb, and the world's first web server, later known as CERN httpd.
Between 1991 and 1994 the simplicity and effectiveness of the early technologies used to surf and exchange data through the World Wide Web helped to port them to many different operating systems and to spread their use among many different groups of people: first scientific organizations, then universities, and finally industry.
In 1994 Tim Berners-Lee founded the World Wide Web Consortium (W3C) to regulate the further development of the many technologies involved (HTTP, HTML, etc.) through a standardization process.

Common features

  1. Virtual hosting to serve many Web sites using one IP address.
  2. Large file support to be able to serve files whose size is greater than 2 GB on a 32-bit OS.
  3. Bandwidth throttling to limit the speed of responses in order to not saturate the network and to be able to serve more clients.
  4. Server-side scripting to generate dynamic Web pages, but still keeping Web server and Web site implementations separate from each other.

Path translation

Web servers are able to map the path component of a Uniform Resource Locator (URL) into:
  • a local file system resource (for static requests);
  • an internal or external program name (for dynamic requests).
For a static request the URL path specified by the client is relative to the Web server's root directory.
Consider the following URL as it would be requested by a client:
http://www.example.com/path/file.html
The client's user agent will translate it into a connection to www.example.com with the following HTTP 1.1 request:
GET /path/file.html HTTP/1.1
Host: www.example.com
The Web server on www.example.com will append the given path to the path of its root directory. On Unix machines, this is commonly /var/www. The result is the local file system resource:
/var/www/path/file.html
The Web server will then read the file, if it exists, and send a response to the client's Web browser. The response will describe the content of the file and contain the file itself.
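A minimal Python sketch of this translation, using the /var/www root from the example (real servers add alias mappings, index files, and many more checks):

import os
from urllib.parse import urlparse

DOC_ROOT = "/var/www"

def translate(url):
    path = urlparse(url).path                            # "/path/file.html"
    full = os.path.normpath(os.path.join(DOC_ROOT, path.lstrip("/")))
    if not full.startswith(DOC_ROOT):                    # refuse ".." escapes
        raise PermissionError("path escapes the document root")
    return full

print(translate("http://www.example.com/path/file.html"))
# /var/www/path/file.html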

Load limits

A Web server (program) has defined load limits, because it can handle only a limited number of concurrent client connections (usually between 2 and 80,000, by default between 500 and 1,000) per IP address (and TCP port) and it can serve only a certain maximum number of requests per second depending on:
  • its own settings;
  • the HTTP request type;
  • content origin (static or dynamic);
  • the fact that the served content is or is not cached;
  • the hardware and software limits of the OS where it is working.
When a Web server is near to or over its limits, it becomes unresponsive.

Kernel-mode and user-mode Web servers

A Web server can be either implemented into the OS kernel, or in user space (like other regular applications).
An in-kernel Web server (like TUX on GNU/Linux or Microsoft IIS on Windows) will usually work faster, because, as part of the system, it can directly use all the hardware resources it needs, such as non-paged memory, CPU time-slices, network adapters, or buffers.
Web servers that run in user-mode have to ask the system for permission to use more memory or more CPU resources. Not only do these requests to the kernel take time, but they are not always satisfied because the system reserves resources for its own usage and has the responsibility to share hardware resources with all the other running applications.
Also, applications cannot access the system's internal buffers, which causes useless buffer copies that create another handicap for user-mode web servers. As a consequence, the only way for a user-mode web server to match kernel-mode performance is to raise the quality of its code to much higher standards, similar to that of the code used in web servers that run in the kernel. This is a significant issue under Windows, where the user-mode overhead is about six times greater than that under Linux.[1]

Overload causes

At any time Web servers can be overloaded because of:
  • Too much legitimate Web traffic. Thousands or even millions of clients connecting to the Web site in a short interval, e.g., Slashdot effect;
  • DDoS. Distributed Denial of Service attacks;
  • Computer worms that sometimes cause abnormal traffic because of millions of infected computers (not coordinated among them);
  • XSS viruses can cause high traffic because of millions of infected browsers and/or Web servers;
  • Internet Web robots. Traffic not filtered/limited on large Web sites with very few resources (bandwidth, etc.);
  • Internet (network) slowdowns, so that client requests are served more slowly and the number of connections increases so much that server limits are reached;
  • Web servers (computers) partial unavailability. This can happen because of required or urgent maintenance or upgrade, hardware or software failures, back-end (e.g., DB) failures, etc.; in these cases the remaining Web servers get too much traffic and become overloaded.

Overload symptoms

The symptoms of an overloaded Web server are:
  • requests are served with (possibly long) delays (from 1 second to a few hundred seconds);
  • 500, 502, 503, 504 HTTP errors are returned to clients (sometimes also unrelated 404 error or even 408 error may be returned);
  • TCP connections are refused or reset (interrupted) before any content is sent to clients;
  • in very rare cases, only partial contents are sent (but this behavior may well be considered a bug, even if it usually depends on unavailable system resources).

Anti-overload techniques

To partially overcome the above load limits and to prevent overload, most popular Web sites use common techniques like:
  • managing network traffic, by using:
    • Firewalls to block unwanted traffic coming from bad IP sources or having bad patterns;
    • HTTP traffic managers to drop, redirect or rewrite requests having bad HTTP patterns;
    • Bandwidth management and traffic shaping, in order to smooth down peaks in network usage;
  • deploying Web cache techniques;
  • using different domain names to serve different (static and dynamic) content by separate Web servers, i.e.:
    • http://images.example.com
    • http://www.example.com
  • using different domain names and/or computers to separate big files from small and medium sized files; the idea is to be able to fully cache small and medium sized files and to efficiently serve big or huge (over 10 - 1000 MB) files by using different settings;
  • using many Web servers (programs) per computer, each one bound to its own network card and IP address;
  • using many Web servers (computers) that are grouped together so that they act or are seen as one big Web server, see also: Load balancer;
  • adding more hardware resources (i.e. RAM, disks) to each computer;
  • tuning OS parameters for hardware capabilities and usage;
  • using more efficient computer programs for Web servers, etc.;
  • using other workarounds, especially if dynamic content is involved.

Market structure

[Chart: market share of major web servers.]
The table below lists the top web server software vendors as published in a Netcraft survey in January 2010.
Vendor       Product   Web sites hosted (millions)  Percent
Apache       Apache    111                          54%
Microsoft    IIS        50                          24%
Igor Sysoev  nginx      16                           8%
Google       GWS        15                           7%
lighttpd     lighttpd    1                           0.46%

Proxy Server

In computer networks, a proxy server is a server (a computer system or an application program) that acts as an intermediary for requests from clients seeking resources from other servers. A client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource, available from a different server. The proxy server evaluates the request according to its filtering rules. For example, it may filter traffic by IP address or protocol. If the request is validated by the filter, the proxy provides the resource by connecting to the relevant server and requesting the service on behalf of the client. A proxy server may optionally alter the client's request or the server's response, and sometimes it may serve the request without contacting the specified server. In this case, it 'caches' responses from the remote server, and returns subsequent requests for the same content directly.
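From the client's side, sending requests through a proxy is a small configuration step in most HTTP libraries; a Python example with a placeholder proxy address:

import urllib.request

proxy  = urllib.request.ProxyHandler({"http": "http://proxy.example.com:3128"})
opener = urllib.request.build_opener(proxy)
response = opener.open("http://www.example.com/")  # request travels via the proxy
print(response.status)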
A proxy server has a large variety of potential purposes, including:
  • To keep machines behind it anonymous (mainly for security).[1]
  • To speed up access to resources (using caching). Web proxies are commonly used to cache web pages from a web server.[2]
  • To apply access policy to network services or content, e.g. to block undesired sites.
  • To log / audit usage, i.e. to provide company employee Internet usage reporting.
  • To bypass security/ parental controls.
  • To scan transmitted content for malware before delivery.
  • To scan outbound content, e.g., for data leak protection.
  • To circumvent regional restrictions.
A proxy server that passes requests and replies unmodified is usually called a gateway or sometimes tunneling proxy.
A proxy server can be placed on the user's local computer or at various points between the user and the destination servers on the Internet.
A reverse proxy is (usually) an Internet-facing proxy used as a front-end to control and protect access to a server on a private network, commonly also performing tasks such as load-balancing, authentication, decryption or caching.

Types and functions

Proxy servers implement one or more of the following functions:

Caching proxy server

A caching proxy server accelerates service requests by retrieving content saved from a previous request made by the same client or even other clients. Caching proxies keep local copies of frequently requested resources, allowing large organizations to significantly reduce their upstream bandwidth usage and cost, while significantly increasing performance. Most ISPs and large businesses have a caching proxy. Caching proxies were the first kind of proxy server.
Some poorly-implemented caching proxies have had downsides (e.g., an inability to use user authentication). Some problems are described in RFC 3143 (Known HTTP Proxy/Caching Problems).
Another important use of the proxy server is to reduce the hardware cost. An organization may have many systems on the same network or under control of a single server, prohibiting the possibility of an individual connection to the Internet for each system. In such a case, the individual systems can be connected to one proxy server, and the proxy server connected to the main server.

Web proxy

A proxy that focuses on World Wide Web traffic is called a "web proxy". The most common use of a web proxy is to serve as a web cache. Most proxy programs provide a means to deny access to URLs specified in a blacklist, thus providing content filtering. This is often used in a corporate, educational, or library environment, and anywhere else where content filtering is desired. Some web proxies reformat web pages for a specific purpose or audience, such as for cell phones and PDAs.

Content-filtering web proxy

A content-filtering web proxy server provides administrative control over the content that may be relayed through the proxy. It is commonly used in both commercial and non-commercial organizations (especially schools) to ensure that Internet usage conforms to an acceptable use policy. In some cases users can circumvent the proxy, since there are services designed to proxy information from a filtered website through a non-filtered site to allow it through the user's proxy.
Some common methods used for content filtering include: URL or DNS blacklists, URL regex filtering, MIME filtering, or content keyword filtering. Some products have been known to employ content analysis techniques to look for traits commonly used by certain types of content providers.
A content-filtering proxy will often support user authentication, to control web access. It also usually produces logs, either to give detailed information about the URLs accessed by specific users, or to monitor bandwidth usage statistics. It may also communicate with daemon-based and/or ICAP-based antivirus software to provide security against viruses and other malware by scanning incoming content in real time before it enters the network.

Anonymizing proxy server

An anonymous proxy server (sometimes called a web proxy) generally attempts to anonymize web surfing. There are different varieties of anonymizers. One of the more common variations is the open proxy. Because they are typically difficult to track, open proxies are especially useful to those seeking online anonymity, from political dissidents to computer criminals. Some users are merely interested in anonymity for added security, hiding their identities from potentially malicious websites for instance, or on principle, to facilitate constitutional human rights of freedom of speech, for instance. The server receives requests from the anonymizing proxy server, and thus does not receive information about the end user's address. However, the requests are not anonymous to the anonymizing proxy server, and so a degree of trust is present between that server and the user. Many of them are funded through a continued advertising link to the user.
Access control: Some proxy servers implement a logon requirement. In large organizations, authorized users must log on to gain access to the web. The organization can thereby track usage to individuals.
Some anonymizing proxy servers may forward data packets with header lines such as HTTP_VIA, HTTP_X_FORWARDED_FOR, or HTTP_FORWARDED, which may reveal the IP address of the client. Other anonymizing proxy servers, known as elite or high anonymity proxies, only include the REMOTE_ADDR header with the IP address of the proxy server, making it appear that the proxy server is the client. A website could still suspect a proxy is being used if the client sends packets which include a cookie from a previous visit that did not use the high anonymity proxy server. Clearing cookies, and possibly the cache, would solve this problem.
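These header lines are what a destination site would inspect; a sketch of the server-side check (the function name is invented, and headers is any dict-like mapping of received header names to values):

def apparent_client_ip(headers, remote_addr):
    # Non-anonymous proxies commonly add X-Forwarded-For; its left-most
    # entry is the original client. High-anonymity proxies omit such
    # headers, so only the proxy's own address (remote_addr) is visible.
    forwarded = headers.get("X-Forwarded-For")
    if forwarded:
        return forwarded.split(",")[0].strip()
    return remote_addr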

Hostile proxy

Proxies can also be installed in order to eavesdrop upon the data-flow between client machines and the web. All accessed pages, as well as all forms submitted, can be captured and analyzed by the proxy operator. For this reason, passwords to online services (such as webmail and banking) should always be exchanged over a cryptographically secured connection, such as SSL.

Intercepting proxy server

An intercepting proxy combines a proxy server with a gateway or router (commonly with NAT capabilities). Connections made by client browsers through the gateway are diverted to the proxy without client-side configuration (or often knowledge). Connections may also be diverted from a SOCKS server or other circuit-level proxies.
Intercepting proxies are also commonly referred to as "transparent" proxies, or "forced" proxies, presumably because the existence of the proxy is transparent to the user, or the user is forced to use the proxy regardless of local settings.
Purpose
Intercepting proxies are commonly used in businesses to prevent avoidance of acceptable use policy, and to ease administrative burden, since no client browser configuration is required. This second reason however is mitigated by features such as Active Directory group policy, or DHCP and automatic proxy detection.
Intercepting proxies are also commonly used by ISPs in some countries to save upstream bandwidth and improve customer response times by caching. This is more common in countries where bandwidth is more limited (e.g. island nations) or must be paid for.
Issues
The diversion / interception of a TCP connection creates several issues. Firstly the original destination IP and port must somehow be communicated to the proxy. This is not always possible (e.g. where the gateway and proxy reside on different hosts). There is a class of cross site attacks which depend on certain behaviour of intercepting proxies that do not check or have access to information about the original (intercepted) destination. This problem can be resolved by using an integrated packet-level and application level appliance or software which is then able to communicate this information between the packet handler and the proxy.
Intercepting also creates problems for HTTP authentication, especially connection-oriented authentication such as NTLM, since the client browser believes it is talking to a server rather than a proxy. This can cause problems where an intercepting proxy requires authentication, then the user connects to a site which also requires authentication.
Finally, intercepting connections can cause problems for HTTP caches, since some requests and responses become uncacheable by a shared cache.
Therefore, intercepting connections is generally discouraged. However, due to the simplicity of deploying such systems, they are in widespread use.
Implementation Methods
Interception can be performed using Cisco's WCCP (Web Cache Communication Protocol). This proprietary protocol resides on the router and is configured from the cache, allowing the cache to determine what ports and traffic are sent to it via transparent redirection from the router. This redirection can occur in one of two ways: GRE tunneling (OSI Layer 3) or MAC rewrites (OSI Layer 2).
Once traffic reaches the proxy machine itself, interception is commonly performed with NAT (Network Address Translation). Such setups are invisible to the client browser, but leave the proxy visible to the web server and other devices on the Internet side of the proxy. Recent releases of Linux and some BSDs provide TPROXY (Transparent Proxy), which performs IP-level (OSI Layer 3) transparent interception and spoofing of outbound traffic, hiding the proxy IP address from other network devices.
Detecting
It is often possible to detect the use of an intercepting proxy server by comparing the client's external IP address to the address seen by an external web server, or sometimes by examining the HTTP headers received by a server. A number of sites have been created to address this issue, by reporting the user's IP address as seen by the site back to the user in a web page.
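The comparison those sites perform can be approximated locally. A hedged sketch; the echo service URL is a placeholder for any page that returns the caller's address, and a mismatch indicates address translation of some kind (NAT or an intercepting proxy), not a proxy specifically:

import socket
import urllib.request

local = socket.gethostbyname(socket.gethostname())
external = urllib.request.urlopen("http://ip.example.com/").read().decode().strip()
if local != external:
    print("traffic apparently passes through a NAT or intercepting proxy")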

Transparent and non-transparent proxy server

The term "transparent proxy" is most often used incorrectly to mean "intercepting proxy" (because the client does not need to configure a proxy and cannot directly detect that its requests are being proxied).
However, RFC 2616 (Hypertext Transfer Protocol—HTTP/1.1) offers different definitions:
"A 'transparent proxy' is a proxy that does not modify the request or response beyond what is required for proxy authentication and identification".
"A 'non-transparent proxy' is a proxy that modifies the request or response in order to provide some added service to the user agent, such as group annotation services, media type transformation, protocol reduction, or anonymity filtering".
A security flaw in the way that transparent proxies operate was published by Robert Auger in 2009,[3] and an advisory by the Computer Emergency Response Team[4] was issued listing dozens of affected transparent and intercepting proxy servers.

Forced proxy

The term "forced proxy" is used were the user is being force to use the proxy when they would prefer not to. This can be an "intercepting proxy" or a normal configured proxy where the direct (non-proxied) route is being blocked by a packet filter or similar.
It is also sometimes necessary (i.e., forced) to use a configured proxy due to issues with the interception of TCP connections and HTTP. For instance, interception of HTTP requests can affect the usability of a proxy cache, and can greatly affect certain authentication mechanisms. This is primarily because the client thinks it is talking to a server, so request headers required by a proxy cannot be distinguished from headers that may be required by an upstream server (especially authorization headers). Also, the HTTP specification prohibits caching of responses where the request contained an authorization header.

Suffix proxy

A suffix proxy server allows a user to access web content by appending the name of the proxy server to the URL of the requested content (e.g. "en.wikipedia.org.example.com").
Suffix proxy servers are easier to use than regular proxy servers. The concept appeared in 2003 in the form of IPv6Gate and in 2004 in the form of the Coral Content Distribution Network, but the term suffix proxy was only coined in October 2008 by "6a.nl"[citation needed].
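The suffix scheme is a pure hostname rewrite, as a short sketch shows (the proxy domain is an invented example):

def suffix_url(host, path, proxy_suffix="suffix-proxy.example.com"):
    # en.wikipedia.org -> en.wikipedia.org.suffix-proxy.example.com
    return "http://" + host + "." + proxy_suffix + path

print(suffix_url("en.wikipedia.org", "/wiki/Proxy_server"))
# http://en.wikipedia.org.suffix-proxy.example.com/wiki/Proxy_server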

Open proxy server

Because proxies might be used for abuse, system administrators have developed a number of ways to refuse service to open proxies. Many IRC networks automatically test client systems for known types of open proxy. Likewise, an email server may be configured to automatically test e-mail senders for open proxies.
Groups of IRC and electronic mail operators run DNSBLs publishing lists of the IP addresses of known open proxies, such as AHBL, CBL, NJABL, and SORBS.
The ethics of automatically testing clients for open proxies are controversial. Some experts, such as Vernon Schryver, consider such testing to be equivalent to an attacker portscanning the client host. [5] Others consider the client to have solicited the scan by connecting to a server whose terms of service include testing.

Forward proxy

The terms "forward proxy" and "forwarding proxy" are a general description of behaviour (forwarding traffic) and thus ambiguous. It is used to refer to a proxy able to retrieve from a wide range of sources (in most cases anywhere on Internet). Except for Reverse proxy the types described on this article are more specialized sub-types of the general forward proxy concept.

Reverse proxy server

A reverse proxy is a proxy server that is installed in the neighborhood of one or more web servers. All traffic coming from the Internet and with a destination of one of the web servers goes through the proxy server. The use of "reverse" originates in its counterpart "forward proxy" since the reverse proxy sits closer to the web server and serves only a restricted set of websites.
There are several reasons for installing reverse proxy servers:
  • Encryption / SSL acceleration: when secure web sites are created, the SSL encryption is often not done by the web server itself, but by a reverse proxy that is equipped with SSL acceleration hardware. See Secure Sockets Layer. Furthermore, a host can provide a single "SSL proxy" to provide SSL encryption for an arbitrary number of hosts; removing the need for a separate SSL Server Certificate for each host, with the downside that all hosts behind the SSL proxy have to share a common DNS name or IP address for SSL connections. This problem can partly be overcome by using the SubjectAltName feature of X.509 certificates.
  • Load balancing: the reverse proxy can distribute the load to several web servers, each web server serving its own application area. In such a case, the reverse proxy may need to rewrite the URLs in each web page (translation from externally known URLs to the internal locations).
  • Serve/cache static content: A reverse proxy can offload the web servers by caching static content like pictures and other static graphical content.
  • Compression: the proxy server can optimize and compress the content to speed up the load time.
  • Spoon feeding: reduces resource usage caused by slow clients on the web servers by caching the content the web server sent and slowly "spoon feeding" it to the client. This especially benefits dynamically generated pages.
  • Security: the proxy server is an additional layer of defense and can protect against some OS and WebServer specific attacks. However, it does not provide any protection to attacks against the web application or service itself, which is generally considered the larger threat.
  • Extranet Publishing: a reverse proxy server facing the Internet can be used to communicate to a firewalled server internal to an organization, providing extranet access to some functions while keeping the servers behind the firewalls. If used in this way, security measures should be considered to protect the rest of your infrastructure in case this server is compromised, as its web application is exposed to attack from the Internet.

Tunneling proxy server

A tunneling proxy server is a method of defeating blocking policies implemented using proxy servers. Tunneling proxy servers are used by people who have been blocked from viewing a particular web site. Most tunneling proxy servers are also proxy servers, of varying degrees of sophistication, which effectively implement "bypass policies".
A tunneling proxy server is a web-based page that takes a site that is blocked and "tunnels" it, allowing the user to view blocked pages. A famous example is elgooG, which allowed users in China to use Google after it had been blocked there. elgooG differs from most tunneling proxy servers in that it circumvents only one block.
A September 2007 report from Citizen Lab and BBC.co.uk recommended the secure web-based proxies HTTP Tunnel, StupidCensorship, and CGIProxy. Alternatively, users could partner with individuals outside the censored network running Psiphon or Peacefire tunneling proxy servers. A more elaborate approach suggested was to run free tunneling software such as FreeGate, or the pay services Anonymizer and Ghost Surf. Also listed were the free application tunneling software PaperBus, Gpass and HTTP Tunnel, and the pay application software Relakks and Guardster. Lastly, the anonymous communication networks JAP ANON, Tor, and I2P offer a range of possibilities for secure publication and browsing.[6]
Other options include Garden and GTunnel by Garden Networks.
Students are able to access blocked sites (games, chatrooms, messenger, offensive material, internet pornography, social networking, etc.) through a tunneling proxy server. As fast as the filtering software blocks tunneling proxy servers, others spring up. However, in some cases the filter may still intercept traffic to the tunneling proxy server, thus the person who manages the filter can still see the sites that are being visited.
Another use of a tunneling proxy server is to allow access to country-specific services, so that Internet users from other countries may also make use of them. An example is country-restricted reproduction of media and webcasting.
The use of tunneling proxy servers is usually safe with the exception that tunneling proxy server sites run by an untrusted third party can be run with hidden intentions, such as collecting personal information, and as a result users are typically advised against running personal data such as credit card numbers or passwords through a tunneling proxy server.
In some network configurations, clients attempting to access the proxy server are given different levels of access privilege on the grounds of their computer location or even the MAC address of the network card. However, if one has access to a system with higher access rights, one could use that system as a proxy server that other clients then use to access the original proxy server, consequently altering their access privileges.

Content filter

Many work places, schools, and colleges restrict the web sites and online services that are made available in their buildings. This is done either with a specialized proxy, called a content filter (both commercial and free products are available), or by using a cache-extension protocol such as ICAP, that allows plug-in extensions to an open caching architecture.
Requests made to the open internet must first pass through an outbound proxy filter. The web-filtering company provides a database of URL patterns (regular expressions) with associated content attributes. This database is updated weekly by site-wide subscription, much like a virus filter subscription. The administrator instructs the web filter to ban broad classes of content (such as sports, pornography, online shopping, gambling, or social networking). Requests that match a banned URL pattern are rejected immediately.
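The URL-pattern stage amounts to matching each requested host against the subscription database; a Python sketch with invented example patterns:

import re

BANNED_PATTERNS = [
    re.compile(r"(^|\.)casino-example\.com$"),    # gambling (invented domain)
    re.compile(r"(^|\.)social-example\.net$"),    # social networking (invented)
]

def allowed(host):
    return not any(p.search(host) for p in BANNED_PATTERNS)

print(allowed("www.example.com"))           # True: request is fetched
print(allowed("games.casino-example.com"))  # False: rejected immediately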
Assuming the requested URL is acceptable, the content is then fetched by the proxy. At this point a dynamic filter may be applied on the return path. For example, JPEG files could be blocked based on fleshtone matches, or language filters could dynamically detect unwanted language. If the content is rejected then an HTTP fetch error is returned and nothing is cached.
Most web filtering companies use an internet-wide crawling robot that assesses the likelihood that content is of a certain type (e.g., "70% chance of porn, 40% chance of sports, and 30% chance of news" could be the outcome for one web page). The resultant database is then corrected by manual labor based on complaints or known flaws in the content-matching algorithms.
Web filtering proxies are not able to see inside secure HTTPS transactions, assuming the chain-of-trust of SSL/TLS has not been tampered with. As a result, users wanting to bypass web filtering will typically search the internet for an open and anonymous HTTPS transparent proxy, and program their browser to proxy all requests through the web filter to this anonymous proxy. Those requests are encrypted with HTTPS, so the web filter cannot distinguish them from, say, legitimate access to a financial website. Thus, content filters are only effective against unsophisticated users.
As mentioned above, the SSL/TLS chain-of-trust does rely on trusted root certificate authorities; in a workplace setting where the client is managed by the organization, trust might be granted to a root certificate whose private key is known to the proxy. Concretely, a root certificate generated by the proxy is installed into the browser CA list by IT staff. In such scenarios, proxy analysis of the contents of a SSL/TLS transaction becomes possible. The proxy is effectively operating a man-in-the-middle attack, allowed by the client's trust of a root certificate the proxy owns.
A special case of web proxies is "CGI proxies". These are web sites that allow a user to access a site through them. They generally use PHP or CGI to implement the proxy functionality. These types of proxies are frequently used to gain access to web sites blocked by corporate or school proxies. Since they also hide the user's own IP address from the web sites they access through the proxy, they are sometimes also used to gain a degree of anonymity, called "Proxy Avoidance".

Risks of using anonymous proxy servers

In using a proxy server (for example, anonymizing HTTP proxy), all data sent to the service being used (for example, HTTP server in a website) must pass through the proxy server before being sent to the service, mostly in unencrypted form. It is therefore a feasible risk that a malicious proxy server may record everything sent: including unencrypted logins and passwords.
By chaining proxies which do not reveal data about the original requester, it is possible to obfuscate activities from the eyes of the user's destination. However, more traces will be left on the intermediate hops, which could be used or offered up to trace the user's activities. If the policies and administrators of these other proxies are unknown, the user may fall victim to a false sense of security just because those details are out of sight and mind.
In what is more of an inconvenience than a risk, proxy users may find themselves being blocked from certain Web sites, as numerous forums and Web sites block IP addresses from proxies known to have spammed or trolled the site. Proxy bouncing can be used to maintain privacy.


Tuesday, 31 August 2010

Cheat PB, 31 August 2010

This is the Cheat PB release for 31 August 2010.

From Jonita.


Download Now

Password: "hp"