I still remember the first time I logged in to a remote UNIX server many years ago. It was really exciting to see the remote server's login shell displayed in the "Telnet" session on my local computer. In those days, I logged in to remote servers using telnet mostly to check email, compile and debug programs, and run a few commands. Nearly all activity on the Internet was conducted through remote login sessions to the large servers running at my university. Network bandwidth was quite expensive then, and data speeds were slow too. Network security was not much of a concern for such connections; we seldom heard news about people hacking into networks or servers for mischievous reasons ...
When Mosaic, the first Internet browser that displayed images, appeared in 1993, everything gradually changed. The Internet started to gain more and more attention from the public. A "hacker" no longer meant an ethically good programmer but someone who breaks into computers. Security drew more and more attention from both enterprises and the public, and it was in this situation that many new network security measures were introduced. For example, SSH (Secure Shell) was created as a secure replacement for telnet. It has all the functionality of telnet, with security and added features. Nevertheless, it was only in 1996 that I started to use this tool (you can tell network security wasn't even in my consideration before that :)
As in the case of FTP (please refer to my June 2013 article), nothing transmitted in a telnet session is encrypted. This means your user identification and password are sent in clear text! In SSH, however, the whole session, including the password, is encrypted. SSH uses public key cryptography such as RSA for connection setup and authentication, and symmetric ciphers such as DES, 3DES, Blowfish and IDEA to encrypt the session data.
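As a quick illustration, here is a minimal sketch of setting up public key authentication with OpenSSH on a typical UNIX or Linux client; the user and host names are placeholders:
---
# Generate an RSA key pair (the private key stays on your computer)
ssh-keygen -t rsa -b 2048

# Copy the public key into the remote server's authorized_keys file
ssh-copy-id user@remote.example.com

# Later logins are authenticated with the key instead of a password
ssh user@remote.example.com
---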
SSH commonly uses TCP port 22 to connect your computer to another computer on the network. In fact, SSH comprises a suite of three utilities: slogin, ssh and scp. "slogin" is used to log in securely to another computer over a network, "ssh" to execute commands on a remote machine, and "scp" to copy files from one machine to another. These utilities are based on the earlier UNIX utilities rlogin, rsh and rcp, which are insecure by nature. For example, when slogin is used instead of rlogin, the entire login session, including the transmission of the password, is encrypted, so it is almost impossible for an outsider to collect passwords. This means an attacker who has managed to take over a network can only force SSH to disconnect; he or she cannot play back the traffic or hijack the connection when encryption is enabled.
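Here is a minimal sketch of the three utilities in use; the user names, host names and file names are placeholders:
---
# Log in to a remote machine (slogin is usually just a link to ssh)
slogin user@remote.example.com

# Run a single command on the remote machine and print its output locally
ssh user@remote.example.com uname -a

# Copy a local file to the remote machine, then copy a remote file back
scp report.txt user@remote.example.com:/home/user/
scp user@remote.example.com:notes.txt .
---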
There are two components in an SSH software package: the server and the client. Both components need to be installed and configured before use. The server component is not limited to a traditional UNIX or Windows server; it can also be found in routers, firewalls, and many other devices. To use SSH on Windows, you have to download an SSH client. A number of SSH clients are available on the Internet nowadays, and some are available as freeware. You may like to try out OpenSSH, PuTTY or Tera Term to experience how these tools work. If you would like to set up an SSH server too, download OpenSSH and run "sshd" on the server side.
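On the server side, a minimal sketch for a Debian or Ubuntu system might look like this (package and service names can differ on other platforms):
---
# Install the OpenSSH server; this also starts the sshd daemon
sudo apt-get install openssh-server
sudo service ssh start

# Confirm that sshd is listening on TCP port 22
netstat -tln | grep :22
---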
netsic - Network Basic
PC System & Networking blog and website, with information on computer networking systems, CMS, Cisco networking related configuration, freeware, news, resources and opinion.
Thursday, September 12, 2013
SSH as secure telnet alternative
Friday, March 15, 2013
HTTP looks simple, but not ...
Recently I had a chance to take another look at the Hypertext Transfer Protocol (HTTP), one of the protocols we use almost every day. HTTP is an application layer protocol that functions as a request-response protocol in the client-server computing model. A web browser like Chrome or Firefox, for example, may act as the client, while an application running on a computer hosting a web site acts as the server. The server provides resources such as HTML files and other content, or performs other functions on behalf of the client. When an HTTP request message is sent from the client to the server, the server returns a response message with completion status information about the request, and the response may also contain the requested content in its message body.
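To see a request and response in their raw form, you can make one from the command line; this is a minimal sketch using curl against the placeholder site www.example.com:
---
# -v prints the request headers that are sent and the response headers that come back
curl -v http://www.example.com/index.html
---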
The original version of HTTP, HTTP/0.9 (1991), was written by Sir Tim Berners-Lee. It was a simple protocol for transferring raw data across the Internet. There are currently two versions of HTTP in use today, namely HTTP/1.0 and HTTP/1.1. HTTP/1.0, defined in RFC 1945 and officially introduced and recognized in 1996, improved the protocol by allowing MIME-like messages. However, HTTP/1.0 does not address the issues of proxies, caching, persistent connections, virtual hosts, and range downloads. These features were provided in HTTP/1.1, which is defined in RFC 2616.
In HTTP/1.0 and before, TCP connections are closed after each request and response, so each resource to be retrieved requires its own connection. This increases the load on HTTP servers and causes congestion on the Internet, because opening and closing TCP connections takes a substantial amount of CPU time, bandwidth, and memory. In practice, most web pages consist of several files on the same server, which requires a client to make multiple requests to the same server in a short amount of time. In HTTP/1.1 a keep-alive mechanism was introduced, whereby a connection can be reused for more than one request: several requests and responses can be sent through a single persistent connection. Such persistent connections have lower latency, because the client does not need to re-negotiate the TCP connection after the first request has been sent.
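A minimal sketch of a persistent connection, again using the placeholder www.example.com: two HTTP/1.1 requests are written into one TCP connection opened with netcat, and only the second request asks the server to close the connection when done. Depending on your netcat variant, you may need an option such as -q 5 so that it waits for the responses before exiting.
---
printf 'GET /a.html HTTP/1.1\r\nHost: www.example.com\r\n\r\nGET /b.html HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n' | nc www.example.com 80
---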
There are many more improvements in HTTP/1.1. For example, it improves bandwidth usage by supporting not only compression but also the negotiation of compression parameters and different compression styles. HTTP/1.1 also allows partial transmission of objects, where a server transmits just the portion of a resource explicitly requested by a client.
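Both features are easy to try with curl (the host and file names below are placeholders):
---
# Ask the server for a gzip-compressed response and show only the response headers
curl -sI -H 'Accept-Encoding: gzip' http://www.example.com/index.html

# Download only the first 1024 bytes of a resource (a range request)
curl -r 0-1023 -o first-kilobyte.bin http://www.example.com/big-file.iso
---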
Another improvement to the protocol was HTTP pipelining. This feature further reduces lag time by allowing a client to send multiple requests without waiting for each response, so a single TCP connection is used much more efficiently and the total elapsed time is much lower. The browser sends the requests for the different parts of a web page back to back, receives the responses in the same order, and assembles the page correctly. Certain browsers even let users configure the maximum number of pipelined requests; in Firefox, for example, you may set a value for "network.http.pipelining.maxrequests" for this purpose.
All common desktop browsers such as Chrome, Firefox and Internet Explorer support and enable HTTP/1.1 by default. However, you can disable the use of HTTP/1.1 in these browsers. Some web sites still use HTTP/1.0, so if you are having difficulties connecting to certain web sites, you might want to turn HTTP/1.1 off. In Internet Explorer, the HTTP 1.1 check boxes are on the Advanced tab of the Internet Options dialog box. In Firefox, type about:config in the address bar, search for network.http.version and change the value to 1.0. As for Chrome, you may like to tweak settings such as HTTP pipelining in the chrome://flags setup page.
Tuesday, February 19, 2013
How to backup and restore a site with GoDaddy
A. Backup of a folder in GoDaddy
The archive feature in the GoDaddy Hosting Control Center's File Manager is very limited in the amount of data it can archive: it imposes a limit of 20 MB per archive. To back up a whole folder, therefore, we need to create a script and schedule it with cron.
For example, to back up the $HOME/html/test folder, we can create a script like the following:
---
#!/bin/bash
NOWDATE=`date +%m%d%y`   # date stamp (MMDDYY) appended to the archive file name
# Archive the whole test folder into a dated tar file under _db_backups
tar -cvf $HOME/html/_db_backups/site_backup_test.$NOWDATE.tar $HOME/html/test
---
After that, schedule this script as a cron job, as shown below. This will do the job!
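For example, a crontab entry that runs the backup every day at 2 a.m. might look like the line below; the script path is a placeholder, and GoDaddy's hosting control panel also offers a cron job manager that achieves the same thing:
---
0 2 * * * /home/username/backup_test.sh
---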
B. Backup of the database in GoDaddy
This is simple: use phpMyAdmin under Manage Databases and export the database.
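Alternatively, if your hosting plan gives you shell access, a hedged command-line sketch is to export the database with mysqldump instead; the database name (testDB) and user name (testDB.user) are the same sample values used in section C below, and the host name may differ on your GoDaddy plan:
---
mysqldump -u testDB.user -p -h localhost testDB > test.sql
---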
C. Restore of the data into a local system
Download both the database export and the tar files to your local drive. Extract the tar files into a directory. As the database export is too big for phpMyAdmin to import, use the following command line to import it into your local database system. For example, if the database name is testDB and the user name is testDB.user:
C:\mysql.exe -u testDB.user -p -h localhost testDB < "test.sql"
Enter password: *********
You may also need to modify the .htaccess file in the extracted folder by commenting out the following configuration statement:
#RewriteBase /
Hope this helps!
Thursday, January 10, 2013
IPv6 Basic
The Internet is running out of Internet Protocol Version 4 (IPv4) addresses. In 1998, the Internet Engineering Task Force (IETF), a standards body, created Internet Protocol Version 6 (IPv6) as a replacement for IPv4, with the goal of increasing the Internet's address space. Beyond the larger address space, IPv6 also brings some enhancements, including autoconfiguration, easier network renumbering and built-in security through the IPsec protocol. Transitioning to IPv6 enables the Internet to continue to grow and allows new, innovative services to be developed, because more devices can connect to the Internet.
In contrast to IPv4, which defines an IP address as a 32-bit value, IPv6 addresses are 128 bits long. IPv6 therefore expands the number of available addresses from about 4 billion in IPv4 to 340 trillion trillion trillion (that is, 2^128, or roughly 340,000,000,000,000,000,000,000,000,000,000,000,000) addresses. With this number of available addresses, IPv6 can accommodate the devices that are online today and those that may come online in the future, which may include TVs, fridges, computers, phones and so on.
Many major websites and Internet Service Providers now support IPv6, but there are still many more who need to switch.
IPv6 Address representation
IPv6 addresses in long form are written as eight groups of four hexadecimal digits separated by colons, which makes for long addresses. Here is an example:
2001:0db8:0000:0000:0000:0000:0000:0002
The address above uses hexadecimal colon notation: every two bytes are written in hexadecimal, with a colon separating the groups.
IPv6 addresses can be written in shorthand using two conventions:
1. Zero Suppression
- All IPv6 address segments are 16 bits.
- The leading zeroes in each segment can be left out of the address.
2. Zero Compression
- Since all addresses contain 8 segments, consecutive segments of zeroes can be collapsed into a double colon.
- However, this double colon can appear only once in the address representation.
Using these two rules, our example IPv6 address 2001:0db8:0000:0000:0000:0000:0000:0002 collapses to 2001:db8::2.
IPv6 Prefix numbers
Prefixes for IPv6 subnet identifiers and routes are expressed in the same way as Classless Inter-Domain Routing (CIDR) notation for IPv4. An IPv6 prefix is written in address/prefix-length notation.
For example:
805B:2D9D:DC28::/48
805B:2D9D:DC28:0000:0000:FC57:D4C8:1FFF
In the above example, the first 48 bits of the address represent the main prefix or network ID, and the remaining 80 bits are used for the individual host ID. Prefix notation is found in routing tables and is used to express main networks or subnets.
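As a small sketch of how such addresses and prefixes look in practice on a Linux host (the interface name eth0 and the addresses, taken from the 2001:db8::/32 documentation range, are placeholders):
---
# Assign an IPv6 address with a /64 prefix to an interface (requires root)
sudo ip -6 addr add 2001:db8:acad:1::1/64 dev eth0

# List IPv6 addresses and routes; prefixes appear in address/prefix-length form
ip -6 addr show dev eth0
ip -6 route show
---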
IPv6 Address Categories
Three categories of IP addresses are supported in IPv6:
Unicast Addresses: Unicast addresses are assigned to hosts and router interfaces. A packet destined for a unicast address is delivered to a single interface.
Multicast Addresses: These are addresses that represent various groups of IP devices. A packet sent to a multicast address is delivered to all interfaces identified by that address.
Anycast Addresses: An anycast address identifies multiple interfaces. A packet sent to an anycast address is delivered to the closest member of the group, according to the routing protocols' measure of distance. Anycast addressing is used when a message must be sent to any member of a group but does not need to be sent to them all.
There are no broadcast addresses in IPv6; all the broadcast functions of IPv4 are performed using IPv6 multicast addresses.
Thursday, December 6, 2012
Securing network - using ACL
An Access Control List (ACL) is a collection of sequential permit and deny conditions that apply to packets. It lets you control whether network traffic is forwarded or blocked at the interfaces of a router or switch. Typical criteria are the packet's source address, the packet's destination address, or the upper-layer protocol in the packet. For example, if network users should be allowed to access the Internet but not to use the Telnet program, an ACL lets you enforce this.
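As a rough sketch, the Telnet example above could be expressed in Cisco IOS access-list syntax as follows; the ACL number and interface name are placeholders, and the list is applied inbound on the interface facing the internal users:
---
! Deny outgoing Telnet (TCP port 23) but permit all other IP traffic
access-list 101 deny tcp any any eq telnet
access-list 101 permit ip any any
!
interface FastEthernet0/0
 ip access-group 101 in
---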
Basically, the ACL definitions provide criteria that are applied to packets entering or exiting a network interface. They provide a mechanism for defining security policies by grouping access control entries (ACEs) together to form a set of rules. The access and security permissions that one network device has to another are determined by the entries that make up the ACL. ACEs are not necessarily negative restrictions; in some cases, an ACE is a way of granting a person or device access to something.
Most security software, Cisco IOS for example, tests a packet against each ACE in the order in which they are defined until a match is found. Thus, if a network packet matches the criteria of the first ACE, the switch applies the specified action to that packet; otherwise, it continues to compare the packet against subsequent ACEs. If there is no match in any of the ACEs, the switch drops the packet; if no ACL is applied to the interface at all, the switch simply forwards it. Since switches process ACEs in order and stop testing conditions after the first match, ACLs should be designed with care to give good performance. By studying the traffic flow, you can place the most commonly matched conditions near the top of the list to minimize processing time; fewer conditions to check per packet means better throughput. It is also advisable to put the more specific statements before the more general ones, so that a broad entry does not match traffic intended for a later, more specific rule, and to remember that every list ends with the general, implicit deny-all statement.
Tuesday, November 20, 2012
How to Add a Google Sitemap for Blogger Blog
Sitemaps are a way to tell search engines such as Google and Bing about pages on your site which they might not otherwise discover. A sitemap lists the pages on your website. Creating and submitting a sitemap helps make sure that search engines know about all the pages on your site, including URLs that may not be discoverable during their normal crawling process.
The default XML sitemap file of any Blogger blog contains only the 26 most recent blog posts. This is a limitation, because some of your older blog pages that are missing from the default XML sitemap file may never get indexed by search engines. There is, however, a simple way to fix this problem.
Open the Sitemap Generator at http://ctrlq.org/blogger/ and type in the full address of your blogspot blog (or your self-hosted Blogger blog). Click the Create Sitemap button and the tool will generate a complete XML sitemap of your Blogger blog that includes all your blog posts, not just the recently published ones.
Monday, November 19, 2012
SkyDrive
I found this service from Microsoft the other day. Like the Google toolbar that saves my bookmarks so I can use them wherever I access the Internet, SkyDrive offers free disk space that I can reach from anywhere on the Internet using a web browser!
What is really cool is the 7 GB of free online storage! On top of that, the maximum file size can be up to 2 GB per file, which is a really nice feature. Users can also keep their files private, share them with contacts, or make them public.
This is a really nice service! See the SkyDrive homepage to subscribe to it!