Thursday, December 16, 2010

Some hidden goods in MySQL 5.5

The announcement of MySQL 5.5 released as GA outlined the improvements in this version, which indeed has enough good new features to excite almost any user.
There are two additions, though, that were lost in the noise of the bigger features, and I would like to spend a few words on each of them. The first addition is something that users of stored routines have been waiting for since MySQL 5.0. No, it is not SIGNAL and its close associate RESIGNAL, which have been publicized enough. I am talking about stored routine parameters, for which there is now a dedicated table in the information_schema.
Let's see an example, with a simple procedure that uses three parameters.


drop procedure if exists add_to_date ;
create procedure add_to_date(in d date, in i int, out nd date)
deterministic
set nd = d + interval i day;
This works as expected in both 5.1 and 5.5. (Never mind that it's redundant. I know it. It's only for the sake of keeping the example short).

call add_to_date('2010-12-15',10,@new_date);
Query OK, 0 rows affected (0.00 sec)

select @new_date;
+------------+
| @new_date  |
+------------+
| 2010-12-25 |
+------------+
1 row in set (0.00 sec)
The difference starts to show when you want to deal with this procedure programmatically. If you need to find out which parameters are expected by this procedure, your only option in MySQL 5.1 is parsing the result of SHOW CREATE PROCEDURE add_to_date. Not terribly difficult in any scripting language, but a hassle in SQL.
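For the record, the MySQL 5.1 route looks something like this, with the parameter list still to be pulled out of the routine text by hand or with a regular expression:

show create procedure add_to_date\G
-- The "Create Procedure" column of the result contains the full routine
-- definition; the parameter list "(in d date, in i int, out nd date)"
-- must then be parsed out of that text by the calling program.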
In MySQL 5.5, instead, you can easily get the routine parameters with a simple query:

select parameter_name, parameter_mode, data_type from information_schema.parameters where specific_schema='test' and specific_name='add_to_date' order by ordinal_position;
+----------------+----------------+-----------+
| parameter_name | parameter_mode | data_type |
+----------------+----------------+-----------+
| d              | IN             | date      |
| i              | IN             | int       |
| nd             | OUT            | date      |
+----------------+----------------+-----------+
3 rows in set (0.00 sec)

Speaking of the information_schema, there are more goodies that were not emphasized enough. The InnoDB engine that you find in the server is the evolution of the InnoDB plugin that ships with MySQL 5.1, only now it is built in. What many people forget to mention is that the plugin (and thus the current InnoDB engine in 5.5) comes with its own InnoDB-specific instrumentation tables in the information_schema.

show tables like 'innodb%';
+----------------------------------------+
| Tables_in_information_schema (innodb%) |
+----------------------------------------+
| INNODB_CMP_RESET                       |
| INNODB_TRX                             |
| INNODB_CMPMEM_RESET                    |
| INNODB_LOCK_WAITS                      |
| INNODB_CMPMEM                          |
| INNODB_CMP                             |
| INNODB_LOCKS                           |
+----------------------------------------+
7 rows in set (0.00 sec)
This is the same set of tables that you may have seen if you have worked with the InnoDB plugin in 5.1. In short, you can get a lot of the info that you used to look at in the output of SHOW ENGINE INNODB STATUS. For more information, you should look at what the InnoDB plugin manual says on this topic.
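For instance, a join between INNODB_LOCK_WAITS and INNODB_TRX shows at a glance which transaction is blocking which, something that used to mean digging through the STATUS output. This is just a sketch of mine, untested, using the column names documented for the plugin tables:

select r.trx_id    as waiting_trx,
       r.trx_query as waiting_query,
       b.trx_id    as blocking_trx,
       b.trx_query as blocking_query
from information_schema.innodb_lock_waits w
join information_schema.innodb_trx r on r.trx_id = w.requesting_trx_id
join information_schema.innodb_trx b on b.trx_id = w.blocking_trx_id;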
I don't know if the tables can replace the SHOW ENGINE INNODB STATUS. Perhaps someone can comment on this issue and provide more information?

Source


   


The Antikythera Mechanism... Built With LEGO

I'll be honest, I had little clue about what the "Antikythera Mechanism" was. Although I'd heard of it, I didn't know who built it, when it was built or why it was built.
As it turns out, in 1901, divers off the coast of the Greek island of Antikythera found a device on board a shipwreck dating back over 2,000 years. Not much was known about the "device" until, in 2006, scientists carried out X-ray tomography on what remained of the complex artifact.

According to the recent Nature article Ancient astronomy: Mechanical inspiration, by Jo Marchant:
"The device, which dates from the second or early first century BC, was enclosed in a wooden box roughly 30 centimetres high by 20 centimetres wide, contained more than 30 bronze gearwheels and was covered with Greek inscriptions. On the front was a large circular dial with two concentric scales. One, inscribed with names of the months, was divided into the 365 days of the year; the other, divided into 360 degrees, was marked with the 12 signs of the zodiac."
The device -- which sounds like something that belongs in a Dan Brown novel -- is an ancient celestial computer, driven by gears to carry out the calculations and dials to accurately predict heavenly events, such as solar eclipses. The technology used to construct the device wasn't thought to be available for another 1,000 years.
According to Adam Rutherford, editor of Nature, the science journal has a long-standing relationship with the Antikythera Mechanism. In a recent email, Rutherford pointed to a video he had commissioned in the spirit of continuing Nature coverage of this fascinating device. But he hadn't commissioned a bland documentary about the history of the Antikythera Mechanism; he'd commissioned an engineer to build the thing out of LEGO!
The result is an engrossing stop-motion production of a LEGO replica of this ancient celestial calculator. For me, this video really put the device in perspective. The Greeks, over 2,000 years ago, built a means of predicting the positions of the known planets, the sun, even the elliptical motions of planetary orbits. They'd drawn inspiration from the Babylonians (according to new research reported on by Nature) and re-written the history of what we understand of the ancient civilization's technical prowess.
Sadly for the ancient Greeks, the Antikythera Mechanism was lost for 2,000 years at the bottom of the ocean and only now are we beginning to understand just how advanced this fascinating piece of technology truly is.

Watch this video, it's awesome:




Source

Tuesday, December 14, 2010

Critics raise doubts on NASA's arsenic bacteria


December 9, 2010 by Lin Edwards
A microscopic image of GFAJ-1 grown on arsenic.
(PhysOrg.com) -- NASA’s announcement last week that bacteria had been discovered that appear to replace phosphorus with arsenic and thrive even in the most poisonous environments has now come under fire from a number of scientists.


The findings reported last week were that some bacteria (GFAJ-1) thrived when access to phosphate was removed and the bacteria were grown in a highly toxic culture rich in arsenate. The scientists suggested the bacteria thrived because they were able to replace phosphorus, which has always been thought vital to life, with arsenic, which sits directly below it on the periodic table and has similar chemical properties. The researchers also suggested the bacteria were replacing phosphorus with arsenic within the bases that make up DNA.
These findings, if correct, would mean the scientists had found a new form of life on Earth, and it would also re-write the guide book on the essential requirements for life to exist elsewhere.
After the findings were published in Science, other scientists immediately began to express their doubts about the conclusions of the paper, with some even expressing the opinion that the paper should not have been published at all.
One of the critics was Dr. Alex Bradley, from Harvard University, who blogged that there were a number of problems with the research. Firstly, if arsenic had replaced phosphorus in the DNA the molecule would have broken into fragments when the DNA was placed in water, since the arsenic would have hydrolyzed, and yet it did not. Secondly, the paper showed there was a small amount of phosphorus in the medium and Bradley argued that even though small, this could have been enough, since bacteria metabolism is extremely efficient.
Dr. Bradley also pointed out the bacteria live in Mono Lake, which is rich in arsenic but which also contains a higher concentration of phosphate than almost anywhere else on Earth, and this means there would be no selective pressure for a life based on arsenic to evolve.

Dr. Bradley also suggested a mass spectrum of the DNA sequences would have shown whether or not the nucleotides contained arsenic in place of phosphorus, but this was not done.
Another critic was University of British Columbia biologist Rosie Redfield, who reviewed the paper on her blog, and has more recently submitted a letter to the journal. Among her conclusions are that the paper “doesn't present ANY convincing evidence that arsenic has been incorporated into DNA (or any other biological molecule).” She also writes: “If this data was presented by a PhD student at their committee meeting, I'd send them back to the bench to do more cleanup and controls.”
Dr. Redfield also points out there was phosphate in the culture and that the authors did not calculate whether the amount of growth they saw in the arsenate-only medium could be supported by the phosphate present. She calculates on the blog that the growth of the bacteria is actually limited by the amount of phosphorus.
Another point made by Redfield is that the arsenic bacteria were “like plump little corn kernels” and contain granules, which are usually produced by bacteria when they have ample supplies of carbon and energy sources but there are shortages of other nutrients needed for growth.
The authors of the arsenic bacteria paper initially refused to be drawn into the arguments, saying the discussion should be confined to peer-reviewed journals, but one of the authors, Ronald Oremland, did answer questions on the controversy after giving a lecture on the findings at NASA headquarters yesterday. He said the amount of phosphorus in the sample was too small to sustain growth, and a mass spectrum was not done because they did not have enough money and wanted to get the result published quickly. He also pointed out that the bacteria are still there, and other scientists could duplicate the work and carry out further experiments if they wished.

Source

Monday, December 13, 2010

10 Cool Nmap Tricks and Techniques

Nmap (“Network Mapper”) is a free and open source (license) utility for network exploration or security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime.

In addition to my list you can also check out this Comprehensive Guide to Nmap here, and of course the man pages.
Here are some really cool scanning techniques using Nmap.

1) Get info about remote host ports and OS detection

nmap -sS -P0 -sV -O <target>
Where <target> may be a single IP, a hostname or a subnet
-sS TCP SYN scanning (also known as half-open, or stealth scanning)
-P0 switches off ICMP pings
-sV enables version detection
-O attempts to identify the remote operating system
Other options:
-A enables both OS fingerprinting and version detection
-v increases verbosity; use -v twice for more
nmap -sS -P0 -A -v <target>

2) Get list of servers with a specific port open

nmap -sT -p 80 -oG - 192.168.1.* | grep open
Change the -p argument for the port number. See “man nmap” for different ways to specify address ranges.

3) Find all active IP addresses in a network

nmap -sP 192.168.0.*
There are several other options. This one is plain and simple.
Another option is:
nmap -sP 192.168.0.0/24
for specific subnets

4)  Ping a range of IP addresses

nmap -sP 192.168.1.100-254
nmap accepts a wide variety of addressing notation, multiple targets/ranges, etc.

5) Find unused IPs on a given subnet

nmap -T4 -sP 192.168.2.0/24 && egrep "00:00:00:00:00:00" /proc/net/arp

6) Scan for the Conficker virus on your LAN etc.

nmap -PN -T4 -p139,445 -n -v --script=smb-check-vulns --script-args safe=1 192.168.0.1-254
replace 192.168.0.1-254 with the IPs you want to check.

7) Scan Network for Rogue APs.

nmap -A -p1-85,113,443,8080-8100 -T4 --min-hostgroup 50 --max-rtt-timeout 2000 --initial-rtt-timeout 300 --max-retries 3 --host-timeout 20m --max-scan-delay 1000 -oA wapscan 10.0.0.0/8
I’ve used this scan to successfully find many rogue APs on a very, very large network.

8) Use a decoy while scanning ports to avoid getting caught by the sys admin

sudo nmap -sS 192.168.0.10 -D 192.168.0.2
Scan for open ports on the target device/computer (192.168.0.10) while setting up a decoy address (192.168.0.2). This will show the decoy IP address instead of your IP in the target's security logs. The decoy address needs to be alive. Check the target's security log at /var/log/secure to make sure it worked.

9) List of reverse DNS records for a subnet

nmap -R -sL 209.85.229.99/27 | awk '{if($3=="not")print"("$2") no PTR";else print$3" is "$2}' | grep '('
This command uses nmap to perform reverse DNS lookups on a subnet. It produces a list of IP addresses with the corresponding PTR record for a given subnet. You can enter the subnet in CIDR notation (e.g. /24 for a class C). You could add "--dns-servers x.x.x.x" after the "-sL" if you need the lookups to be performed on a specific DNS server. On some installations nmap needs sudo, I believe. Also, I hope awk is standard on most distros.

10) How Many Linux And Windows Devices Are On Your Network?

sudo nmap -F -O 192.168.0.1-255 | grep "Running: " > /tmp/os; echo "$(cat /tmp/os | grep Linux | wc -l) Linux device(s)"; echo "$(cat /tmp/os | grep Windows | wc -l) Windows device(s)"

Hope you have fun, and remember don’t practice these techniques on machines or networks that are not yours.

 Source

Sunday, December 12, 2010

Bacteria cells used as secure information storage device






Cambridge - A team of Chinese biochemistry students has successfully developed a technique for the encryption, compression and decryption of data, together with the use of bacteria as a secure storage device, as an alternative solution for storing electronic data.
A team of instructors and students from the Chinese University of Hong Kong (CUHK) have managed to store enormous amounts of data in bacteria. The system is based on a novel cryptographic system for data encoding and the application of a compression algorithm which reduces its size dramatically. Following the reduction in size, the researchers were able to enter the information into bacteria in the form of modified DNA sequences. They used the DH5-alpha strain of Escherichia coli, a bacterium normally found in the intestines of most animals. This bacterium is often used as a model organism in microbiology and biotechnology. Modified E. coli has also been used in bioengineering for the development of vaccines, bio-remediation and the production of certain enzymes.

Two research groups had already conducted unsuccessful experiments, in 2001 and 2007, aiming at the use of biological systems as data storage devices. The researchers of the Chinese University of Hong Kong used encoded E. coli plasmid DNA (a molecule of DNA usually present in bacteria that replicates independently of chromosomal DNA) to encrypt the data and store it in the bacteria. Then, by using a novel information processing system, they were able to reconstruct and recover the data with error checking.

Another advantage of the system is that the bacteria cells abundantly replicate the data storage units, thereby ensuring the integrity and permanence of the data through redundancy. Based on the procedures tested, they estimate the ability to store about 900,000 gigabytes (GB) in one gram of bacteria cells. That is the equivalent of 450 hard drives, each with a capacity of 2 terabytes (2,000 GB).
As an example of the potential for storage they explain that the text of the Declaration of Independence of the United States (8047 characters) could be stored in just 18 bacteria cells. One gram of bacteria cells contains approximately 10 million cells.
"We believe this could be an industry standard for large-scale manipulation of data storage in living cells"
said the researchers responsible for the project on their website, where they describe the potential of data bio-encryption and storage. The researchers envision a wide range of applications for this technology. The capabilities of what they describe as a “bio-hard-disk” include the storage of text, images, music and even movies, or the insertion of barcodes into synthetic organisms as part of security protocols to discriminate between synthetic and natural organisms. The team of researchers consisted of 3 instructors and 10 undergraduate biochemistry students of CUHK. They carried out their study as part of a worldwide synthetic biology competition called The International Genetically Engineered Machine (iGEM), organized by the Massachusetts Institute of Technology (MIT) in the USA. The CUHK team obtained a gold award in the iGEM competition.
“Biology students learn engineering approaches and tools to organize, model, and assemble complex systems, while engineering students are able to immerse themselves in applied molecular biology.”
declared iGEM organizers. The iGEM competition started in 2003. The 2010 version included over 1,900 participants in 138 teams from around the world. They were required to specify, design, build, and test simple biological systems made from standard, interchangeable biological parts. The achievements of the iGEM research teams often lead to important advances in medicine, energy, biotechnology and the environment.

Read more
 http://www.scribd.com/doc/44687672/Bacterial-based-storage-and-encryption-device

Cyber war will hit all web users - BBC










The conflict between Wikileaks supporters and the companies withdrawing their services from the whistle-blowing website has been dubbed a "cyber war".
Activists have targeted firms such as PayPal, Mastercard and Visa for their opposition to the site's publication of thousands of secret US diplomatic messages.
But there are fears the online battle could lead to everyday internet use becoming much more heavily regulated.
Source - BBC

Wednesday, December 8, 2010

Who’s to Blame for the Linux Kernel?

Finger-pointing time! Let’s see who’s responsible for kernel development in the last year. Once again, the Linux foundation has released its report on who wrote Linux. As always, it has some interesting insight into who did what when it comes to kernel development, and the direction of the kernel. Unsurprisingly, embedded/mobile is becoming a major factor in kernel development.
The Linux Foundation publishes an annual Linux report that shows (approximately) who has written and contributed to the Linux kernel. The report is put together by LWN’s Jon Corbet (also a kernel contributor) and kernel developer Greg Kroah-Hartman, with additional contributions from the Linux Foundation’s Amanda McPherson.

The Top 5
Everybody wants to know, who’s at the top of the list. Consistently at the top is “none,” which is to say that nearly 20% of the kernel development is done by people who aren’t affiliated with a company — at least as far as their kernel contributions go. Yes, Virginia, independent kernel contributions still exist.
The report provides two lists — contributions since 2.6.12, when Git logs became available, and since the last report (2.6.30). Red Hat tops both lists, with 12.4% of kernel changes since 2.6.12, and 12.0% since 2.6.30. A tiny decline, but remember that the number of developers participating in each release cycle grows by about 10%. Meaning that the proverbial pond keeps getting bigger, and the Red Hat fish isn’t getting much smaller in comparison.
The red fish keeps growing, but the green fish isn't keeping up quite as well. Novell had 7.0% of kernel contributions since 2.6.12, but only 5.0% since 2.6.30. It's dropped from second to third in kernel contributions, after Intel, which had 7.8% of kernel contributions since 2.6.30. Some of that may be because more of X.org is being moved into the kernel, a lot of X.org development is being done by Intel, and Intel is also doing more with its work on MeeGo.
Intel comes in second on most recent contributions, bumping Novell to its third place spot. IBM is also displaced by Intel, landing at fourth (Intel’s old slot). Who’s in fifth (sorry Abbott, Costello)? Nokia. Yep, Nokia — who were behind SGI, Parallels, and Fujitsu in 2009.
If you’re looking for individuals, the top five since 2.6.30 are Paul Mundt, Johannes Berg, Peter Zijlstra, Bartlomiej Zolnierkiewicz, and Greg Kroah-Hartman. Mundt explains Renesas’ place in the list — he’s working for them, after a stint at the CE Linux Forum (CELF). Berg is on Intel’s payroll, working on wireless; Zijlstra works for Red Hat; and Zolnierkiewicz is a student at Warsaw University of Technology. Kroah-Hartman, of course, is at Novell.
Linus Torvalds doesn’t make the list not because he’s not doing anything, but because the list doesn’t measure what Torvalds does very well. That is to say, Torvalds spends much of his time merging commits from others and not so much writing his own code. Still quite important, but not as easily measured.
I beat Oracle up pretty heavily lately because of their antagonism towards Google and open source Java, as well as their mishandling of OpenSolaris, OpenOffice.org, and virtually all of the properties they got from Sun. Nothing that’s related to open source has gotten better since Oracle took it over. Still, the company turns in a respectable — if somewhat reduced — showing in kernel development. Oracle clocks in with 1.9% of kernel changes since 2.6.30, and 2.3% since 2.6.12.
Then there’s Canonical. Or rather, there Canonical isn’t. Once again, the most popular Linux desktop vendor and would-be enterprise Linux player doesn’t rank highly enough in kernel development to show up — even in the past year. I might get flamed for mentioning this, but I do think it’s worth pointing out. Yes, Canonical makes valuable contributions to Linux in other areas — even if they seem ashamed or reluctant to mention that Ubuntu is Linux underneath. Does Canonical need to contribute to the kernel to be successful? Apparently not. Should Canonical be contributing more given its standing and dependency on the Linux kernel? I believe so.
Embedded
Nokia’s placement on the list shows that much more development is being driven by mobile and embedded Linux. In the past, server Linux was the big money behind the kernel. Still is, but it’s making room for embedded Linux.
Nokia has jumped up in the standings and has doubled its percentage of contribution. Wolfson Microelectronics and Renesas Technology appear in the top 20 for the first time. Both companies are working with embedded Linux. Texas Instruments also makes the list — Linux on a calculator, anyone?
Broadcom and Atheros also make the top 20 since 2.6.30 — which is good, we might see fewer and fewer chipsets that aren’t supported in Linux.
What’s disappointing is that Google isn’t higher in the ranks here. Actually — Google has dropped off the top 20 altogether since 2.6.30. The search giant had less than a percent (0.8%) of kernel changes since 2.6.12, and only 0.7% since 2.6.30. Google is behind Pengutronix, for goodness sakes. Have you heard of Pengutronix? Nope, me either. For a company that is arguably using more Linux than anybody — pushing two Linux-based OSes and likely to have more Linux servers in use than any other entity — Google’s kernel contributions are actually quite paltry.
Summary
2011 should be interesting. If Google finally merges Android’s changes into the mainline kernel, that should bump Google up in the standings. I suspect, and hope, SUSE/Novell will move past Intel in 2011, now that its future is a bit more clear. As MeeGo continues to gather steam, I suspect Nokia will also show up a bit higher in the standings.
In all, Linux kernel development is as healthy as ever. I’d be curious to see a similar report for other major system utilities and such (GCC, the GNU utilities, X.org, Apache Web server). The kernel is very important, but just a part of the overall ecosystem. There’s plenty of userspace goodies that companies should get credit for as well.
Make sure to check out the full report PDF too. It makes for good reading, and it’s short and well-written.

Source

Why it's bad to use feof() to control a loop

When reading in a file, and processing it line by line, it's logical to think of the code loop as "while not at the end of the file, read and process data". This often ends up looking something like this:
i = 0;
  
while (!feof(fp))
{
  fgets(buf, sizeof(buf), fp);
  printf ("Line %4d: %s", i, buf);
  i++;
}

This apparently simple snippet of code has a bug in it, though. The problem stems from the method feof() uses to determine if EOF has actually been reached. Let's have a look at the C standard:
7.19.10.2 The feof function

Synopsis

1 #include <stdio.h>
int feof(FILE *stream);

Description
2 The feof function tests the end-of-file indicator for the stream pointed to by stream.

Returns
3 The feof function returns nonzero if and only if the end-of-file indicator is set for stream.

Do you see the problem yet? The function tests the end-of-file indicator, not the stream itself. This means that another function is actually responsible for setting the indicator to denote EOF has been reached. This would normally be done by the function that performed the read that hit EOF. We can then follow the problem to that function, and we find that most read functions will set EOF once they've read all the data, and then performed a final read resulting in no data, only EOF.

With this in mind, how does it manifest itself as a bug in our snippet of code? Simple... as the program goes through the loop to get the last line of data, fgets() works normally, without setting EOF, and we print out the data. The loop returns to the top, and the call to feof() returns FALSE, and we start to go through the loop again. This time, fgets() sees and sets EOF, but thanks to our poor logic, we go on to process the buffer anyway, without realising that its content is now undefined (most likely untouched from the last loop).
This problem results in the last line being printed twice. Now, with the various code and compilers I've tried, I've seen varying results when using this poor quality code. Some give the wrong answer as described here, but some do seem to get it right, and print the last line only once.
Here is a full example of the broken code. It's pointless providing sample results, as they're not necessarily going to be the same as yours. However, if you compile this code, and run it against an empty file (0 bytes), it should output nothing. If it's doing it wrong, as I expect it will, you'll get a line similar to this:
Line 0: Garbage
Here, Garbage was left in the buffer from the initialisation, but should not have been printed. Anyway, enough talk, here's the code.
#include <stdio.h> 
#include <stdlib.h> 

#define MYFILE "junk1.txt" 

int main(void)
{
  FILE *fp;
  char buf[BUFSIZ] = "Garbage";
  int i;
  
  if ((fp = fopen(MYFILE, "r")) == NULL)
  {
    perror (MYFILE);
    return (EXIT_FAILURE);
  }
  
  i = 0;
  
  while (!feof(fp))
  {
    fgets(buf, sizeof(buf), fp);
    printf ("Line %4d: %s", i, buf);
    i++;
  }
  
  fclose(fp);
    
  return(0);
}

To correct the problem, always follow this rule: use the return code from the read function to determine when you've hit EOF. Here is a revised edition of the same code, this time checking the return code from fgets() to determine when the read fails. The code is exactly the same, except for the loop.
#include <stdio.h> 
#include <stdlib.h> 

#define MYFILE "junk1.txt" 

int main(void)
{
  FILE *fp;
  char buf[BUFSIZ] = "Garbage";
  int i;
  
  if ((fp = fopen(MYFILE, "r")) == NULL)
  {
    perror (MYFILE);
    return (EXIT_FAILURE);
  }
  
  i = 0;

  while (fgets(buf, sizeof(buf), fp) != NULL)
  {
    printf ("Line %4d: %s", i, buf);
    i++;
  }
  
  fclose(fp);
    
  return(0);
}

When this is run against an empty file (0 bytes), it will not print anything.

Here are some other read functions being used to control loops:
total = 0;
  
while (fscanf(fp, "%d", &num) == 1)
{
  total += num;
}

printf ("Total is %d\n", total);

int c;
  
while ((c = fgetc(fp)) != EOF)
{
  putchar (c);
}
Source

Definition of EOF and how to use it effectively

The use and meaning of EOF seems to cause a lot of confusion with some new coders; hopefully this explanation will help you understand better. Before I go into too much detail about what EOF is, I'll tell you what it isn't.
EOF is NOT:

  • A char
  • A value that exists at the end of a file
  • A value that could exist in the middle of a file

    And now to what it actually is.
    EOF is a macro defined as an int with a negative value. It is normally returned by functions that perform read operations to denote either an error or end of input. Due to variable promotion rules (discussed in detail later), it is important to ensure you use an int to store the return code from these functions, even if the function appears to be returning a char, such as getchar() or fgetc().
    Here are some code examples that you might use:

    int c;
      
    while ((c = fgetc(fp)) != EOF)
    {
      putchar (c);
    }
    

    int ch;

    while ((ch = cin.get()) != EOF)
    {
      cout << (char)ch;
    }

    char to int Promotion

    By definition an int is larger than a char, therefore a negative valued int can never hold the same value as a char. However, when you compare an int with a char, the char will get promoted to an int to account for the difference in size of the variables. The value of a promoted char is affected by its sign, and unfortunately, a char can be either signed or unsigned by default; this is compiler dependent.
    To understand this better, let's look at the representation of a few numbers in both ints and chars.
    The following assumes 2 byte ints (your compiler might use a larger amount). A char uses only 1 byte (this will be the same amount on your compiler). With the exception of the first column, the values are shown in hexadecimal.
    -----------------------------        ------------------------------
    |  char and int comparison  |        |     char to int promotion  |
    -----------------------------        ------------------------------
    | Decimal |  int    |  char |        |  char | unsigned | signed  |
    |---------|---------|-------|        |-------|----------|---------|
    |  2      |  00 02  |  02   |        |  02   |  00 02   |  00 02  |
    |  1      |  00 01  |  01   |        |  01   |  00 01   |  00 01  |
    |  0      |  00 00  |  00   |        |  00   |  00 00   |  00 00  |
    | -1      |  FF FF  |  FF   |        |  FF   |  00 FF   |  FF FF  |
    | -2      |  FF FE  |  FE   |        |  FE   |  00 FE   |  FF FE  |
    -----------------------------        ------------------------------

    The "char to int promotion" table makes it clear that the sign of a char produces a very different number in the int.So what does all this mean to me as a programmer?
    Well, let's have a look at a revised version of the code shown above, this time incorrectly using a char variable to store the return code from fgetc().

    char c;

    while ((c = fgetc(fp)) != EOF)
    {
      putchar (c);
    }

    Now let's assume that within the file we are reading from is a byte with value 0xff. fgetc() returns this value within an int, so it looks like this: 0x00 0xff (again, I'm assuming 2 byte ints). To store this value in a char, it must be demoted, and the char value becomes 0xff. Next, the char c is compared with the int EOF. Promotion rules apply, and c must be promoted to an int. However, in the sample code, the sign of c isn't explicitly declared, so we don't know if it's signed or unsigned, so the int value could become either 0xff 0xff or 0x00 0xff. Therefore, the code is not guaranteed to work in the way we require.
    The following is a short program to help show the promotion:

    #include <stdio.h>

    int main(void)
    {
    int i = -1;
    signed char sc = 0xff;
    unsigned char usc = 0xff;

    printf ("Comparing %x with %x\n", i, sc);
    if (i == sc) puts("i == sc");
    else puts("i != sc");
    putchar ('\n');
    printf ("Comparing %x with %x\n", i, usc);
    if (i == usc) puts("i == usc");
    else puts("i != usc");

    return 0;
    }

    /*
    * Output

    Comparing ffff with ffff <--- Notice this has been promoted
    i == sc

    Comparing ffff with ff
    i != usc

    *
    */
    Another scenario to consider is where the char is unsigned. In this case, the process of demoting and promoting the returned value from fgetc() will have the effect of corrupting the EOF value, and the program will get stuck in an infinite loop. Let's follow that process through:
    - EOF (0xff 0xff) is returned by fgetc() due to end of input
    - Value demoted to 0xff to be stored in unsigned char c
    - unsigned char c promoted to an int, value goes from 0xff to 0x00 0xff
    - EOF is compared with c, meaning comparison is between 0xff 0xff and 0x00 0xff.
    - The result is FALSE (the values are different), which is undesirable.
    - fgetc() is called again, and still returns EOF.  The endless loop begins.
    

    The following code demonstrates this problem.

    #include <stdio.h>

    int main(void)
    {
      FILE *fp;
      unsigned char c;   /* deliberately an unsigned char: this is the bug */

      if ((fp = fopen("myfile.txt", "rb")) == NULL)
      {
        perror ("myfile.txt");
        return 0;
      }

      while ((c = fgetc(fp)) != EOF)
      {
        putchar (c);
      }

      fclose(fp);
      return 0;
    }
    Source
     

    Friday, December 3, 2010

    How I got Xampp to work on 64 bit

    Here's what I did to get Xampp to work on 64 bit Ubuntu Studio (Hardy)

    From Synaptic:

    -Install ia32-libs

    In a terminal:

    -Pull package from Apache Friends (version may change)
    wget http://www.apachefriends.org/download.php?xampp-linux-1.6.6.tar.gz

    -su to root, or use sudo for each of the commands below:

    -Extract, w/ overwrite into /opt:
    tar xvfz xampp-linux-1.6.6.tar.gz -C /opt

    -Start xampp:
    /opt/lampp/lampp start

    -Test Xampp:
    type localhost in a browser

    -Start Xampp on boot:
    gedit /etc/init.d/rc.local
    Below the #! /bin/sh line, type:
    /opt/lampp/lampp start

    -Make Xampp more secure:
    /opt/lampp/lampp security
    (Follow prompts.)

    That's it. Your pages go in /opt/lampp/htdocs.

    If you use PHP scripts in your html & you want to keep the .html or .htm extension on your pages, you can:

    -Open text editor and type:

    RemoveHandler .html .htm
    AddType application/x-httpd-php .php .htm .html

    -Save the file as .htaccess (note the dot) and place in /opt/lampp/htdocs.



    Source

     

    Tuesday, November 30, 2010

    Templated Functions

    C++ templates can be used both for classes and for functions in C++. Templated functions are actually a bit easier to use than templated classes, as the compiler can often deduce the desired type from the function's argument list.

    The syntax for declaring a templated function is similar to that for a templated class:
    template <class type> type func_name(type arg1, ...);
    For instance, to declare a templated function to add two values together, you could use the following syntax:
    template <class type> type add(type a, type b)
    {
        return a + b;
    }
    Now, when you actually use the add function, you can simply treat it like any other function because the desired type is also the type given for the arguments. This means that upon compiling the code, the compiler will know what type is desired:
    int x = add(1, 2);
    will correctly deduce that "type" should be int. This would be the equivalent of saying:
    int x = add<int>(1, 2);
    where the template is explicitly instantiated by giving the type as a template parameter.

    On the other hand, type inference of this sort isn't always possible because it's not always feasible to guess the desired types from the arguments to the function. For instance, if you wanted a function that performed some kind of cast on the arguments, you might have a template with multiple parameters:
    template <class type1, class type2> type2 cast(type1 x)
    {
        return (type2)x;
    }
    Using this function without specifying the correct type for type2 would be impossible. On the other hand, it is possible to take advantage of some type inference if the template parameters are correctly ordered. In particular, if the first argument must be specified and the second deduced, it is only necessary to specify the first, and the second parameter can be deduced.

    For instance, given the following declaration
    template <class rettype, class argtype> rettype cast(argtype x)
    {
        return (rettype)x;
    }
    this function call specifies everything that is necessary to allow the compiler to deduce the correct type:
    cast<double>(10);
    which will cast an int to a double. Note that arguments to be deduced must always follow arguments to be specified. (This is similar to the way that default arguments to functions work.)

    You might wonder why you cannot use type inference for classes in C++. The problem is that it would be a much more complex process with classes, especially as constructors may have multiple versions that take different numbers of parameters, and not all of the necessary template parameters may be used in any given constructor.
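    As a side note (my illustration, using the standard library rather than anything from this article), this limitation is exactly why helper functions such as std::make_pair exist: a class template needs every parameter spelled out, while a function template can deduce them from its arguments.

    #include <utility>

    int main()
    {
        // class template: both types must be given explicitly
        std::pair<int, double> p1(1, 2.0);

        // function template: int and double are deduced from the arguments
        std::pair<int, double> p2 = std::make_pair(1, 2.0);

        return 0;
    }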


    Templated Classes with Templated Functions

    It is also possible to have a templated class that has a member function that is itself a template, separate from the class template. For instance,
    template <class type> class TClass
    {
        // constructors, etc
        
        template <class type2> type2 myFunc(type2 arg);
    };
    The function myFunc is a templated function inside of a templated class, and when you actually define the function, you must respect this by using the template keyword twice:
    template <class type>  // For the class
        template <class type2>  // For the function
        type2 TClass<type>::myFunc(type2 arg)
        {
            // code
        }
    The following attempt to combine the two is wrong and will not work:
    // bad code!
    template <class type, class type2> type2 TClass<type>::myFunc(type2 arg)
    {
        // ...
    }
    because it suggests that the template is entirely the class template and not a function template at all.


    Source

       

    Templates and Templated Classes in C++

    What's better than having several classes that do the same thing to different datatypes? One class that lets you choose which datatype it acts on.
    Templates are a way of making your classes more abstract by letting you define the behavior of the class without actually knowing what datatype will be handled by the operations of the class. In essence, this is what is known as generic programming; this term is a useful way to think about templates because it helps remind the programmer that a templated class does not depend on the datatype (or types) it deals with. To a large degree, a templated class is more focused on the algorithmic thought rather than the specific nuances of a single datatype. Templates can be used in conjunction with abstract datatypes in order to allow them to handle any type of data. For example, you could make a templated stack class that can handle a stack of any datatype, rather than having to create a stack class for every different datatype for which you want the stack to function. The ability to have a single class that can handle several different datatypes means the code is easier to maintain, and it makes classes more reusable.
    The basic syntax for declaring a templated class is as follows:
    template <class a_type> class a_class {...};
    The keyword 'class' above simply means that the identifier a_type will stand for a datatype. NB: a_type is not a keyword; it is an identifier that will stand for whichever single datatype is supplied when the class is instantiated. For example, you could, when defining variables in the class, use the following line:
    a_type a_var;
    and when the programmer defines which datatype 'a_type' is to be when the program instantiates a particular instance of a_class, a_var will be of that type.
    When defining a function as a member of a templated class, it is necessary to define it as a templated function:
    template<class a_type> void a_class<a_type>::a_function(){...}
                   
    When declaring an instance of a templated class, the syntax is as follows:
    a_class<int> an_example_class;
                  
    An instantiated object of a templated class is called a specialization; the term specialization is useful to remember because it reminds us that the original class is a generic class, whereas a specific instantiation of a class is specialized for a single datatype (although it is possible to template multiple types).
    Usually when writing code it is easiest to proceed from concrete to abstract; therefore, it is easier to write a class for a specific datatype and then proceed to a templated - generic - class. Since brevity is the soul of wit, this example will be brief and therefore of little practical application.
    We will define the first class to act only on integers.
    class calc
    {
      public:
        int multiply(int x, int y);
        int add(int x, int y);
     };
    int calc::multiply(int x, int y)
    {
      return x*y;
    }
    int calc::add(int x, int y)
    {
      return x+y;
    }
    We now have a perfectly harmless little class that functions perfectly well for integers; but what if we decided we wanted a generic class that would work equally well for floating point numbers? We would use a template.
    template <class A_Type> class calc
    {
      public:
        A_Type multiply(A_Type x, A_Type y);
        A_Type add(A_Type x, A_Type y);
    };
    template <class A_Type> A_Type calc<A_Type>::multiply(A_Type x,A_Type y)
    {
      return x*y;
    }
    template <class A_Type> A_Type calc<A_Type>::add(A_Type x, A_Type y)
    {
      return x+y;
    }
    To understand the templated class, just think about replacing the identifier A_Type everywhere it appears, except as part of the template or class definition, with the keyword int. It would be the same as the above class; now when you instantiate an object of class calc you can choose which datatype the class will handle.
    calc <double> a_calc_class;
    Templates are handy for making your programs more generic and allowing your code to be reused later.

    Source

    Monday, November 29, 2010

    The C Preprocessor

    The C preprocessor modifies a source code file before handing it over to the compiler. You're most likely used to using the preprocessor to include files directly into other files, or #define constants, but the preprocessor can also be used to create "inlined" code using macros expanded at compile time and to prevent code from being compiled twice.

    There are essentially three uses of the preprocessor--directives, constants, and macros. Directives are commands that tell the preprocessor to skip part of a file, include another file, or define a constant or macro. Directives always begin with a sharp sign (#) and for readability should be placed flush to the left of the page. All other uses of the preprocessor involve processing #define'd constants or macros. Typically, constants and macros are written in ALL CAPS to indicate they are special (as we will see).

    Header Files

    The #include directive tells the preprocessor to grab the text of a file and place it directly into the current file. Typically, such statements are placed at the top of a program--hence the name "header file" for files thus included.

    Constants

    If we write
    #define [identifier name] [value]
    whenever [identifier name] shows up in the file, it will be replaced by [value].

    If you are defining a constant in terms of a mathematical expression, it is wise to surround the entire value in parentheses:
    #define PI_PLUS_ONE (3.14 + 1)
    By doing so, you avoid the possibility that an order of operations issue will destroy the meaning of your constant:
    x = PI_PLUS_ONE * 5;
    Without parentheses, the above would be converted to
    x = 3.14 + 1 * 5;
    which would result in 1 * 5 being evaluated before the addition, not after. Oops!

    It is also possible to write simply
    #define [identifier name]
    which defines [identifier name] without giving it a value. This can be useful in conjunction with another set of directives that allow conditional compilation.

    Conditional Compilation

    There are a whole set of options that can be used to determine whether the preprocessor will remove lines of code before handing the file to the compiler. They include #if, #elif, #else, #ifdef, and #ifndef. An #if or #if/#elif/#else block or a #ifdef or #ifndef block must be terminated with a closing #endif.

    The #if directive takes a numerical argument that evaluates to true if it's non-zero. If its argument is false, then code until the closing #else, #elif, or #endif will be excluded.

    Commenting out Code

    Conditional compilation is a particularly useful way to comment out a block of code that contains multi-line comments (which cannot be nested).
    #if 0
    /* comment ...
    */
    
    // code
    
    /* comment */
    #endif

    Avoiding Including Files Multiple Times (idempotency)

    Another common problem is that a header file is required in multiple other header files that are later included into a source code file, with the result often being that variables, structs, classes or functions appear to be defined multiple times (once for each time the header file is included). This can result in a lot of compile-time headaches. Fortunately, the preprocessor provides an easy technique for ensuring that any given file is included once and only once.

    By using the #ifndef directive, you can include a block of text only if a particular expression is undefined; then, within the header file, you can define the expression. This ensures that the code in the #ifndef is included only the first time the file is loaded.
    #ifndef _FILE_NAME_H_
    #define _FILE_NAME_H_
    
    /* code */
    
    #endif // #ifndef _FILE_NAME_H_
    Notice that it's not necessary to actually give a value to the expression _FILE_NAME_H_. It's sufficient to include the line "#define _FILE_NAME_H_" to make it "defined". (Note that there is an n in #ifndef--it stands for "if not defined").

    A similar tactic can be used for defining specific constants, such as NULL:
    #ifndef NULL
    #define NULL (void *)0
    #endif // #ifndef NULL
    Notice that it's useful to comment which conditional statement a particular #endif terminates. This is particularly true because preprocessor directives are rarely indented, so it can be hard to follow the flow of execution.

    Macros

    The other major use of the preprocessor is to define macros. The advantage of a macro is that it can be type-neutral (this can also be a disadvantage, of course), and it's inlined directly into the code, so there isn't any function call overhead. (Note that in C++, it's possible to get around both of these issues with templated functions and the inline keyword.)

    A macro definition is usually of the following form:
    #define MACRO_NAME(arg1, arg2, ...) [code to expand to]
    For instance, a simple increment macro might look like this:
    #define INCREMENT(x) x++
    They look a lot like function calls, but they're not so simple. There are actually a couple of tricky points when it comes to working with macros. First, remember that the exact text of the macro argument is "pasted in" to the macro. For instance, if you wrote something like this:
    #define MULT(x, y) x * y
    and then wrote
    int z = MULT(3 + 2, 4 + 2);
    what value do you expect z to end up with? The obvious answer, 30, is wrong! That's because what happens when the macro MULT expands is that it looks like this:
    int z = 3 + 2 * 4 + 2;    // 2 * 4 will be evaluated first!
    So z would end up with the value 13! This is almost certainly not what you want to happen. The way to avoid it is to force the arguments themselves to be evaluated before the rest of the macro body. You can do this by surrounding them by parentheses in the macro definition:
    #define MULT(x, y) (x) * (y)
    // now MULT(3 + 2, 4 + 2) will expand to (3 + 2) * (4 + 2)
    But this isn't the only gotcha! It is also generally a good idea to surround the macro's code in parentheses if you expect it to return a value. Otherwise, you can get similar problems as when you define a constant. For instance, the following macro, which adds 5 to a given argument, has problems when embedded within a larger statement:
    #define ADD_FIVE(a) (a) + 5
    
    int x = ADD_FIVE(3) * 3;
    // this expands to (3) + 5 * 3, so 5 * 3 is evaluated first
    // Now x is 18, not 24!
    To fix this, you generally want to surround the whole macro body with parentheses to prevent the surrounding context from affecting the macro body.
    #define ADD_FIVE(a) ((a) + 5)
    
    int x = ADD_FIVE(3) * 3;
    On the other hand, if you have a multiline macro that you are using for its side effects, rather than to compute a value, you probably want to wrap it within curly braces so you don't have problems when using it following an if statement.
    // We use a trick involving exclusive-or to swap two variables
    #define SWAP(a, b)  a ^= b; b ^= a; a ^= b; 
    
    int x = 10;
    int y = 5;
    
    // works OK
    SWAP(x, y);
    
    // What happens now?
    if(x < 0)
        SWAP(x, y);
    When SWAP is expanded in the second example, only the first statement, a ^= b, is governed by the conditional; the other two statements will always execute. What we really meant was that all of the statements should be grouped together, which we can enforce using curly braces:
    #define SWAP(a, b)  {a ^= b; b ^= a; a ^= b;} 
    Now, there is still a bit more to our story! What if you write code like so:
    #define SWAP(a, b)  { a ^= b; b ^= a; a ^= b; }
    
    int x = 10;
    int y = 5;
    int z = 4;
    
    // What happens now?
    if(x < 0)
        SWAP(x, y);
    else
        SWAP(x, z); 
    Then it will not compile, because the semicolon after the closing curly brace will break the flow between the if and the else. The solution? Use a do-while loop:
    #define SWAP(a, b)  do { a ^= b; b ^= a; a ^= b; } while ( 0 )
    
    int x = 10;
    int y = 5;
    int z = 4;
    
    // What happens now?
    if(x < 0)
        SWAP(x, y);
    else
        SWAP(x, z); 
    Now the semi-colon doesn't break anything because it is part of the expression. (By the way, note that we didn't surround the arguments in parentheses because we don't expect anyone to pass an expression into swap!)

    More Gotchas

    By now, you've probably realized why people don't really like using macros. They're dangerous, they're picky, and they're just not that safe. Perhaps the most irritating problem with macros is that you don't want to pass arguments with "side effects" to macros. By side effects, I mean any expression that does something besides evaluate to a value. For instance, ++x evaluates to x+1, but it also increments x. This increment operation is a side effect.

    The problem with side effects is that macros don't evaluate their arguments; they just paste them into the macro text when performing the substitution. So something like
    #define MAX(a, b) ((a) < (b) ? (b) : (a))
    int x = 5, y = 10;
    int z = MAX(x++, y++);
    will end up looking like this:
    int z = ((x++) < (y++) ? (y++) : (x++));
    The problem here is that y++ ends up being evaluated twice! The nasty consequence is that after this expression, y will have a value of 12 rather than the expected 11. This can be a real pain to debug!

    Multiline macros

    Until now, we've seen only short, one line macros (possibly taking advantage of the semicolon to put multiple statements on one line). It turns out that by using the "\" to indicate a line continuation, we can write our macros across multiple lines to make them a bit more readable.

    For instance, we could rewrite swap as
    #define SWAP(a, b)  {                   \
                            a ^= b;         \
                            b ^= a;         \
                            a ^= b;         \
                        } 
    Notice that you do not need a slash at the end of the last line! The slash tells the preprocessor that the macro continues to the next line, not that the line is a continuation from a previous line.

    Aside from readability, writing multi-line macros may make it more obvious that you need to use curly braces to surround the body because it's more clear that multiple effects are happening at once.

    Advanced Macro Tricks

    In addition to simple substitution, the preprocessor can also perform a bit of extra work on macro arguments, such as turning them into strings or pasting them together.

    Pasting Tokens

    Each argument passed to a macro is a token, and sometimes it might be expedient to paste arguments together to form a new token. This could come in handy if you have a complicated structure and you'd like to debug your program by printing out different fields. Instead of writing out the whole structure each time, you might use a macro to pass in the field of the structure to print.

    To paste tokens in a macro, use ## between the two things to paste together.

    For instance
    #define BUILD_FIELD(number) my_struct.inner_struct.union_a.field_##number
    Now, when used with a particular field number, BUILD_FIELD(1) will expand to something like
    my_struct.inner_struct.union_a.field_1
    The tokens "field_" and "1" are literally pasted together. (The ## operator must produce a single valid token, which is why the paste happens inside the field name rather than across the "." operator.)

    String-izing Tokens

    Another potentially useful macro option is to turn a token into a string containing the literal text of the token. This might be useful for printing out the token. The syntax is simple--simply prefix the token with a pound sign (#).
    #define PRINT_TOKEN(token) printf(#token " is %d", token)
    For instance, PRINT_TOKEN(foo) would expand to
    printf("<foo>" " is %d" <foo>)
    (Note that in C, string literals next to each other are concatenated, so something like "token" " is " " this " will effectively become "token is this". This can be useful for formatting printf statements.)

    For instance, you might use it to print the value of an expression as well as the expression itself (for debugging purposes).
    PRINT_TOKEN(x + y);

    Avoiding Macros in C++

    In C++, you should generally avoid macros when possible. You won't be able to avoid them entirely if you need the ability to paste tokens together, but with templated classes and type inference for templated functions, you shouldn't need to use macros to create type-neutral code. Inline functions should also get rid of the need for macros for efficiency reasons. (Though you aren't guaranteed that the compiler will inline your code.)

    Moreover, you should use const to declare typed constants rather than #define to create untyped (and therefore less safe) constants. Const should work in pretty much all contexts where you would want to use a #define, including declaring static sized arrays or as template parameters.
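    As a rough sketch of that advice (my example, not the article's), the MAX macro from earlier can be replaced by an inline templated function, which is type-checked and evaluates each argument exactly once, so the double-increment problem from the "More Gotchas" section disappears:

    #include <iostream>

    // inline templated replacement for #define MAX(a, b) ((a) < (b) ? (b) : (a))
    template <class T>
    inline T max_of(T a, T b)
    {
        return a < b ? b : a;  // a and b are evaluated only once
    }

    // a typed constant instead of an untyped #define
    const double pi_plus_one = 3.14 + 1;

    int main()
    {
        int x = 5, y = 10;
        std::cout << max_of(x++, y++) << std::endl;  // prints 10; y ends up 11, as expected
        return 0;
    }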


    Source


       

    Saturday, November 27, 2010

    Using Firewall Builder To Configure Router Access Lists - PT 3

    Getting Started: Configuring Cisco Router ACL


    For the following sections we are going to assume that the following rules have been defined for the router configuration shown above.


    Step 4: Compile and Install

    In Firewall Builder the process of converting the rules from the Firewall Builder GUI syntax to the target device commands is called compiling the configuration.
    To compile, click on the Compile icon, which looks like a hammer. If you haven't saved your configuration file yet you will be asked to do so. After you save your file a wizard will be displayed that lets you select which firewall(s) you want to compile. In this example we are going to compile the firewall called la-rtr-1 configured with the rules above.
    If there aren't any errors, you should see some messages scroll by in the main window and a message at the top left stating Success.
    To view the output of the compile, click on the button that says Inspect Generated Files. This will open the file that contains the commands in Cisco command format. Note that any line that starts with "!" is a comment.

The output from the compiler is automatically saved in a file in the same directory as the data file that was used to create it. Generated files are named with the firewall name and a .fw extension, so in our example the generated configuration file is called la-rtr-1.fw. You can copy and paste the commands from this file to your router, or you can use the built-in Firewall Builder installer.
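As a rough idea of what to expect inside the generated file, rules like ours come out as standard IOS commands, along these lines (the access-list name here is invented for illustration; Firewall Builder chooses its own names):

! illustrative excerpt only, not actual Firewall Builder output
ip access-list extended outside_out
  permit tcp host 192.0.2.1 any eq 80
  permit tcp host 192.0.2.1 any eq 443
exit

interface FastEthernet0/0
  ip access-group outside_out out
exit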

     

    Installing

    Firewall Builder can install the generated configuration file for you using SSH. To use the installer we need to identify one of the router interfaces as the "Management Interface". This tells Firewall Builder which IP address to connect to on the router.
    Do this by double-clicking the firewall object to expand it, and then double-clicking on the interface name that you want to assign as the management interface. In our case this is interface FastEthernet0/1 which is the interface connected to the internal network.

    CAUTION! Any time you are changing access lists on your router you face the risk of locking yourself out of the device. Please be careful to always inspect your access lists closely and make sure that you will be able to access the router after the access list is installed.
To install your access lists on the router, click on the install icon. This will bring up a wizard where you will select the firewall to install. Click Next > to install the selected firewall.

Firewall Builder will compile your rules, converting them into Cisco access list command-line format. After the compile completes successfully, click Next >. Enter your username, password, and enable password.

After the access list configuration is installed, you will see a message at the bottom of the main window, and the status indicator in the upper left corner of the wizard will indicate whether the installation was successful.

    By default Firewall Builder will connect to your router using SSH and send the commands line-by-line to the router. Depending on the size of your access lists this can be slow.
If your router is running IOS version 12.4 you can select an option to have Firewall Builder scp the generated configuration file to the router instead of applying it line-by-line. This is much faster and is recommended if your router supports it.
This requires SSH version 2 and the SCP server to be enabled on the router. You can find complete instructions for enabling SCP installation in the Firewall Builder Users Guide.
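In outline, enabling those features involves IOS commands along these lines; this is only a sketch (key generation and AAA details vary by setup), so follow the Users Guide for the full procedure:

! sketch only -- assumes a local username is already configured
ip domain-name example.com
crypto key generate rsa modulus 1024
ip ssh version 2
! the IOS SCP server requires AAA authentication and authorization
aaa new-model
aaa authentication login default local
aaa authorization exec default local
ip scp server enable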




    Source

    Using Firewall Builder To Configure Router Access Lists - PT 2

    Getting Started: Configuring Cisco Router ACL

    Reminder - In this tutorial we are configuring access lists on a router that has the following interface configuration.

    Our goal is to implement the following four rules as access control lists on the router.
• Allow inside traffic (10.0.0.0/24) through the router to any Internet address for the HTTP and HTTPS protocols.
    • Allow inside traffic (10.0.0.0/24) through the router to a specific IP address (198.51.100.1) for the POP3 protocol.
    • Allow inside traffic (10.0.0.0/24) to the router's inside interface (FastEthernet0/1) for the SSH protocol.
    • Block all incoming traffic to the router's outside interface (FastEthernet0/0).

    Step 3: Configure Access Lists

    After we created the firewall object la-rtr-1 it was automatically opened in the object tree and its Policy object was opened in the main window for editing. The Policy object is where access list rules are configured.
    To add a new rule to the Policy, click on the green icon at the top left of the main window. This creates a new rule with default values set to deny all.

In Firewall Builder everything is based on the concept of objects. To configure rules that will be converted into access lists, you simply find the object you want in the tree and drag-and-drop it into the correct section of the rule.
The first rule in our example is to allow internal network traffic to use the HTTP and HTTPS protocols to access the Internet. In this configuration the router is NAT'ing the internal network to the IP address on the FastEthernet0/0 interface. Since the order of operations on Cisco routers is that NAT takes place before the outbound access list is checked, the Source for the outbound rules must be the post-NAT IP address, which is represented by the IP interface object under the outside FastEthernet0/0 interface.

    After you drop the interface IP object into the rule the Source section will change from Any to la-rtr-1:FastEthernet0/0:ip.

    Since we want this rule to allow traffic to the Internet we will leave the Destination object set to Any. The Any object in Firewall Builder is the same as the "any" parameter in Cisco CLI commands for access lists.
    Next we want to define the protocols or services this rule will allow. The example calls for the HTTP and HTTPS services to be allowed out to the Internet.
    Firewall Builder comes with hundreds of predefined objects including almost all standard protocols. To access these objects switch to the Standard library by selecting it from the drop down at the top of the Object tree window.


    After you have switched to the Standard library you can navigate to the HTTP service by opening the Services folder, then opening the TCP folder and scrolling down until you find the http object.
    Once you find the http object, drag-and-drop from the tree on the left in to the Service section of the rule in the Rules window.

    Repeat this process to add the HTTPS service to the rule. Drag-and-drop the https object from the tree on the left to the Service section of the rule in the Rules window.
NOTE: Notice that you can have more than one service in a single rule. Firewall Builder will automatically expand this rule into multiple rules in the Cisco command syntax, as shown below.
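For example, since the Source is the post-NAT interface IP, a single rule carrying both http and https would expand to two permit statements, roughly like this (illustrative, not exact Firewall Builder output):

permit tcp host 192.0.2.1 any eq 80
permit tcp host 192.0.2.1 any eq 443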
    IMPORTANT! To access the objects you previously created, including the router, you need to switch back to the User library. Do this by going to the drop down menu at the top of the object tree panel and switch the selected library from Standard to User.
Due to the NAT configuration that is set up on the router, traffic from the 10.0.0.0/24 network will be NAT'ed by the router to its outside IP address (192.0.2.1). This means the traffic that we want to match with our rule will be sent out the FastEthernet0/0 interface. Set the interface in the rule by dragging-and-dropping the FastEthernet0/0 interface object from the tree to the Interface section of the rule.

Traffic will be going in the outbound direction on this interface, so we right-click in the Direction section and select Outbound. We want this traffic to be allowed, so we need to change the Action associated with this rule from Deny to Accept. Do this by right-clicking on the Action section of the rule and selecting Accept. Finally, since this is a rule that we expect to match a lot of traffic, disable logging by right-clicking in the Options section and selecting Logging Off. You should now see a rule that looks like:

    The next rule in our example allows the internal network to access an external POP3 server. Click on the rule you just created and then right-click in the rule number section and select "Add New Rule Below".

To access the objects that we created earlier we need to switch back to the User library. Click on the drop-down menu that says Standard and select User from the list. Drag-and-drop the IP address object for the router's outside interface from the tree on the left to the Source section of the rule you just created.
NOTE: You can also copy-and-paste objects. For example, you can right-click on the la-rtr-1:FastEthernet0/0:ip object in the first rule and select Copy. Navigate to the Source section of the new rule you just created, then right-click and select Paste.
    This rule requires both the Source and Destination to be set, so go to the Addresses folder and drag-and-drop the POP3 Server object to the Destination section of the rule.
The POP3 protocol object is located in the Standard library, so select it from the drop-down menu at the top of the Object Window. To find the POP3 object you can scroll down through the object tree, or you can simply type pop3 into the filter field. This will display all objects in the current library that contain pop3.

Drag-and-drop the filtered object from the tree to the Service section of the rule you are currently editing. Clear the filter field by clicking the X to the right of the input box and then switch back to the User library by selecting it in the drop-down menu at the top of the object panel.
To set the interface the rule should be applied to, drag-and-drop the "outside" interface FastEthernet0/0 to the Interface section of the rule.
    To change the Action to Accept right-click in the Action section of the rule and select Accept. To disable logging for this rule, right-click on the Options section and select Logging Off.
    You should now have 2 rules that look like this:

Now we need to add our 3rd rule. This rule is designed to allow SSH traffic from the internal network to the router's inside interface.
    Create a new rule below the last rule by selecting the last rule and right-clicking and selecting Add New Rule Below from the menu. This will create a new rule configured with the default values to deny all.
Modify this rule by dragging-and-dropping the Internal Network object from the tree to the Source section of the newly created rule. To restrict the rule to only allow traffic destined to the IP address of the router's FastEthernet0/1 interface, double-click on the firewall object's FastEthernet0/1 interface to expand it. Drag-and-drop the IP address of the interface to the Destination section of the rule.
    To set the service to SSH switch to the Standard library by selecting it from the dropdown menu above the object tree and then type in "ssh" in the filter box. Drag-and-drop the ssh object from the tree to the Service section. Clear the filter by clicking on the X next to the filter input text box.
Switch back to the User library by selecting it from the drop-down menu above the object tree. Double-click the la-rtr-1 object to expand it and drag-and-drop the FastEthernet0/1 interface to the Interface section of the rule.
Since this rule only applies to inbound traffic on this interface, set the direction to Inbound by right-clicking in the Direction section and selecting Inbound. Finally, change the action for the rule by right-clicking on the Action section and selecting Accept. Since this rule defines access to the router via SSH, we will leave logging enabled for this rule.
    You should now have 3 rules that look like:

Since we added a rule that will create an inbound access list on the inside FastEthernet0/1 interface, we need to add rules that allow the traffic for the first two rules inbound on the inside interface. Otherwise this traffic would be blocked coming into the router and would never reach the outbound access list on the outside interface. Follow the same steps from above, but set the interface to the inside FastEthernet0/1 interface and set the Direction to Inbound.
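Note the contrast with the outbound rules: on the inside interface the packets have not been translated yet, so the inbound list matches the pre-NAT 10.0.0.0/24 source, roughly like this (illustrative):

permit tcp 10.0.0.0 0.0.0.255 any eq 80
permit tcp 10.0.0.0 0.0.0.255 any eq 443
permit tcp 10.0.0.0 0.0.0.255 host 198.51.100.1 eq 110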
    Your rules should now look like this:

    Finally, we need to add a rule to the router's outside interface that blocks all traffic trying to access the router directly on its outside interface IP address.
To do this we follow the same process from the earlier examples. Since this rule should match all traffic coming from the Internet, we leave the Source section as Any. Set the Destination section by dragging-and-dropping the IP address object for the outside interface FastEthernet0/0. We want to block all services, so leave the Service section set to Any. We want this rule to match incoming traffic, so we right-click in the Direction section and select Inbound. The desired Action is to deny the traffic, so we leave that as the default. Finally, since this rule will potentially match a lot of traffic, we disable logging by right-clicking on the Options section and selecting Logging Off.
    We are now done configuring the rules for our access lists and the configuration should look like:

In the next section we will go through the process of converting these rules into Cisco commands and installing them on the router.




    Source

    Using Firewall Builder To Configure Router Access Lists - PT 1

Firewall Builder is a firewall configuration and management GUI that supports configuring a wide range of firewalls from a single application. Supported firewalls include Linux iptables, BSD pf, Cisco ASA/PIX, Cisco router access lists and many more. The complete list of supported platforms along with downloadable binary packages and source code can be found at http://www.fwbuilder.org.
    This tutorial is the first in a series of howtos that will walk through the basic steps of using Firewall Builder to configure each of the supported firewall platforms. In this tutorial we will configure Access Control Lists (ACL) on a Cisco router.
    The diagram below shows a simple 2 interface router configuration with the router acting as a gateway to the Internet for a private LAN network.

    We will use Firewall Builder to implement the following basic rules as access lists on the router.
• Allow inside traffic (10.0.0.0/24) through the router to any Internet address for the HTTP and HTTPS protocols.
• Allow inside traffic (10.0.0.0/24) through the router to a specific IP address (198.51.100.1) for the POP3 protocol.
• Allow inside traffic (10.0.0.0/24) to the router's inside interface (FastEthernet0/1) for the SSH protocol.
• Block all incoming traffic to the router's outside interface (FastEthernet0/0).
Note that Cisco router access lists have an implicit deny all at the end of every access list, so anything that we do not set up a rule to explicitly permit will be denied.
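For example, a list containing only permit statements still blocks everything else (illustrative):

ip access-list extended example_in
  permit tcp 10.0.0.0 0.0.0.255 any eq 22
! implicit at the end of every access list: deny ip any any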
    The NAT configuration on the router is as follows:
    interface FastEthernet0/0
    ip nat outside

    interface FastEthernet0/1
    ip nat inside

    access-list 1 permit 10.0.0.0 0.0.0.255

    ip nat inside source list 1 interface FastEthernet0/0 overload

    Step 1: Create Network Objects

    We are going to start by creating the objects that will be used in the rules. Firewall Builder includes hundreds of predefined objects, including most standard protocols, so to implement the rules above we will only need to create the objects that are specific to our network. For our rules this means we need to create objects for the internal 10.0.0.0/24 network and for the POP3 server with an IP address of 198.51.100.1.





    Source