Thursday, December 16, 2010

Some hidden goods in MySQL 5.5

The announcement of MySQL 5.5 released as GA has outlined the improvements in this version, which indeed has enough good new features to excite almost any user.
There are two additions, though, that were lost in the noise of the bigger features, and I would like to spend a few words on each of them. The first addition is something that users of stored routines have been waiting for since MySQL 5.0. No, it is not SIGNAL and its close associate RESIGNAL, which have been publicized enough. I am talking about stored routine parameters, for which there is now a dedicated table in the information_schema.
Let's see an example, with a simple procedure that uses three parameters.


drop procedure if exists add_to_date ;
create procedure add_to_date(in d date, in i int, out nd date)
deterministic
set nd = d + interval i day;
This works as expected in both 5.1 and 5.5. (Never mind that it's redundant. I know it. It's only for the sake of keeping the example short).

call add_to_date('2010-12-15',10,@new_date);
Query OK, 0 rows affected (0.00 sec)

select @new_date;
+------------+
| @new_date  |
+------------+
| 2010-12-25 |
+------------+
1 row in set (0.00 sec)
The difference starts to show when you want to deal with this procedure programmatically. If you need to find out which parameters are expected by this procedure, your only option in MySQL 5.1 is parsing the result of SHOW CREATE PROCEDURE add_to_date. Not terribly difficult in any scripting language, but a hassle in SQL.
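For reference, the 5.1 starting point would be the statement below, whose output you would then have to scan with a regular expression to extract the parameter list:

show create procedure add_to_date\G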
In MySQL 5.5, instead, you can easily get the routine parameters with a simple query:

select parameter_name, parameter_mode, data_type from information_schema.parameters where specific_schema='test' and specific_name='add_to_date' order by ordinal_position;
+----------------+----------------+-----------+
| parameter_name | parameter_mode | data_type |
+----------------+----------------+-----------+
| d              | IN             | date      |
| i              | IN             | int       |
| nd             | OUT            | date      |
+----------------+----------------+-----------+
3 rows in set (0.00 sec)
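Since this is just a table, you can also build on it. As a rough sketch (untested, and the @new_date placeholder naming is purely illustrative), a program could even assemble a CALL template straight from the metadata:

select concat('call add_to_date(',
       group_concat(if(parameter_mode = 'IN', '?', '@new_date')
                    order by ordinal_position),
       ')') as call_template
from information_schema.parameters
where specific_schema='test' and specific_name='add_to_date';

This should return something like call add_to_date(?,?,@new_date), ready to be prepared by a connector.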

Speaking of the information_schema, there are more goodies that were not emphasized enough. The InnoDB engine that you find in the 5.5 server is the evolution of the InnoDB plugin that ships with MySQL 5.1, only now it is built in. What many people forget to mention is that the plugin (and thus the current InnoDB engine in 5.5) comes with its own InnoDB-specific instrumentation tables in the information_schema.

show tables like 'innodb%';
+----------------------------------------+
| Tables_in_information_schema (innodb%) |
+----------------------------------------+
| INNODB_CMP_RESET                       |
| INNODB_TRX                             |
| INNODB_CMPMEM_RESET                    |
| INNODB_LOCK_WAITS                      |
| INNODB_CMPMEM                          |
| INNODB_CMP                             |
| INNODB_LOCKS                           |
+----------------------------------------+
7 rows in set (0.00 sec)
This is the same set of tables that you may have seen if you have worked with the InnoDB plugin in 5.1. In short, you can get a lot of the info that you used to look at in the output of SHOW ENGINE INNODB STATUS. For more information, you should look at what the InnoDB plugin manual says on this topic.
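For instance, the plugin manual shows queries along these lines to see which transaction is blocking which; this is a sketch adapted from that idea, untested here:

select r.trx_id              as waiting_trx,
       r.trx_mysql_thread_id as waiting_thread,
       r.trx_query           as waiting_query,
       b.trx_id              as blocking_trx,
       b.trx_mysql_thread_id as blocking_thread,
       b.trx_query           as blocking_query
from information_schema.innodb_lock_waits w
join information_schema.innodb_trx b on b.trx_id = w.blocking_trx_id
join information_schema.innodb_trx r on r.trx_id = w.requesting_trx_id;

One query gives you a readable picture of the lock waits that you would otherwise have to fish out of the TRANSACTIONS section of SHOW ENGINE INNODB STATUS.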
I don't know if the tables can replace SHOW ENGINE INNODB STATUS entirely. Perhaps someone can comment on this issue and provide more information?

Source


   


The Antikythera Mechanism... Built With LEGO

I'll be honest, I had little clue about what the "Antikythera Mechanism" was. Although I'd heard of it, I didn't know who built it, when it was built or why it was built.
As it turns out, in 1901, divers off the coast of the Greek island of Antikythera found a device on board a shipwreck dating back over 2,000 years. Not much was known about the "device" until, in 2006, scientists carried out X-ray tomography on what remained of the complex artifact.

According to the recent Nature article Ancient astronomy: Mechanical inspiration, by Jo Marchant:
"The device, which dates from the second or early first century BC, was enclosed in a wooden box roughly 30 centimetres high by 20 centimetres wide, contained more than 30 bronze gearwheels and was covered with Greek inscriptions. On the front was a large circular dial with two concentric scales. One, inscribed with names of the months, was divided into the 365 days of the year; the other, divided into 360 degrees, was marked with the 12 signs of the zodiac."
The device -- which sounds like something that belongs in a Dan Brown novel -- is an ancient celestial computer, driven by gears to carry out the calculations and dials to accurately predict heavenly events, such as solar eclipses. The technology used to construct the device wasn't thought to be available for another 1,000 years.
According to Adam Rutherford, editor of Nature, the science journal has a long-standing relationship with the Antikythera Mechanism. In a recent email, Rutherford pointed to a video he had commissioned in the spirit of continuing Nature's coverage of this fascinating device. But he hadn't commissioned a bland documentary about the history of the Antikythera Mechanism, he'd commissioned an engineer to build the thing out of LEGO!
The result is an engrossing stop-motion production of a LEGO replica of this ancient celestial calculator. For me, this video really put the device in perspective. The Greeks, over 2,000 years ago, built a means of predicting the positions of the known planets, the sun, even the elliptical motions of planetary orbits. They'd drawn inspiration from the Babylonians (according to new research reported by Nature) and rewritten the history of what we understand of the ancient civilization's technical prowess.
Sadly for the ancient Greeks, the Antikythera Mechanism was lost for 2,000 years at the bottom of the ocean and only now are we beginning to understand just how advanced this fascinating piece of technology truly is.

Watch this video, it's awesome:




Source

Tuesday, December 14, 2010

Critics raise doubts on NASA's arsenic bacteria


December 9, 2010, by Lin Edwards
A microscopic image of GFAJ-1 grown on arsenic.
(PhysOrg.com) -- NASA's announcement last week that bacteria had been discovered that appeared to replace phosphorus with arsenic and thrive even in the most poisonous environments has now come under fire from a number of scientists.


The findings reported last week were that some bacteria (GFAJ-1) thrived when access to phosphate was removed and the bacteria were grown in a highly toxic culture rich in arsenate. The scientists suggested the bacteria thrived because they were able to replace phosphorus, which has always been thought vital to life, with arsenic, which sits directly under it on the periodic table and has similar chemical properties. The researchers also suggested the bacteria were replacing phosphorus with arsenic within the bases that make up DNA.
These findings, if correct, would mean the scientists had found a new form of life on Earth, and it would also re-write the guide book on the essential requirements for life to exist elsewhere.
After the findings were published in Science, other scientists immediately began to express their doubts about the conclusions of the paper, with some even expressing the opinion that the paper should not have been published at all.
One of the critics was Dr. Alex Bradley, from Harvard University, who blogged that there were a number of problems with the research. Firstly, if arsenic had replaced phosphorus in the DNA, the molecule would have broken into fragments when the DNA was placed in water, since the arsenic bonds would have hydrolyzed, and yet it did not. Secondly, the paper showed there was a small amount of phosphorus in the medium, and Bradley argued that even though small, this could have been enough, since bacterial metabolism is extremely efficient.
Dr. Bradley also pointed out the bacteria live in Mono Lake, which is rich in arsenic but which also contains a higher concentration of phosphate than almost anywhere else on Earth, and this means there would be no selective pressure for a life based on arsenic to evolve.

Dr. Bradley also suggested a mass spectrum of the DNA sequences would have shown whether or not the nucleotides contained arsenic in place of phosphorus, but this was not done.
Another critic was University of British Columbia biologist Rosie Redfield, who reviewed the paper on her blog, and has more recently submitted a letter to the journal. Among her conclusions are that the paper “doesn't present ANY convincing evidence that arsenic has been incorporated into DNA (or any other biological molecule).” She also writes: “If this data was presented by a PhD student at their committee meeting, I'd send them back to the bench to do more cleanup and controls.”
Dr. Redfield also points out there was phosphate in the culture and that the authors did not calculate whether the amount of growth they saw in the arsenate-only medium could be supported by the phosphate present. She calculates on the blog that the growth of the bacteria is actually limited by the amount of phosphorus.
Another point made by Redfield is that the arsenic bacteria were “like plump little corn kernels” and contain granules, which are usually produced by bacteria when they have ample supplies of carbon and energy sources but there are shortages of other nutrients needed for growth.
The authors of the arsenic bacteria paper initially refused to be drawn into the arguments, saying the discussion should be confined to peer-reviewed journals, but one of the authors, Ronald Oremland, did answer questions on the controversy after giving a lecture on the findings at NASA headquarters yesterday. He said the amount of phosphorus in the sample was too small to sustain growth, and a mass spectrum was not done because they did not have enough money and wanted to get the result published quickly. He also pointed out that the bacteria are still there, and other scientists could duplicate the work and carry out further experiments if they wished.

Source

Monday, December 13, 2010

10 Cool Nmap Tricks and Techniques

Nmap (“Network Mapper”) is a free and open source utility for network exploration or security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime.

In addition to my list, you can also check out this Comprehensive Guide to Nmap, and of course the man pages.
Here are some really cool scanning techniques using Nmap:

1) Get info about remote host ports and OS detection

nmap -sS -P0 -sV -O <target>
Where <target> may be a single IP, a hostname or a subnet
-sS TCP SYN scanning (also known as half-open, or stealth scanning)
-P0 option allows you to switch off ICMP pings.
-sV option enables version detection
-O flag attempts to identify the remote operating system
Other options:
-A option enables both OS fingerprinting and version detection
-v use -v twice for more verbosity.
nmap -sS -P0 -A -v <target>

2) Get list of servers with a specific port open

nmap -sT -p 80 -oG - 192.168.1.* | grep open
Change the -p argument for the port number. See “man nmap” for different ways to specify address ranges.

3) Find all active IP addresses in a network

nmap -sP 192.168.0.*
There are several other options. This one is plain and simple.
Another option is:
nmap -sP 192.168.0.0/24
for specific subnets

4) Ping a range of IP addresses

nmap -sP 192.168.1.100-254
nmap accepts a wide variety of addressing notation, multiple targets/ranges, etc.

5) Find unused IPs on a given subnet

nmap -T4 -sP 192.168.2.0/24 && egrep "00:00:00:00:00:00" /proc/net/arp

6) Scan for the Conficker virus on your LAN, etc.

nmap -PN -T4 -p139,445 -n -v --script=smb-check-vulns --script-args safe=1 192.168.0.1-254
Replace 192.168.0.1-254 with the IPs you want to check.

7) Scan Network for Rogue APs.

nmap -A -p1-85,113,443,8080-8100 -T4 --min-hostgroup 50 --max-rtt-timeout 2000 --initial-rtt-timeout 300 --max-retries 3 --host-timeout 20m --max-scan-delay 1000 -oA wapscan 10.0.0.0/8
I’ve used this scan to successfully find many rogue APs on a very, very large network.

8) Use a decoy while scanning ports to avoid getting caught by the sys admin

sudo nmap -sS 192.168.0.10 -D 192.168.0.2
Scan for open ports on the target device/computer (192.168.0.10) while setting up a decoy address (192.168.0.2). This will show the decoy IP address instead of your IP in the target's security logs. The decoy address needs to be alive. Check the target's security log at /var/log/secure to make sure it worked.

9) List of reverse DNS records for a subnet

nmap -R -sL 209.85.229.99/27 | awk '{if($3=="not")print"("$2") no PTR";else print$3" is "$2}' | grep '('
This command uses nmap to perform reverse DNS lookups on a subnet. It produces a list of IP addresses with the corresponding PTR record for a given subnet. You can enter the subnet in CIDR notation (i.e. /24 for a Class C). You could add "--dns-servers x.x.x.x" after the "-sL" if you need the lookups to be performed on a specific DNS server. On some installations nmap needs sudo, I believe. Also, I hope awk is standard on most distros.

10) How Many Linux And Windows Devices Are On Your Network?

sudo nmap -F -O 192.168.0.1-255 | grep "Running: " > /tmp/os; echo "$(cat /tmp/os | grep Linux | wc -l) Linux device(s)"; echo "$(cat /tmp/os | grep Windows | wc -l) Window(s) devices"

Hope you have fun, and remember don’t practice these techniques on machines or networks that are not yours.

 Source

Sunday, December 12, 2010

Bacteria cells used as secure information storage device


Cambridge - A technique for the encryption, compression and decryption of data, using bacteria as a secure storage device, was successfully demonstrated by a team of Chinese biochemistry students as an alternative solution for storing electronic data.
A team of instructors and students of the Chinese University of Hong Kong (CUHK) have managed to store enormous amounts of data in bacteria. The system is based on a novel cryptographic system for data encoding and the application of a compression algorithm which reduces its size dramatically. Following the reduction in size, the researchers were able to enter the information into bacteria in the form of modified DNA sequences.
They used the DH5-alpha strain of Escherichia coli, a bacterium normally found in the intestines of most animals. This bacterium is often used as a model organism in microbiology and biotechnology. Modified E. coli has also been used in bioengineering for the development of vaccines, bio-remediation and the production of certain enzymes. Two research groups had already conducted unsuccessful experiments, in 2001 and 2007, aiming to use biological systems as data storage devices.
The researchers of the Chinese University of Hong Kong used encoded E. coli plasmid DNA (a molecule of DNA usually present in bacteria that replicates independently of chromosomal DNA) to encrypt the data and store it in the bacteria. Then, by using a novel information processing system, they were able to reconstruct and recover the data with error checking. Another advantage of the system is that the bacteria cells abundantly replicate the data storage units, thereby ensuring the integrity and permanence of the data through redundancy.
Based on the procedures tested, they estimate the ability to store about 900,000 gigabytes (GB) in one gram of bacteria cells. That is the equivalent of 450 hard drives, each with a capacity of 2 terabytes (2,000 GB).
As an example of the potential for storage they explain that the text of the Declaration of Independence of the United States (8047 characters) could be stored in just 18 bacteria cells. One gram of bacteria cells contains approximately 10 million cells.
"We believe this could be an industry standard for large-scale manipulation of data storage in living cells"
said the researchers responsible for the project on their website, where they describe the potential of data bio-encryption and storage. The researchers envision a wide range of applications for this technology. The capabilities of what they describe as a “bio-hard-disk” include the storage of text, images, music and even movies, or the insertion of barcodes into synthetic organisms as part of security protocols to discriminate between synthetic and natural organisms. The team of researchers was made up of 3 instructors and 10 undergraduate biochemistry students of CUHK. They carried out their study as part of a worldwide synthetic biology competition called the International Genetically Engineered Machine (iGEM), organized by the Massachusetts Institute of Technology (MIT) in the USA. The CUHK team obtained a gold award in the iGEM competition.
“Biology students learn engineering approaches and tools to organize, model, and assemble complex systems, while engineering students are able to immerse themselves in applied molecular biology.”
declared iGEM organizers. The iGEM competition started in 2003. The 2010 version included over 1,900 participants in 138 teams from around the world. They were required to specify, design, build, and test simple biological systems made from standard, interchangeable biological parts. The achievements of the iGEM research teams often lead to important advances in medicine, energy, biotechnology and the environment.

Read more
 http://www.scribd.com/doc/44687672/Bacterial-based-storage-and-encryption-device

Cyber war will hit all web users - BBC


The conflict between Wikileaks supporters and the companies withdrawing their services from the whistle-blowing website has been dubbed a "cyber war".
Activists have targeted firms such as PayPal, Mastercard and Visa for their opposition to the site's publication of thousands of secret US diplomatic messages.
But there are fears the online battle could lead to everyday internet use becoming much more heavily regulated.
Source - BBC

Wednesday, December 8, 2010

Who’s to Blame for the Linux Kernel?

Finger-pointing time! Let’s see who’s responsible for kernel development in the last year. Once again, the Linux foundation has released its report on who wrote Linux. As always, it has some interesting insight into who did what when it comes to kernel development, and the direction of the kernel. Unsurprisingly, embedded/mobile is becoming a major factor in kernel development.
The Linux Foundation publishes an annual Linux report that shows (approximately) who has written and contributed to the Linux kernel. The report is put together by LWN’s Jon Corbet (also a kernel contributor) and kernel developer Greg Kroah-Hartman, with additional contributions from the Linux Foundation’s Amanda McPherson.

The Top 5
Everybody wants to know who's at the top of the list. Consistently at the top is “none,” which is to say that nearly 20% of kernel development is done by people who aren't affiliated with a company — at least as far as their kernel contributions go. Yes, Virginia, independent kernel contributions still exist.
The report provides two lists — contributions since 2.6.12, when Git logs became available, and since the last report (2.6.30). Red Hat tops both lists, with 12.4% of kernel changes since 2.6.12, and 12.0% since 2.6.30. A tiny decline, but remember that the number of developers participating in each release cycle grows by about 10%. Meaning that the proverbial pond keeps getting bigger, and the Red Hat fish isn’t getting much smaller in comparison.
The red fish keeps growing, but the green fish isn't keeping up quite as well. Novell had 7.0% of kernel contributions since 2.6.12, but only 5.0% since 2.6.30. It has dropped from second to third in kernel contributions, after Intel, which had 7.8% of kernel contributions since 2.6.30. Some of that may be because more X.org work is being moved into the kernel, and a lot of X.org development is being done by Intel; Intel is also doing more with its work on MeeGo.
Intel comes in second on most recent contributions, bumping Novell to its third-place spot. IBM is also displaced by Intel, landing at fourth (Intel's old slot). Who's in fifth (sorry Abbott, Costello)? Nokia. Yep, Nokia — who were behind SGI, Parallels, and Fujitsu in 2009.
If you're looking for individuals, the top five since 2.6.30 are Paul Mundt, Johannes Berg, Peter Zijlstra, Bartlomiej Zolnierkiewicz, and Greg Kroah-Hartman. Mundt explains Renesas' place in the list — he's working for them, after a stint at the CE Linux Forum (CELF). Berg is on Intel's payroll, working on wireless; Zijlstra works for Red Hat; and Zolnierkiewicz is a student at Warsaw University of Technology. Kroah-Hartman, of course, is at Novell.
Linus Torvalds doesn’t make the list not because he’s not doing anything, but because the list doesn’t measure what Torvalds does very well. That is to say, Torvalds spends much of his time merging commits from others and not so much writing his own code. Still quite important, but not as easily measured.
I've been beating Oracle up pretty heavily lately because of their antagonism towards Google and open source Java, as well as their mishandling of OpenSolaris, OpenOffice.org, and virtually all of the properties they got from Sun. Nothing that's related to open source has gotten better since Oracle took it over. Still, the company turns in a respectable — if somewhat reduced — showing in kernel development. Oracle clocks in with 1.9% of kernel changes since 2.6.30, and 2.3% since 2.6.12.
Then there's Canonical. Or rather, there Canonical isn't. Once again, the most popular Linux desktop vendor and would-be enterprise Linux player doesn't rank highly enough in kernel development to show up — even in the past year. I might get flamed for mentioning this, but I do think it's worth pointing out. Yes, Canonical makes valuable contributions to Linux in other areas — even if they seem ashamed or reluctant to mention that Ubuntu is Linux underneath. Does Canonical need to contribute to the kernel to be successful? Apparently not. Should Canonical be contributing more given its standing and dependency on the Linux kernel? I believe so.
Embedded
Nokia’s placement on the list shows that much more development is being driven by mobile and embedded Linux. In the past, server Linux was the big money behind the kernel. Still is, but it’s making room for embedded Linux.
Nokia has jumped up in the standings and has doubled its percentage of contribution. Wolfson Microelectronics and Renesas Technology appear in the top 20 for the first time. Both companies are working with embedded Linux. Texas Instruments also makes the list — Linux on a calculator, anyone?
Broadcom and Atheros also make the top 20 since 2.6.30 — which is good, we might see fewer and fewer chipsets that aren’t supported in Linux.
What’s disappointing is that Google isn’t higher in the ranks here. Actually — Google has dropped off the top 20 altogether since 2.6.30. The search giant had less than a percent (0.8%) of kernel changes since 2.6.12, and only 0.7% since 2.6.30. Google is behind Pengutronix, for goodness sakes. Have you heard of Pengutronix? Nope, me either. For a company that is arguably using more Linux than anybody — pushing two Linux-based OSes and likely to have more Linux servers in use than any other entity — Google’s kernel contributions are actually quite paltry.
Summary
2011 should be interesting. If Google finally merges Android’s changes into the mainline kernel, that should bump Google up in the standings. I suspect, and hope, SUSE/Novell will move past Intel in 2011, now that its future is a bit more clear. As MeeGo continues to gather steam, I suspect Nokia will also show up a bit higher in the standings.
In all, Linux kernel development is as healthy as ever. I’d be curious to see a similar report for other major system utilities and such (GCC, the GNU utilities, X.org, Apache Web server). The kernel is very important, but just a part of the overall ecosystem. There’s plenty of userspace goodies that companies should get credit for as well.
Make sure to check out the full report PDF too. It makes for good reading, and it’s short and well-written.

Source

Why it's bad to use feof() to control a loop

When reading in a file, and processing it line by line, it's logical to think of the code loop as "while not at the end of the file, read and process data". This often ends up looking something like this:
i = 0;
  
while (!feof(fp))
{
  fgets(buf, sizeof(buf), fp);
  printf ("Line %4d: %s", i, buf);
  i++;
}

This apparently simple snippet of code has a bug in it, though. The problem stems from the method feof() uses to determine if EOF has actually been reached. Let's have a look at the C standard:
7.19.10.2 The feof function

Synopsis

1 #include <stdio.h>
int feof(FILE *stream);

Description
2 The feof function tests the end-of-file indicator for the stream pointed to by stream.

Returns
3 The feof function returns nonzero if and only if the end-of-file indicator is set for stream.

Do you see the problem yet? The function tests the end-of-file indicator, not the stream itself. This means that another function is actually responsible for setting the indicator to denote EOF has been reached. This would normally be done by the function that performed the read that hit EOF. We can then follow the problem to that function, and we find that most read functions will set EOF once they've read all the data, and then performed a final read resulting in no data, only EOF.
With this in mind, how does it manifest itself as a bug in our snippet of code? Simple... as the program goes through the loop to get the last line of data, fgets() works normally, without setting EOF, and we print out the data. The loop returns to the top, and the call to feof() returns FALSE, and we start to go through the loop again. This time, fgets() sees and sets EOF, but thanks to our poor logic, we go on to process the buffer anyway, without realising that its content is now undefined (most likely untouched from the last loop).
This problem results in the last line being printed twice. Now, with the various code and compilers I've tried, I've seen varying results when using this poor-quality code. Some give the wrong answer as described here, but some do seem to get it right, and print the last line only once.
Here is a full example of the broken code. It's pointless providing sample results, as they're not necessarily going to be the same as yours. However, if you compile this code, and run it against an empty file (0 bytes), it should output nothing. If it's doing it wrong, as I expect it will, you'll get a line similar to this:
Line 0: Garbage
Here, Garbage was left in the buffer from the initialisation, but should not have been printed. Anyway, enough talk, here's the code.
#include <stdio.h> 
#include <stdlib.h> 

#define MYFILE "junk1.txt" 

int main(void)
{
  FILE *fp;
  char buf[BUFSIZ] = "Garbage";
  int i;
  
  if ((fp = fopen(MYFILE, "r")) == NULL)
  {
    perror (MYFILE);
    return (EXIT_FAILURE);
  }
  
  i = 0;
  
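  /* BUG: feof() reports EOF only after a read has already failed,
     so the last line ends up being processed twice */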
  while (!feof(fp))
  {
    fgets(buf, sizeof(buf), fp);
    printf ("Line %4d: %s", i, buf);
    i++;
  }
  
  fclose(fp);
    
  return(0);
}

To correct the problem, always follow this rule: use the return code from the read function to determine when you've hit EOF. Here is a revised edition of the same code, this time checking the return code from fgets() to determine when the read fails. The code is exactly the same, except for the loop.
#include <stdio.h> 
#include <stdlib.h> 

#define MYFILE "junk1.txt" 

int main(void)
{
  FILE *fp;
  char buf[BUFSIZ] = "Garbage";
  int i;
  
  if ((fp = fopen(MYFILE, "r")) == NULL)
  {
    perror (MYFILE);
    return (EXIT_FAILURE);
  }
  
  i = 0;

  while (fgets(buf, sizeof(buf), fp) != NULL)
  {
    printf ("Line %4d: %s", i, buf);
    i++;
  }
  
  fclose(fp);
    
  return(0);
}

When this is run against an empty file (0 bytes), it will not print anything.
Here are some other read functions being used to control loops:
total = 0;
  
while (fscanf(fp, "%d", &num) == 1)
{
  total += num;
}

printf ("Total is %d\n", total);

int c;
  
while ((c = fgetc(fp)) != EOF)
{
  putchar (c);
}
Source

Definition of EOF and how to use it effectively

The use and meaning of EOF seem to cause a lot of confusion for some new coders; hopefully this explanation will help you understand it better. Before I go into too much detail about what EOF is, I'll tell you what it isn't.
EOF is NOT:

  • A char

  • A value that exists at the end of a file

  • A value that could exist in the middle of a file

And now to what it actually is.
EOF is a macro defined as an int with a negative value. It is normally returned by functions that perform read operations to denote either an error or end of input. Due to variable promotion rules (discussed in detail later), it is important to ensure you use an int to store the return code from these functions, even if the function appears to be returning a char, such as getchar() or fgetc().
Here are some code examples that you might use:

int c;

while ((c = fgetc(fp)) != EOF)
{
  putchar (c);
}


int ch;

while ((ch = cin.get()) != EOF)
{
  cout << (char)ch;
}

char to int promotion

By definition an int is larger than a char, therefore a negative-valued int can never hold the same value as a char. However, when you compare an int with a char, the char will get promoted to an int to account for the difference in size of the variables. The value of a promoted char is affected by its sign, and unfortunately, a char can be either signed or unsigned by default; this is compiler dependent.
To understand this better, let's look at the representation of a few numbers in both ints and chars.
The following assumes 2-byte ints (your compiler might use a larger amount). A char uses only 1 byte (this will be the same amount on your compiler). With the exception of the first column, the values are shown in hexadecimal.

-----------------------------        ------------------------------
|  char and int comparison  |        |     char to int promotion  |
-----------------------------        ------------------------------
| Decimal |  int    |  char |        |  char | unsigned | signed  |
|---------|---------|-------|        |-------|----------|---------|
|  2      |  00 02  |  02   |        |  02   |  00 02   |  00 02  |
|  1      |  00 01  |  01   |        |  01   |  00 01   |  00 01  |
|  0      |  00 00  |  00   |        |  00   |  00 00   |  00 00  |
| -1      |  FF FF  |  FF   |        |  FF   |  00 FF   |  FF FF  |
| -2      |  FF FE  |  FE   |        |  FE   |  00 FE   |  FF FE  |
-----------------------------        ------------------------------

The "char to int promotion" table makes it clear that the sign of a char produces a very different number in the int.
So what does all this mean to me as a programmer? Well, let's have a look at a revised version of the code shown above, this time incorrectly using a char variable to store the return code from fgetc().

char c;

while ((c = fgetc(fp)) != EOF)
{
  putchar (c);
}

Now let's assume that within the file we are reading from is a byte with value 0xff. fgetc() returns this value within an int, so it looks like this: 0x00 0xff (again, I'm assuming 2-byte ints). To store this value in a char, it must be demoted, and the char value becomes 0xff. Next, the char c is compared with the int EOF. Promotion rules apply, and c must be promoted to an int. However, in the sample code the sign of c isn't explicitly declared, so we don't know if it's signed or unsigned, so the int value could become either 0xff 0xff or 0x00 0xff. Therefore, the code is not guaranteed to work in the way we require.
The following is a short program to help show the promotion:

#include <stdio.h>

int main(void)
{
  int i = -1;
  signed char sc = 0xff;
  unsigned char usc = 0xff;

  printf ("Comparing %x with %x\n", i, sc);
  if (i == sc) puts("i == sc");
  else puts("i != sc");
  putchar ('\n');
  printf ("Comparing %x with %x\n", i, usc);
  if (i == usc) puts("i == usc");
  else puts("i != usc");

  return 0;
}

/*
 * Output

Comparing ffff with ffff <--- Notice this has been promoted
i == sc

Comparing ffff with ff
i != usc

 *
 */
Another scenario to consider is where the char is unsigned. In this case, the process of demoting and promoting the returned value from fgetc() will have the effect of corrupting the EOF value, and the program will get stuck in an infinite loop. Let's follow that process through:
- EOF (0xff 0xff) is returned by fgetc() due to end of input
- Value demoted to 0xff to be stored in unsigned char c
- unsigned char c promoted to an int, value goes from 0xff to 0x00 0xff
- EOF is compared with c, meaning comparison is between 0xff 0xff and 0x00 0xff
- The result is FALSE (the values are different), which is undesirable
- fgetc() is called again, and still returns EOF. The endless loop begins.

The following code demonstrates this problem.

#include <stdio.h>

int main(void)
{
  FILE *fp;
  unsigned char c;

  if ((fp = fopen("myfile.txt", "rb")) == NULL)
  {
    perror ("myfile.txt");
    return 0;
  }

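  /* Bug: c is an unsigned char, so after promotion it can never
     compare equal to EOF; this loop never ends at end of input */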
  while ((c = fgetc(fp)) != EOF)
  {
    putchar (c);
  }

  fclose(fp);
  return 0;
}

Source

Friday, December 3, 2010

How I got Xampp to work on 64 bit

Here's what I did to get Xampp to work on 64 bit Ubuntu Studio (Hardy)

From Synaptic:

-Install ia32-libs

In a terminal:

-Pull package from Apache Friends (version may change)
wget http://www.apachefriends.org/download.php?xampp-linux-1.6.6.tar.gz

-su to root, or use sudo for each of the commands below:

-Extract, w/ overwrite into /opt:
tar xvfz xampp-linux-1.6.6.tar.gz -C /opt

-Start xampp:
/opt/lampp/lampp start

-Test Xampp:
type localhost in a browser

-Start Xampp on boot:
gedit /etc/init.d/rc.local
Below the #! /bin/sh line, type:
/opt/lampp/lampp start

-Make Xampp more secure:
/opt/lampp/lampp security
(Follow prompts.)

That's it. Your pages go in /opt/lampp/htdocs.

If you use PHP scripts in your HTML and you want to keep the .html or .htm extension on your pages, you can:

-Open a text editor and type:

RemoveHandler .html .htm
AddType application/x-httpd-php .php .htm .html

-Save the file as .htaccess (note the dot) and place it in /opt/lampp/htdocs.



Source