
Saturday, September 10, 2011

Quantum computing with light

A switch that lets one photon alter the quantum state of another could point the way to both practical quantum computers and a quantum Internet.

 Larry Hardesty, MIT News Office - SOURCE

Quantum computers are largely theoretical devices that would exploit the weird properties of matter at extremely small scales to perform calculations, in some cases much more rapidly than conventional computers can. To date, the most promising approach to building quantum computers has been to use ions trapped in electric fields. Using photons instead would have many advantages, but it's notoriously difficult to get photons to interact: Two photons that collide in a vacuum simply pass through each other.

 

In Science, researchers at the Massachusetts Institute of Technology (MIT) and Harvard University describe an experiment that allows a single photon to control the quantum state of another photon. The result could have wide-ranging consequences for quantum computing and quantum communication, the quantum analog of conventional telecommunications.

A quantum particle has the odd property that it can be in "superposition," meaning it's in two different states at the same time: Fire a single photon at a barrier with two slits in it, for instance, and it will, in some sense, pass through both of them. Where the bits in an ordinary computer can represent either zero or one, a quantum bit, or qubit, can thus represent both zero and one at the same time.

For this reason, a string of only 16 qubits could represent 65,536 different numbers simultaneously. It's because a quantum computer could, in principle, evaluate possible solutions to the same problem in parallel that quantum computing promises major increases in computational speed.
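The arithmetic behind that claim is easy to check; a quick Python sketch (illustrative, not from the article) counts the basis states an n-qubit register can occupy in superposition:

```python
# Each additional qubit doubles the number of basis states a register
# can occupy in superposition: n qubits span 2**n basis states.

def n_basis_states(n_qubits: int) -> int:
    """Number of simultaneous basis states for an n-qubit register."""
    return 2 ** n_qubits

print(n_basis_states(16))  # 65536
```

The exponential growth is the point: adding one qubit doubles the state space, so registers of even modest size quickly outstrip any classical enumeration.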

But one of the difficulties in building quantum computers is that superpositions of states can be very fragile: Any interaction with its environment can cause a subatomic particle to snap into just one of its possible states. Photons are much more resistant to outside influences than particles of matter, but that also makes them harder to control; over the course of a computation, a quantum computer needs to repeatedly alter the states of its qubits.

The MIT and Harvard researchers' new paper points toward a quantum computer that offers the best of both worlds: stability and control. Moreover, photons in superposition could carry information stored as qubits rather than as ordinary bits, opening the possibility of a quantum Internet.

Slowing light
Vladan Vuletic, the Lester Wolfe Professor of Physics at MIT; his student, Haruka Tanji-Suzuki, a member of the MIT-Harvard Center for Ultracold Atoms (CUA); Wenlan Chen, an MIT graduate student, and Renate Landig, a visiting student, both at CUA; and Jonathan Simon, a postdoc at Harvard, developed an optical switch that consists of a small cluster of cesium atoms suspended between two tiny mirrors in a vacuum cavity. "The only way to make two photons interact with one another is to use atoms as a mediator," Vuletic says. "The [first] photon changes the state of the atom, and therefore it modifies the atom’s interaction with the other photon."

When a photon enters the cavity, it begins bouncing back and forth between the mirrors, delaying its emission on the other side. If another photon has already struck the cesium atoms, then each pass through them delays this second photon even more. The delay induced by a single pass through the atoms would be imperceptible, but the mirror-lined cavity, Vuletic explains, "allows us to pass the photon many, many times through the atoms. In our case, it’s like passing the photon 40,000 times through the atoms."

When it emerges from the cavity, the second photon thus has two possible states—delayed or extra-delayed—depending on whether another photon has preceded it. With these two states, it could, in principle, represent a bit of information. And if the first photon was in some weird quantum state, where it can't be said to have struck the atoms or not, the second photon will be both extra-delayed and not extra-delayed at the same time. The cavity would thus serve as a quantum switch, the fundamental building block of a quantum computer.

Counting photons
Currently, the extra delay is not quite long enough that delayed and extra-delayed photons can be entirely distinguished, but if the researchers can increase its duration, the switch could have other uses as well. Many potential applications of quantum optics, such as quantum cryptography, quantum communication, and quantum-enhanced imaging, require photons that are emitted in definite numbers—usually one or two. But the most practical method of emitting small numbers of photons—a very weak laser—can promise only an average of one photon at a time: There might sometimes be two, or three, or none. The CUA researchers' switch could be tailored to separate photons into groups of one, two, or three and route them onto different paths.

Because the switch allows the state of one photon to determine that of another, it could also serve as an amplifier in a quantum Internet, increasing the strength of an optical signal without knocking the individual photons out of superposition. By the same token, it could serve as a probe that detects photons without knocking them out of superposition, improving the efficiency of quantum computation.


 

Sunday, July 10, 2011

Freelance Freedom

Resources: Freelance Freedom

InnerEye Project


While Microsoft tends to support many Natural User Interface (NUI) projects, the most celebrated example of its early NUI work being the Kinect body-motion sensor for Xbox, there is also quite a bit of focus by Microsoft's NUI researchers on the intersection between NUI and healthcare. A number of Microsoft Research projects explore this NUI-health connection. One of them, named "InnerEye," focuses on the automated analysis of Computed Tomography (CT) scans, using modern machine learning techniques for 3D model navigation and visualization.
InnerEye takes advantage of advances in human-computer interaction that have put computers on a path to work for us and collaborate with us. The development of a natural user interface (NUI) enables computers to adapt to you and become more integrated into your environment via speech, touch, and gesture. As NUI systems become more powerful and are imbued with more situational awareness, they can provide beneficial, real-time interactions that are seamless and naturally suited to your context; in short, systems will understand where you are and what you're doing.
Antonio Criminisi leads the research group at Microsoft Research in Cambridge that is developing the system, which will make it easier for doctors to work with databases of medical imagery. The system indexes the images generated during scans and automatically recognizes organs, and the team is working to train it to detect certain kinds of brain tumors.
The software takes collections of 2D and 3D images and indexes them together. Once combined, they form medical imaging databases in which the text comments linked to each image let doctors search. Search alone takes time, however, because not all of the results are relevant. Systems like this will allow doctors to easily navigate from new images to old images of the same patient, side by side, and just as easily pull up images from other patients for comparison.
Criminisi’s team is also working on embedding the technology found in Kinect. This will give surgeons the ability to navigate through the images with gestures. This will give them access to the images mid-procedure without them having to touch a mouse, keyboard, or even a touch screen. As these are all things that could compromise the sterility of the operation, this will be a very useful tool. The team plans for this tool to be implemented at a large scale, making automatic indexes of images as they are scanned and tying them into the greater database seamlessly.
Using Kinect technology, they would only have to motion with their hands to access the parts they need to focus on. The potential Microsoft solution is quicker and slicker, and it could help to save lives. Criminisi said: "Our solution enables surgeons to wave at the screen and access the patient's images without touching any physical device, thus maintaining asepsis. By gesturing in mid-air, surgeons can zoom in on specific organs or lesions and manipulate 3D views; they can also search for images of other patients with similar conditions. It's amazing how such images can offer clues to disease and potential cures. Pre-filtering patient data can be an important tool for doctors and surgeons."
Although needs and levels of sophistication differ from hospital to hospital, the general outcome was sufficiently encouraging to drive scientific research toward a new, efficient tool to aid surgery.




Resources: 
  • Technology Review
  • Pappas Evangelos. 30/03/2010 - Assessment for CO3 6th Semester Academic English. University of Wales

Academic Search Engines: Beyond Google Scholar

 

Scirus

 

 


Scirus has a pure scientific focus. It is a far reaching research engine that can scour journals, scientists’ homepages, courseware, pre-print server material, patents and institutional intranets.

InfoMine


Infomine has been built by a pool of libraries in the United States, among them the University of California, Wake Forest University, California State University, and the University of Detroit. Infomine 'mines' information from databases, electronic journals, electronic books, bulletin boards, mailing lists, online library card catalogs, articles, directories of researchers, and many other resources.

You can search by subject category and further tweak your search using the search options. Infomine is not only a standalone search engine for the Deep Web but also a staging point for a lot of other reference information. Check out its Other Search Tools and General Reference links at the bottom.

The WWW Virtual Library

This is considered to be the oldest catalog on the web, started by Tim Berners-Lee, the creator of the web itself. So, isn't it strange that it finds a place in a list of Invisible Web resources? Maybe, but the WWW Virtual Library lists quite a lot of relevant resources on quite a lot of subjects. You can drill down into the categories or use the search bar. The screenshot shows the alphabetical arrangement of subjects covered at the site.

DeepPeep

DeepPeep aims to enter the Invisible Web through forms that query databases and web services for information. Typed queries open up dynamic but short-lived results which cannot be indexed by normal search engines. By indexing databases, DeepPeep hopes to track 45,000 forms across 7 domains.

The domains covered by DeepPeep (Beta) are Auto, Airfare, Biology, Book, Hotel, Job, and Rental. Being a beta service, there are occasional glitches as some results don’t load in the browser.

Reference: 10 Search Engines to Explore the Invisible Web

Evolution machine: Genetic engineering on fast forward

Source: NewScientist

Automated genetic tinkering is just the start – this machine could be used to rewrite the language of life and create new species of humans

IT IS a strange combination of clumsiness and beauty. Sitting on a cheap-looking worktop is a motley ensemble of flasks, trays and tubes squeezed onto a home-made frame. Arrays of empty pipette tips wait expectantly. Bunches of black and grey wires adorn its corners. On the top, robotic arms slide purposefully back and forth along metal tracks, dropping liquids from one compartment to another in an intricately choreographed dance. Inside, bacteria are shunted through slim plastic tubes, and alternately coddled, chilled and electrocuted. The whole assembly is about a metre and a half across, and controlled by an ordinary computer.

Say hello to the evolution machine. It can achieve in days what takes genetic engineers years. So far it is just a prototype, but if its proponents are to be believed, future versions could revolutionise biology, allowing us to evolve new organisms or rewrite whole genomes with ease. It might even transform humanity itself.

These days everything from your food and clothes to the medicines you take may well come from genetically modified plants or bacteria. The first generation of engineered organisms has been a huge hit with farmers and manufacturers - if not consumers. And this is just the start. So far organisms have only been changed in relatively crude and simple ways, often involving just one or two genes. To achieve their grander ambitions, such as creating algae capable of churning out fuel for cars, genetic engineers are now trying to make far more sweeping changes.

Grand ambitions

Yet changing even a handful of genes takes huge amounts of time and money. For instance, a yeast engineered to churn out the antimalarial drug artemisinin has been hailed as one of the great success stories of synthetic biology. However, it took 150 person-years and cost $25 million to add or tweak around a dozen genes - and commercial production has yet to begin.

The task is so difficult and time-consuming because biological systems are so complex. Even simple traits usually involve networks of many different genes, which can behave in unpredictable ways. Changes often do not have the desired effect, and tweaking one gene after another to get things working can be a very slow and painstaking process.

Many biologists think the answer is to try to eliminate the guesswork. They are creating libraries of ready-made "plug-and-play" components that should behave in a reliable way when put together to create biological circuits. But George Church, a geneticist at Harvard Medical School in Boston, thinks there is a far quicker way: let evolution do all the hard work for us. Instead of trying to design every aspect of the genetic circuitry involved in a particular trait down to the last DNA letter, his idea is to come up with a relatively rough design, create lots of variants on this design and select the ones that work best.

The basic idea is hardly original; various forms of directed evolution are already used to design things as diverse as proteins and boats. Church's group, however, has developed a machine for "evolving" entire organisms - and it works at an unprecedented scale and speed. The system has the potential to add, change or switch off thousands of genes at a time - Church calls this "multiplexing" - and it can generate billions of new strains in days.

Of course, there are already plenty of ways to generate mutations in cells, from zapping them with radiation to exposing them to dangerous chemicals. What's different about Church's machine is that it can target the genes that affect a particular characteristic and alter them in specific ways. That greatly increases the odds of success. Effectively, rather than spending years introducing one set of specific changes, bioengineers can try out thousands of combinations at once. Peter Carr, a bioengineer at MIT Media Lab who is part of the group developing the technology, describes it as "highly directed evolution".

The first "evolution machine" was built by Harris Wang, a graduate student in Church's lab. To prove it worked, he started with a strain of the E. coli bacterium that produced small quantities of lycopene, the pigment that makes tomatoes red. The strain was also modified to produce some viral enzymes. Next, he synthesised 50,000 DNA strands with sequences that almost matched parts of the 24 genes involved in lycopene production, but with a range of variations that he hoped would affect the amount of lycopene produced. The DNA and the bacteria were then put into the evolution machine.

The machine let the E. coli multiply, mixed them with the DNA strands, and applied an electric shock to open up the bacterial cells and let the DNA get inside. There, some of the added DNA was swapped with the matching target sequences in the cells' genomes. This process, called homologous recombination, is usually very rare, which is where the viral enzymes come in. They trick cells into treating the added DNA as its own, greatly increasing the chance of homologous recombination.

The effect was to create new variants of the targeted genes while leaving the rest of the genome untouched. It was unlikely that all 24 genes would be altered simultaneously in any one bacterium, so the cycle was repeated over and over to increase the proportion of cells with mutations in all 24 genes.

Repeating the cycle 35 times generated an estimated 15 billion new strains, each with a different combination of changes in the target genes. Some made five times as much lycopene as the original strain, Wang's team reported in 2009 (Nature, vol 460, p 894).
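Why cycling helps can be seen with a toy probability model; the per-cycle efficiency used below is an illustrative assumption, not a figure from the paper. If each cycle independently edits each of the 24 target genes with probability p, the fraction of cells carrying edits at every locus after n cycles is roughly (1 - (1 - p)**n) ** 24:

```python
# Toy model of repeated MAGE cycles (assumed numbers, for illustration):
# each cycle edits each target locus independently with probability p.

def fraction_fully_edited(p: float, n_cycles: int, n_targets: int = 24) -> float:
    """Expected fraction of cells edited at all n_targets loci."""
    per_locus = 1 - (1 - p) ** n_cycles  # P(a given locus edited at least once)
    return per_locus ** n_targets

for n in (1, 10, 35):
    print(n, fraction_fully_edited(p=0.1, n_cycles=n))
```

With these assumed numbers, a single cycle leaves essentially no fully edited cells, while dozens of cycles push the fraction to a usable level, which is why the machine runs the loop over and over.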

It took Wang just three days to do better than the biosynthesis industry has managed in years. And it was no one-off - he has since repeated the trick for the textile dye indigo.

Church calls this bold approach multiplex automated genome engineering, or MAGE. In essence, he has applied the key principles that have led to the astonishing advances in DNA sequencing - parallel processing and automation - to genetic engineering. And since Church was one of the founders of the human genome project and helped develop modern sequencing methods, he knows what he is doing.

Just as labs all over the world now buy thousands of automated DNA sequencing machines, so Church envisions them buying automated evolution machines. He hopes to sell them relatively cheaply, at around $90,000 apiece. "We're dedicated to bringing the price down for everybody, rather than doing some really big project that nobody can repeat," Church says.

He hopes the machines will greatly accelerate the process of producing novel microbes. LS9, a biofuels company based near San Francisco that was co-founded by Church, has said it hopes to use MAGE to engineer E. coli that can produce renewable fuels. Church and colleagues are also adapting the approach for use with other useful bacteria, including Shewanella, which can convert toxic metals such as uranium into an insoluble form, and cyanobacteria which can extract energy from light using photosynthesis.

A big revolution

In principle, the technique should work with plant and animal cells as well as microbes. New methods will have to be developed for coaxing cells to swap in tailored DNA for each type of organism, but Church and his colleagues say that progress has already been made in yeast and mammalian cells.

"I think it is a big revolution in genome engineering," says Kristala Jones Prather, a bioengineer at the Massachusetts Institute of Technology who is not part of Church's collaboration. "You don't have to already know what the answer is. You can manipulate multiple things at a time, and let the cell find a solution for you."

Because biological systems are so complex, it is a huge advantage to be able to tweak lots of genes simultaneously, rather than one at a time, she says. "In almost every case you'll get a different solution that's a better solution."

The disadvantage of Church's approach is that the "better solution" is mixed up with millions of poorer solutions. Prather points out that the technique is limited by how easy it is to screen for the characteristics that you want. Wang selected good lycopene producers by growing 100,000 of the strains he had created in culture dishes and simply picking out the brightest red colonies. "Essentially nothing that we use in my lab can be screened so easily," Prather says.

By automating selection and using a few tricks, though, it should be practical to screen for far more subtle characteristics. For instance, biosensors that light up when a particular substance is produced could be built into the starting strain. "The power going forward will have to do with clever selections and screens," says Church.

As revolutionary as this approach is, Church thinks MAGE's most far-reaching potential lies elsewhere. He reckons it will be possible to use the evolution machine to make many thousands of specific changes to a cell's DNA: essentially, to rewrite genomes.

At the moment, making extensive changes to even the smallest genome is extremely costly and laborious. Last year, the biologist and entrepreneur Craig Venter announced that his team had replaced a bacterium's genome with a custom-written one (Science, vol 329, p 52). His team synthesised small pieces of DNA with a specific sequence, and then joined them together to create an entire genome. It was an awesome achievement, but it took 400 person-years of labour and cost around $40 million.

MAGE can do the same job far more cheaply and efficiently by rewriting existing genomes, Church thinks. The idea is that instead of putting DNA strands into the machine with a range of different mutations, you add only DNA with the specific changes you want. Even if you are trying to change hundreds or thousands of genes at once, after a few cycles in the machine, a good proportion of the cells should have all the desired changes. This can be checked by sequencing.

If the idea works it would make feasible some visionary projects that are currently impossibly difficult. Church, needless to say, has something suitably ambitious in mind. In fact, it is the reason he devised MAGE in the first place.

In 2004 he had joined forces with Joseph Jacobson, an engineer at the MIT Media Lab, best known as the inventor of the e-ink technology used in e-readers. Searching for a "grand goal" in bioengineering, the pair hit upon the idea of altering life's genetic code. Rather than just alter the sequence of DNA, they want to change the very language in which the instructions for life are written (see diagram).

This is not as alarming as it might sound. Because all existing life uses essentially the same genetic code, organisms that translate DNA using a different code would be behind a "genetic firewall", unable to swap DNA with any normal living thing. If they escaped into the wild, they would not be able to spread any engineered components. Nor would they be able to receive any genes from natural bacteria that would endow them with antibiotic resistance or the ability to make toxins. "Any new DNA coming in or any DNA coming out doesn't work," says Church. "We're hoping that people who are concerned, including us, about escape from industrial processes, will find these safer."

There is another huge advantage: organisms with an altered genetic code would be immune to viruses, which rely on the protein-making machinery of the cells they infect to make copies of themselves. In a cell that uses a different genetic code, the viral blueprints will be mistranslated, and any resulting proteins will be garbled and unable to form new viruses.

Doing this in bacteria or cell lines used for growing chemicals would be of huge importance to industry, where viral infections can shut down entire production lines. And the approach is not necessarily limited to single cells. "It's conceivable that it could be done in animals," says Carr.

Completely virus-proof

Carr and his colleagues have already begun eliminating redundant codons from the genome of E. coli. They are starting with the rarest, the stop codon TAG, which appears 314 times. Each instance will be replaced by a different stop codon, TAA. So far they have used MAGE to create 32 E. coli strains that each have around 10 of the necessary changes, and are now combining them to create a single strain with all the changes. Carr says this should be completed within the next few months, after which he hopes to start replacing another 12 redundant codons. To make a bacterium completely virus-proof will probably require replacing tens of thousands of redundant codons, he says, as well as modifying the protein-making factories so they no longer recognise these codons.
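At the level of sequence, that recoding step is just a synonymous find-and-replace done in reading frame. A minimal Python sketch of the idea (strings only, for illustration; the actual substitutions are made in living cells by MAGE, not on text):

```python
# Walk a coding sequence codon by codon and replace the TAG stop
# codon with the synonymous stop TAA, leaving everything else intact.

def recode_tag_to_taa(seq: str) -> str:
    """Replace in-frame TAG codons with TAA; len(seq) must be a multiple of 3."""
    codons = [seq[i:i + 3] for i in range(0, len(seq), 3)]
    return "".join("TAA" if codon == "TAG" else codon for codon in codons)

print(recode_tag_to_taa("ATGGCTTAG"))  # ATGGCTTAA
```

Because both TAG and TAA signal "stop," the protein produced is unchanged; the swap only frees up TAG so it can later be reassigned, which is what puts recoded cells behind the genetic firewall.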

To ensure novel genes cannot be translated if they get passed on to other organisms, the team would have to go a step further and reassign the freed-up codons so a different amino acid to normal is added to a protein when they occur. This could include amino acids that do not exist in nature, opening the door to new types of chemistry in living cells. Artificial amino acids could be used to create proteins that do not degrade as easily, for example, which could be useful in industry and medicine.

There are potential dangers in making organisms virus-proof, though. Most obviously, they might have an advantage over competing species if they escaped into the wild, allowing them to dominate environments with potentially destructive effects. In the case of E. coli, those environments could include our guts.

"We want to be very careful. The goal is to isolate these organisms from part of the natural sphere with which they normally interact," says Carr. "We shouldn't pretend that we understand all possible ramifications, and we need to study these modified organisms carefully." But he points out that we deal with similar issues already, such as invasive species running riot in countries where they have no natural predators. Additional safeguards could be built in, such as making modified organisms dependent on nutrients they can get only in a lab or factory. And if the worst came to the worst, biologists could create viruses capable of killing their errant organisms. Such viruses would not be able to infect normal cells.

Church argues that with proper safety and regulatory controls, there is no reason why the approach shouldn't be used widely. "I think that to some extent you'd like every organism to be multi-virus resistant," he says. "Or at least industrial microbes, agricultural species and humans."

Yes, humans. Church is already adapting MAGE for genetically modifying human stem cell lines. The work, funded by the US National Human Genome Research Institute, aims to create human cell lines with subtly different genomes in order to test ideas about which mutations cause disease and how. "Sequencing is now a million times cheaper, and there are a million times as many hypotheses being generated," he says. "We'd like to develop the resources so that people can quickly test hypotheses about the human genome by synthesising new versions."

As the technology improves and becomes routine, says Church, it could also be used to alter the cells used for cell-based therapies. Tissue-engineered livers grown from stem cells, say, could have their genetic code altered so that they would be immune to liver-destroying viruses such as hepatitis C.

"Everybody getting stem cell therapies will be given a choice of doing ordinary stem cell therapy - either with their cells or donor cells - or doing stem cells that are resistant to viruses," he says. "There will have to be all kinds of safety checks and FDA approval and so forth, but most people faced with two fairly safe choices, one of which is virus-sensitive and one of which is virus-resistant, are going to take the virus-resistant one."

Of course, there would be enormous experimental and safety hurdles to overcome. Not least the fact that gene targeting using homologous recombination or any other method is not perfect - the added DNA is sometimes inserted into the wrong place in the genome, and the process can trigger other kinds of mutations too. Such off-target changes might be a big problem when making hundreds of targeted changes at a time.

So not surprisingly, Carr describes the move to humans as "fraught with peril". But if we do get to a point where there are lots of people walking around with virus-resistant tissues or organs, and lots of farm animals that are completely virus-resistant, Church thinks it is only a matter of time before clinics create virus-resistant babies. "If it works really well, somebody somewhere will decide to try it in the next generation."

Making changes to the genomes of humans that will get passed on to their children has long been seen as taboo. But Church points out that there was strong resistance to techniques such as in vitro fertilisation and organ transplants when they were new; yet as soon as they were shown to work, they were quickly accepted. "Many technologies start out that way," he says. "But once they work really well, everybody says it's unethical not to use them."

Arthur Caplan, a bioethicist at the University of Pennsylvania in Philadelphia who advises the US government on reproductive technologies, is sceptical about the idea of making virus-resistant people, because anyone modified in this way would only be able to conceive children naturally with a partner whose genome had been altered in exactly the same way. "You would be denying a hugely important choice to a future modified human."

But, he says, if MAGE really can be used to edit the genome of human cells, it would provide a way to fix the mutations that cause inherited disease. It could be the technology that opens the door to the genetic engineering of humans. We should start debating now how best to use it, Caplan says. Should it be limited to preventing disease, or used for enhancement too? What sort of regulation is needed? Who should be eligible?

This prospect might seem a long way off, but Caplan argues that if the technique works well in other species, it could become feasible to attempt to engineer humans in as little as 10 years. "If you learn to do this in microbes and then in animals, you'll find yourself wondering how we got to humans so fast," he says. "You've got to pay attention to what's going on in lower creatures because that's the steady march to people."

If all this sounds wildly implausible, bear in mind that the idea of sequencing an entire human genome in days seemed nigh on impossible just a few years ago. Now it's fast becoming routine. Most biologists would probably agree that it is just a matter of time before we develop the technology needed to rewrite the DNA of living creatures at will. If Church succeeds, this future will happen faster than any imagined.

Thursday, December 16, 2010

The Antikythera Mechanism... Built With LEGO

I'll be honest, I had little clue about what the "Antikythera Mechanism" was. Although I'd heard of it, I didn't know who built it, when it was built or why it was built.
As it turns out, in 1901, divers off the coast of the Greek island of Antikythera found a device on board a shipwreck dating back over 2,000 years. Not much was known about the "device" until, in 2006, scientists carried out X-ray tomography on what remained of the complex artifact.

According to the recent Nature article Ancient astronomy: Mechanical inspiration, by Jo Marchant:
"The device, which dates from the second or early first century BC, was enclosed in a wooden box roughly 30 centimetres high by 20 centimetres wide, contained more than 30 bronze gearwheels and was covered with Greek inscriptions. On the front was a large circular dial with two concentric scales. One, inscribed with names of the months, was divided into the 365 days of the year; the other, divided into 360 degrees, was marked with the 12 signs of the zodiac."
The device -- which sounds like something that belongs in a Dan Brown novel -- is an ancient celestial computer, driven by gears to carry out the calculations and dials to accurately predict heavenly events, such as solar eclipses. The technology used to construct the device wasn't thought to be available for another 1,000 years.
According to Adam Rutherford, editor at Nature, the science journal has a long-standing relationship with the Antikythera Mechanism. In a recent email, Rutherford pointed to a video he had commissioned in the spirit of Nature's continuing coverage of this fascinating device. But he hadn't commissioned a bland documentary about the history of the Antikythera Mechanism, he'd commissioned an engineer to build the thing out of LEGO!
The result is an engrossing stop-motion production of a LEGO replica of this ancient celestial calculator. For me, this video really put the device in perspective. The Greeks, over 2,000 years ago, built a means of predicting the positions of the known planets and the sun, even accounting for the elliptical motions of planetary orbits. They'd drawn inspiration from the Babylonians (according to new research reported by Nature), and the discovery has rewritten what we understand of that ancient civilization's technical prowess.
Sadly for the ancient Greeks, the Antikythera Mechanism was lost for 2,000 years at the bottom of the ocean and only now are we beginning to understand just how advanced this fascinating piece of technology truly is.

Watch this video, it's awesome:




Source

Tuesday, December 14, 2010

Critics raise doubts on NASA's arsenic bacteria

December 9, 2010 by Lin Edwards
A microscopic image of GFAJ-1 grown on arsenic.
(PhysOrg.com) -- NASA's announcement last week that bacteria had been discovered that appear to replace phosphorus with arsenic and thrive even in the most poisonous environments has now come under fire from a number of scientists.


The findings reported last week were that some bacteria (GFAJ-1) thrived when access to phosphate was removed and the bacteria were grown in a highly toxic culture rich in arsenate. The scientists suggested the bacteria thrived because they were able to replace phosphorus, which has always been thought vital to life, with arsenic, which sits directly below phosphorus on the periodic table and has similar chemical properties. The researchers also suggested the bacteria were replacing phosphorus with arsenic within the backbone of their DNA.
These findings, if correct, would mean the scientists had found a new form of life on Earth, and it would also re-write the guide book on the essential requirements for life to exist elsewhere.
After the findings were published in Science, other scientists immediately began to express doubts about the conclusions of the paper, with some even expressing the opinion that the paper should not have been published at all.
One of the critics was Dr. Alex Bradley, from Harvard University, who blogged that there were a number of problems with the research. Firstly, if arsenic had replaced phosphorus in the DNA, the molecule would have broken into fragments when the DNA was placed in water, since the arsenate bonds would have hydrolyzed; yet it did not. Secondly, the paper showed there was a small amount of phosphorus in the medium, and Bradley argued that even though the amount was small, it could have been enough, since bacterial metabolism is extremely efficient.
Dr. Bradley also pointed out the bacteria live in Mono Lake, which is rich in arsenic but which also contains a higher concentration of phosphate than almost anywhere else on Earth, and this means there would be no selective pressure for a life based on arsenic to evolve.

Dr. Bradley also suggested a mass spectrum of the DNA sequences would have shown whether or not the nucleotides contained arsenic in place of phosphorus, but this was not done.
Another critic was University of British Columbia biologist Rosie Redfield, who reviewed the paper on her blog, and has more recently submitted a letter to the journal. Among her conclusions are that the paper “doesn't present ANY convincing evidence that arsenic has been incorporated into DNA (or any other biological molecule).” She also writes: “If this data was presented by a PhD student at their committee meeting, I'd send them back to the bench to do more cleanup and controls.”
Dr. Redfield also points out there was phosphate in the culture and that the authors did not calculate whether the amount of growth they saw in the arsenate-only medium could be supported by the phosphate present. She calculates on the blog that the growth of the bacteria is actually limited by the amount of phosphorus.
Another point made by Redfield is that the arsenic bacteria were “like plump little corn kernels” and contain granules, which are usually produced by bacteria when they have ample supplies of carbon and energy sources but there are shortages of other nutrients needed for growth.
The authors of the arsenic bacteria paper initially refused to be drawn into the arguments, saying the discussion should be confined to peer-reviewed journals, but one of the authors, Ronald Oremland, did answer questions on the controversy after giving a lecture on the findings at NASA headquarters yesterday. He said the amount of phosphorus in the sample was too small to sustain growth, and that a mass spectrum was not done because they did not have enough money and wanted to get the result published quickly. He also pointed out that the bacteria are still there, and other scientists could duplicate the work and carry out further experiments if they wished.

Source

Sunday, December 12, 2010

Bacteria cells used as secure information storage device






Cambridge - A team of Chinese biochemistry students has successfully developed a technique for the encryption, compression, and decryption of data, together with the use of bacteria as a secure storage device, as an alternative solution for storing electronic data.
A team of instructors and students from the Chinese University of Hong Kong (CUHK) have managed to store enormous amounts of data in bacteria. The system is based on a novel cryptographic scheme for data encoding and a compression algorithm that reduces the data's size dramatically. Following the reduction in size, the researchers were able to insert the information into bacteria in the form of modified DNA sequences. They used the DH5-alpha strain of Escherichia coli, a bacterium normally found in the intestines of most animals. This bacterium is often used as a model organism in microbiology and biotechnology, and modified E. coli has also been used in bioengineering for the development of vaccines, bioremediation, and the production of certain enzymes.

Two research groups had already conducted unsuccessful experiments, in 2001 and 2007, aiming to use biological systems as data storage devices. The researchers of the Chinese University of Hong Kong used encoded E. coli plasmid DNA (a molecule of DNA usually present in bacteria that replicates independently of chromosomal DNA) to encrypt the data and store it in the bacteria. Then, using a novel information processing system, they were able to reconstruct and recover the data with error checking. Another advantage of the system is that the bacterial cells abundantly replicate the data storage units, ensuring the integrity and permanence of the data through redundancy.

Based on the procedures tested, they estimate the ability to store about 900,000 gigabytes (GB) in one gram of bacterial cells. That is the equivalent of 450 hard drives, each with a capacity of 2 terabytes (2,000 GB).
As an example of the potential for storage, they explain that the text of the Declaration of Independence of the United States (8,047 characters) could be stored in just 18 bacterial cells. One gram of bacterial cells contains approximately 10 million cells.
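The basic idea is easy to sketch in code. Here is a minimal illustration assuming the textbook mapping of two bits per DNA base; the CUHK team's actual scheme also adds encryption, compression, and error checking, none of which are reproduced here:

```python
# Naive two-bits-per-base mapping (illustrative only; the real CUHK
# scheme also encrypts, compresses, and adds error checking).
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(text: str) -> str:
    """Turn text into a DNA base string, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> str:
    """Invert encode(): every four bases become one byte again."""
    bits = "".join(BITS_FOR_BASE[base] for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

message = "We hold these truths to be self-evident"
dna = encode(message)
assert decode(dna) == message
print(len(message), "characters ->", len(dna), "bases")
```

With this naive mapping, each ASCII character costs four bases, so the density the team reports clearly depends on their compression stage and on packing many plasmid copies into each cell.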
"We believe this could be an industry standard for large-scale manipulation of data storage in living cells"
said the researchers responsible for the project on their website, where they describe the potential of data bio-encryption and storage. The researchers envision a wide range of applications for this technology. The capabilities of what they describe as a “bio-hard-disk” include the storage of text, images, music and even movies, or the insertion of barcodes into synthetic organisms as part of security protocols to discriminate between synthetic and natural organisms. The team comprised three instructors and 10 undergraduate biochemistry students of CUHK. They carried out their study as part of a worldwide synthetic biology competition, the International Genetically Engineered Machine (iGEM) competition, organized by the Massachusetts Institute of Technology (MIT) in the USA. The CUHK team won a gold award in the iGEM competition.
“Biology students learn engineering approaches and tools to organize, model, and assemble complex systems, while engineering students are able to immerse themselves in applied molecular biology.”
declared the iGEM organizers. The iGEM competition started in 2003. The 2010 edition included over 1,900 participants in 138 teams from around the world. They were required to specify, design, build, and test simple biological systems made from standard, interchangeable biological parts. The achievements of the iGEM research teams often lead to important advances in medicine, energy, biotechnology and the environment.

Read more
 http://www.scribd.com/doc/44687672/Bacterial-based-storage-and-encryption-device

Cyber war will hit all web users - BBC










The conflict between Wikileaks supporters and the companies withdrawing their services from the whistle-blowing website has been dubbed a "cyber war".
Activists have targeted firms such as PayPal, Mastercard and Visa for their opposition to the site's publication of thousands of secret US diplomatic messages.
But there are fears the online battle could lead to everyday internet use becoming much more heavily regulated.
Source - BBC

Wednesday, December 8, 2010

Who’s to Blame for the Linux Kernel?

Finger-pointing time! Let’s see who’s responsible for kernel development in the last year. Once again, the Linux Foundation has released its report on who wrote Linux. As always, it has some interesting insight into who did what when it comes to kernel development, and the direction of the kernel. Unsurprisingly, embedded/mobile is becoming a major factor in kernel development.
The Linux Foundation publishes an annual Linux report that shows (approximately) who has written and contributed to the Linux kernel. The report is put together by LWN’s Jon Corbet (also a kernel contributor) and kernel developer Greg Kroah-Hartman, with additional contributions from the Linux Foundation’s Amanda McPherson.
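At its core, the counting behind such a report can be sketched very simply: attribute each changeset to an employer and tally. A toy illustration follows, with invented commit records; the real report maps thousands of author e-mail addresses to companies using git history and a maintained affiliation table:

```python
from collections import Counter

# Invented (author e-mail, employer) records standing in for real
# git-log metadata; "None" marks an unaffiliated hobbyist.
commits = [
    ("paul@example-renesas.com", "Renesas"),
    ("johannes@example-intel.com", "Intel"),
    ("peterz@example-redhat.com", "Red Hat"),
    ("gregkh@example-novell.com", "Novell"),
    ("hobbyist@example.org", "None"),
    ("peterz@example-redhat.com", "Red Hat"),
]

# Tally changesets per employer and print a percentage table,
# the same shape as the report's headline numbers.
by_employer = Counter(employer for _, employer in commits)
total = sum(by_employer.values())
for employer, count in by_employer.most_common():
    print(f"{employer:8s} {count:2d} changesets  {100 * count / total:5.1f}%")
```

The interesting (and hard) part of the real report is the affiliation table, not the tallying: deciding whether paul@pm.example belongs to Renesas or to "none" is manual curation work.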

The Top 5
Everybody wants to know who’s at the top of the list. Consistently at the top is “none,” which is to say that nearly 20% of the kernel development is done by people who aren’t affiliated with a company — at least as far as their kernel contributions go. Yes, Virginia, independent kernel contributions still exist.
The report provides two lists — contributions since 2.6.12, when Git logs became available, and since the last report (2.6.30). Red Hat tops both lists, with 12.4% of kernel changes since 2.6.12, and 12.0% since 2.6.30. A tiny decline, but remember that the number of developers participating in each release cycle grows by about 10%. Meaning that the proverbial pond keeps getting bigger, and the Red Hat fish isn’t getting much smaller in comparison.
The red fish keeps growing, but the green fish isn’t keeping up quite as well. Novell had 7.0% of kernel contributions since 2.6.12, but only 5.0% since 2.6.30. It has dropped from second to third in kernel contributions, after Intel, which had 7.8% of kernel contributions since 2.6.30. Some of that may be because more X.org work is being moved into the kernel, and a lot of X.org development is done by Intel, which is also doing more with MeeGo.
Intel comes in second on most recent contributions, bumping Novell to its third place spot. IBM is also displaced by Intel, landing at fourth (Intel’s old slot). Who’s in fifth (sorry Abbott, Costello)? Nokia. Yep, Nokia — who were behind SGI, Parallels, and Fujitsu in 2009.
If you’re looking for individuals, the top five since 2.6.30 are Paul Mundt, Johannes Berg, Peter Zijlstra, Bartlomiej Zolnierkiewicz, Greg Kroah-Hartman. Mundt explains Renesas’ place in the list — he’s working for them, after a stint at the CE Linux Forum (CELF). Berg is on Intel’s payroll, working on wireless, Zijlstra works for Red Hat, and Zolnierkiewicz is a student at Warsaw University of Technology. Kroah-Hartman, of course, is at Novell.
Linus Torvalds doesn’t make the list not because he’s not doing anything, but because the list doesn’t measure what Torvalds does very well. That is to say, Torvalds spends much of his time merging commits from others and not so much writing his own code. Still quite important, but not as easily measured.
I’ve beaten Oracle up pretty heavily lately because of its antagonism towards Google and open source Java, as well as its mishandling of OpenSolaris, OpenOffice.org, and virtually all of the properties it got from Sun. Nothing that’s related to open source has gotten better since Oracle took it over. Still, the company turns in a respectable — if somewhat reduced — showing in kernel development. Oracle clocks in with 1.9% of kernel changes since 2.6.30, and 2.3% since 2.6.12.
Then there’s Canonical. Or rather, there Canonical isn’t. Once again, the most popular Linux desktop vendor and would-be enterprise Linux player doesn’t rank highly enough in kernel development to show up — even in the past year. I might get flamed for mentioning this, but I do think it’s worth pointing out. Yes, Canonical makes valuable contributions to Linux in other areas — even if they seem ashamed or reluctant to mention that Ubuntu is Linux underneath. Does Canonical need to contribute to the kernel to be successful? Apparently not. Should Canonical be contributing more given its standing and dependency on the Linux kernel? I believe so.
Embedded
Nokia’s placement on the list shows that much more development is being driven by mobile and embedded Linux. In the past, server Linux was the big money behind the kernel. Still is, but it’s making room for embedded Linux.
Nokia has jumped up in the standings and has doubled its percentage of contribution. Wolfson Microelectronics and Renesas Technology appear in the top 20 for the first time. Both companies are working with embedded Linux. Texas Instruments also makes the list — Linux on a calculator, anyone?
Broadcom and Atheros also make the top 20 since 2.6.30 — which is good, we might see fewer and fewer chipsets that aren’t supported in Linux.
What’s disappointing is that Google isn’t higher in the ranks here. Actually — Google has dropped off the top 20 altogether since 2.6.30. The search giant had less than a percent (0.8%) of kernel changes since 2.6.12, and only 0.7% since 2.6.30. Google is behind Pengutronix, for goodness sakes. Have you heard of Pengutronix? Nope, me neither. For a company that is arguably using more Linux than anybody — pushing two Linux-based OSes and likely to have more Linux servers in use than any other entity — Google’s kernel contributions are actually quite paltry.
Summary
2011 should be interesting. If Google finally merges Android’s changes into the mainline kernel, that should bump Google up in the standings. I suspect, and hope, SUSE/Novell will move past Intel in 2011, now that its future is a bit more clear. As MeeGo continues to gather steam, I suspect Nokia will also show up a bit higher in the standings.
In all, Linux kernel development is as healthy as ever. I’d be curious to see a similar report for other major system utilities and such (GCC, the GNU utilities, X.org, Apache Web server). The kernel is very important, but just a part of the overall ecosystem. There’s plenty of userspace goodies that companies should get credit for as well.
Make sure to check out the full report PDF too. It makes for good reading, and it’s short and well-written.

Source

Saturday, March 20, 2010

Cool or Hot? Linux really making your coffee: a live Linux coffee machine

Too bad the HGZ Linux-based coffee machine is only for professional use. I'd love to have one of these. A dream come true: the Linux coffee maker.
Embedded Linux on a coffee machine, with a touch screen, built on the Qt framework.
Have Linux brew your coffee: finally, a stable cup of coffee.
Demoed at Embedded World in Nuremberg, Germany by Qt:

And some images from the presentation of Qt:







Source: http://www.handlewithlinux.com

Tuesday, March 9, 2010

The FBI arrests hackers in Spain

The FBI, in cooperation with Spain's Guardia Civil, managed to locate and arrest a group of hackers, permanently shutting down their network. The three men, aged 25 to 31, did not have particularly advanced knowledge of computers or network security [!] and bought the programs they used to carry out their scheme on the black market. The name they operated under was DDP, short for Días de Pesadilla, which means "Nightmare Days". Tracking them down was especially difficult, because they always controlled their servers through anonymous VPN services that concealed their real IP addresses.

The network was named Mariposa, which means "butterfly" in Spanish. It is botnet malware that spreads through peer-to-peer networks, MSN links, and USB flash drives, and is estimated to have infected 12.7 million computers in more than 190 countries, making it one of the largest networks of zombie computers ever recorded.

Once it takes over a computer, it installs keyloggers and trojans for monitoring bank accounts and for remote access. The data obtained from these activities was either resold by the group or used for money laundering and the like. The "problem" was detected in May 2009 by Canadian authorities, who worked with experts at Panda Security, the Georgia Tech Information Security Center, and other authorities around the world to bring those responsible to justice.

Frenzy over Google's 1 Gbps connections

Google's announcement a month ago that it will offer pilot 1 Gbps connections in selected cities in America has caused a stir among ISPs. The connections will be delivered over optical fiber and will initially be offered to between 50,000 and 500,000 residents. As a result, several cities across the country are trying to persuade Google to offer them its services. Topeka, in the state of Kansas, has even renamed itself Google, Kansas for the month of March.

Each city has until the 26th of the month to express its interest in acquiring the first 1 Gbps network, though it is not yet known when the work will begin. The mayor of Duluth, Minnesota even dove into the icy waters of the lake in a bid to win attention, and invited the other interested cities to do the same to show they deserve Google's notice.

Watch the video of the original announcement by Product Manager James Kelly.

Saturday, February 27, 2010

15 Awesome Google Services You Never Knew Existed

Whether you're sending an email in Gmail, finding directions to that fancy restaurant using Google Maps, or pretending to be a part of the latest microblogging craze with Google Buzz, the G-word is everywhere. Well, it turns out that there is also a whole library of Google web applications and services stacked up behind the everyday services you may have come to take for granted.

Most of the mega company's services are either full blown web applications readily available to the public, or secretly tucked away behind a door in the Google Labs. However, even those wearing their Public Beta scrubs are readily available to play with. We've gone and picked through fifteen Google services you may not have heard of before, but can definitely benefit from. Try them out, and if you have any suggestions of ones we may have missed, leave a note in the comments.

Never miss another important headline

News Timeline

If you're tired of missing out on the week’s most important headlines, set Google News Timeline as your browser’s home page and you’ll never be out of the loop again. This distinct search engine scours various news outlets, Wikipedia, and even Twitter. Just enter a search term and News Timeline will retrieve the most recent headlines from the web containing it. You can even specify which publications you’d like News Timeline to search, including your local paper. Sadly, Mac|Life wasn’t among the choices.



 

Patent your invention



Patents

Got a crazy robot that does all sorts of cool, crazy robot things? Well, before you start working on the actual mechanical implementation of that idea, mosey on over to Google Patents to make sure your product hasn’t already been invented. This specialized search engine sifts through indexed patents registered with the United States Patent and Trademark Office (USPTO), using optical character recognition (OCR) to match the words and terms embedded in the image scans.

We took a few minutes to glance at some of the random patents that popped up on the front page. For instance, this apple case for use in preserving apples and this kid-friendly inhaler that looks like a panda.  See if you can find any of Apple’s patents.


 

Let's get political, political

In Quotes
Yeah, the presidential election meme is totally passé, but voting is an American right and should be exercised to the fullest extent. That’s why Google still holds the reins on a nifty service dubbed In Quotes, which displays side-by-side comparisons of noteworthy quotes from major politicians on a variety of hot topics.

Type in a search topic or choose a political issue from the drop-down box, then choose your politicians and a year; the generator offers speeches and opinions from a wide selection of politicians, from 2003 to the present. There are also U.K., India, and Canada editions for international users.

Quotes are generated automatically, depending on the topic. In Quotes is a great tool for students preparing a paper on a recent politician or political matter, but if you’re looking for anything George Washington-era, get ready to crack open a book.



 

You’ve got questions? They’ve got the answers

Google Moderator

Google Moderator offers an open forum for users to post their questions, offer suggestions, concoct ideas, and receive answers in return. You can scour topics and vote on other people’s opinions, or contribute your own.

Each question has its own list of topics, while a list of Google's featured services offers up alternative sites that are a bit more specific, like Take a Tip, Share a Tip--an open forum for users to share their tips on how to be frugal in all areas of life.



Google Moderator is a great way to get an objective opinion from the many anonymous internet users trolling the web, or waste a little bit of time without having to get yourself extensively involved in a social network. If you like this web service, check out the most recent addition to the Google family: Aardvark. 





 

Explore the world on foot

City Tours
Traveling is already an extravagant endeavor. It’s a better idea to pocket the money you’d spend on travel books that will inevitably become outdated by the time you return from vacation, and simply invest some time in Google’s City Tours. City Tours generates a list of important traveling hot spots based on your destination of choice. For example, if you’re on your way to visit Berlin, Germany, type in a starting location (like the address of where you're staying) and City Tours will map out a route for a walking tour around the area you’re stationed.

Each landmark contains important information, like hours of operation and the address of the location--in case you decide to take a taxi or public transportation. You can also add other areas to your walking tour either manually or from a predetermined list provided by Google Maps. 



City Tours still has a few kinks to work out, though it’s gotten better since we used it for last summer’s trip to Lund, Sweden. For instance, walking tours no longer take 53 minutes between each stopping point, and have been significantly cut down to less than 20 minutes. Regardless, we have to keep in mind that most Google Labs applications are a work in progress. And even so, this is one feature we plan on using for all of our future traveling destinations. 




 


See politics in motion

Audio Indexing
Using YouTube to search for that political speech you've been looking for is an extreme pain in the derriere--almost as annoying as rewinding and fast-forwarding a VHS on a VCR (remember those things?). Google’s Audio Indexing simplifies this grueling task by aggregating it all for you in an easy-to-use search engine.

Type in a popular term, like “clean technology” or “California," and Audio Indexing will fetch a comprehensive list of videos with any mention of your search term in the audio. You can also share videos on Facebook, Twitter, et al., or copy and paste the direct link provided for you. Unfortunately, there is no generated embed code available.



This service is great if you’re on the search for visual aids for a presentation in your Political Science class, or just looking to catch up on all those missed hours of C-SPAN.


 

Learn HTML all over again

Code Search

For the web coder with frequent bouts of brain freeze, Google’s Code Search is truly a lifesaver. If you’re writing CSS or attempting to bypass Flash with a very concise HTML5 tag, you can cross-reference any line of code by copying and pasting it into the search engine.





Find exactly what you're looking for

Similar Images
Google Labs' Similar Images is basically a harder-working version of the search engine’s already massive Image Search. If you’re looking for a very specific image, like a view of the Golden Gate Bridge from the south end, search for "Golden Gate Bridge", then select the image that most resembles the one you're looking for. Each click refines your search until you eventually land on exactly what you're looking for.



 

Watch as your image search dances around you

Image Swirl

Similar Images may eventually get you the photo you want, but what if the image you’re really looking for can only be sought out using a phonetics algorithm? Image Swirl organizes image search results into groups and sub-groups based on their visual and semantic similarities--kind of like how mind-mapping works.

Type in three search terms and you’ll be amazed at how the internal script behind the engine works to match an image with each of your descriptions. You can select the photo of each individual cluster for closer review, or the images surrounding it. We should note that Image Swirl is the newest addition to the Beta family, and is greatly limited in its search capabilities.



 

Peruse news on the web a dozen pages at a time

Fast Flip

If you’re always on the go and out of the loop, a visit to Google's Fast Flip should do the trick. It does exactly as it advertises: view screenshots of the most important news outlets on the web all at once. Or for a more refined selection of news based on topic, type in a search term and Fast Flip will retrieve a number of the most relevant sources from a predefined list of sources. You can also cycle through the news based on the most popular, recent, viewed and recommended headlines around the internet, or categorize the news by section and most discussed topics. There’s also a mobile version for iPhone users.








Refine your bibliography

Scholar

Writing a term paper is already a grueling task, so why make it more difficult by trolling the Internet for unreliable sources? Google’s got you covered with Scholar, which searches the works of academic scholars who have chosen to openly share their published writings online.

Of course, as with all academic and published works, don’t forget to cite what you use!





Find the best deals

Product Search
You may remember it as Froogle, but Google Product Search has since evolved into something quite extraordinary, even if it is still in beta. Type in a product query, and this search engine will return a list of sites offering the product of your choice, at the price of your choice. Perhaps the best thing about Product Search is that it makes absolutely no commission off of what you buy, so you can rest assured it’s just a clean, simple search engine for the best deals on the web.



 

Discover what’s trending on the web

Trends
No, we're not talking about Twitter. Google Trends is like the popularity gauge for the Internet. For example, if you're curious to see how certain car companies fare against each other in terms of search frequency, type in two search terms separated by commas and Google will retrieve a graph detailing the statistical difference between the two search terms. The graph also shows regions, cities, and languages with which the search term is most popular, and the recent stories that picked up the most traffic from Google.



 

Trace the genealogy of your friendships

People Hopper

You may share more similarities with your friends than you think. No, we don’t mean interests and hobbies; we’re talking about eye shapes, nose bumps, freckles and moles. Google’s People Hopper proves that everyone shares a little something by “morphing” your profile image with a friend’s and displaying the transformation breakdown, picture-by-picture, in a neat spectrum graph.



The service borrows its photos and user accounts from Orkut, so you’ll have to have an active account to use this service. Choose a friend who’s also on the social networking service and People Hopper will return with the facial breakdown between you and your comrade--and a bunch of other people floating around the web. The quality of the path between faces depends on how closely the two photos match. Even if it’s not a true-to-form match, it’s interesting to see as each photo descends from the primary match and morphs into another user. Plus, it’s a great way to meet new people, or find that long lost brother of yours.

Honestly, People Hopper is a little creepy, but incredibly enticing all at the same time. If you want to opt out of being a part of Google’s under-the-radar anthropological experiments, follow these instructions.



 

Meet some new people

Orkut
Orkut is a free-access social networking service designed to help you quell your Facebook addiction. The service is incredibly popular in India and Brazil, but lags far behind Myspace and Facebook in the United States.

If you use the service with People Hopper, maybe you'll run into someone who looks like you in India and Brazil. You never know.