
Sunday, July 10, 2011

InnerEye Project


While Microsoft tends to support a range of Natural User Interface (NUI) projects, the most celebrated example of its early NUI work being the Kinect body-motion sensor for the Xbox, there is also quite a bit of focus by Microsoft’s NUI researchers on the intersection between NUI and healthcare. A number of Microsoft Research projects explore the NUI-health connection. One of these projects, named “InnerEye”, focuses on the automated analysis of Computed Tomography (CT) scans, using modern machine learning techniques for 3D model navigation and visualization.
InnerEye takes advantage of advances in computer-human interactions that have put computers on a path to work for us and collaborate with us. The development of a natural user interface (NUI) enables computers to adapt to you and be more integrated into your environment via speech, touch, and gesture. As NUI systems become more powerful and are imbued with more situational awareness, they can provide beneficial, real-time interactions that will be seamless and naturally suited to your context—in short, systems will understand where you are and what you’re doing.
Antonio Criminisi leads the research group at Microsoft’s research center in Cambridge that is developing the system, which will make it easier for doctors to work with databases of medical imagery. The system indexes the images generated during scans: it automatically recognizes organs, and the team is working to train it to detect certain kinds of brain tumors.
The software takes a collection of 2D and 3D images and indexes them together. From the combined images, medical imaging databases are created using the text comments linked to each image, which doctors can then search. Text search alone takes time, though, because not all of the results are relevant. Systems of this kind will let doctors easily navigate from a patient’s new images to older images of the same patient, side by side, and just as easily pull up images from other patients for comparison.
Criminisi’s team is also working on embedding the technology found in Kinect, which will let surgeons navigate through the images with gestures. Surgeons could then access the images mid-procedure without touching a mouse, keyboard, or even a touch screen, all things that could compromise the sterility of the operation, making this a very useful tool. The team plans for the tool to be implemented at a large scale, automatically indexing images as they are scanned and tying them into the greater database seamlessly.
Using Kinect technology, surgeons would only have to motion with their hands to access the parts they need to focus on. The potential Microsoft solution is quicker and slicker, and it could help to save lives. Criminisi said: “Our solution enables surgeons to wave at the screen and access the patient’s images without touching any physical device, thus maintaining asepsis. By gesturing in mid-air, surgeons can zoom in on specific organs or lesions and manipulate 3D views; they can also search for images of other patients with similar conditions. It’s amazing how such images can offer clues of disease and potential cure. Pre-filtering patient data can be an important tool for doctors and surgeons.”
Although needs and levels of sophistication differ from hospital to hospital, the general outcome was sufficiently encouraging to drive scientific research towards a new, efficient tool to aid surgery.




Resources: 
  • Technology Review
  • Pappas, Evangelos. 30/03/2010. Assessment for CO3, 6th Semester Academic English. University of Wales.

Academic Search Engines: Beyond Google Scholar

 

Scirus


Scirus has a pure scientific focus. It is a far-reaching research engine that can scour journals, scientists’ homepages, courseware, pre-print server material, patents and institutional intranets.


InfoMine


Infomine has been built by a pool of libraries in the United States. Some of them are University of California, Wake Forest University, California State University, and the University of Detroit. Infomine “mines” information from databases, electronic journals, electronic books, bulletin boards, mailing lists, online library card catalogs, articles, directories of researchers, and many other resources.

You can search by subject category and further tweak your search using the search options. Infomine is not only a standalone search engine for the Deep Web but also a staging point for a lot of other reference information. Check out its Other Search Tools and General Reference links at the bottom.


The WWW Virtual Library


This is considered to be the oldest catalog on the web and was started by Tim Berners-Lee, the creator of the web. So, isn’t it strange that it finds a place in the list of Invisible Web resources? Maybe, but the WWW Virtual Library lists quite a lot of relevant resources on quite a lot of subjects. You can go vertically into the categories or use the search bar. The screenshot shows the alphabetical arrangement of subjects covered at the site.


DeepPeep


DeepPeep aims to enter the Invisible Web through forms that query databases and web services for information. Typed queries open up dynamic but short-lived results, which cannot be indexed by normal search engines. By indexing databases, DeepPeep hopes to track 45,000 forms across 7 domains.

The domains covered by DeepPeep (Beta) are Auto, Airfare, Biology, Book, Hotel, Job, and Rental. As a beta service, it has occasional glitches, and some results don’t load in the browser.


Reference: 10 Search Engines to Explore the Invisible Web

Evolution machine: Genetic engineering on fast forward

Source: NewScientist

Automated genetic tinkering is just the start – this machine could be used to rewrite the language of life and create new species of humans

IT IS a strange combination of clumsiness and beauty. Sitting on a cheap-looking worktop is a motley ensemble of flasks, trays and tubes squeezed onto a home-made frame. Arrays of empty pipette tips wait expectantly. Bunches of black and grey wires adorn its corners. On the top, robotic arms slide purposefully back and forth along metal tracks, dropping liquids from one compartment to another in an intricately choreographed dance. Inside, bacteria are shunted through slim plastic tubes, and alternately coddled, chilled and electrocuted. The whole assembly is about a metre and a half across, and controlled by an ordinary computer.

Say hello to the evolution machine. It can achieve in days what takes genetic engineers years. So far it is just a prototype, but if its proponents are to be believed, future versions could revolutionise biology, allowing us to evolve new organisms or rewrite whole genomes with ease. It might even transform humanity itself.

These days everything from your food and clothes to the medicines you take may well come from genetically modified plants or bacteria. The first generation of engineered organisms has been a huge hit with farmers and manufacturers - if not consumers. And this is just the start. So far organisms have only been changed in relatively crude and simple ways, often involving just one or two genes. To achieve their grander ambitions, such as creating algae capable of churning out fuel for cars, genetic engineers are now trying to make far more sweeping changes.

Grand ambitions

Yet changing even a handful of genes takes huge amounts of time and money. For instance, a yeast engineered to churn out the antimalarial drug artemisinin has been hailed as one of the great success stories of synthetic biology. However, it took 150 person-years and cost $25 million to add or tweak around a dozen genes - and commercial production has yet to begin.

The task is so difficult and time-consuming because biological systems are so complex. Even simple traits usually involve networks of many different genes, which can behave in unpredictable ways. Changes often do not have the desired effect, and tweaking one gene after another to get things working can be a very slow and painstaking process.

Many biologists think the answer is to try to eliminate the guesswork. They are creating libraries of ready-made "plug-and-play" components that should behave in a reliable way when put together to create biological circuits. But George Church, a geneticist at Harvard Medical School in Boston, thinks there is a far quicker way: let evolution do all the hard work for us. Instead of trying to design every aspect of the genetic circuitry involved in a particular trait down to the last DNA letter, his idea is to come up with a relatively rough design, create lots of variants on this design and select the ones that work best.

The basic idea is hardly original; various forms of directed evolution are already used to design things as diverse as proteins and boats. Church's group, however, has developed a machine for "evolving" entire organisms - and it works at an unprecedented scale and speed. The system has the potential to add, change or switch off thousands of genes at a time - Church calls this "multiplexing" - and it can generate billions of new strains in days.

Of course, there are already plenty of ways to generate mutations in cells, from zapping them with radiation to exposing them to dangerous chemicals. What's different about Church's machine is that it can target the genes that affect a particular characteristic and alter them in specific ways. That greatly increases the odds of success. Effectively, rather than spending years introducing one set of specific changes, bioengineers can try out thousands of combinations at once. Peter Carr, a bioengineer at MIT Media Lab who is part of the group developing the technology, describes it as "highly directed evolution".

The first "evolution machine" was built by Harris Wang, a graduate student in Church's lab. To prove it worked, he started with a strain of the E. colibacterium that produced small quantities of lycopene, the pigment that makes tomatoes red. The strain was also modified to produce some viral enzymes. Next, he synthesised 50,000 DNA strands with sequences that almost matched parts of the 24 genes involved in lycopene production, but with a range of variations that he hoped would affect the amount of lycopene produced. The DNA and the bacteria were then put into the evolution machine.

The machine let the E. coli multiply, mixed them with the DNA strands, and applied an electric shock to open up the bacterial cells and let the DNA get inside. There, some of the added DNA was swapped with the matching target sequences in the cells' genomes. This process, called homologous recombination, is usually very rare, which is where the viral enzymes come in. They trick cells into treating the added DNA as its own, greatly increasing the chance of homologous recombination.

The effect was to create new variants of the targeted genes while leaving the rest of the genome untouched. It was unlikely that all 24 genes would be altered simultaneously in any one bacterium, so the cycle was repeated over and over to increase the proportion of cells with mutations in all 24 genes.

Repeating the cycle 35 times generated an estimated 15 billion new strains, each with a different combination of changes in the target genes. Some made five times as much lycopene as the original strain, Wang's team reported in 2009 (Nature, vol 460, p 894).
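To see why the cycle has to be repeated so many times, here is a back-of-the-envelope sketch in Python. The per-cycle editing probability p is my own invented figure (real MAGE efficiencies vary from locus to locus); the point is only the shape of the curve.

# Toy model of repeated MAGE cycles: assume each cycle independently
# replaces each target locus with probability p (p is an invented number).
p = 0.1
targets = 24                              # the 24 lycopene genes

for cycles in (1, 5, 15, 35):
    edited_once = 1 - (1 - p) ** cycles   # P(a given locus edited at least once)
    edited_all = edited_once ** targets   # P(all 24 loci carry an edit)
    print(cycles, edited_all)             # 1 cycle: ~1e-24; 35 cycles: ~0.54

Under these made-up numbers, a single cycle leaves essentially no cell edited at every locus, while 35 cycles push the fraction above one half, which is why the machine loops.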

It took Wang just three days to do better than the biosynthesis industry has managed in years. And it was no one-off - he has since repeated the trick for the textile dye indigo.

Church calls this bold approach multiplex automated genome engineering, or MAGE. In essence, he has applied the key principles that have led to the astonishing advances in DNA sequencing - parallel processing and automation - to genetic engineering. And since Church was one of the founders of the human genome project and helped develop modern sequencing methods, he knows what he is doing.

Just as labs all over the world now buy thousands of automated DNA sequencing machines, so Church envisions them buying automated evolution machines. He hopes to sell them relatively cheaply, at around $90,000 apiece. "We're dedicated to bringing the price down for everybody, rather than doing some really big project that nobody can repeat," Church says.

He hopes the machines will greatly accelerate the process of producing novel microbes. LS9, a biofuels company based near San Francisco that was co-founded by Church, has said it hopes to use MAGE to engineer E. coli that can produce renewable fuels. Church and colleagues are also adapting the approach for use with other useful bacteria, including Shewanella, which can convert toxic metals such as uranium into an insoluble form, and cyanobacteria which can extract energy from light using photosynthesis.

A big revolution

In principle, the technique should work with plant and animal cells as well as microbes. New methods will have to be developed for coaxing cells to swap in tailored DNA for each type of organism, but Church and his colleagues say that progress has already been made in yeast and mammalian cells.

"I think it is a big revolution in genome engineering," says Kristala Jones Prather, a bioengineer at the Massachusetts Institute of Technology who is not part of Church's collaboration. "You don't have to already know what the answer is. You can manipulate multiple things at a time, and let the cell find a solution for you."

Because biological systems are so complex, it is a huge advantage to be able to tweak lots of genes simultaneously, rather than one at a time, she says. "In almost every case you'll get a different solution that's a better solution."

The disadvantage of Church's approach is that the "better solution" is mixed up with millions of poorer solutions. Prather points out that the technique is limited by how easy it is to screen for the characteristics that you want. Wang selected good lycopene producers by growing 100,000 of the strains he had created in culture dishes and simply picking out the brightest red colonies. "Essentially nothing that we use in my lab can be screened so easily," Prather says.

By automating selection and using a few tricks, though, it should be practical to screen for far more subtle characteristics. For instance, biosensors that light up when a particular substance is produced could be built into the starting strain. "The power going forward will have to do with clever selections and screens," says Church.

As revolutionary as this approach is, Church thinks MAGE's most far-reaching potential lies elsewhere. He reckons it will be possible to use the evolution machine to make many thousands of specific changes to a cell's DNA: essentially, to rewrite genomes.

At the moment, making extensive changes to even the smallest genome is extremely costly and laborious. Last year, the biologist and entrepreneur Craig Venter announced that his team had replaced a bacterium's genome with a custom-written one (Science, vol 329, p 52). His team synthesised small pieces of DNA with a specific sequence, and then joined them together to create an entire genome. It was an awesome achievement, but it took 400 person-years of labour and cost around $40 million.

MAGE can do the same job far more cheaply and efficiently by rewriting existing genomes, Church thinks. The idea is that instead of putting DNA strands into the machine with a range of different mutations, you add only DNA with the specific changes you want. Even if you are trying to change hundreds or thousands of genes at once, after a few cycles in the machine, a good proportion of the cells should have all the desired changes. This can be checked by sequencing.

If the idea works it would make feasible some visionary projects that are currently impossibly difficult. Church, needless to say, has something suitably ambitious in mind. In fact, it is the reason he devised MAGE in the first place.

In 2004 he had joined forces with Joseph Jacobson, an engineer at the MIT Media Lab, best known as inventor of the e-ink technology used in e-readers. Searching for a "grand goal" in bioengineering, the pair hit upon the idea of altering life's genetic code. Rather than just alter the sequence of DNA, they want to change the very language in which the instructions for life are written (see diagram).

This is not as alarming as it might sound. Because all existing life uses essentially the same genetic code, organisms that translate DNA using a different code would be behind a "genetic firewall", unable to swap DNA with any normal living thing. If they escaped into the wild, they would not be able to spread any engineered components. Nor would they be able to receive any genes from natural bacteria that would endow them with antibiotic resistance or the ability to make toxins. "Any new DNA coming in or any DNA coming out doesn't work," says Church. "We're hoping that people who are concerned, including us, about escape from industrial processes, will find these safer."

There is another huge advantage: organisms with an altered genetic code would be immune to viruses, which rely on the protein-making machinery of the cells they infect to make copies of themselves. In a cell that uses a different genetic code, the viral blueprints will be mistranslated, and any resulting proteins will be garbled and unable to form new viruses.

Doing this in bacteria or cell lines used for growing chemicals would be of huge importance to industry, where viral infections can shut down entire production lines. And the approach is not necessarily limited to single cells. "It's conceivable that it could be done in animals," says Carr.

Completely virus-proof

Carr and his colleagues have already begun eliminating redundant codons from the genome of E. coli. They are starting with the rarest, the stop codon TAG, which appears 314 times. Each instance will be replaced by a different stop codon, TAA. So far they have used MAGE to create 32 E. coli strains that each have around 10 of the necessary changes, and are now combining them to create a single strain with all the changes. Carr says this should be completed within the next few months, after which he hopes to start replacing another 12 redundant codons. To make a bacterium completely virus-proof will probably require replacing tens of thousands of redundant codons, he says, as well as modifying the protein-making factories so they no longer recognise these codons.
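To make the recoding idea concrete, here is a toy Python sketch of the TAG-to-TAA swap. The sequence, the positions and the function name are all invented; in the real project the positions come from the genome annotation, which is what keeps out-of-frame TAG triplets from being touched.

def replace_stop_codons(genome, tag_positions):
    """Replace the TAG stop codon at each given 0-based position with TAA."""
    seq = list(genome)
    for pos in tag_positions:
        assert genome[pos:pos + 3] == "TAG", f"no TAG at position {pos}"
        seq[pos:pos + 3] = "TAA"
    return "".join(seq)

genome = "ATGGCCTAG" + "ATGAAATAG"           # two tiny made-up genes ending in TAG
print(replace_stop_codons(genome, [6, 15]))  # ATGGCCTAAATGAAATAA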

To ensure novel genes cannot be translated if they get passed on to other organisms, the team would have to go a step further and reassign the freed-up codons so a different amino acid to normal is added to a protein when they occur. This could include amino acids that do not exist in nature, opening the door to new types of chemistry in living cells. Artificial amino acids could be used to create proteins that do not degrade as easily, for example, which could be useful in industry and medicine.

There are potential dangers in making organisms virus-proof, though. Most obviously, they might have an advantage over competing species if they escaped into the wild, allowing them to dominate environments with potentially destructive effects. In the case of E. coli, those environments could include our guts.

"We want to be very careful. The goal is to isolate these organisms from part of the natural sphere with which they normally interact," says Carr. "We shouldn't pretend that we understand all possible ramifications, and we need to study these modified organisms carefully." But he points out that we deal with similar issues already, such as invasive species running riot in countries where they have no natural predators. Additional safeguards could be built in, such as making modified organisms dependent on nutrients they can get only in a lab or factory. And if the worst came to the worst, biologists could create viruses capable of killing their errant organisms. Such viruses would not be able to infect normal cells.

Church argues that with proper safety and regulatory controls, there is no reason why the approach shouldn't be used widely. "I think that to some extent you'd like every organism to be multi-virus resistant," he says. "Or at least industrial microbes, agricultural species and humans."

Yes, humans. Church is already adapting MAGE for genetically modifying human stem cell lines. The work, funded by the US National Human Genome Research Institute, aims to create human cell lines with subtly different genomes in order to test ideas about which mutations cause disease and how. "Sequencing is now a million times cheaper, and there are a million times as many hypotheses being generated," he says. "We'd like to develop the resources so that people can quickly test hypotheses about the human genome by synthesising new versions."

As the technology improves and becomes routine, says Church, it could also be used to alter the cells used for cell-based therapies. Tissue-engineered livers grown from stem cells, say, could have their genetic code altered so that they would be immune to liver-destroying viruses such as hepatitis C.

"Everybody getting stem cell therapies will be given a choice of doing ordinary stem cell therapy - either with their cells or donor cells - or doing stem cells that are resistant to viruses," he says. "There will have to be all kinds of safety checks and FDA approval and so forth, but most people faced with two fairly safe choices, one of which is virus-sensitive and one of which is virus-resistant, are going to take the virus-resistant one."

Of course, there would be enormous experimental and safety hurdles to overcome. Not least the fact that gene targeting using homologous recombination or any other method is not perfect - the added DNA is sometimes inserted into the wrong place in the genome, and the process can trigger other kinds of mutations too. Such off-target changes might be a big problem when making hundreds of targeted changes at a time.

So not surprisingly, Carr describes the move to humans as "fraught with peril". But if we do get to a point where there are lots of people walking around with virus-resistant tissues or organs, and lots of farm animals that are completely virus-resistant, Church thinks it is only a matter of time before clinics create virus-resistant babies. "If it works really well, somebody somewhere will decide to try it in the next generation."

Making changes to the genomes of humans that will get passed on to their children has long been seen as taboo. But Church points out that there was strong resistance to techniques such as in vitro fertilisation and organ transplants when they were new; yet as soon as they were shown to work, they were quickly accepted. "Many technologies start out that way," he says. "But once they work really well, everybody says it's unethical not to use them."

Arthur Caplan, a bioethicist at the University of Pennsylvania in Philadelphia who advises the US government on reproductive technologies, is sceptical about the idea of making virus-resistant people, because anyone modified in this way would only be able to conceive children naturally with a partner whose genome had been altered in exactly the same way. "You would be denying a hugely important choice to a future modified human."

But, he says, if MAGE really can be used to edit the genome of human cells, it would provide a way to fix the mutations that cause inherited disease. It could be the technology that opens the door to the genetic engineering of humans. We should start debating now how best to use it, Caplan says. Should it be limited to preventing disease, or used for enhancement too? What sort of regulation is needed? Who should be eligible?

This prospect might seem a long way off, but Caplan argues that if the technique works well in other species, it could become feasible to attempt to engineer humans in as little as 10 years. "If you learn to do this in microbes and then in animals, you'll find yourself wondering how we got to humans so fast," he says. "You've got to pay attention to what's going on in lower creatures because that's the steady march to people."

If all this sounds wildly implausible, bear in mind that the idea of sequencing an entire human genome in days seemed nigh on impossible just a few years ago. Now it's fast becoming routine. Most biologists would probably agree that it is just a matter of time before we develop the technology needed to rewrite the DNA of living creatures at will. If Church succeeds, this future will happen faster than anyone imagined.

Tuesday, July 5, 2011

Q: According to relativity, two moving observers always see the other moving through time slower. Isn’t that a contradiction? Doesn’t one have to be faster?

Physicist: They definitely both experience time dilation.  That is to say, they both see the other person moving through time slower (you will always see your own clock running normally, in all circumstances).  The short resolution to the “paradox” is: if you’re flying past each other, and never come back to the same place again to compare clocks, what’s the problem?  You may both observe the other person’s clock running slower, but that’s not a contradiction in any “physical sense”.

Of course if you do meet up again, then you’ve got the “twin paradox”, which still isn’t a problem (or a paradox).  One of the most frustrating things about the universe is that there is no such thing as “absolute time“, which would allow you to say “who’s right”.  If you could ask the universe “what time is it?” the universe’s best answer would be “that depends on who’s asking”.  The universe is kind of a smart ass.

There’s a halfway decent explanation of the twin paradox here: “Q: How does the twin paradox work?” and there’s a quarterway decent, possibly simpler, explanation of why movement makes time pass slower here: “Q: Why does going fast or being lower make time slow down?“.  Personally, I think the question of this post is far more profound.

I should note here that the language physicists use makes the situation sound subjective; “the observer sees…”, “the person experiences…”, etc.  However, none of the effects (covered in a minute) are due to observer based effects, like the delay caused by the time it takes light to get from place to place.  Everything here is literal and physically real.

You don't see an event when it happens, you see it when the light from the event gets to you. But you can easily figure out when it did happen by dividing the distance to the event by C, giving you the time delay.

This diagram is pretty standard physicist fare.  If you want to draw a picture of something happening in 4 dimensional spacetime, you just drop two of the spatial dimensions.  So here, time is up, space is left/right, the red arrow is a person (it doesn’t have to be a person) sitting still and moving forward in time (like you’re doing right now, in all likelihood).  The yellow triangle is a lightning strike (at the bottom) and the light expanding out from it.  The picture should make sense; the longer after the lightning strike (the higher on the picture) the farther the light from the strike has traveled (wider to the left and right).  “Lightning” is a common example of an event, because it hits in a definite place at a very definite time.  Plus, (spoiler alert) lightning is bright so, unlike most small instantaneous things, it makes sense that everyone should be able to see it.
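To put numbers on that caption, here is a minimal sketch of the distance-over-C arithmetic (the distance and clock reading are made up):

C = 299_792_458.0     # speed of light, m/s

distance = 9_000.0    # metres from you to the strike (assumed)
t_seen = 12.0         # seconds on your clock when the flash arrives (assumed)

# The strike happened distance / C before you saw it.
t_happened = t_seen - distance / C
print(t_happened)     # ~11.99997 s: about 30 microseconds before the flash arrived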

Now let’s say you’ve got two people; Alice, who likes to hang out by the train tracks, and Bob, who likes to ride trains on said tracks.

A train car (light blue) moves to the right. Right when Alice and Bob are eye-to-eye the front and back of the train car are hit by lightning. This moment, that Alice calls "Now!", is the red line.

As the train, with Bob sitting in the exact center, passes Alice they high-five each other.  Suddenly, both ends of the train are struck by lightning.  Alice knows this because, very soon after the lightning strikes, the light from them gets to her.  Being smart, she realizes that the strikes must have happened at the same time, because they happened equally far away from her, and the light took the same amount of time to get to her (see picture).  Moreover, really milking her cleverness, she predicts that Bob will see the lightning bolt at the front of the train first, and the lightning bolt at the back of the train second (see picture, some more).

The speed of light, C, is an absolute.  No matter where you are, or how fast you’re moving, C stays exactly the same.  So Alice’s reasoning is completely solid, and she’s right when she says that the lightning bolts happened at the same time.

When you say that two things are “happening at the same moment”, or “happening now” you’re saying that they’re on the same spacetime plane, that I’ll call a “moment plane”.  In the same way that a regular  2D plane is a big, flat subset of regular 3D space, the moment planes are big flat 3D subsets of 4D spacetime.

So the high five, the lightning strikes, and everything else in the universe at that moment, are all in the same “moment plane”.  However, that doesn’t necessarily mean that everything in that plane happens at the same time (although they do for someone, in this case Alice).

 

Same situation from the perspective of the train. Bob sees the lightning at the front of the train first, and the lightning at the back of the train second, and can even figure out that Alice will see them at the same time. However, he disagrees with Alice about when things happen.

Around 1900 the Michelson-Morley experiment (among others) was demonstrating that the speed of light seemed to be the same to everyone, regardless of whether or not they’re moving.  Einstein’s great insight was “Hey everybody, if the speed of light seems to be the same regardless of the observer’s movement, then maybe it really is?”  He was a staggering genius.  Also, he was exactly right.

So, Alice was right about Bob seeing the front bolt first, and the back bolt second.  However, as far as Bob’s concerned, she was wrong about why.  Using the same reasoning Alice did, he figures that the speed of light is fixed (whether or not you’re moving), and the distance from him to the front and back of the train is the same, so since he saw the front bolt before the back, it must have happened first.  And he’s right.  Moreover, he thinks that the reason that Alice saw both at the same time is that she’s moving to the left, away from the first bolt and toward the second (picture).
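If you want to see Bob’s version with numbers, the standard Lorentz transformation spells it out. The train speed and distances below are my own invented figures; the argument above doesn’t depend on them.

from math import sqrt

C = 299_792_458.0      # speed of light, m/s
v = 0.5 * C            # assumed train speed
L = 150.0              # assumed distance from Alice to each end of the train, m

gamma = 1 / sqrt(1 - (v / C) ** 2)

def bob_time(t, x):
    """Time in Bob's frame of an event at time t and position x in Alice's frame."""
    return gamma * (t - v * x / C**2)

# For Alice both strikes happen at t = 0: front of train at x = +L, back at x = -L.
print(bob_time(0.0, +L))   # negative (~ -290 ns): the front strike is first for Bob
print(bob_time(0.0, -L))   # positive (~ +290 ns): the back strike is second

That few-hundred-nanosecond offset is exactly the “tilt” of Bob’s moment plane described below.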

The first thing that most people say when they hear about the train thought experiment is “Isn’t the person on the tracks correct, since they really are sitting still?”  Nope!  The tracks may be bigger, but (as Dr. V. E. Kilmer discovered experimentally) there’s no physical way to say who’s moving and who’s not.

Both Alice and Bob are completely correct.  What’s very strange is that the ideas we intuitively have about “nowness” are wrong.  As soon as two people are moving with respect to each other, the set of points that they consider “now” are no longer the same (different moment planes).  Alice’s “now” is the same red line above, but Bob sees it as “tilted”.

After all of that, what you should take away is: “a thing’s moment plane tilts up in the direction of its movement” (picture above).  If this is the first time you’ve seen this sort of stuff, you shouldn’t really believe any of it.  Or at least, you shouldn’t have internalized it.

This stuff isn’t terribly complicated, but it is really mind-bending, so take a moment.

Physically tilted "nows" are difficult to wrap your mind around. To encourage the reader to pause, or go back a little bit, consider the above.

Finally, to actually answer the question, imagine the situation from the point of view of someone in between Alice and Bob, who sees them flying off at equal speeds in opposite directions.  This puts Alice and Bob on equal footing, and there’s no questions about “who’s right” or “who’s moving”.

Alice and Bob's now planes at various times. If you were to ask Alice "how much time has passed for Bob now?" the answer will always be "less time than for Alice." The situation with Bob is exactly symmetrical.

Since each person’s moment planes “tilt up in the direction of movement”, each person is always trailing behind the other in time.  When they pass each other they each start their stopwatches.  For that one moment they can agree that T=0 for both of them.  But that’s where the agreement ends.  If you ask Bob about what set of points in the universe (both position and time) correspond to T=7, he’ll have no trouble telling you.  Specifically, he’ll tell you “right now my stopwatch reads ‘T=7′ and, F.Y.I., Alice’s stopwatch reads ‘T=5′”.  Bob recognizes that this is because her clock is running slower, and he’s right.  At least, he’s right in terms of how time is flowing for him.

If you were to run over to Alice at the moment that her stopwatch reads 5 (using a TARDIS or something), she would say “right now my stopwatch reads ‘T=5′ and Bob’s reads ‘T=3.5′”.  Also being clever, she realizes that this is because Bob’s clock is running slower, and she’s right.  Notice she doesn’t say “T=7″.  Once again, this is because they disagree on what “now” means, since their moment planes aren’t the same.
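The numbers in that exchange amount to a constant dilation factor of 5/7. A quick sketch; the relative speed of 0.7C is my own choice, picked because it makes the factor come out near 7/5:

from math import sqrt

beta = 0.7                        # v / C (assumed)
gamma = 1 / sqrt(1 - beta**2)     # ~1.40: each sees the other's clock slowed by 1/gamma

print(7 / gamma)   # ~5.0  -> when Bob's watch reads 7, he says Alice's reads about 5
print(5 / gamma)   # ~3.57 -> when Alice's reads 5, she says Bob's reads about 3.5

The same factor applies in both directions, which is the whole point.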

The first question that should be coming to mind is something like “Well, what’s really happening?” or “How is time actually passing?”.  Sadly, time is a strictly local phenomenon.  How it flows is determined (defined really) by relative position and relative velocity.  That is to say, there is no “universal clock” that describes how time passes overall.  The only reason that there seems to be some kind of universal clock is that we (people) are all moving at very nearly the same speed.  Or equivalently, we’re all sitting still in very much the same way.  Our time, length, and speed scales are all just too small.

0^0 = ? (Zero raised to the zeroth power) dilemma

Clever student:

 

I know!

x^{0} =  x^{1-1} = x^{1} x^{-1} = \frac{x}{x} = 1.

 

Now we just plug in x=0, and we see that zero to the zero is one!


Cleverer student:

 

No, you’re wrong! You’re not allowed to divide by zero, which you did in the last step. This is how to do it:

0^{x} = 0^{1+x-1} = 0^{1} \times 0^{x-1} = 0 \times 0^{x-1} = 0

which is true since anything times 0 is 0. That means that

0^{0} = 0.

 


Cleverest student:

 

That doesn’t work either, because if x=0 then

0^{x-1} is 0^{-1} = \frac{1}{0}

so your third step also involves dividing by zero which isn’t allowed! Instead, we can think about the function x^{x} and see what happens as x>0 gets small. We have:

\lim_{x \to 0^{+}} x^{x}  = \lim_{x \to 0^{+}} \exp(\log(x^{x}))

= \lim_{x \to 0^{+}} \exp(x \log(x))

= \exp( \lim_{x \to 0^{+} } x \log(x) )

= \exp( \lim_{x \to 0^{+} } \frac{\log(x)}{ x^{-1} } )

= \exp( \lim_{x \to 0^{+} } \frac{ \frac{d}{dx} \log(x) }{ \frac{d}{dx} x^{-1} } )  (by l'Hôpital's rule)

= \exp( \lim_{x \to 0^{+} } \frac{x^{-1}}{- x^{-2}} )

= \exp( \lim_{x \to 0^{+} } -x )

= \exp( 0)

= 1

So, since  \lim_{x \to 0^{+}} x^{x}  = 1, that means that 0^{0} = 1.
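If you don't trust the limit manipulations, the claim is easy to check numerically:

# x**x creeps up to 1 as x shrinks toward 0 from the right.
for x in (0.1, 0.01, 0.001, 1e-6):
    print(x, x**x)   # 0.794..., 0.954..., 0.993..., 0.99998...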


High School Teacher:

 

Showing that x^{x} approaches 1 as the positive value x gets arbitrarily close to zero does not prove that 0^{0} = 1. The variable x having a value close to zero is different than it having a value of exactly zero. It turns out that 0^{0} is undefined. 0^{0} does not have a value.


Calculus Teacher:

 

For all x>0, we have

0^{x} = 0.

 

Hence,

\lim_{x \to 0^{+}} 0^{x} = 0

That is, as x gets arbitrarily close to 0 (but remains positive), 0^{x} stays at 0.

On the other hand, for real numbers y such that y \ne 0, we have that

y^{0} = 1.

 

Hence,

\lim_{y \to 0} y^{0} = 1

That is, as y gets arbitrarily close to 0, y^{0} stays at 1.

Therefore, we see that the function f(x,y) = y^{x} has a discontinuity at the point (x,y) = (0,0). In particular, when we approach (0,0) along the line with x=0 we get

\lim_{y \to 0} f(0,y) = 1

but when we approach (0,0) along the line segment with y=0 and x>0 we get

\lim_{x \to 0^{+}} f(x,0) = 0.

 

Therefore, the value of \lim_{(x,y) \to (0,0)} y^{x}  is going to depend on the direction that we take the limit. This means that there is no way to define 0^{0} that will make the function y^{x} continuous at the point (x,y) = (0,0).
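The calculus teacher's two directional limits are easy to reproduce numerically:

# f(x, y) = y**x near (0, 0): the limit depends on the direction of approach.
for t in (0.1, 0.01, 0.001):
    along_x_axis = t ** 0     # f(0, y=t): y**0 == 1 for every y != 0
    along_y_axis = 0.0 ** t   # f(x=t, 0): 0**x == 0 for every x > 0
    print(along_x_axis, along_y_axis)   # 1.0 0.0, every time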


Mathematician: Zero raised to the zero power is one. Why? Because mathematicians said so. No really, it’s true.

 

Let’s consider the problem of defining the function f(x,y) = y^x for positive integers y and x. There are a number of definitions that all give identical results. For example, one idea is to use for our definition:

y^x := 1 \times y \times y \cdots \times y

where the y is repeated x times. In that case, when x is one, the y is repeated just one time, so we get

y^{1} = 1 \times y.

 

However, this definition extends quite naturally from the positive integers to the non-negative integers, so that when x is zero, y is repeated zero times, giving

y^{0} = 1

which holds for any y. Hence, when y is zero, we have

0^0 = 1.
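This "product that starts at 1" definition is easy to mirror in code (a minimal sketch; the function name is mine). For x = 0 the loop body never runs, so the initial 1 is returned: the empty product.

def power(y, x):
    """y**x for a non-negative integer x, as a product starting from 1."""
    result = 1
    for _ in range(x):   # y is repeated x times; for x == 0 this never runs
        result *= y
    return result

print(power(0, 0))   # 1: the empty product
print(power(0, 3))   # 0
print(power(2, 3))   # 8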

 

Look, we’ve just proved that 0^0 = 1! But this is only for one possible definition of y^x. What if we used another definition? For example, suppose that we decide to define y^x as

y^x := \lim_{z \to x^{+}} y^{z}.

In words, that means that the value of y^x is whatever y^z approaches as the real number z gets smaller and smaller approaching the value x arbitrarily closely.

[Clarification: a reader asked how it is possible that we can use y^z in our definition of y^x, which seems to be recursive. The reason it is okay is because we are working here only with z>0, and everyone agrees about what y^z equals in this case. Essentially, we are using the known cases to construct a function that has a value for the more difficult x=0 and y=0 case.]

Interestingly, using this definition, we would have

0^0 = \lim_{x \to 0^{+}} 0^{x} = \lim_{x \to 0^{+}} 0 = 0

Hence, we would find that 0^0 = 0 rather than 0^0 = 1. Granted, this definition we've just used feels rather unnatural, but it does agree with the common sense notion of what y^x means for all positive real numbers x and y, and it does preserve continuity of the function as we approach x=0 and y=0 along a certain line.

So which of these two definitions (if either of them) is right? What is 0^0 really? Well, for x>0 and y>0 we know what we mean by y^x. But when x=0 and y=0, the formula doesn't have an obvious meaning. The value of y^x is going to depend on our preferred choice of definition for what we mean by that statement, and our intuition about what y^x means for positive values is not enough to conclude what it means for zero values.

But if this is the case, then how can mathematicians claim that 0^0=1? Well, merely because it is useful to do so. Some very important formulas become less elegant to write down if we instead use 0^0=0 or if we say that 0^0 is undefined. For example, consider the binomial theorem, which says that:

(a+b)^x = \sum_{k=0}^{\infty} \binom{x}{k} a^k b^{x-k}

 

where \binom{x}{k} means the binomial coefficients.

Now, setting a=0 on both sides and assuming b \ne 0 we get

b^x = (0+b)^x = \sum_{k=0}^{\infty} \binom{x}{k} 0^k b^{x-k}

= \binom{x}{0} 0^0 b^{x} + \binom{x}{1} 0^1 b^{x-1} + \binom{x}{2} 0^2 b^{x-2} + \hdots

= \binom{x}{0} 0^0 b^{x}

= 0^0 b^{x}

where I've used that 0^k = 0 for k>0, and that  \binom{x}{0} = 1. Now, it so happens that the right hand side has the magical factor 0^0. Hence, if we do not use 0^0 = 1 then the binomial theorem (as written) does not hold when a=0 because then b^x does not equal 0^0 b^{x}.

If mathematicians were to use 0^0 = 0, or to say that 0^0 is undefined, then the binomial theorem would continue to hold (in some form), though not as written above. In that case though the theorem would be more complicated because it would have to handle the special case of the term corresponding to k=0. We gain elegance and simplicity by using 0^0 = 1.
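Incidentally, Python itself adopts this convention: 0**0 evaluates to 1, so the a=0 case of the binomial theorem can be checked directly:

from math import comb

# With 0**0 == 1, the expansion of (0 + b)**x collapses to its k = 0 term.
b, x = 3, 5
expansion = sum(comb(x, k) * 0**k * b**(x - k) for k in range(x + 1))
print(expansion, b**x)   # 243 243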

There are some further reasons why using 0^0 = 1 is preferable, but they boil down to that choice being more useful than the alternative choices, leading to simpler theorems, or feeling more "natural" to mathematicians. The choice is not "right", it is merely nice.

 

Source