Saturday, January 22, 2011

Robotic Ghost Knifefish Is 'Born'

The robot -- created after observing and creating computer simulations of the black ghost knifefish -- could pave the way for nimble robots that could perform underwater recovery operations or long-term monitoring of coral reefs.

Led by Malcolm MacIver, associate professor of mechanical and biomedical engineering at Northwestern's McCormick School of Engineering and Applied Science, the team's results are published in the Journal of the Royal Society Interface.

The black ghost knifefish, which works at night in rivers of the Amazon basin, hunts for prey using a weak electric field around its entire body and moves both forward and backward using a ribbon-like fin on the underside of its body.

MacIver, a robotics expert who served as a scientific consultant for "Tron: Legacy" and is science advisor for the television series "Caprica," has studied the knifefish for years. Working with Neelesh Patankar, associate professor of mechanical engineering and co-author of the paper, he has created mechanical models of the fish in hopes of better understanding how the nervous system sends messages throughout the body to make it move.

Planning for the robot -- called GhostBot -- began when graduate student Oscar Curet, a co-author of the paper, observed a knifefish suddenly moving vertically in a tank in MacIver's lab.

"We had only tracked it horizontally before," said MacIver, a recent recipient of the Presidential Early Career Award for Scientists and Engineers."We thought, 'How could it be doing this?'"

Further observations revealed that while the fish uses a single traveling wave along the fin for horizontal motion (forward or backward, depending on the direction of the wave), it uses two waves when moving vertically. One of these moves from head to tail, and the other moves from tail to head. The two waves collide and stop at the center of the fin.

The team then created a computer simulation that showed that when these "inward counterpropagating waves" are generated by the fin, horizontal thrust is canceled and the fluid motion generated by the two waves is funneled into a downward jet from the center of the fin, pushing the body up. The flow structure looks like a mushroom cloud with an inverted jet.
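The thrust cancellation has a simple one-dimensional analogue: two counterpropagating traveling waves sum to a standing wave, so the net horizontal phase travel vanishes by symmetry. The sketch below illustrates only that kinematic identity with assumed sinusoidal waves and arbitrary units; it is not the team's fluid simulation.

```python
import math

# Two counterpropagating traveling waves on a 1-D "fin": one runs
# head-to-tail, the other tail-to-head.  Their sum is a standing wave,
# so net horizontal phase travel cancels -- consistent with the canceled
# horizontal thrust described above, leaving a vertical pumping motion.
A = 1.0                      # wave amplitude (arbitrary units, assumed)
k = 2 * math.pi              # wavenumber: one wavelength over the fin
w = 2 * math.pi              # angular frequency

def fin_displacement(x, t):
    head_to_tail = A * math.sin(k * x - w * t)
    tail_to_head = A * math.sin(k * x + w * t)
    return head_to_tail + tail_to_head

# Identity: the sum equals 2A sin(kx) cos(wt), a standing wave whose
# nodes (e.g. the fin's midpoint x = 0.5) never move.
for t in (0.0, 0.1, 0.3):
    assert abs(fin_displacement(0.5, t)) < 1e-9
    standing = 2 * A * math.sin(k * 0.2) * math.cos(w * t)
    assert abs(fin_displacement(0.2, t) - standing) < 1e-9
```

The real mechanism is three-dimensional and fluid-dynamic, but the superposition above is why the two wave trains produce no net fore-aft push.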

"It's interesting because you're getting force coming off the animal in a completely unexpected direction that allows it to do acrobatics that, given its lifestyle of hunting and maneuvering among tree roots, makes a huge amount of sense," MacIver said.

The group then hired Kinea Design, a design firm founded by Northwestern faculty that specializes in human interactive mechatronics, and worked closely with its co-founder, Michael Peshkin, professor of mechanical engineering, to design and build a robot. The company fashioned a forearm-length waterproof robot with 32 motors that independently control the 32 artificial rays of its Lycra-covered fin. (That means the robot has 32 degrees of freedom; in comparison, industrial robot arms typically have fewer than 10.) Seven months and $200,000 later, the GhostBot came to life.

The group took the robot to Harvard University to test it in a flow tunnel in the lab of George V. Lauder, professor of ichthyology and co-author of the paper. The team measured the flow around the robotic fish by placing reflective particles in the water, then shining a laser sheet into the water. That allowed them to track the flow of the water by watching the particles, and the test showed the water flowing around the biomimetic robot just as computer simulations predicted it would.

"It worked perfectly the first time," MacIver said."We high-fived. We had the robot in the real world being pushed by real forces."

The robot is also outfitted with an electrosensory system that works similarly to the knifefish's. MacIver and his team next hope to improve the robot so it can autonomously use its sensory signals to detect an object and then use its mechanical system to position itself near the object.

Humans excel at creating high-speed, low-maneuverability technologies, like airplanes and cars, MacIver said. But studying animals provides a platform for creating low-speed, high-maneuverability technologies -- technologies that don't currently exist. Potential applications for such a robot include underwater recovery operations, such as plugging a leaking oil pipe, or long-term monitoring of oceanic environments, such as fragile coral reefs.

While the applied work on the robot moves ahead in the lab, the group is pursuing basic science questions as well. "The robot is a tool for uncovering the extremely complicated story of how to coordinate movement in animals," MacIver said. "By simulating and then performing the motions of the fish, we're getting insight into the mechanical basis of the remarkable agility of a very acrobatic, non-visual fish. The next step is to take the sensory work and unite the two."


Source

Friday, January 21, 2011

Video Games With Imaginary Steering Wheel as the Controller

Researchers from the Group of Applied Artificial Intelligence (GIAA) on the Colmenarejo Campus of UC3M presented this application at the Salón Internacional de Material Eléctrico y Electrónico, a conference recently held in Madrid. Participants who visited the booth of Infaimon, a company that has collaborated on this project, had the opportunity to test the interface with a video game operated simply by moving one's hands as if holding a virtual steering wheel.

The scientists employed a time-of-flight (TOF) camera, which captures the user's movements in 3D and transmits them to a computer that processes them and passes them on to the game's car. "The most complicated part was determining the camera's characteristics to be able to optimize movement and its integration with many different applications," noted Daniel Sánchez, one of the GIAA researchers at UC3M, who carried out his final degree project within the framework of this research study.

The big advantage of this type of camera is that it offers three-dimensional information without having to resort to classic stereoscopic systems with two lenses. "These new sensors offer depth information, which is of great interest when working with artificial vision systems," remarked Miguel Ángel Patricio, who coordinates this research from the Department of Informatics at UC3M. The functioning of the TOF camera is relatively simple: an infrared ring emits light that bounces off the body and returns to the sensor. From the time this round trip takes, the distance to the object can be calculated. "Our idea," Patricio points out, "is to be able to apply this sensor to different problems we are currently working on, such as video surveillance systems, biometric face identification, analysis of player movement in sports performance, and man-machine interfaces."
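The ranging principle described above reduces to one line of arithmetic: light covers the round trip to the subject and back, so the distance is half the flight time multiplied by the speed of light. A minimal sketch of that calculation (not the camera's actual firmware):

```python
# Time-of-flight ranging: the camera times how long its infrared pulse
# takes to bounce off the scene and return; the round trip covers twice
# the distance to the reflecting surface.
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_seconds):
    """Distance to the reflecting surface for a measured round trip."""
    return C * round_trip_seconds / 2.0

# A surface 1.5 m away returns the pulse in roughly 10 nanoseconds.
rt = 2 * 1.5 / C
assert abs(tof_distance_m(rt) - 1.5) < 1e-12
```

The nanosecond timescales involved are why these sensors need specialized hardware, and part of why they remain expensive.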

Multiple applications

These researchers, who work on the UC3M Campus of Colmenarejo, are now focusing their efforts on analyzing the information obtained with this type of sensor. "I am convinced that their use will revolutionize artificial vision systems in the future, because the data obtained are much richer than those obtained through traditional sensors," asserted the professor, who pointed out that we only have to wait for the market to lower their price, as they now cost close to 6,000 euros per unit.

The current challenge facing these scientists is applying this camera's potential in certain fields. In medicine, for example, this type of sensor could be used to build an automatic rehabilitation system that guides patients through their exercises without their having to leave home. The researchers also collaborate with INEF (Spain's National Sport Institute) on developing criteria for the analysis of childhood obesity using the TOF sensor, which until now has been done with lasers. The applications likewise reach into the area of "affective computing," through the design of HCI (Human-Computer Interface) applications that attempt to gauge a person's mood using algorithms that analyze the information provided by these three-dimensional cameras.


Source

Thursday, January 20, 2011

For Robust Robots, Let Them Be Babies First

Or at least that's not too far off from what University of Vermont roboticist Josh Bongard has discovered, as he reports in the January 10 online edition of the Proceedings of the National Academy of Sciences.

In a first-of-its-kind experiment, Bongard created both simulated and actual robots that, like tadpoles becoming frogs, change their body forms while learning how to walk. And, over generations, his simulated robots also evolved, spending less time in "infant" tadpole-like forms and more time in "adult" four-legged forms.

These evolving populations of robots were able to learn to walk more rapidly than ones with fixed body forms. And, in their final form, the changing robots had developed a more robust gait -- better able to deal with, say, being knocked with a stick -- than the ones that had learned to walk using upright legs from the beginning.

"This paper shows that body change, morphological change, actually helps us design better robots," Bongard says."That's never been attempted before."

Robots are complex

Bongard's research, supported by the National Science Foundation, is part of a wider venture called evolutionary robotics. "We have an engineering goal," he says, "to produce robots as quickly and consistently as possible." In this experimental case: upright four-legged robots that can move themselves to a light source without falling over.

"But we don't know how to program robots very well," Bongard says, because robots are complex systems. In some ways, they are too much like people for people to easily understand them.

"They have lots of moving parts. And their brains, like our brains, have lots of distributed materials: there's neurons and there's sensors and motors and they're all turning on and off in parallel," Bongard says,"and the emergent behavior from the complex system which is a robot, is some useful task like clearing up a construction site or laying pavement for a new road." Or at least that's the goal.

But, so far, engineers have been largely unsuccessful at creating robots that can continually perform simple, yet adaptable, behaviors in unstructured or outdoor environments.

Which is why Bongard, an assistant professor in UVM's College of Engineering and Mathematical Sciences, and other robotics experts have turned to computer programs to design robots and develop their behaviors -- rather than trying to program the robots' behavior directly.

His new work may help.

To the light

Using a sophisticated computer simulation, Bongard unleashed a series of synthetic beasts that move about in a 3-dimensional space. "It looks like a modern video game," he says. Each creature -- or, rather, each generation of creatures -- then runs a software routine, called a genetic algorithm, that experiments with various motions until it develops a slither, shuffle, or walking gait -- based on its body plan -- that can get it to the light source without tipping over.
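A genetic algorithm of the kind named above can be sketched in a few lines: keep a population of candidate "genomes," score each with a fitness function, and build the next generation by mutating the fittest. In this hedged toy version a genome is just a list of numbers evolved toward an assumed target vector, standing in for motion parameters evolved toward "reach the light"; Bongard's real controllers are neural networks scored in a physics simulation.

```python
import random

# Toy genetic algorithm: selection plus mutation, no physics.
random.seed(0)
TARGET = [0.3, -0.7, 0.5, 0.9]            # stand-in goal (assumed)

def fitness(genome):
    # Higher is better: negative squared distance from the target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, scale=0.1):
    # Perturb one randomly chosen "gene" slightly.
    child = genome[:]
    i = random.randrange(len(child))
    child[i] += random.gauss(0, scale)
    return child

population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]               # keep the fittest five
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fitness)
assert fitness(best) > -0.05               # evolved close to the target
```

The interesting twist in Bongard's study is that the body plan being scored also changes over generations, so the search explores controller and morphology together.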

"The robots have 12 moving parts," Bongard says."They look like the simplified skeleton of a mammal: it's got a jointed spine and then you have four sticks -- the legs -- sticking out."

Some of the creatures begin flat to the ground, like tadpoles or, perhaps, snakes with legs; others have splayed legs, a bit like a lizard; and others run the full set of simulations with upright legs, like mammals.

And why do the generations of robots that progress from slithering to wide legs and, finally, to upright legs, ultimately perform better, getting to the desired behavior faster?

"The snake and reptilian robots are, in essence, training wheels," says Bongard,"they allow evolution to find motion patterns quicker, because those kinds of robots can't fall over. So evolution only has to solve the movement problem, but not the balance problem, initially. Then gradually over time it's able to tackle the balance problem after already solving the movement problem."

Sound anything like how a human infant first learns to roll, then crawl, then cruise along the coffee table and, finally, walk?

"Yes," says Bongard,"We're copying nature, we're copying evolution, we're copying neural science when we're building artificial brains into these robots." But the key point is that his robots don't only evolve their artificial brain -- the neural network controller -- but rather do so in continuous interaction with a changing body plan. A tadpole can't kick its legs, because it doesn't have any yet; it's learning some things legless and others with legs.

And this may help to explain the most surprising -- and useful -- finding in Bongard's study: the changing robots were not only faster in getting to the final goal, but afterward were more able to deal with new kinds of challenges that they hadn't before faced, like efforts to tip them over.

Bongard is not exactly sure why this is, but he thinks it's because controllers that evolved in the robots whose bodies changed over generations learned to maintain the desired behavior over a wider range of sensor-motor arrangements than controllers evolved in robots with fixed body plans. It seems that learning to walk while flat, squat, and then upright gave the evolving robots the resilience to stay upright when faced with new disruptions. Perhaps what a tadpole learns before it has legs makes it better able to use its legs once they grow.

"Realizing adaptive behavior in machines has to date focused on dynamic controllers, but static morphologies," Bongard writes in his PNAS paper"This is an inheritance from traditional artificial intelligence in which computer programs were developed that had no body with which to affect, and be affected by, the world."

"One thing that has been left out all this time is the obvious fact that in nature it's not that the animal's body stays fixed and its brain gets better over time," he says,"in natural evolution animals bodies and brains are evolving together all the time." A human infant, even if she knew how, couldn't walk: her bones and joints aren't up to the task until she starts to experience stress on the foot and ankle.

That hasn't been done in robotics for an obvious reason: "it's very hard to change a robot's body," Bongard says. "It's much easier to change the programming inside its head."

Lego proof

Still, Bongard gave it a try. After running 5000 simulations, each taking 30 hours on the parallel processors in UVM's Vermont Advanced Computing Center --"it would have taken 50 or 100 years on a single machine," Bongard says -- he took the task into the real world.

"We built a relatively simple robot, out of a couple of Lego Mindstorm kits, to demonstrate that you actually could do it," he says. This physical robot is four-legged, like in the simulation, but the Lego creature wears a brace on its front and back legs."The brace gradually tilts the robot," as the controller searches for successful movement patterns, Bongard says,"so that the legs go from horizontal to vertical, from reptile to quadruped.

"While the brace is bending the legs, the controller is causing the robot to move around, so it's able to move its legs, and bend its spine," he says,"it's squirming around like a reptile flat on the ground and then it gradually stands up until, at the end of this movement pattern, it's walking like a coyote."

"It's a very simple prototype," he says,"but it works; it's a proof of concept."


Source

Wednesday, January 19, 2011

Better Than the Human Eye: Tiny Camera With Adjustable Zoom Could Aid Endoscopic Imaging, Robotics, Night Vision

The"eyeball camera" has a 3.5x optical zoom, takes sharp images, is inexpensive to make and is only the size of a nickel. (A higher zoom is possible with the technology.)

While the camera won't be appearing at Best Buy any time soon, the tunable camera -- once optimized -- should be useful in many applications, including night-vision surveillance, robotic vision, endoscopic imaging and consumer electronics.

"We were inspired by the human eye, but we wanted to go beyond the human eye," said Yonggang Huang, Joseph Cummings Professor of Civil and Environmental Engineering and Mechanical Engineering at Northwestern's McCormick School of Engineering and Applied Science."Our goal was to develop something simple that can zoom and capture good images, and we've achieved that."

The tiny camera combines the best of both the human eye and an expensive single-lens reflex (SLR) camera with a zoom lens. It has the simple lens of the human eye, allowing the device to be small, and the zoom capability of the SLR camera without the bulk and weight of a complex lens. The key is that both the simple lens and photodetectors are on flexible substrates, and a hydraulic system can change the shape of the substrates appropriately, enabling a variable zoom.

The research is being published the week of Jan. 17 by the Proceedings of the National Academy of Sciences (PNAS).

Huang, co-corresponding author of the PNAS paper, led the theory and design work at Northwestern. His colleague John Rogers, the Lee J. Flory Founder Chair in Engineering and professor of materials science and engineering at the University of Illinois, led the design, experimental and fabrication work. Rogers is a co-corresponding author of the paper.

Earlier eyeball camera designs are incompatible with variable zoom because these cameras have rigid detectors. The detector must change shape as the in-focus image changes shape with magnification. Huang and Rogers and their team use an array of interconnected and flexible silicon photodetectors on a thin, elastic membrane, which can easily change shape. This flexibility opens up the field of possible uses for such a system. (The array builds on their work in stretchable electronics.)

The camera system also has an integrated lens constructed by putting a thin, elastic membrane on a water chamber, with a clear glass window underneath.

Initially both detector and lens are flat. Beneath both the membranes of the detector and the simple lens are chambers filled with water. By extracting water from the detector's chamber, the detector surface becomes a concave hemisphere. (Injecting water back returns the detector to a flat surface.) Injecting water into the chamber of the lens makes the thin membrane become a convex hemisphere.

To achieve an in-focus and magnified image, the researchers actuate the hydraulics to change the curvatures of the lens and detector in a coordinated manner. The shape of the detector must match the varying curvature of the image surface to accommodate continuously adjustable zoom, and this is easily done with this new hemispherical eye camera.
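The lens half of that coordination can be sketched with thin-lens optics: injecting water bulges the elastic membrane into a spherical cap, and the cap's radius of curvature sets the focal length (plano-convex thin lens in air, f = R / (n - 1)). The aperture size and index below are assumptions for illustration, not the paper's parameters.

```python
# How hydraulics can tune focus: more water -> taller membrane bulge ->
# tighter radius of curvature -> shorter focal length.  Illustrative
# geometry only, not the device's actual design values.
N_WATER = 1.33          # refractive index of the water-filled lens
APERTURE_R = 0.004      # lens aperture radius in meters (assumed)

def cap_radius_of_curvature(bulge_height):
    """Radius of the sphere passing through a cap of the given height
    raised over the fixed circular aperture."""
    a, h = APERTURE_R, bulge_height
    return (a * a + h * h) / (2 * h)

def focal_length(bulge_height):
    # Thin plano-convex lens approximation: f = R / (n - 1).
    return cap_radius_of_curvature(bulge_height) / (N_WATER - 1)

# Doubling the bulge height shortens the focal length.
assert focal_length(0.002) < focal_length(0.001)
```

The detector side mirrors this: withdrawing water pulls its membrane into a matching concave hemisphere, which is why the two chambers must be actuated together.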

In addition to Huang and Rogers, other authors of the paper are Chaofeng Lu and Ming Li, from Northwestern; Inhwa Jung, Jianliang Xiao, Viktor Malyarchuk and Jongseung Yoon, from the University of Illinois; and Zhuangjian Liu, from the Institute of High Performance Computing, Singapore.


Source

Friday, January 14, 2011

Self-Assembling Structures Open Door to New Class of Materials

The helical"supermolecules" are made of tiny colloid balls instead of atoms or molecules. Similar methods could be used to make new materials with the functionality of complex colloidal molecules. The team publishes its findings in the Jan. 14 issue of the journalScience.

"We can now make a whole new class of smart materials, which opens the door to new functionality that we couldn't imagine before," said Steve Granick, Founder Professor of Engineering at the University of Illinois and a professor of materials science and engineering, chemistry, and physics.

Granick's team developed tiny latex spheres, dubbed "Janus spheres," which attract each other in water on one side, but repel each other on the other side. The dual nature is what gives the spheres their ability to form unusual structures, in a similar way to atoms and molecules.

In pure water, the particles disperse completely because their charged sides repel one another. However, when salt is added to the solution, the salt ions soften the repulsion so the spheres can approach sufficiently closely for their hydrophobic ends to attract. The attraction between those ends draws the spheres together into clusters.

At low salt concentrations, small clusters of only a few particles form. At higher levels, larger clusters form, eventually self-assembling into chains with an intricate helical structure.
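The softening effect of salt is electrostatic screening: dissolved ions surround each charged hemisphere and shorten the range of its repulsion, quantified by the Debye length. The sketch below uses the standard room-temperature approximation for a 1:1 salt in water; the concentrations are illustrative, not the paper's.

```python
import math

# Debye screening length for a 1:1 electrolyte in water at ~25 C.
# Common approximation: lambda_D (nm) = 0.304 / sqrt(molar concentration).
def debye_length_nm(molar_concentration):
    return 0.304 / math.sqrt(molar_concentration)

# Higher salt -> shorter screening length -> weaker long-range repulsion,
# so the hydrophobic attraction between facing ends can take over and
# clusters grow, as described above.
assert debye_length_nm(0.1) < debye_length_nm(0.001)
```

This is why cluster size tracks salt concentration: the more the repulsion is screened, the closer the spheres can approach before the attractive ends engage.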

"Just like atoms growing into molecules, these particles can grow into supracolloids," Granick said."Such pathways would be very conventional if we were talking about atoms and molecules reacting with each other chemically, but people haven't realized that particles can behave in this way also."

The team designed spheres with just the right amount of attraction between their hydrophobic halves so that they would stick to one another but still be dynamic enough to allow for motion, rearrangement, and cluster growth.

"The amount of stickiness really does matter a lot. You can end up with something that's disordered, just small clusters, or if the spheres are too sticky, you end up with a globular mess instead of these beautiful structures," said graduate student Jonathan Whitmer, a co-author of the paper.

One of the advantages of the team's supermolecules is that they are large enough to observe in real time using a microscope. The researchers were able to watch the Janus spheres come together and the clusters grow -- whether one sphere at a time or by merging with other small clusters -- and rearrange into different structural configurations the team calls isomers.

"We design these smart materials to fall into useful shapes that nature wouldn't choose," Granick said.

Surprisingly, theoretical calculations and computer simulations by Erik Luijten, Northwestern University professor of materials science and engineering and of engineering sciences and applied mathematics, and Whitmer, a student in his group, showed that the most common helical structures are not the most energetically favorable. Rather, the spheres come together in a way that is the most kinetically favorable -- that is, the first good fit that they encounter.

Next, the researchers hope to continue to explore the colloid properties with a view toward engineering more unnatural structures. Janus particles of differing sizes or shapes could open the door to building other supermolecules and to greater control over their formation.

"These particular particles have preferred structures, but now that we realize the general mechanism, we can apply it to other systems -- smaller particles, different interactions -- and try to engineer clusters that switch in shape," Granick said.

The team also included University of Illinois graduate students Qian Chen and Shan Jiang and research scientist Sung Chul Bae. The U.S. Department of Energy and the National Science Foundation supported this work.


Source

Tuesday, January 11, 2011

Graphene Grains Make Atom-Thick Patchwork 'Quilts'

The multidisciplinary Cornell collaboration, publishing online Jan. 5 in the journal Nature, focuses on graphene -- a one-atom-thick sheet of carbon atoms bonded in a crystal lattice like a honeycomb or chicken wire -- because of its electrical properties and potential to improve anything from solar cells to cell phone screens. But it doesn't grow in perfect sheets; rather, it develops in pieces that resemble patchwork quilts, where the honeycomb lattice meets up imperfectly and creates five- or seven-member carbon rings rather than the perfect six. The seams where these "patches" meet are called grain boundaries, and scientists had wondered whether these boundaries would allow the special properties of a perfect graphene crystal to transfer to the much larger quilt-like structures.

To study the material, the researchers grew graphene membranes on a copper substrate (a method devised by another group) but then conceived a novel way to peel them off as free-standing, atom-thick films. Then, with diffraction imaging electron microscopy, they imaged the graphene by seeing how electrons bounced off at certain angles, and using a color to represent that angle. By overlaying different colors according to how the electrons bounced, they created an easy, efficient method of imaging the graphene grain boundaries according to their orientation. And as a bonus, their pictures took an artistic turn, reminding the scientists of patchwork quilts.
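The color-coding step described above amounts to a lookup from diffraction angle to hue. A hedged sketch of one such mapping, exploiting graphene's sixfold lattice symmetry so that equivalent orientations share a color; the real analysis works on electron-diffraction data, not this toy function.

```python
import colorsys

# Map a grain's lattice orientation onto a display color.  Graphene's
# honeycomb lattice looks identical every 60 degrees, so orientations
# are folded into a 0-60 degree range before picking a hue.
def orientation_to_rgb(angle_degrees):
    """Return an (r, g, b) tuple in [0, 1] encoding the orientation."""
    hue = (angle_degrees % 60) / 60.0
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

# Two grains 60 degrees apart are crystallographically identical,
# so they receive the same color.
assert orientation_to_rgb(10) == orientation_to_rgb(70)
```

Overlaying such per-orientation colors is what turns a field of indistinguishable atoms into a legible "quilt" of grains.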

"You don't want to look at the whole quilt by counting each thread," said David Muller, professor of applied and engineering physics and co-director of the Kavli Institute at Cornell for Nanoscale Science, who conducted the work with Paul McEuen, professor of physics and director of the Kavli Institute; and Kavli member Jiwoong Park, assistant professor of chemistry and chemical biology."You want to stand back and see what it looks like on the bed. And so we developed a method that filters out the crystal information in a way that you don't have to count every atom."

This new method could apply to other two-dimensional materials and sheds new light on the previously mysterious way that graphene was stitched together at grain boundaries.

Further analysis revealed that growing larger grains (bigger patches) didn't improve the electrical conductivity of the graphene, as was previously thought by materials scientists. Rather, it is impurities that sneak into the sheets that make the electrical properties fluctuate. This insight will lead scientists closer to the best ways to grow and use graphene.

The work was supported by the National Science Foundation through the Cornell Center for Materials Research and the Nanoscale Science and Engineering Initiative, as well as the Air Force Office of Scientific Research through the Multidisciplinary Research Program of the University Research Initiative and a Presidential Early Career Award for Scientists and Engineers. The paper's other contributors were: Pinshane Huang (applied and engineering physics), Carlos Ruiz-Vargas (applied and engineering physics), Arend van der Zande (physics), William Whitney (physics), Mark Levendorf (chemistry), Joshua Kevek (Oregon State), Shivank Garg (chemistry), Jonathan Alden (applied and engineering physics), Caleb Hustedt (Brigham Young University) and Ye Zhu (applied and engineering physics).


Source

Monday, January 10, 2011

No Left Turn: 'Superstreet' Traffic Design Improves Travel Time, Safety

Superstreets are surface roads, not freeways. A superstreet is a thoroughfare where left-hand turns from side streets are re-routed, as is traffic from side streets that needs to cross the thoroughfare. In both instances, drivers are first required to make a right turn and then make a U-turn around a broad median. While this may seem time-consuming, the study shows that it actually results in significant time savings, since drivers are not stuck waiting to make left-hand turns or for traffic from cross-streets to go across the thoroughfare.

"The study shows a 20 percent overall reduction in travel time compared to similar intersections that use conventional traffic designs," says Dr. Joe Hummer, professor of civil, construction and environmental engineering at NC State and one of the researchers who conducted the study."We also found that superstreet intersections experience an average of 46 percent fewer reported automobile collisions -- and 63 percent fewer collisions that result in personal injury."

The researchers assessed travel time at superstreet intersections as the amount of time it takes a vehicle to pass through an intersection from the moment it reaches the intersection -- whether traveling left, right or straight ahead. The travel-time data were collected from three superstreets located in eastern and central North Carolina, all of which have traffic signals. The superstreet collision data were collected from 13 superstreets located across North Carolina, none of which have traffic signals.

The superstreet concept has been around for over 20 years, but little research had been done to assess its effectiveness under real-world conditions. The NC State study is the largest analysis ever performed of the impact of superstreets in real traffic conditions.

A paper on the travel time research is being presented Jan. 24 at the Transportation Research Board Annual Meeting in Washington, D.C. The paper is co-authored by Hummer, former NC State graduate students Rebecca Haley and Sarah Ott, and three researchers from NC State's Institute for Transportation Research and Education: Robert Foyle, associate director; Christopher Cunningham, senior research associate; and Bastian Schroeder, research associate.

The collision research was part of an overarching report of the study submitted to the North Carolina Department of Transportation (NCDOT) last month, and is the subject of a forthcoming paper. The study was funded by NCDOT.

NC State's Department of Civil, Construction and Environmental Engineering is part of the university's College of Engineering.


Source

Saturday, January 8, 2011

New Solar Cell Self-Repairs Like Natural Plant Systems

"We've created artificial photosystems using optical nanomaterials to harvest solar energy that is converted to electrical power,"said Jong Hyun Choi, an assistant professor of mechanical engineering at Purdue University.

The design exploits the unusual electrical properties of structures called single-wall carbon nanotubes, using them as "molecular wires in light-harvesting cells," said Choi, whose research group is based at the Birck Nanotechnology and Bindley Bioscience centers at Purdue's Discovery Park.

"I think our approach offers promise for industrialization, but we're still in the basic research stage," he said.

Photoelectrochemical cells convert sunlight into electricity and use an electrolyte -- a liquid that conducts electricity -- to transport electrons and create the current. The cells contain light-absorbing dyes called chromophores, chlorophyll-like molecules that degrade due to exposure to sunlight.

"The critical disadvantage of conventional photoelectrochemical cells is this degradation," Choi said.

The new technology overcomes this problem just as nature does: by continuously replacing the photo-damaged dyes with new ones.

"This sort of self-regeneration is done in plants every hour," Choi said.

The new concept could make possible an innovative type of photoelectrochemical cell that continues operating at full capacity indefinitely, as long as new chromophores are added.

Findings were detailed in a November presentation during the International Mechanical Engineering Congress and Exhibition in Vancouver. The concept also was unveiled in an online article (http://spie.org/x41475.xml?ArticleID=x41475) featured on the Web site for SPIE, an international society for optics and photonics.

The talk and article were written by Choi, doctoral students Benjamin A. Baker and Tae-Gon Cha, and undergraduate students M. Dane Sauffer and Yujun Wu.

The carbon nanotubes work as a platform to anchor strands of DNA. The DNA is engineered to have specific sequences of building blocks called nucleotides, enabling them to recognize and attach to the chromophores.

"The DNA recognizes the dye molecules, and then the system spontaneously self-assembles," Choi said

When the chromophores are ready to be replaced, they might be removed by using chemical processes or by adding new DNA strands with different nucleotide sequences, kicking off the damaged dye molecules. New chromophores would then be added.

Two elements are critical for the technology to mimic nature's self-repair mechanism: molecular recognition and thermodynamic metastability, or the ability of the system to continuously be dissolved and reassembled.

The research is an extension of work that Choi collaborated on with researchers at the Massachusetts Institute of Technology and the University of Illinois. The earlier work used biological chromophores taken from bacteria, and findings were detailed in a research paper published in November in the journal Nature Chemistry (http://www.nature.com/nchem/journal/v2/n11/abs/nchem.822.html).

However, using natural chromophores is difficult, and they must be harvested and isolated from bacteria, a process that would be expensive to reproduce on an industrial scale, Choi said.

"So instead of using biological chromophores, we want to use synthetic ones made of dyes called porphyrins," he said.


Source

Friday, January 7, 2011

Newly Developed Cloak Hides Underwater Objects from Sonar

"We are not talking about science fiction. We are talking about controlling sound waves by bending and twisting them in a designer space," said Fang, who also is affiliated with the Beckman Institute for Advanced Science and Technology. "This is certainly not some trick Harry Potter is playing with."

While materials that can wrap sound around an object rather than reflecting or absorbing it have been theoretically possible for a few years, realization of the concept has been a challenge. In a paper accepted for publication in the journal Physical Review Letters, Fang's team describes its working prototype, capable of hiding an object from a broad range of sound waves.

The cloak is made of metamaterial, a class of artificial materials that have enhanced properties as a result of their carefully engineered structure. Fang's team designed a two-dimensional cylindrical cloak made of 16 concentric rings of acoustic circuits structured to guide sound waves. Each ring has a different index of refraction, meaning that sound waves vary their speed from the outer rings to the inner ones.

"Basically what you are looking at is an array of cavities that are connected by channels. The sound is going to propagate inside those channels, and the cavities are designed to slow the waves down," Fang said. "As you go further inside the rings, sound waves gain faster and faster speed."

Since speeding up requires energy, the sound waves instead propagate around the cloak's outer rings, guided by the channels in the circuits. The specially structured acoustic circuits actually bend the sound waves to wrap them around the outer layers of the cloak.
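The grading Fang describes can be pictured with a toy calculation. The ring count matches the article's 16 rings, but the index values and the sound speed in water are illustrative assumptions, not the published design parameters:

```python
# Toy sketch of a graded-index acoustic cloak: the effective index of
# refraction decreases from the outer ring to the innermost ring, so the
# sound speed rises toward the center -- and the waves, unable to supply
# the energy to speed up, detour around the outer layers instead.
# All numbers here are illustrative assumptions.

C_WATER = 1480.0  # approximate speed of sound in water, m/s

def ring_index(ring: int, n_rings: int = 16,
               n_outer: float = 1.0, n_inner: float = 0.4) -> float:
    """Linearly graded index from the outer ring (0) to the innermost (15)."""
    frac = ring / (n_rings - 1)  # 0.0 at the outer ring, 1.0 at the center
    return n_outer + (n_inner - n_outer) * frac

for ring in range(16):
    n = ring_index(ring)
    speed = C_WATER / n  # effective phase speed in that ring
    print(f"ring {ring:2d}: n = {n:.3f}, sound speed = {speed:7.1f} m/s")
```

In the real device the grading is produced by the geometry of the cavities and channels rather than by any change of material, which is what makes the metamaterial approach work.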

The researchers tested their cloak's ability to hide a steel cylinder. They submerged the cylinder in a tank with an ultrasound source on one side and a sensor array on the other, then placed the cylinder inside the cloak and watched it disappear from their sonar.

Curious to see if the hidden object's structure played a role in the cloaking phenomenon, the researchers conducted trials with other objects of various shapes and densities. "The structure of what you're trying to hide doesn't matter," Fang said. "The effect is similar. After we placed the cloaked structure around the object we wanted to hide, the scattering or shadow effect was greatly reduced."

An advantage of the acoustic cloak is its ability to cover a broad range of sound wavelengths. The cloak offers acoustic invisibility to ultrasound waves from 40 to 80 kHz, although with modification it could theoretically be tuned to cover tens of megahertz.

"This is not just a single wavelength effect. You don't have an invisible cloak that's showing up just by switching the frequencies slightly," Fang said. "The geometry is not theoretically scaled with wavelengths. The nice thing about the circuit element approach is that you can scale the channels down while maintaining the same wave propagation technology."

Next, the researchers plan to explore how the cloaking technology could influence applications from military stealth to soundproofing to health care. For example, ultrasound and other acoustic imaging techniques are common in medical practice, but many things in the body can cause interference and mar the image. A metamaterial bandage or shield could effectively hide a troublesome area so the scanner could focus on the region of interest.

The cloaking technology also may affect nonlinear acoustic phenomena. One problem plaguing fast-moving underwater objects is cavitation, or the formation and implosion of bubbles. Fang and his group believe that they could harness their cloak's abilities to balance energy in cavitation-causing areas, such as the vortex around a propeller.


Source

Thursday, January 6, 2011

New Glaucoma Test Allows Earlier, More Accurate Detection

The self-test instrument has been designed in Eniko Enikov's lab at the UA College of Engineering. Gone are the eye drops and need for a sterilized sensor. In their place is an easy-to-use probe that gently rubs the eyelid and can be used at home.

"You simply close your eye and rub the eyelid like you might casually rub your eye," said Enikov, a professor of aerospace and mechanical engineering. "The instrument detects the stiffness and, therefore, infers the intraocular pressure." Enikov also heads the Advanced Micro and Nanosystems Laboratory.

While the probe is simple to use, the technology behind it is complex, involving a system of micro-force sensors, specially designed microchips, and math-based procedures programmed into its memory.

Enikov began working on the probe four years ago in collaboration with Dr. Gholam Peyman, a Phoenix ophthalmologist. "We went through several years of refinement and modifications to arrive at the current design," Enikov noted.

The National Science Foundation has funded the work, and Enikov and Peyman now are seeking investors to help fund final development and commercialization of the product.

In addition to screening for glaucoma, an eye disease that can lead to blindness if left untreated, the device corrects some problems with the current procedure, and can be used to measure drainage of intraocular fluid.

"Eye pressure varies over a 24-hour cycle," Enikov said. "So it could be low at the doctor's office and three hours later it might be high. With only a single test, the doctor might miss the problem. Having the ability to take more frequent tests can lead to earlier detection in some cases."

Once the diagnosis is made, several treatments are available. The question then is: How effective are they? Patients could use the probe at home to trace how much the pressure decreases after using eye drop medications, for instance.
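A rough sketch shows why multiple daily readings matter. The 21 mmHg cutoff is a commonly cited upper bound for normal intraocular pressure; the readings themselves are made-up example data, not measurements from the UA device:

```python
# Why one office visit can miss glaucoma: intraocular pressure varies
# over the day, so a single reading may look normal while later spikes
# go undetected. All readings below are fabricated example data.
ELEVATED_MMHG = 21  # commonly cited upper bound of normal pressure

readings = [  # (hour of day, pressure in mmHg)
    (8, 17), (11, 19), (14, 24), (17, 23), (20, 18),
]

spikes = [(hour, p) for hour, p in readings if p > ELEVATED_MMHG]
single_visit = readings[0][1]  # what one morning appointment would see

print(f"Single morning reading: {single_visit} mmHg (looks normal)")
print(f"Elevated readings caught by home monitoring: {spikes}")
```

The same logging approach would let a patient trace how far pressure falls after eye-drop medication, as the article suggests.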

"One of the reasons pressure builds up in the eye is because fluid doesn't drain properly," Enikov noted. "Currently, there are no methods available to test drainage."

Current tests require applying pressure directly to the cornea, but only very light pressure is safe to use, and it doesn't cause the fluid to drain.

"Our technique allows us to apply slightly greater pressure, but it's still not uncomfortable," he said. "It's equivalent to rubbing your eye for a brief period to find out if the pressure changes. If it does, we know by how much and if there is a proper outflow of intraocular fluid."

Sometimes, a surgical shunt is used to help fluid drain from the eye. "The problem with glaucoma shunts is they can plug up over time," Enikov noted. "Or if they're not properly installed, they may drain too quickly. So you would want to know how well the shunt is working and if it is properly installed. Our device could help answer those questions."

In another scenario, certain patients cannot be tested for glaucoma using currently available procedures. "If a patient had cataract surgery or some other surgery through the cornea, the cornea sometimes thickens," Enikov said. "The cornea's structure is different, but our test remains accurate because it's not applied to the cornea."

Instead, it presses the entire eyeball, much as you might press a balloon to determine its stiffness.

"The innovation with our device is that it's noninvasive, simpler to use and applies to a variety of situations that are either difficult to address or impossible to test using the current procedures," Enikov said."That's why we're so excited about this probe. It has great potential to improve medical care, and significant commercial possibilities, as well."


Source

Wednesday, January 5, 2011

Recycled Haitian Concrete Can Be Safe, Strong and Less Expensive, Researchers Say

In a paper published in the Bulletin of the American Ceramic Society, researchers Reginald R. DesRoches, Kimberly E. Kurtis and Joshua J. Gresham say they have made new concrete from recycled rubble and other indigenous raw materials, using simple techniques, that meets or exceeds the minimum strength standards used in the United States.

Most of the damaged areas of Haiti are still in ruins. The trio says their work points to a successful and sustainable strategy for managing an unprecedented amount of waste, estimated to be 20 million cubic yards.

"The commodious piles of concrete rubble and construction debris form huge impediments to reconstruction and are often contaminated," says DesRoches, professor and Associate Chair of Civil and Environmental Engineering at Georgia Tech. "There are political and economic dilemmas as well, but we have found we can turn one of the dilemmas -- the rubble -- into a solution via some fairly simple methods of recycling the rubble and debris into new concrete."

DesRoches, who was born in Haiti, traveled several times in 2010 to Port-au-Prince to gather samples of typical concrete rubble and additionally collect samples of two readily available sand types used as fine aggregates in some concrete preparation.

He and Gresham also studied the methods, tools and raw materials used by local laborers to make concrete mixes. DesRoches recalls they encountered no mixing trucks. "Instead, all of the construction crews were manually batching smaller amounts of concrete. Unfortunately, they were mixing volumes of materials 'by eye,' an unreliable practice that probably caused much of the poor construction and building failure during the earthquake," he says.

Before leaving, DesRoches and Gresham manually cast an initial set of standard 3-inch by 6-inch concrete test blocks using mixes from several different construction sites.

They returned to Georgia Tech with their cast blocks, sand samples and notes, where they were joined by Kurtis, also a professor and Chair of the American Concrete Institute's Materials Science of Concrete Committee.

They quickly discovered that the concrete test samples cast in Haiti were of poor quality. "The Haitian-made concrete had an average compressive strength of 1,300 pounds per square inch," says Kurtis. "In comparison, concrete produced in the U.S. would be expected to have a minimum strength of 3,000 pounds per square inch."

They then manually crushed the samples with a hammer to provide coarse aggregate for a second round of tests. In this round, they made concrete samples from mixes that combined the coarse aggregate with one of the two types of sands they had collected. However, instead of "eye-balling" the amounts of materials, they carefully measured volumes using methods prescribed by the American Concrete Institute. The materials were still mixed by hand to replicate the conditions in Haiti.

Subsequent tests of samples made from each type of sand provided good news: the compressive strength of both types of new test blocks, still composed of Haitian materials, increased dramatically, averaging over 3,000 pounds per square inch.
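The before-and-after comparison can be summarized in a few lines. The 1,300 psi and 3,000 psi figures come from the article; the two per-sand values are illustrative stand-ins for results the article only describes as "over 3,000 pounds per square inch":

```python
# Compressive-strength comparison against the U.S. minimum cited in the
# article. The field-mix and minimum figures are from the article; the
# two per-sand results are illustrative assumptions.
US_MINIMUM_PSI = 3000

samples = {
    "Haitian field mix (batched by eye)": 1300,
    "Recycled aggregate + sand A (ACI volumes)": 3100,  # illustrative
    "Recycled aggregate + sand B (ACI volumes)": 3050,  # illustrative
}

for name, strength in samples.items():
    verdict = "meets" if strength >= US_MINIMUM_PSI else "falls short of"
    print(f"{name}: {strength} psi -> {verdict} the {US_MINIMUM_PSI} psi minimum")
```

The point the researchers stress is that the raw materials were identical in both rounds; only the measured proportioning changed.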

"Based upon these results, we now believe that Haitian concrete debris, even of inferior quality, can be effectively used as recycled coarse aggregate in new construction," says Kurtis. "It can work effectively, even if mixed by hand. The key is having a consistent mix of materials that can be easily measured. We are confident our results can be scaled up to a mix procedure in which quantities can be measured using common, inexpensive construction equipment."

DesRoches is pleased because recycling eliminates two hurdles to reconstruction. "First, removing the remaining debris is nearly impossible because there are few, if any, safe landfill sites near Port-au-Prince, and the nation lacks the trucks and infrastructure to haul it away. It is better to use it than to move it. Second," DesRoches says, "finding fresh aggregate is more difficult than getting rid of the debris. It is costly to find, mine and truck in."

The trio notes recycled concrete aggregate has been used worldwide for roadbeds, drainage, etc., and that many European Union countries commonly use 20 percent recycled aggregates in structural concrete. Published research by others has also demonstrated that using locally sourced recycled aggregate in concrete production can be more sustainable.

Because of the urgency of quick and safe reconstruction, the researchers urge that recycling the debris quickly move from proof-of-concept to large-scale testing. "More work must be done to characterize the recycled materials, test additional performance parameters and gauge the safest ways to crush the rubble. Seismic behavior and building codes must be studied. But, these tests can and should be done dynamically, during reconstruction, because the benefits can be so immediate and significant," says DesRoches.

DesRoches, Kurtis and Gresham say they plan on sharing their research with Haitian government officials and nongovernmental organizations working on reconstruction projects. DesRoches is hopeful that a debris strategy and infrastructure will eventually emerge from the government once the disputed presidential elections in Haiti are resolved. "Some think that many rebuilding projects have been on hold for the past few months because of distraction from the elections. The next round of elections is this month, so it soon may be possible to accelerate reconstruction."


Source

Monday, January 3, 2011

Researchers Helping Electric-Wheelchair Users Move More Easily

To address this problem, researchers at the Florida A&M University-Florida State University College of Engineering are working on technology that will enable electric-powered wheelchairs to detect hazardous terrain and automatically adjust their control settings to maneuver more safely.

Emmanuel Collins is the John H. Seely Professor of Mechanical Engineering at the college and director of Florida State's Center for Intelligent Systems, Control and Robotics (http://www.eng.fsu.edu/ciscor/) (CISCOR). He said that a device known as a laser line striper, originally developed for military use, has been adapted to classify terrain conditions so the wheelchair control system can self-adjust.

"I'm inspired by the idea of applying technology originally meant for the battlefield to improve the quality of everyday life for injured soldiers and others," Collins said.

Engineers had previously developed automatic terrain-sensing controls for military robotic vehicles, and several four-wheel-drive automobiles now on the market include such controls for improved safety. So, Collins wondered, why not integrate this type of system into electric-powered wheelchairs to provide more mobility and independence for their operators?

Collins' team, working with colleagues from the University of Pittsburgh, began experiments this year to add instrumentation based on current driving control systems. The new technology is designed to enable an electric-powered wheelchair to detect hazardous terrain and implement safe driving strategies while avoiding wheel slip, sinkage or vehicle tipping.

Collins said that, to his knowledge, no one else is working on this type of application.

The U.S. Army Medical Research and Materiel Command's Telemedicine and Advanced Technology Research Center saw the promise in this collaboration and has provided funding and guidance for the researchers to pursue their ideas together. The partnership joins CISCOR, which has worked extensively with control and guidance of autonomous vehicles, with the University of Pittsburgh's Human Engineering Research Laboratories. The latter group has developed several assistive technologies already in use by wheelchair manufacturers and rehabilitation hospitals nationwide.

The partnership began when Collins heard a presentation by Professor Rory Cooper, director of the Human Engineering Research Laboratories and chairman of Pitt's rehabilitation science and technology department. Cooper has used a wheelchair since receiving a spinal cord injury in 1980 during his service in the Army. He won a bronze medal in the 1988 Paralympic Games in Seoul and has been recognized nationally for his research and leadership efforts to aid veterans and others with spinal cord injuries.

In his presentation, Cooper mentioned the need for terrain-dependent, electric-powered wheelchair assistance. Collins approached him about working together, and the two of them began developing ideas with other collaborators at the National Science Foundation-sponsored Quality of Life Technology Center, an engineering research center affiliated with the Human Engineering Research Laboratories that Cooper co-directs.

Cooper also is the founding director and a senior research scientist of the VA Rehabilitation Research and Development Center of Excellence in Pittsburgh. His laboratory has been collaborating with the Veterans Administration for 15 years, and with the military since 2004, to develop robotic and other advanced assistive technologies. Cooper noted that the lab has a very good relationship with the orthopedic and rehabilitation departments of Walter Reed Army Medical Center and the National Naval Medical Center.

Army Maj. Kevin Fitzpatrick, director of Walter Reed's wheelchair clinic, said, "This technology will provide electric-powered wheelchair users with an increased degree of independence that may significantly increase their ability to participate in recreational and functional activities."

The project is part of the Rehabilitation Engineering and Assistive Technology sub-portfolio, recently managed by Craig Carignan, within the Telemedicine and Advanced Technology Research Center's Advanced Prosthetics and Human Performance research portfolio.

"The Human Engineering Research Laboratories and the Pittsburgh VA center are considered among the top wheelchair testers in the United States, and are playing critical roles in developing international wheelchair standards," Carignan said. "The researchers on this project are excellent investigators, and we are looking forward to the solution they develop."

Collins estimated that if the team develops a strong commercial partner, the technology could be assisting electric wheelchair users in approximately five years.


Source