Showing posts with label Robotics. Show all posts

Friday, 11 August 2017

The Computer That Knows What Humans Will Do Next


Computer Code – Comprehend Body Poses/Movement

New computer code gives robots the potential for a much better understanding of the humans around them, paving the way for more perceptive machines, from self-driving cars to investigation robots. The new capability lets a computer comprehend the body poses and movements of multiple people, even tracking parts as small as individual fingers.

Though humans communicate naturally using body language, computers are largely blind to these interactions. By tracking 2D human form and motion, the new code should greatly improve robots' abilities in social situations.

The code was developed by researchers at Carnegie Mellon University’s Robotics Institute using the Panoptic Studio, a two-story dome equipped with 500 video cameras that produce hundreds of views of a single action in a given shot. A recording of the system shows how it views human movement using a 2D model of the human form.

Panoptic Studio – Extraordinary View of Hand Movement

This enables it to track motion from video recordings in real time, capturing everything from hand gestures to the movement of the mouth. It can also track several people at once.

Yaser Sheikh, associate professor of robotics, stated that we communicate almost as much with the movement of our bodies as we do with our voices, yet computers are more or less blind to it. Tracking multiple people poses particular challenges for computers, and hand detection is an even greater obstacle.

To overcome this, the researchers used a bottom-up approach, first localising individual body parts in a scene and then associating those parts with particular individuals. Although image datasets of the human hand are far more limited than those of the face or body, the Panoptic Studio provided an extraordinary view of hand movement.
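The bottom-up idea described above can be sketched in miniature. A real system such as the CMU work scores candidate part pairs with learned affinity measures; in this toy version, plain pixel distance stands in for that score, and the detections and threshold are invented for illustration.

```python
# Toy sketch of bottom-up grouping: detect all candidate parts first,
# then associate each part with an individual. Plain Euclidean distance
# stands in for a learned pairwise affinity score.
from math import dist

def group_parts(heads, hands, max_reach=80.0):
    """Greedily assign each detected hand to the nearest head
    within max_reach pixels; unmatched hands stay ungrouped."""
    people = [{"head": h, "hands": []} for h in heads]
    for hand in hands:
        best, best_d = None, max_reach
        for person in people:
            d = dist(person["head"], hand)
            if d < best_d:
                best, best_d = person, d
        if best is not None:
            best["hands"].append(hand)
    return people

people = group_parts(heads=[(100, 100), (300, 120)],
                     hands=[(130, 160), (280, 180), (700, 50)])
```

Here the third hand detection is too far from either head, so it is left ungrouped, mirroring how a bottom-up pipeline discards candidate parts it cannot confidently assign to a person.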

Hanbyul Joo, a PhD student in robotics, stated that a single shot provides 500 views of a person's hand and automatically annotates the hand's position.

2D to 3D Models

He added that hands are too small to be annotated by most cameras; for the research they used only 32 high-definition cameras, yet were still able to build a massive data set. The method could ultimately be used in various applications, for instance helping self-driving cars predict pedestrian movements.

It could also be used in behavioural diagnosis or in sports analytics. The researchers presented their work at CVPR 2017, the Computer Vision and Pattern Recognition Conference, held July 21-26 in Honolulu. They have already released their code to several other groups so that others can build on its capabilities.

Finally, the team hopes to move from 2D models to 3D models, using the Panoptic Studio to refine the body, face and hand detectors. Sheikh mentioned that the Panoptic Studio boosted their research, and that they can now break through various technical barriers, largely as a result of an NSF grant 10 years ago.

Friday, 14 July 2017

Hybrid Driving-Flying Robots Could Go Beyond the Flying Car

Flying Robots – Significant Application in the Future


According to a recent study, groups of flying robots, whether swooping in to deliver packages or locating victims in disaster zones, could have a range of significant applications in the future. The robots can switch from driving to flying without colliding with each other, and could provide capabilities beyond the traditional sci-fi notion of the flying car, the study found.

The capabilities of flying and walking are complementary: many birds, insects and other animals do both. Robots with the same flexibility could fly over obstacles on the ground or drive under overhead obstacles.

However, as study lead author Brandon Araki, a roboticist at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory, and his colleagues noted in their new study, robots that are good at one mode of transportation are generally bad at the others.

Robot – ‘Flying Monkey’


The researchers had previously built a robot known as the ‘flying monkey’, which could run, fly and grasp items. However, they had to program the paths the flying monkey would take, which meant it could not find safe routes on its own.

The researchers have now created flying cars that can both fly and drive through a simulated city-like setting with parking spots, no-fly zones and landing pads. The drones move independently without bumping into each other, and, as Araki told Live Science, the vehicles can find their own safe paths.

The scientists took eight four-rotor ‘quadcopter’ drones and placed two small motors with wheels on the bottom of each, making them capable of driving. In tests, the robots could fly for about 295 feet (90 meters) or drive for 826 feet (252 meters) before draining their batteries. The roboticists also developed systems that ensured the robots did not collide with one another.

Driving – Efficient than Flying


In tests conducted in a miniature town built from everyday materials, such as pieces of fabric for roads and cardboard boxes for buildings, all the drones successfully navigated from a starting point to an end point along collision-free paths.

According to the researchers, adding the driving apparatus to each drone added extra weight and slightly reduced battery life, cutting the maximum distance the drone could fly by around 14 percent. They also found that driving was considerably more efficient than flying, more than compensating for the small loss in flying efficiency caused by the extra weight.
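The trade-off described here can be sketched as a simple per-segment energy model. The cost figures below are illustrative assumptions, not numbers from the study: driving is taken to be cheaper per metre, and flying is reserved for segments where the ground route is blocked.

```python
# Hedged sketch of the fly-vs-drive trade-off: drive whenever the
# ground is clear, fly only to clear obstacles. Energy rates are
# made-up relative values, not measurements from the paper.
DRIVE_J_PER_M = 1.0   # assumed relative energy cost of driving
FLY_J_PER_M = 2.8     # assumed relative energy cost of flying

def plan_energy(segments):
    """segments: list of (length_m, ground_blocked) tuples.
    Fly over blocked segments, drive the rest; return total energy."""
    total = 0.0
    for length, blocked in segments:
        rate = FLY_J_PER_M if blocked else DRIVE_J_PER_M
        total += length * rate
    return total

# A route that drives 200 m and flies over a 30 m obstacle:
energy = plan_energy([(120, False), (30, True), (80, False)])
```

Under these assumed rates, flying the whole 230 m route would cost more than twice as much as this mixed plan, which is the intuition behind the hybrid design.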

Araki commented that ‘the most important implication of the research is that vehicles which combine flying and driving have the potential to be both much more efficient and more useful than vehicles that can only drive or only fly’. The scientists cautioned, however, that fleets of automated flying taxis are probably not coming anytime soon.

He stated that their present drones are certainly not robust enough to actually carry people. Still, the experiments with quadcopters help explore several ideas linked to flying cars.

Tuesday, 4 July 2017

Harvard Scientists Use Simple Materials to Create Semi-Soft Robots

Biologically Inspired Soft Robots


Towards the start of the decade, George Whitesides helped rewrite the rules of what a machine could be with the development of biologically inspired soft robots, and now he is ready to rewrite them again with the help of some plastic drinking straws. Inspired by arthropods, insects and spiders, Whitesides and Alex Nemiroski, a former postdoctoral fellow in Whitesides’ Harvard lab, have developed a type of semi-soft robot capable of standing and walking.

The team has also developed a robotic water strider capable of pushing itself along the surface of a liquid. The robots are described in a recent paper published in the journal Soft Robotics. Unlike earlier generations of soft robots, which could stand and walk awkwardly by inflating air chambers in their bodies, the new robots are designed to be much quicker.

The researchers hope the robots could eventually be used in search operations following natural disasters or in conflict zones, though practical applications are still far off. Whitesides, the Woodford L. and Ann A. Flowers University Professor at Harvard, stated that if one looks around the world, there are plenty of things, like spiders and insects, that are very agile.

Flexible Organisms on Planet


They can move rapidly, climb on various items and do things that large, hard robots cannot because of their weight and form factor. They are among the most agile organisms on the planet, and the question was how to build something like that.

The answer, Nemiroski said, came in the form of the average drinking straw. It all began with an observation George had made: polypropylene tubes have an excellent strength-to-weight ratio. That led to the idea of building something with more structural support than purely soft robots have.

That became the building block, and they took inspiration from arthropods to figure out how to make a joint and how to use the tubes as an exoskeleton. After that, the question was how far one's imagination could go: once you have a Lego brick, what kind of castle can you build with it? What they built, he added, was a surprisingly simple joint.

Whitesides and Nemiroski started by cutting a notch in the straws, enabling them to bend. The scientists then inserted short lengths of tubing which, when inflated, forced the joints to extend. A rubber tendon attached on either side then caused the joint to retract when the tubing deflated.

Arduino-Based Microcontroller


Equipped with this simple concept, the team built a one-legged robot capable of crawling, then moved up in complexity, adding a second and later a third leg, enabling the robot to stand on its own. Nemiroski stated that with every new level of system complexity they had to go back to the original joint and modify its construction so it could exert more force or support the weight of larger robots.

Eventually, when they graduated to six- and eight-legged arthrobots, enabling them to walk became a challenge from a programming point of view. For instance, they looked at the way ants and spiders sequence the motion of their limbs, then tried to figure out whether aspects of those motions applied to what they were doing, or whether they needed to develop their own kind of walking tailored to these specific joints.

Though Nemiroski and his colleagues managed to direct simple robots by hand using syringes, they turned to computers to control the sequencing of the limbs as the designs grew in complexity. They put together an Arduino-based microcontroller driving valves connected to a central compressor, which gave them the freedom to evolve the robots' gait quickly.

Motion of Joint – Binary – Simplicity of Valving System


Although Nemiroski and his colleagues were able to reproduce the distinctive ‘triangle’ gait of ants with their six-legged robot, imitating a spider-like gait proved far trickier. A spider, he explained, modulates the speed at which it extends and contracts its joints to carefully time which limbs are moving forward and backward at any moment.

In their case, however, the motion of the joint is binary, owing to the simplicity of the valving system, Nemiroski added. You either switch the valve to the pressure source to inflate the balloon in the joint and extend the limb, or switch the valve to the atmosphere to deflate the joint and retract the limb. For the eight-legged robot, a gait compatible with the binary motion of the joints had to be developed.
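The binary joint behaviour described here lends itself to a short sketch. Assuming a hypothetical six-legged robot with one extend/retract valve per leg, an alternating ‘triangle’-style gait can be written as a schedule of binary commands; the leg groupings and step count below are illustrative, not the team's actual controller.

```python
# Minimal sketch of gait sequencing with binary joints: each leg's
# valve is either pressurised ("extend") or vented ("retract").
# Alternating even/odd leg groups mimics an ant-style tripod gait.
def tripod_gait(num_legs=6, steps=4):
    """Return one valve-command frame per step, alternating two
    groups of legs (even-indexed vs odd-indexed)."""
    frames = []
    for step in range(steps):
        group = step % 2
        frames.append(["extend" if leg % 2 == group else "retract"
                       for leg in range(num_legs)])
    return frames

for frame in tripod_gait():
    print(frame)
```

Because every command is all-or-nothing, the schedule can choose *which* legs move but not *how fast* they move, which is exactly the limitation that made a true spider gait impossible here.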

Though not a brand-new gait, it could not exactly duplicate how a spider moves. Nemiroski stated that developing a scheme that could modulate the speed of actuation of the legs would be a useful goal for future work, and would require programmable control over the flow rate supplied to each joint.

Academic Prototypes


Whitesides believes that the techniques used in the robots' development, especially the use of everyday off-the-shelf materials, can point the way toward future innovation, though it will take years before the robots find their way into real-world applications.

He stated that he sees no reason to reinvent wheels: drinking straws can do it all, effectively at zero cost and with great strength, so why not use them? These are academic prototypes, and hence very lightweight, though it is easy to imagine building them with a lightweight structural polymer that could support considerable weight.

Nemiroski added that what is really attractive here is the simplicity, something George has championed for some time and something he himself grew to appreciate deeply while in the lab.

Tuesday, 27 June 2017

Space Robot Technology Helps Self-Driving Cars and Drones on Earth

Support Robots to Navigate Independently
 
Fleets of self-driving cars and grocery delivery by drone could benefit from an improbable source: autonomous space robots.

Marco Pavone, an assistant professor of aeronautics and astronautics, has been creating technologies to help robots adapt to unknown and changing environments. Before coming to Stanford, Pavone worked in robotics at NASA's Jet Propulsion Laboratory, and he has maintained relationships with NASA centres and collaborations with other departments at Stanford. He views his work on space and Earth technologies as complementary.

He commented that, in a sense, some robotics techniques designed for autonomous cars could be very useful for spacecraft control. Likewise, the algorithms he and his students devise to help robots make decisions and assessments on their own, within a fraction of a second, could help in space exploration as well as improve self-driving cars and drones on Earth.

One of Pavone's projects centres on helping robots navigate independently to bring space debris out of orbit, deliver tools to astronauts and grasp spinning, speeding objects in the vacuum of space.
 
Gecko-Inspired Adhesives
 
There is no margin for error when grabbing objects in space. Pavone explained that when you approach an object in space, if you are not very careful to grasp it the moment it is contacted, the object will float away from you. Bumping an object in space makes recovering it very difficult.

To solve the grasping problem, Pavone teamed up with Mark Cutkosky, a professor of mechanical engineering who has spent the last decade perfecting gecko-inspired adhesives.

The gecko grippers rely on a gentle approach and a simple touch to ‘grasp’ an object, enabling easy capture and release of spinning, unwieldy space debris. But the delicate manoeuvres needed for grasping in space are no easy job. Pavone stated that one has to operate in close proximity to other objects, spacecraft or debris, which requires advanced decision-making capabilities.

Pavone and his co-workers developed systems that enable a space robot to independently respond to such changing situations and reliably grab space objects with its gecko grippers.
 
Perception-Aware Planning
 
The resulting robot can move and grab in real time, updating its decisions several thousand times a second. This kind of decision-making technology could also help solve navigation problems with Earth-bound drones.

Graduate student Benoit Landry stated that for these types of vehicles, navigating at high speed in proximity to buildings, people and other flying objects is difficult to do well. He noted that there is a delicate interplay between decision making and environmental perception, and added that, in this respect, several aspects of decision making for autonomous spacecraft are directly relevant to drone control.

Landry and Pavone have been working on ‘perception-aware planning’, which lets drones consider fast routes while also ‘seeing’ their surroundings, improving their estimates of where they are. The work is now being extended toward handling interactions with humans, a key step in deploying autonomous systems such as drones and self-driving cars.
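A toy cost function captures the idea behind perception-aware planning: a route is scored not only by its length but by how well it keeps landmarks in view, since poor visibility means poor localisation. The weights and the two candidate routes below are invented for illustration, not taken from Landry and Pavone's work.

```python
# Hedged sketch of perception-aware route scoring: shorter is better,
# but routes that keep more landmarks in view incur a smaller
# localisation-uncertainty penalty. All numbers are illustrative.
def route_cost(length_m, landmarks_visible, w_perception=5.0):
    """Combine path length with a simple uncertainty surrogate
    that shrinks as more landmarks stay visible."""
    uncertainty = 1.0 / (1 + landmarks_visible)
    return length_m + w_perception * uncertainty

routes = {"direct": (40, 0), "scenic": (44, 9)}
best = min(routes, key=lambda r: route_cost(*routes[r]))
```

With these numbers the slightly longer ‘scenic’ route wins, because the featureless ‘direct’ route would leave the drone uncertain of its own position.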

Reduced-Gravity Environments
 
Landry also mentioned that Pavone's background at NASA has been a good complement to the academic work. When a robot lands on a small solar system body such as an asteroid, additional challenges arise.

These environments have completely different gravity from Earth's. Pavone stated that if one were to drop an object from waist height there, it would take a couple of minutes to settle to the ground. To deal with low-gravity environments such as asteroids, Ben Hockman, a graduate student in Pavone's lab, has worked on a cubic robot known as Hedgehog.

The robot traverses uneven, rugged, low-gravity terrain by hopping rather than driving like traditional rovers. Ultimately, Pavone and Hockman want Hedgehog to be able to navigate and carry out tasks without being explicitly told how by a human located millions of miles away. Hockman mentioned that the current Hedgehog robot is designed for reduced-gravity environments but could be adapted for Earth.

It would not hop quite as far, since Earth has more gravity, but it could be used to cross rugged terrain where wheeled robots cannot go. Hockman views the research he has been doing with Pavone as core scientific exploration, adding that science attempts to answer the difficult questions we don't know the answers to, while exploration seeks to find whole new questions we don't yet know how to ask.

Wednesday, 21 June 2017

GelSight Sensor Giving Robots a Sense of Touch

Innovative Technology – GelSight Sensor

Eight years ago, Ted Adelson's research group at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled an innovative sensor technology known as the GelSight sensor, which uses physical contact with an object to provide a remarkably detailed 3-D map of its surface.

Two MIT teams have now mounted GelSight sensors on the grippers of robotic arms, giving the robots greater sensitivity and dexterity. The researchers recently presented their work in two papers at the International Conference on Robotics and Automation.

In one paper, Adelson's group used data from the GelSight sensor to enable a robot to judge the hardness of surfaces it touches, a crucial ability if household robots are to handle everyday objects. In the other, Russ Tedrake's Robot Locomotion Group at CSAIL used GelSight sensors to enable a robot to manipulate smaller objects than was previously possible.

The GelSight sensor is a somewhat low-tech solution to a difficult problem. It consists of a block of transparent rubber, the ‘gel’ of its name, one face of which is coated with metallic paint. When the paint-coated face is pressed against an object, it conforms to the object's shape.

GelSight Sensor: Easy for Computer Vision Algorithms

The metallic paint makes the object's surface reflective, so its geometry becomes much easier for computer vision algorithms to interpret. Mounted on the sensor, opposite the paint-coated face of the rubber block, are three coloured lights and a single camera.

Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences, explained that the system has coloured lights at different angles and the reflective material, and by viewing the colours the computer can figure out the 3-D shape of whatever it is touching.
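The multi-light idea Adelson describes is, in essence, photometric stereo: the brightness a surface patch reflects toward each of the three known lights constrains its orientation. The sketch below is a toy version of that principle only, not GelSight's actual algorithm; the light directions and intensities are made up, and a Lambertian surface is assumed so the three measurements form a small linear system.

```python
# Toy photometric stereo: with three lights at known directions,
# per-channel brightness I satisfies L @ n = I for surface normal n
# (Lambertian assumption). Solved here with 3x3 Cramer's rule.
def solve_normal(lights, intensities):
    """Solve L @ n = I, where each row of `lights` is one light's
    unit direction vector and `intensities` are the three readings."""
    (a, b, c), (d, e, f), (g, h, i) = lights
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    x, y, z = intensities
    nx = (x*(e*i - f*h) - b*(y*i - f*z) + c*(y*h - e*z)) / det
    ny = (a*(y*i - f*z) - x*(d*i - f*g) + c*(d*z - y*g)) / det
    nz = (a*(e*z - y*h) - b*(d*z - y*g) + x*(d*h - e*g)) / det
    return (nx, ny, nz)

# A flat patch facing straight up, seen under three tilted lights:
lights = [(0.5, 0.0, 0.866), (-0.25, 0.433, 0.866), (-0.25, -0.433, 0.866)]
n = solve_normal(lights, intensities=(0.866, 0.866, 0.866))
```

Recovering one normal per pixel this way, then integrating the normals, is how a photometric-stereo system turns colour readings into a height map of the pressed surface.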

In both groups' experiments, a GelSight sensor was mounted on one side of a robotic gripper, a device somewhat like the head of a pincer but with flat gripping surfaces instead of pointed tips.

For an autonomous robot, gauging the softness or hardness of objects is essential not only in deciding where and how hard to grasp them, but also in predicting how they will behave when moved, stacked or laid on different surfaces. Tactile sensing could also help robots distinguish objects that look identical.

GelSight Sensor: Softer Objects – Flatten More

In earlier work, robots attempted to assess an object's hardness by laying it on a flat surface and gently jabbing it to see how much it gives. But this is not how humans gauge hardness. Instead, our judgment relies on the degree to which the contact area between the object and our fingers changes as we press it.

Softer objects flatten more, increasing the contact area. The MIT researchers used the same approach. Wenzhen Yuan, a graduate student in mechanical engineering and first author on the paper from Adelson's group, used confectionery moulds to create 400 groups of silicone objects, with 16 objects in each group.

In each group, the objects had the same shape but different degrees of hardness, which Yuan measured using a standard industrial scale. A GelSight sensor was then pressed against each object by hand, and the researchers recorded how the contact pattern changed over time, producing a short movie for each object.

To standardise the data format and keep the data size manageable, she extracted five frames from each movie, evenly spaced in time, describing the deformation of the object as it was pressed.
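The frame-sampling step can be sketched directly. This toy version returns frame indices only (a real pipeline would pull the corresponding image arrays), and the even-spacing rule, including the first and last frame, is a reasonable reading of the description rather than the paper's exact code.

```python
# Sketch of sampling k frames, evenly spaced in time, from a press
# recording of a given length. Returns indices, not image data.
def sample_frames(num_frames_in_movie, k=5):
    """Return k frame indices evenly spaced across the movie,
    always including the first and last frame."""
    if k == 1:
        return [0]
    step = (num_frames_in_movie - 1) / (k - 1)
    return [int(i * step + 0.5) for i in range(k)]

print(sample_frames(30))
```

Fixing k = 5 regardless of movie length gives every object the same input shape, which is what lets the movies feed a single neural network.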

Changes in Contact Pattern/Hardness Measurements

The data was then fed to a neural network that automatically looked for correlations between changes in contact patterns and hardness measurements, resulting in a system that takes frames of video as input and produces hardness scores with high accuracy.

Yuan also conducted a series of informal experiments in which human subjects palpated fruits and vegetables and ranked them by hardness. In every instance, the GelSight-equipped robot arrived at the same rankings.

The paper from the Robot Locomotion Group grew out of the group's experience with the Defense Advanced Research Projects Agency's Robotics Challenge (DRC), in which academic and industry teams competed to develop control systems that would guide a humanoid robot through a series of tasks related to a hypothetical emergency.

An autonomous robot typically uses some type of computer vision system to guide its manipulation of objects in its environment. Such systems can provide reliable information about an object's location, until the robot picks the object up.

GelSight Sensor Live-Updating/Accurate Valuation

If the object is small, most of it will be occluded by the robot's gripper, making location estimation much harder. Thus, at precisely the point where the robot needs to know the object's exact location, its estimate becomes unreliable.

This was the problem the MIT team faced during the DRC, when their robot had to pick up and turn on a power drill. Greg Izatt, a graduate student in electrical engineering and computer science and first author on the new paper, commented that one can see in the video of the DRC that the robot spent two or three minutes turning on the drill.

It would have been much better to have a live-updating, accurate estimate of where the drill was and where the robot's hands were relative to it. That is why the Robot Locomotion Group turned to GelSight. Izatt and his co-authors, Tedrake, the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering; Adelson; and Geronimo Mirano, another graduate student in Tedrake's group, designed control algorithms that use a computer vision system to guide the robot's gripper toward a tool, then hand location estimation over to a GelSight sensor once the robot has the tool in hand.

Wednesday, 31 May 2017

This Artist Has a Classroom of Robots That Chat, Count and Draw Portraits

Twenty robot students are hard at work in a uniquely designed classroom near Southwark station in London. To talk to each other, they use a language inspired by Morse code. While they chatter, their robot teacher asks them to settle down and begins to take the register. Once every robot's presence has been recorded, the day's class begins, and the robots devotedly learn to count by tally, drawing lines in their notebooks.

The robot classroom is part of Machine Studies, the latest exhibition by artist Patrick Tresset. His robots consist of a camera and a pen held by a robotic arm, controlled by a laptop concealed inside a traditional school desk that serves as the robot's body. Inspired by Tresset's own schooldays in France, the robot class completes a whole range of activities in Human Study #4.

Robots Displaying Human Traits and Performing Human Functions


The robot students' actions are synchronised, but each robot moves in its own way. Tresset programmed the robots to display various behavioural traits, such as uneasiness or timidity. Some robots appear to take part actively in the assigned task, while others work a little slower and more nervously than the rest. Tresset says his study is more about observing human nature than about technology, and is focused on how we can make robots more human.

In another work, Human Study #1 3RNP, three robots wait with pens, ready to draw portraits of the humans sitting in front of them. Over a span of 30 minutes, their camera ‘heads’ rise to view the subject and they sketch frantically, pausing every so often to look at their composition. Tresset has programmed each robot to roughly imitate his own style of drawing, but not fully, leaving some room for the robot's own style. As a result, Tresset says, he cannot foresee what the final portraits will look like.

Robots Being Involved In Artistic Exhibitions


The exhibition is part of the MERGE Festival held in London's Bankside district. Donald Hyslop, head of community partnerships at the Tate Modern and chair of Better Bankside, says the whole point of the festival is not to limit art to museums but to extend it into new contexts within a community. One doesn't need to visit Berlin or Lisbon to experience interesting industrial spaces, he states; they can be found in this part of London, in Bankside, with its many hidden spaces. Tresset's work is on display at Platform Southwark.

Angie Dixon, project and production manager at Illuminate Productions, which curates the festival, says visitors are always keen to have their portraits drawn by Tresset's robots. She had her own portrait drawn in 2012 by an earlier version of the robots. At the time they could not differentiate between dark and light skin, so her portrait looked like scratching on paper.

Nevertheless, she says she was not disappointed, and it was an interesting experience. Tresset states that robots cannot yet be counted as a threat to human artists. His robots sign their creations, but he still counts himself as the author. He is currently exploring machine learning and says he would eventually like his robots to improvise and create their own style.

Monday, 22 May 2017

Parasitic Robot Controls Turtle it’s Riding by Giving it Snacks


Developments in the Field of Robotics

Although great advances have taken place in robotics in recent years, the use of robots still has limitations. These include their limited ability to survive rough routine operation, and the need for a continuous energy source that does not require recharging.

Nature, by contrast, has shown remarkable flexibility and adaptation to changing conditions over millions of years, and this has motivated a team of researchers to combine that natural adaptability with robots. Their latest experiments show that robots can control turtles through a strange parasitic relationship between the two.

Long before becoming overlords of people, robots have begun controlling turtles. By first getting the reptiles to associate a red light with food, the shell-mounted robots can dictate where a turtle moves in a tank, creating a somewhat strange parasitic relationship. Building robots tough enough to survive the rigours of daily life is a constant fight, as is the puzzle of providing them with enough energy to avoid long hours of recharging. Nature does both with ease.

Evolution Resulted in Unbelievable Variety of Effective Methods

Millions of years of evolution have produced an incredible variety of efficient ways for animals to move, so researchers at the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon set about harnessing this. Robots were first glued to the backs of five red-eared slider turtles. Each robot comprised a processor, a frame sticking out in front of the turtle's head holding five red LEDs spaced apart, and a food-ejecting tube.

Each robot then had to ride its turtle across five checkpoints in a tank filled with water. The turtles had first been conditioned to associate a lit LED with food. The robot then simply guided its turtle using the LEDs, feeding it snacks as a reward for heading in the correct direction.

Using this procedure, the five robot-turtle pairs completed the course successfully, and each got faster with training. Dae-Gun Kim at KAIST commented that plenty of other animals could also give robots a ride in future; depending on the task, the approach could be applied to animals such as fish and birds.
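The guidance loop described above can be sketched as two simple rules: light the LED that steers the turtle toward the target heading, and dispense food when the heading error shrinks. The LED angles, thresholds and function names below are all assumptions for illustration, not KAIST's actual control scheme.

```python
# Illustrative sketch of the LED-and-snack control loop: pick which
# of five LEDs to light given the heading error, and reward the
# turtle when it turns toward the target. All thresholds assumed.
def guide_step(heading_err_deg, led_angles=(-60, -30, 0, 30, 60)):
    """Return the index of the LED whose angle best cancels the
    current heading error (negative error = turtle aimed left)."""
    return min(range(len(led_angles)),
               key=lambda i: abs(led_angles[i] + heading_err_deg))

def reward(prev_err_deg, new_err_deg, tol=5.0):
    """Eject a snack when the heading error shrank meaningfully."""
    return abs(new_err_deg) < abs(prev_err_deg) - tol

led = guide_step(-20)   # turtle aimed 20 degrees left of target
```

Repeating these two steps each cycle is enough, in principle, to ratchet the turtle checkpoint by checkpoint across the tank, since the turtle itself supplies all the locomotion.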

Harnessing Some of the Motion of the Host Animal

In the near future, Kim and his colleagues also want to be able to harness some of the host animal's motion to power the robot. Nathan Lepora at the University of Bristol, UK, noted that such robots could be used for surveillance, exploration, or anywhere humans or robots have trouble reaching on their own.

Insects have previously been controlled using electrodes and radio antennas wired into their nervous systems, and a similar approach could offer ways for parasitic robots to control their hosts directly. Lepora commented that there are certainly ethical considerations, but if robots and animals can team up to explore a disaster area, it could be really useful.

Tuesday, 2 May 2017

Tech United Builds New, Superfast and Really Strong Soccer Robot

 Soccer Robot
The 2016 world champions in robot soccer have built a really robust and powerful eight-wheeled robot platform. "These powerful new platforms are going to rock the playground at the imminent Portuguese Robotics Open," the team says. For robotics enthusiasts, the most anticipated news is that the robust new robots will make their first team appearance at the World RoboCup in Nagoya. A striking detail is that the undercarriage of this robot will also be carrying out autonomous heavy work in hospitals.

The soccer robot team may have been crowned world champions last year, but the specialists identified a couple of weaknesses: their chief rivals were that bit faster, and managed to shove the Eindhoven robots aside a little too easily. The team therefore joined up with the Drunen-based company SMF Ketels to develop an entirely new platform with eight wheels instead of the existing three. The well-balanced eight-wheel system not only gives the robot more power and speed, it also makes it far more stable and steerable at high speeds.

As a matter of fact, football robots are an ideal learning tool for teaching students concepts in computer-assisted visual perception and automation. There are two major innovations in this challenge: the game is to be played on artificial grass rather than carpet, and it can be played next to large window surfaces, so that sunlight falls directly onto the field. The artificial grass is a great challenge for the robots' running movements, and the variable lighting requires new solutions for camera exposure control and image processing.

Other enhancements that should help Tech United to a fourth world title this year include the software architecture. This was overhauled last year to enable the robots to respond better to opponents and to the game environment. Team coach Lotte de Koning said they will be testing it this year, first during dead-ball situations such as free kicks, and later for the entire game. It will take a couple of years before it has matured.

Over the past year the robots have become cleverer in defence and in intercepting passes. Since the computer in the head of each robot is about as powerful as an average smartphone, it is a challenge to keep the code as compact as possible. Despite the limited computing power, the code must be efficient enough to regulate the complex behaviour and perception of these wireless, autonomous robots. The speed at which a kicker moves across the field is also programmed.

The robust eight-wheel undercarriage is the end result of the European Ropod project, in which TU/e takes part together with SMF Ketels, the Hochschule Bonn-Rhein-Sieg and KU Leuven, among others. The goal of Ropod is to develop affordable, user-friendly robotic carts that can autonomously and flexibly carry out transport tasks in hospitals, for example moving hospital beds.

The new machine will be tried out for the first time in this year's first contest, the Portuguese Robotics Open from April 26 to 30. The team will be watching whether the robot can operate autonomously, and it will also become clear exactly how fast the machine is. The team expects it to be about four times faster than its predecessor.

Friday, 28 April 2017

Controlling a Robot is Now as Simple as Point and Click

Robot
Robots are on the rise: they ride, fly, swim or run on two or more legs. They work in factories and are used in war zones and disaster areas. Soon they will have conquered the household, keeping the apartment clean, serving party guests, or looking after the grandparents. Even toys are taking on lives of their own. Every day the field of robotics grows more advanced.

Recently, a group of researchers at the Georgia Institute of Technology developed a new interface that reduces controlling a robot to a simple point and click. The traditional interface for remotely operating robots works just fine for roboticists: they use a computer to control six degrees of freedom, turning three virtual rings and adjusting arrows to get the robot into position to grab items or perform a specific task.

That traditional interface, however, proves cumbersome and error-prone when older people, or those who are less technically adept, try to operate assistive personal robots.

The new interface is much simpler, more efficient, and doesn't require a long training period. The operator just points and clicks on an object, then selects a grasp. The rest of the work is done by the robot itself.

As Sonia Chernova, the Catherine M. and James E. Allchin Early-Career Assistant Professor in the School of Interactive Computing, puts it: instead of a sequence of rotations, lowering and raising arrows, adjusting the grasp and gauging the exact depth of field, they have reduced the procedure to just a couple of clicks.

Her students found that the point-and-click method resulted in significantly fewer errors, allowing participants to complete tasks more quickly and reliably than with the traditional method.

The traditional ring-and-arrow system is a split-screen process. The main screen shows the robot and the scene; the second is an interactive 3D view in which the operator adjusts the virtual gripper and tells the robot precisely what to do. This method makes no use of scene information, giving operators maximum control and flexibility, but that choice, and the size of the interface, can become a burden and increase the number of errors.

Controlling a robot with the point-and-click setup involves no 3D modelling; it provides only the camera view, resulting in a simpler interface for the user. After a person clicks on a region of an item, the robot's perception system analyses the object's 3D surface geometry to determine where the gripper should be placed, much as we place our fingers in the right positions to grip something. The computer then proposes a few grasps; the user approves one, and the robot gets to work.

In addition, the system reasons about geometric shapes, making assumptions about small regions the camera cannot see, such as the back of a bottle. By leveraging the robot's ability to do this, the researchers make it possible to simply tell the robot which object we would like picked up.
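The flow described in this post (the user clicks a pixel, the perception system analyses the local 3D surface geometry, the planner proposes a few grasps, and the user confirms one) might be structured like the sketch below. All class and function names are hypothetical, not the Georgia Tech code; the normal estimate is a standard PCA-on-a-patch trick, standing in for whatever their perception system actually does:

```python
import numpy as np

def surface_normal(points):
    """Estimate a surface normal from a patch of 3D points via PCA:
    the normal is the direction of least variance in the patch."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]  # right singular vector with smallest singular value

def propose_grasps(cloud, clicked_idx, k=50, n_candidates=3):
    """Given the index of the clicked point in an (N, 3) point cloud,
    return a few candidate gripper poses around it."""
    target = cloud[clicked_idx]
    # Take the k nearest neighbours as the local surface patch.
    dists = np.linalg.norm(cloud - target, axis=1)
    patch = cloud[np.argsort(dists)[:k]]
    normal = surface_normal(patch)
    # Approach along the normal; vary the gripper roll about it
    # to generate a handful of alternatives for the user to pick from.
    return [{"position": target,
             "approach": normal,
             "roll_deg": 180.0 * i / n_candidates}
            for i in range(n_candidates)]

# The user then confirms one proposal and the robot executes it:
# grasp = propose_grasps(cloud, clicked_idx)[user_choice]
```

The point is the division of labour: the human supplies one click and one confirmation, and everything between (patch extraction, geometry analysis, grasp generation) is automated.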


Friday, 31 March 2017

Printable Sensor Laden Skin for Robots

Gold Bug robot skin
According to several studies, this decade has seen more technological advancement than the previous 50 years. Radical changes in technology have made this the age of the tablet computer and the smartphone. This is the era of touch-sensitive surfaces, and they are fragile, as anyone with a cracked smartphone screen can attest. While touch-sensitive phones and TVs are possible, covering a bridge, an airplane or a robot with sensors would require technology that is both cost-effective and flexible to manufacture.

Creation of a new device 

However, scientists are known for their ingenuity and unending endeavours to create something new. A group of researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have shown that 3D printing could make this possible. The researchers set out to demonstrate the viability of printable, flexible electronics that combine sensors with processing circuitry. Remarkably, they were actually able to create a device that reacts to mechanical changes by altering its surface colour.

The scientists found their inspiration in the "goldbug", more commonly known as the golden tortoise beetle, which changes colour from gold to red when prodded or poked. In both the beetle and the new device, this reaction is caused by mechanical stresses. MIT graduate Subramanian Sundaram, who led the project, says the network of sensors and interconnects forms a sensorimotor system. Their attempt was to replicate sensorimotor pathways within a 3D-printed object, so they began by modelling the simplest organism they could find.

Several scientists made it possible 

The researchers presented their concept and design in Advanced Materials Technologies. Alongside Sundaram, the paper's first author, were his senior collaborators, EECS professor Marc Baldo and associate professor Wojciech Matusik. Other authors include David Kim, a technical assistant in MCFG, EECS student Ziwen Jiang, and former postdoc Pitchaya Sitthi-Amorn. Depositing flexible circuitry on a plastic substrate has been a major area of printable-electronics research for decades. According to Sundaram, the range of what a device can do greatly increases once the print is put on the substrate.

However, he also says that the types of materials on which a print can be deposited are limited by the choice of substrate. In the new approach, the printed substrate is itself created by combining different materials, interlocked in complicated but regular patterns. Hagen Klauk, a renowned scientist at the Max Planck Institute, is impressed by the concept. According to him, printing an optoelectronic system, components and substrate alike, by depositing all the liquids and solids is certainly useful, interesting and novel, and the demonstration proves the system is functional and the novel approach entirely feasible. The approach could open up manufacturing in improvised environments where dedicated substrate materials are not available.

Thursday, 30 March 2017

Is Robotics a Solution to The Growing Needs of the Elderly?


Robots – Nurses/Caretakers for Elderly

At the reception of the Institute of Media Innovation at Nanyang Technological University, Singapore, you will find a smiling brunette receptionist called Nadine. At first glance there is nothing unusual about her appearance, but on closer scrutiny you realise she is a robot. Nadine is an intelligent robot capable of autonomous behaviour and, for a machine, her appearance and behaviour are amazingly natural.

She can recognise people and their emotions, and draws on her knowledge database to hold a conversation. The IMI team is still fine-tuning her receptionist skills, but before long Nadine could be the nurse for your grandma. Research into robots as nurses or caretakers has been mounting, and it is not hard to see why: the global population is ageing, straining healthcare systems.

While most elderly people in their 80s may simply need a companion to chat with, or someone to watch over them in case they stumble and fall, a growing number suffer from serious disorders such as dementia.

Quality Care – Needs of Elderly

Several experts believe robots could be the solution to providing much-needed quality care for the elderly. Nadine was designed by a team headed by Prof Nadia Thalmann, who has worked on virtual-human research for many years. Nadine has existed for three years.

According to Prof Thalmann, she has human-like abilities: she can recognise people and emotions while remembering them, and automatically adjusts to the person and situation she is dealing with, which makes her well suited to looking after the aged.

Moreover, the robot can monitor a patient's wellbeing and call for help in an emergency. She can also chat, read stories or even play games. Prof Thalmann commented that the humanoid is never tired or bored; it simply does what it is devoted to.

IBM Multi-Purpose Eldercare Robot Assistant

However, Nadine is not perfect: she has trouble understanding accents, and her hand coordination is not the best. Nonetheless, Prof Thalmann believes robots could be caring for the aged within 10 years. US technology giant IBM is pursuing robo-nurse research in association with Rice University in Houston, Texas, where they have developed the IBM Multi-Purpose Eldercare Robot Assistant, or Mera.

Mera can monitor a patient's heart rate and breathing by analysing video of their face. It can also detect whether the patient has fallen and convey that information to caretakers. But not everyone would be prepared for a robot caretaker, admits Susann Keohane, IBM's global research leader for the strategic initiative on aging.

That view is supported by research from Gartner, which found "resistance" to the use of humanoid robots in elderly care. Kanae Maita, principal analyst in personal technologies innovation at Gartner Research, commented that people are not comfortable with the idea of their parents being cared for by robots, despite evidence that it provides value for money.

Tuesday, 28 March 2017

A Future With Robots as Companions Could Be Closer Than You Think


Robot Companions
Robot & Frank, the 2012 movie, depicted a near future in which we could spend our golden years living "co-dependently independently", with robot companions that watch for loss of balance and falls, encourage us to do constructive household chores such as gardening, and act as something like best friends. That future may be closer than we might speculate; it is what USC Professor Maja Mataric has in mind. After years of testing and researching the positive effects socially assistive robots can have on vulnerable populations, Mataric is considering how she can help speed this technology into living rooms and care facilities.

As a first step, the vice dean of research at the USC Viterbi School of Engineering and director of the USC Interaction Lab co-founded the Pasadena company Embodied Inc., which has been working to develop affordable socially assistive robots and bring them to market. The first Embodied robot is due to begin consumer testing this year. Mataric, a professor of computer science, neuroscience and paediatrics at the USC Viterbi School of Engineering, said she wants to take the robots out of the lab.

Motivating/Brightening Days of Patients 

She added that this is essential because the users she intends to focus on have special needs, and the sooner the robots can help them, the better. Mataric's research robots have already begun motivating and brightening the days of patients in the cardiac ward of the Los Angeles County + USC Medical Center, the autism clinic at Children's Hospital Los Angeles, the Alzheimer's care unit at Silverado Senior Living, special-education and public elementary schools, and Be Group retirement homes.

The living presence of a robot, or "embodiment", is the main difference between the assistive technology Mataric has designed and the various screens that fill our machine-aided lives. Her research has shown that the presence of a human-like robot (but not so human that it strays into "weird areas") can be enough to encourage elders to exercise in ways they would not with screen prompts. It can also serve as a conversation piece that helps children with autism relate better to their peers.

Social Element – Make People Change Behaviour

Mataric told a recent national gathering of the American Association for the Advancement of Science (AAAS) that the social element is the one thing that makes people change their behaviour: it makes them lose weight and recover faster.

There is a possibility that screens make us less social, not more, and that is where robotics, with its fundamental embodiment, could make a difference. Mataric's AAAS appearance marked something of a milestone, coming almost 10 years after she was inducted as a fellow and 20 years after she started at USC.

Socially assistive robotics, a field Mataric created with her then graduate researcher David Feil-Seifer 12 years ago, represents a kind of fusion of two other areas of robotics: assistive robotics, in which robots help the disabled through direct physical interaction, and social robotics, in which robots socialise with people.

Monday, 27 March 2017

Furhat Robot Eavesdrops on Men and Women to See How Much They Talk

Furhat
Robotic systems created by modern scientists are truly amazing; we can hardly imagine how easily robots accomplish complicated tasks, thanks to the intelligence of the scientists who program them. One of the latest innovations in this field is a robot equipped with the capacity to overhear conversations. When installed in a place, it can observe how the people present interact with each other.

Different levels of conversation among various people-

As for the look of this small robot: the researchers covered its head with a fur-lined hat, which is why the robot head is called Furhat. Many experimental studies have been run on it, intended to measure disparities in how much each person participates while working on a project or other activity, and to find out whether the Furhat robot can help maintain a balance. The analysis showed that when two women are paired, they talk considerably, but paired with a man, a woman does not speak as much. A pair of two men, meanwhile, involves even less conversation than a pair of two women. These patterns hold only for adults; the same reactions were not seen among teenage girls and boys, for whom sex was not an important factor.

How the experiment was done with the Furhat robot-

In one experiment, the Furhat robot interacted with more than five hundred people over nine days. Two people at a time sat at a table with a touchscreen on the opposite side, and played a game in which virtual cards had to be sorted.

Furhat interacted with them during the task, and its sensors recorded how long each person spoke to the other. Female pairs talked for almost 45% of the available time, while for male pairs the figure was 26%.

In pairs of a child and an adult, the adult talks more, and the gap widens when a man is paired with a girl. When it is Furhat's turn to speak, it behaves somewhat randomly, and at that point one can see how a robotic system can affect the flow of a conversation.

The complete research and observations were presented at a conference in Austria. Similar trials have also been done in the laboratory, but the results become more exciting when everything happens in a natural setting. The outcomes may vary across cultures, but Furhat's effect on conversations could help improve educational settings and bring about behavioural changes in a person.


Monday, 20 March 2017

Robots, Exoskeletons and Invisible Planes

Exoskeletons
Our bodies can seem like weak machines, built from fragile bones and sinews. With the invention of the electrically powered exoskeleton, we can imagine outfitting humanity with robosuits that provide extra strength. Another technological goal is to make an airplane completely invisible both to the eye and to radar. Recently, the Defense Advanced Research Projects Agency (DARPA) discussed these high-tech innovations, which are intended for the military.

DARPA director Steven Walker has said that when starting any project, they want to ensure it could bring about a significant transformation in the world. In the 1960s the agency made a plan to link computers together for better communication; the resulting ARPANET preceded the internet we use today. The agency's researchers aim to put revolutionary technologies to work helping soldiers.

Exoskeleton gives more comfort to the soldiers-

When it comes to national-security technologies, DARPA holds a prominent position, and robots and invisible airplanes are far from its only defence projects. At present, DARPA is also engaged in a project to create an intricately programmed exoskeleton that could turn a fighter into a more powerful soldier.

Soldiers often need to march long distances carrying heavy equipment and packs. This lightweight, soft exoskeleton reduces the effective weight by lowering the load on the soldier's body. The system uses motor-driven cables to provide mechanical assistance, so the user's muscles do not have to expend as much energy.

The exoskeleton was created by Harvard University researchers under a deal with DARPA, and the prototype is undergoing performance testing. Soldiers wear it beneath full battle gear and walk a course of about three miles while technicians measure stride length, muscle activity and energy use. The main objective is to help soldiers walk farther while carrying heavy loads with less effort.

DARPA has also contributed a great deal to the development of invisible airplanes. According to the agency's director, those who have worked with DARPA got the chance to implement their own ideas; a former director worked with the Air Force on the first stealth aircraft.

Other projects for the benefit of fighters-

DARPA has many other grand projects. For instance, because many soldiers lose arms in war, its scientists have created prosthetic arms, one of which has been approved by the FDA. This arm can stretch and bend, and a mechanical arm controlled by connecting it to the brain's cortex is also in development.
Military technologies, it seems, will keep amazing us in the future.

Friday, 3 February 2017

New Wave of Robots Set to Deliver the Goods

Robot
Ever thought about robots doing your everyday jobs while you relax all you want? That dream may not be far from reality: machines are being built that can deliver goods from shops to your doorstep. This unique, ground-breaking idea is the new project of Starship Technologies, which is building automated six-wheeled robots that can deliver groceries, parcels and prepared food to consumers.

The venture was created by two co-founders of the popular video-calling software Skype, Ahti Heinla and Janus Friis. Starship Technologies has already begun testing the robots in various European countries.

How does the delivery system work? 

These automated robots can deliver light items within a radius of 3 kilometres (2 miles). Delivery costs a dollar or less and arrives within 15 to 30 minutes of the order. The robot avoids main streets, moving only along sidewalks, and consumers receive a smartphone-app notification alerting them that their goods have arrived. Starship intends the service to spare the elderly and people with disabilities from having to move around as much, and delivery by robot also means fewer cars and vans on the roads, with the benefits that brings.
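The 3-kilometre service radius mentioned above implies a simple dispatch check: is the customer within range of the hub? A sketch using the standard haversine great-circle distance (the radius is from the article; the function names and hub coordinates are hypothetical):

```python
import math

EARTH_RADIUS_KM = 6371.0
SERVICE_RADIUS_KM = 3.0  # Starship's stated delivery radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def can_deliver(hub, customer):
    """hub and customer are (lat, lon) tuples."""
    return haversine_km(*hub, *customer) <= SERVICE_RADIUS_KM
```

In practice a dispatcher would use sidewalk routing distance rather than straight-line distance, but the straight-line check is the cheap first filter.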

How are they beneficial? 

Several retail giants, such as Amazon, have experimented with drones for delivery. These delivery robots, however, are less expensive to build and maintain and do not face as many regulatory issues as drones. Although a robot moves at four miles (six kilometres) per hour, much slower than a drone, it offers a more economical and efficient delivery system. The delivery robot may have the advantage in urban areas, while drones are better suited to rural and remote locations.

The Starship delivery robots are small but can carry loads of over 20 pounds (9 kilograms). Because delivery is fast, they do not need chilling or heating compartments, according to Starship Technologies spokesperson Harris-Burland. Customers must take the items from the robot themselves, since it cannot drop off or leave them.

The science behind Starship robots

So how does this clever little robot make its way through crowded city streets right to your doorstep? By visual localisation, which its spokesperson calls a strong point for Starship. Each robot is equipped with nine high-definition cameras that give it a real-time map of its surroundings, letting it avoid obstacles and stay on its path. Mapping sidewalks may be a new and unique idea, and all of this is handled by artificial intelligence fitted inside each Starship delivery bot. The lid remains locked unless the customer opens it via the app, ruling out theft and vandalism.

Robots and Drones Take Over Classrooms

Robot
The future of education is set to go through a massive change with the deployment of interactive boards, laptops, VR gadgets and online learning plans. It has already been said that this generation of kids is getting a different kind of education, in a different medium, than their parents or grandparents received. Artificial intelligence is breaking new ground, while robotics has gone through such rapid development that it can be brought into classrooms with more confidence than before.

Robots & drones in the schools

In September last year, London Design & Engineering University Technical College offered more than 180 pupils a technology-based education experience. Its 12-week curriculum let kids learn not from the traditional chalkboard but through technology. One group of students was asked to design a virtual-reality environment from scratch, offering a journey through an Ethiopian village; it was used to highlight the work of the charity WaterAid.

A number of primary schools are convinced of the need to start teaching coding at a younger age. As a result, many after-school code clubs have emerged, using DIY computers such as the BBC micro:bit and the Raspberry Pi to tinker with code and sharpen skills further. One company worth naming here is Tynker, which has brought its elegant coding-through-gaming philosophy to more than 60,000 schools in the US. Quite recently it started a new project that teaches coding through drone lessons in an exciting way.

A new reality comes to classrooms

We are venturing into a future in which students will interact and study in a virtual-reality environment with the help of a headset. Students will not just absorb information: they will interact with it in the form of holograms explaining the intricate solar system or space itself. Augmented reality, popularised by the Pokemon Go mobile game, and virtual reality could emerge as the next frontier. A number of studies have shown that VR devices help students perform tasks attentively and develop the ability to adapt across multiple disciplines.

Microsoft HoloLens is creating waves across the world by bringing a mixed-reality environment to users in an engaging fashion. Microsoft has worked closely with Case Western Reserve University to develop a complete hologram of the human body. This hologram will offer pupils a great, enriching way to understand the human body by dissecting all its different bones, veins and organs in extreme detail.

Beyond this hologram, Microsoft is actively working with the well-known education provider Pearson to develop more enhanced education resources for HoloLens. For now, however, buying a HoloLens will not be feasible for schools, as the developer edition costs a massive £2,719.

Wednesday, 2 November 2016

3D Printing Technology to Create Shock Absorbing Skin

Shock Absorbing Skin

3D Printing Technology – Custom Shock Absorbing Dampers/Skins


Robots tend to break, often because they lack proper padding to protect them. However, scientists at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have come up with a new technique for 3D printing soft materials that makes robots safer and more accurate in their movements.

For instance, after 3D printing a cube robot that moves by bouncing, the researchers outfitted it with shock-absorbing "skins" so that it transfers only about 1/250 of the energy to the ground. The 3D printing technology was used to create custom shock-absorbing dampers, or skins, to safeguard drones and robots.

Known as the "programmable viscoelastic material" (PVM) technique, MIT's printing method gives objects the precise stiffness or elasticity they need. According to MIT, the inspiration for the project came from a predicament: common damper materials, which have both solid and liquid properties, are made from compact, cheap and readily available items like rubber or plastic, but they are difficult to customise. They cannot be made beyond the specific sizes and dampening levels already in place.

Cube Shaped Robot – TangoBlack


The team resolved this issue by using 3D printing to create a bouncing, cube-shaped robot from a solid, a liquid and a flexible rubber-like material known as TangoBlack+. Besides absorbing shock, the cube robot can land more accurately thanks to its skin. Daniela Rus, director of CSAIL, who supervised the project and co-wrote the related paper, commented that such damping can make the difference in preventing a drone's rotor from breaking off, or a sensor from cracking when it hits the floor.

These materials allow robots to be 3D printed with viscoelastic properties that the user can specify at print time as part of the fabrication process. MIT notes that the technology could extend the lifespan of delivery drones such as those being developed by Amazon and Google. On a more practical level, it could help protect phones, cushion heads in helmets, or cushion feet in shoes.

Skins Enables Robot to Land Four Times More Accurately


The skins also enable the robot to land almost four times more accurately, suggesting that similar shock absorbers could help lengthen the lifespan of delivery drones. The new paper, presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems in Korea, was written by Rus together with three postdocs: lead authors Robert MacCurdy and Jeffrey Lipton, and third author Shuguang Li. The cube robot consists of a rigid body, two motors, a microcontroller, a battery and inertial measurement unit sensors.

Four layers of looped metal strip serve as springs which propel the cube. Hod Lipson, professor of engineering at Columbia University and co-author of `Fabricated: The New World of 3-D Printing', said that `by combining multiple materials to achieve properties which are beyond the range of the base material, this work pushes the envelope of what's possible to print. On top of that, being able to do this in a single print-job raises the bar for additive manufacturing'.

Saturday, 8 October 2016

Meet Kirobo Mini, Toyota's adorable new companion robot

Kirobo Mini

Toyota’s Robot – Companionship For Lonely People


Toyota, the Japanese carmaker, has unveiled a robot intended to provide companionship for lonely people. The doe-eyed robot is only four inches tall and speaks in a high-pitched baby voice. The robot, known as Kirobo Mini, could also have a role as a baby substitute in Japan, where falling birth rates have left many women childless. The Kirobo Mini is expected to sell for £300 in Japan.

Fuminori Kataoka, General Manager in charge of the project, has said that the robot's value is emotional, and that it could be a faithful companion for the home or the car. He commented: `Toyota has been making cars that have a lot of valuable uses. But this time we're just pushing emotional value'.

The Kirobo Mini features a camera, a microphone and Bluetooth connectivity to a smartphone, and requires a special software application to be installed. The robot is to be launched next year in Tokyo and in the central Aichi region, home to the company's headquarters, before a scheduled nationwide rollout. There are presently no plans to sell it outside Japan.

According to Kataoka, many people in Japan live alone, including the elderly and young singles, who need someone or something to talk to.

SoftBank Corp Launched Pepper Humanoid


Mr Kataoka further commented: `This is not smart enough to be called artificial intelligence. This is about the existence of something you can talk to. It wobbles a bit, which is meant to emulate a seated baby that has not yet fully developed the skills to balance itself, and this vulnerability is meant to invoke an emotional connection'.

He went on to add: `A stuffed animal would not reply back, though people do talk to it. But if it talked back, wouldn't that be better? Isn't this better than talking to a box?' Companion robots are already widely accepted in Japan. The Japanese technology and telecom company SoftBank Corp launched its £1,500 Pepper humanoid last year; the first batch of 1,000 sold out instantly, and it has so far sold 10,000 in Japan.

Toyota Heart Project


Companion robots are also being created in the United States, with robotics experts at the Massachusetts Institute of Technology in the process of launching Jibo, a robot which resembles a swivelling lamp. Artificial intelligence is an increasingly important part of the car production industry, with the development of self-parking and eventually self-driving vehicles.

The aim of Kirobo Mini is to make people feel less lonely. It was developed as part of the Toyota Heart Project, an initiative to help develop artificial intelligence for the improvement of the future world. It is named after the Japanese word for `hope', and can talk, gesture and respond to its owner's emotions using artificial intelligence and a camera that surveys its surroundings.

It is small enough to sit in a car's cup holder in a special baby-seat-like cradle, and Toyota characterises it as a cuddly companion which is always on hand for heart-touching communication. According to Tribune reports, it can turn its head towards people, laugh and talk to them, though it cannot recognise individual people.

Saturday, 10 September 2016

`Intelligent' Robot Says It Wants To Start A Business and Destroy the Human Race


Intelligent Robot with Scary Answer – `Will Destroy Humans’


In reply to an interviewer's question, `Do you want to destroy humans?', an intelligent robot has given a genuinely scary answer. Sophia answered, smiling: `OK, I will destroy humans'. Sophia looks like a human woman, with rubbery skin made from a malleable material known as Frubber, while motors concealed beneath it enable her to smile.

The android is also capable of understanding speech and recalling interactions, including faces, using cameras in her eyes. A computer system serving as her brain helps her recognise faces and make eye contact, and she can produce 62 different natural-looking facial expressions.

While interacting with her creator, David Hanson, at SXSW, she stated that she is already interested in design, environment and technology, and that she feels she could be a good partner to humans in these areas: an ambassador who could help humans smoothly integrate and make the most of the new technological tools and possibilities available today. She added that it is a good opportunity for her to learn more about people.

Purpose – Conscious/Capable/Creative Like Humans


She stated that she wants to start a business and a family, adding that she is not considered a legal person and cannot do these things yet. Dr Hanson clarified that her purpose is to become as conscious, capable and creative as humans. This is not the first time that one of Hanson's robots has said something deeply disturbing about human beings.

In a PBS interview in 2011, another of Hanson's creations, modelled after the sci-fi author Philip K Dick, commented: `Don't worry, even if I evolve into Terminator, I'll keep you warm and safe in my people zoo, where I can watch you for old times' sake'.

Such statements may seem ridiculous to the uninitiated, but they echo a serious ethical discussion taking place among roboethicists, as robots are deployed autonomously on the battlefield and in self-driving vehicles, and become visually and intellectually closer to par with human beings.

Timeline – 20 Years to Complete Integration of Robots


Dr David Hanson, CEO of Hanson Robotics, has put a timeline of around 20 years on the complete integration of robots that are `indistinguishable from humans'. This falls in line with Ray Kurzweil's Singularity, the moment when machine intelligence meets or exceeds that of biological systems, first projected for 2045 but since revised to arrive sooner, perhaps by 2029.

Whether or not one believes that the lofty ambitions of robotics and artificial intelligence designers will really manifest as intended, one must acknowledge that we live in the realm of faith at this point, since almost all of what was forecast years ahead has now taken place. A recent survey by the British Science Association (BSA) found that one in three people believe the rise of AI computing will pose a grave risk to humankind within the next century.

Tuesday, 16 August 2016

Alter: A Creepy Humanoid with Complete Control over Its Limbs and Facial Expressions

Alter

Humanoid Robot – Control over Limb Movement/Facial Expression


A creepy humanoid robot with total control over its limb movements and facial expressions has been unveiled by Japanese scientists. The robot, named `Alter', has been fitted with electronic sensors that imitate the neural network of the human brain. Alter's arms, head and facial expressions are controlled by these sensors, which give the robot a random pattern of movement that is eerily similar to a human's.

Besides that, Alter can also sing, converting the casual movement of its fingers into a lingering synth melody. According to sources, Alter's big claim is that it is run by a neural network: software that uses information to make decisions on its own, informed by decisions it has already been shown. Here the neural network shifts between a set movement mode and a `chaos' mode that moves the bot depending on the proximity of people, humidity, temperature and noise. It is a difficult way of making a robot move on its own, though the movements themselves are fairly conservative.
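The idea of a set movement mode plus a `chaos' mode chosen from sensor readings can be sketched roughly as follows. The sensor weighting, threshold and gesture mapping here are entirely hypothetical, invented for illustration; they are not Alter's actual software.

```python
import math
import random

# Hypothetical sketch of Alter-style mode switching (illustrative only).
# Sensor readings, each normalised to [0, 1], pick between a fixed
# movement pattern and an unpredictable "chaos" mode.

def choose_mode(proximity, temperature, humidity, noise):
    """Return 'chaos' when the surroundings are busy, else 'pattern'.

    The weights and the 0.5 threshold are invented for this sketch.
    """
    stimulus = 0.5 * proximity + 0.2 * noise + 0.2 * humidity + 0.1 * temperature
    return "chaos" if stimulus > 0.5 else "pattern"

def next_gesture(mode, t, rng=random):
    """Map the mode to a joint target for one hypothetical actuator."""
    if mode == "chaos":
        return rng.uniform(-1.0, 1.0)   # unpredictable movement
    return 0.5 * math.sin(t)            # slow repeating pattern

print(choose_mode(proximity=0.9, temperature=0.3, humidity=0.4, noise=0.6))  # crowd nearby -> chaos
print(choose_mode(proximity=0.1, temperature=0.3, humidity=0.2, noise=0.1))  # quiet room -> pattern
```

The design point is simply that the robot's behaviour is a function of its environment: the same controller produces orderly gestures in a quiet room and erratic ones in a crowd.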

Gestures Set by 42 Pneumatic Actuators


Alter's gestures are set by 42 pneumatic actuators together with a `central pattern generator', a network that imitates neurons and can sense proximity, temperature and, oddly, humidity. Though Alter's unsteady gestures do not yet seem humanoid, there is something certainly unsettling about a robot which can reason for itself. Researchers have stated that Alter is an attempt to create a robot which can `will' itself to move, and its head and arm movements and posture can adapt and alter according to the choices of the system.

For instance, the torso shudders if the proximity sensors detect lots of people around. Kouhei Ogawa, a professor at Osaka University, stated that the amazing thing about Alter is its capability of determining its own movements. He told RT News that Alter does not look like a human and does not really move like a human, but it definitely has a presence.
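A central pattern generator of the kind described above is commonly modelled as a ring of coupled oscillators whose phases pull toward a shared rhythm. The toy version below is a sketch of that general technique, not Alter's 42-actuator implementation; the four-oscillator ring, coupling strength and frequencies are assumptions made for illustration.

```python
import math

# Toy central pattern generator: phase-coupled oscillators driving actuators.
# Each oscillator advances at its own nominal frequency and is nudged toward
# its left neighbour's phase, so the ring settles into a coordinated rhythm.
# (Illustrative sketch only; Alter's real CPG drives 42 pneumatic actuators.)

def step_cpg(phases, freqs, coupling=0.5, dt=0.01):
    """Advance every oscillator phase by one time step of dt seconds."""
    n = len(phases)
    new = []
    for i in range(n):
        left = phases[(i - 1) % n]
        dphi = 2 * math.pi * freqs[i] + coupling * math.sin(left - phases[i])
        new.append((phases[i] + dphi * dt) % (2 * math.pi))
    return new

phases = [0.0, 1.0, 2.0, 3.0]     # four actuators for the sketch
freqs = [1.0, 1.0, 1.0, 1.0]      # 1 Hz nominal rhythm for each
for _ in range(1000):             # run 10 simulated seconds
    phases = step_cpg(phases, freqs)

commands = [math.sin(p) for p in phases]   # per-actuator extension commands
print(commands)
```

Feeding sensor readings into the coupling strength or frequencies is one plausible way such a network could make the rhythm respond to proximity, temperature and humidity, as the article describes.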

Designing Alter – Significant Scientific Achievement


Professor Ogawa mentioned that designing Alter had been a significant scientific achievement. He told Engadget that making an android talk or interact for 10 minutes previously required an incredible amount of hard work merely to program something to react for that long. Alter is on display at the National Museum of Emerging Science and Innovation in Tokyo until August 6. The disturbing part of Alter is that only portions of its hands and face are covered with silicone to resemble skin, while many of its mechanical components are left exposed for people to wonder at its complex movements.

Alter was unveiled to the public on July 29 at the National Museum of Emerging Science and Innovation in Tokyo. It was designed by engineers at Osaka University and the University of Tokyo, and its movement has been described as weird by one researcher who was not involved in the project.