The Galaxy Note 8



The Samsung Galaxy Note 8 is an Android phablet smartphone designed, developed and marketed by Samsung Electronics. It was unveiled on 23 August 2017 and released on 15 September 2017 as the successor to the discontinued Samsung Galaxy Note 7. The Note 8 improves on the core device specifications and hallmark S Pen features of previous devices. While retaining roughly the same look and approximate size as the Galaxy S8+, it features an upgraded processor.

For the first time in Samsung’s smartphone history, the Note 8 features a dual-camera system on the rear of the device: one camera functions as a wide-angle lens and the other as a telephoto lens, both with 12 MP resolution and optical image stabilization. The S Pen has increased pressure sensitivity, and its software has been upgraded to offer improved note-taking capabilities on the always-on display, as well as animated GIFs and improved translation features.

Software

The Samsung Note 8 comes with Android 7.1.1 “Nougat” and Samsung’s own custom user interface pre-installed. The S Pen offers expanded software features, including a “Live Message” feature for creating handwritten notes combined with emojis, resulting in short animated GIFs. Users can remove the S Pen from the device and immediately write notes on the display through “Screen Off Memo”, which works thanks to the screen’s always-on capabilities. Up to 100 notes can be saved this way, and the user can easily return to notes pinned directly on the always-on screen.

A “Translate” feature now recognizes punctuation marks, letting users highlight entire sentences rather than single words; it supports 71 different languages. The edges of the screen on the Note 8 allow the user to open two apps at once in a multi-window view, dubbed “App Pair”. In the Camera application, a new “Live Focus” effect lets users adjust the intensity of background blur both before and after capturing photos, while “Dual Capture” makes both rear cameras take individual photos of the same subject: a close-up shot from one and a wider shot of the whole scene from the other.


A picture of the Galaxy Note 8.


HISTORY OF EARTH

WHAT IS EARTH?


Earth is our home planet. It is the fifth-largest planet in the solar system, with a diameter of about 8,000 miles. Earth is also the third-closest planet to the sun, at an average distance of about 93 million miles; only Mercury and Venus are closer.

HOW DOES EARTH MOVE?
Earth orbits the sun once every 365 days, or one year. The shape of its orbit is not quite a perfect circle. It’s more like an oval, which causes Earth’s distance from the sun to vary during the year.

Earth is nearest the sun, or at “perihelion,” in January, when it’s about 91 million miles away. Earth is farthest from the sun, or at “aphelion,” in July, when it’s about 95 million miles away. At the equator, Earth spins at just over 1,000 miles per hour. Earth makes a full spin around its axis once every 24 hours, or one day. The axis is an imaginary line through the center of the planet from the North Pole to the South Pole. Rather than straight up and down, Earth’s axis is tilted at an angle of 23.5 degrees.
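As a quick sanity check of that spin speed, dividing Earth’s equatorial circumference by one 24-hour rotation reproduces the figure quoted above. Here is a minimal sketch in Python, assuming the standard reference value of about 24,901 miles for the circumference:

```python
# Earth's equatorial circumference (a standard reference value, in miles)
# divided by one 24-hour rotation gives the surface speed at the equator.
equatorial_circumference_miles = 24_901
rotation_period_hours = 24

speed_mph = equatorial_circumference_miles / rotation_period_hours
print(f"Equatorial spin speed: about {speed_mph:.0f} mph")  # ~1,038 mph
```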

WHY DOES EARTH HAVE SEASONS?
Earth has seasons because its axis is tilted. Thus, the sun’s rays hit different parts of the planet more directly depending on the time of year. From June to August, the sun’s rays hit the Northern Hemisphere more directly than the Southern Hemisphere. The result is warm (summer) weather in the Northern Hemisphere and cold (winter) weather in the Southern Hemisphere. From December to February, the sun’s rays hit the Northern Hemisphere less directly than the Southern Hemisphere.

The result is cold (winter) weather in the Northern Hemisphere and warm (summer) weather in the Southern Hemisphere. From September to November, the sun shines equally on both hemispheres. The result is fall in the Northern Hemisphere and spring in the Southern Hemisphere. The sun also shines equally on both hemispheres from March to May. The result is spring in the Northern Hemisphere and fall in the Southern Hemisphere.
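The tilt’s effect can be made concrete with the solar declination, the latitude at which the noon sun is directly overhead. Below is a rough Python sketch using a common textbook approximation (not a precise astronomical formula); the 23.44-degree amplitude is Earth’s axial tilt:

```python
import math

AXIAL_TILT_DEG = 23.44  # Earth's axial tilt

def solar_declination(day_of_year: int) -> float:
    """Approximate latitude (degrees) where the noon sun is overhead."""
    return -AXIAL_TILT_DEG * math.cos(math.radians(360 / 365 * (day_of_year + 10)))

# Positive values mean the sun favors the Northern Hemisphere.
for day, label in [(172, "June solstice"), (264, "September equinox"),
                   (355, "December solstice")]:
    print(f"{label}: {solar_declination(day):+.1f} degrees")
```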

WHAT ARE EARTH’S DIFFERENT PARTS?
Earth consists of land, air, water and life. The land contains mountains, valleys and flat areas. The air is made up of different gases, mainly nitrogen and oxygen. The water includes oceans, lakes, rivers, streams, rain, snow and ice. Life consists of people, animals and plants. There are millions of species, or kinds of life, on Earth. Their sizes range from very tiny to very large.

Below Earth’s surface are layers of rock and metal. Temperatures increase with depth, all the way to about 12,000 degrees Fahrenheit at Earth’s inner core. Earth’s parts once were seen as largely separate from each other. But now they are viewed together as the “Earth system.” Each part connects to and affects each of the other parts. For example:

  • Clouds in the air drop rain and snow on land.

  • Water gives life to plants and animals.

  • Volcanoes on land send gas and dust into the air.

  • People breathe air and drink water.

Earth system science is the study of interactions between and among Earth’s different parts.


FORMATION OF EARTH


The history of Earth concerns the development of planet Earth from its formation to the present day.  Nearly all branches of natural science have contributed to the understanding of the main events of Earth’s past. The age of the Earth is approximately one-third of the age of the universe.  An immense amount of geological change has occurred in that time span, accompanied by the emergence of life and its subsequent evolution.

Earth formed around 4.54 billion years ago by accretion from the solar nebula. Volcanic outgassing probably created the primordial atmosphere and then the ocean, but the early atmosphere contained almost no oxygen and so would not have supported known forms of life. Much of the Earth was molten because of frequent collisions with other bodies, which led to extreme volcanism.

A giant impact collision with a planet-sized body named Theia, while Earth was in its earliest stage (also known as Early Earth), is thought to have been responsible for forming the Moon. Over time, the Earth cooled, causing the formation of a solid crust and allowing liquid water to exist on the surface. The geological time scale (GTS) depicts the larger spans of time, from the beginning of the Earth to the present, and it chronicles some definitive events of Earth history.

The Hadean eon represents the time before a reliable (fossil) record of life on Earth; it began with the formation of the planet and ended 4.0 billion years ago, as defined by international convention. The Archean and Proterozoic eons followed; they saw the abiogenesis of life on Earth and then the evolution of early life. The succeeding eon is the Phanerozoic, which is represented by its three component eras: the Paleozoic; the Mesozoic, which spanned the rise, reign, and climactic extinction of the non-avian dinosaurs; and the Cenozoic, which saw the subsequent development of dominant mammals on Earth.

Hominins, the earliest direct ancestors of the human clade, arose sometime during the latter part of the Miocene epoch; the precise time marking the first hominins is broadly debated, over a current range of 13 to 4 million years ago. The succeeding Quaternary period is the time of recognizable humans, i.e., the genus Homo, but that period’s two-million-year-plus span is too small to be visible at the scale of a GTS graphic. (In geological notation, Ga means “billion years”; Ma, “million years”.)

The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago, during the Eoarchean Era, after a geological crust started to solidify following the earlier molten Hadean Eon. There are microbial mat fossils such as stromatolites found in 3.48 billion-year-old sandstone discovered in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in southwestern Greenland, as well as “remains of biotic life” found in 4.1 billion-year-old rocks in Western Australia. According to one of the researchers, “If life arose relatively quickly on Earth … then it could be common in the universe.”

Photosynthetic organisms appeared between 3.2 and 2.4 billion years ago and began enriching the atmosphere with oxygen. Life remained mostly small and microscopic until about 580 million years ago, when complex multicellular life arose, developed over time, and culminated in the Cambrian Explosion about 541 million years ago. This event drove a rapid diversification of life forms on Earth that produced most of the major phyla known today, and it marked the end of the Proterozoic Eon and the beginning of the Cambrian Period of the Paleozoic Era.

More than 99 percent of all species that ever lived on Earth, amounting to over five billion species, are estimated to be extinct. Estimates of the number of Earth’s current species range from 10 million to 14 million, of which about 1.2 million are documented but over 86 percent have not been described. Scientists recently reported that 1 trillion species are estimated to be on Earth currently, with only one-thousandth of one percent described.

The Earth’s crust has constantly changed since its formation. Likewise, life has constantly changed since its first appearance. Species continue to evolve, taking on new forms, splitting into daughter species or going extinct in the process of adapting or dying in response to ever-changing physical environments. The process of plate tectonics continues to shape the Earth’s continents and oceans and the life they harbor.

Human activity is now a dominant force affecting global change, adversely affecting the biosphere, the Earth’s surface, hydrosphere, and atmosphere, with the loss of wild lands, over-exploitation of the oceans, production of greenhouse gases, degradation of the ozone layer, and general degradation of soil, air, and water quality.


The Formation of Our Universe: Science vs. Religion


  • Our universe began with an explosion of space itself – the Big Bang. Starting from extremely high density and temperature, space expanded, the universe cooled, and the simplest elements formed.
  • During the Big Bang, all of the space, time, matter, and energy in the Universe was created. This giant explosion hurled matter in all directions and caused space itself to expand. As the Universe cooled, the material in it combined to form galaxies, stars, and planets.
  • Gravity gradually drew matter together to form the first stars and the first galaxies. Galaxies collected into groups, clusters, and superclusters. Some stars died in supernova explosions, whose chemical remnants seeded new generations of stars and enabled the formation of rocky planets.


In our generation, almost everyone knows about the Big Bang theory, but very few are aware of what the Hindu scriptures say about the formation of the Universe.

“Myths may not satisfy the demands of rationality or science, but they contain profound wisdom – provided one believes they do and is willing to find out what they communicate.” 
― Devdutt Pattanaik

1.  It all started from Lord Vishnu


The Vedas say that before the creation of the universe, Lord Vishnu sleeps in the ocean of all causes. His bed is a giant serpent with thousands of cobra-like hoods. While Vishnu is asleep, a lotus sprouts from his navel (note that the navel is symbolized as the root of creation). Inside this lotus resides Brahma. Brahma represents the universe we all live in, and it is this Brahma who creates life forms.

2.  The theory involving Purusha


The Rig Veda (the first scripture of Hinduism, containing spiritual and scientific knowledge) says that the universe was created out of the parts of the body of a single cosmic man, Purusha, when his body was sacrificed. The four classes (varnas) of Indian society come from his body: the priest (Brahmin) from his mouth, the warrior (Kshatriya) from his arms, the peasant (Vaishya) from his thighs, and the servant (Shudra) from his legs.

3.  The theory of zilch


According to some schools of thought in Hinduism, the universe had no beginning; it was always there. As Sri Krishna said, “Never was there a time when I did not exist, nor you, nor all these kings; nor in the future shall any of us cease to be.”


4.  As per Shaiva Scriptures


As per Shaiva scriptures, at first the ultimate truth, “Brahman”, was Shiva, without any birth or death. Vishnu is formed from the Vaamanga of Shiva, or the left half of his body. Shiva is the extreme male power of the universe; from him manifested the extreme female power of the universe, Sati. Then the preserver of the universe, Vishnu, took three forms: MahaVishnu, Garbhodakasayi Vishnu and Ksirodakasayi Vishnu. MahaVishnu has several Garbhodakasayi Vishnus in the spiritual sky.


The Secret of the Bermuda Triangle

The Bermuda Triangle, also known as the Devil’s Triangle, is a loosely defined region in the western part of the North Atlantic Ocean where a number of aircraft and ships are said to have disappeared under mysterious circumstances.

NATURAL EXPLANATIONS

Compass problems are among the most commonly cited phenomena in Triangle incidents. Some have theorized that unusual local magnetic anomalies may exist in the area, but no such anomalies have been found. Compasses also have natural magnetic variation in relation to the magnetic poles, a fact which navigators have known for centuries.

Mobile Phone Technology

A mobile phone is a small portable radio telephone.

A cell phone combines technologies, mainly telephone, radio, and computer. Most also have a digital camera inside.

Cell phones work as two-way radios. They send and receive electromagnetic microwaves to and from base stations. The waves are sent through antennas. This is called wireless communication.

Early cell telephones used analog networks. They became rare late in the 20th century. Modern phones use digital networks.

The first digital networks are also known as second generation, or 2G, technologies. The most used digital network is GSM (Global System for Mobile communication). It is used mainly in Europe and Asia, while CDMA (Code-Division Multiple Access) networks are mainly used in North America. The difference is in the communication protocol. Other countries, like Japan, have different 2G protocols. A few 2G networks are still used. 3G networks are more common, and many places have 4G.

The radio waves that the mobile phone networks use are split into different frequencies. The frequency is measured in Hz. Low frequencies can send the signal farther. Higher frequencies provide better connections and the voice communications are generally clearer. Four main frequencies are used around the world: 850, 900, 1800 and 1900 MHz. Europe uses 900 and 1800 MHz and North America uses 850 and 1900 MHz.
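The trade-off between the bands can be seen through wavelength, which follows from λ = c / f. Here is a small Python sketch for the four bands named above (the only inputs are the band frequencies and the standard value for the speed of light); the lower-frequency bands have roughly twice the wavelength of the higher ones, which goes along with their longer reach:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458  # standard value

# Wavelength = speed of light / frequency, shown for the four main bands.
for band_mhz in (850, 900, 1800, 1900):
    wavelength_cm = SPEED_OF_LIGHT_M_PER_S / (band_mhz * 1e6) * 100
    print(f"{band_mhz} MHz -> wavelength about {wavelength_cm:.0f} cm")
```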

Today there are mobile phones that work on two, three or four frequencies. The most advanced phones work on all frequencies. They are called ‘world’ phones and can be used everywhere.

NATURE

The term “nature” may refer to living plants and animals, geological processes, weather, and physics, such as matter and energy. The term often refers to the “natural environment” or wilderness—wild animals, rocks, forests, beaches, and in general areas that have not been substantially altered by humans, or which persist despite human intervention. For example, manufactured objects and human interaction are generally not considered part of nature unless qualified as, for example, “human nature” or “the whole of nature”. This more traditional concept of “nature” implies a distinction between natural and artificial elements of the Earth, with the artificial understood as that which has been brought into being by a human consciousness or a human mind.


Close your eyes and imagine that you are outside. What can you see, hear, smell, and touch? Can you see the hills and the trees? Can you hear the birds and wind blowing? Can you smell the flowers blooming near you? Can you touch the soft grass on your feet or feel the air against your face?

Everything you just saw in your mind is called nature. Nature is the physical world and everything in it that is not made by people.

Nature is not only what you imagined being outside in your yard. It is so much more! Have you ever been to your state’s zoo? All the animals, or living things, you see in the zoo, such as lions, tigers, bears, penguins, snakes, and elephants, are nature! There are also many other places to find living things in nature. In the forest, there are trees and plants as well as deer, squirrels, rabbits, and bears. In the ocean, there are fish, sharks, and dolphins. Nature really is everywhere!


Science


In the bus fair,

a talk on science fair.

There was an artist,

who became a scientist.


Children love science formula,

and now forgot the math formula.

They liked ice-cream,

now forgot its cream.


 They stopped playing, 

and did the reading.

The mothers were happy,

instead of taking on .


Science is a good subject,

not a waste subject.

Science and Technology


Rise of the Robots–The Future of Artificial Intelligence

By 2050 robot “brains” based on computers that execute 100 trillion instructions per second will start rivaling human intelligence


Editor’s Note: This article was originally printed in the 2008 Scientific American Special Report on Robots. It is being published on the Web as part of ScientificAmerican.com’s In-Depth Report on Robots.

In recent years the mushrooming power, functionality and ubiquity of computers and the Internet have outstripped early forecasts about technology’s rate of advancement and usefulness in everyday life. Alert pundits now foresee a world saturated with powerful computer chips, which will increasingly insinuate themselves into our gadgets, dwellings, apparel and even our bodies.

Yet a closely related goal has remained stubbornly elusive. In stark contrast to the largely unanticipated explosion of computers into the mainstream, the entire endeavor of robotics has failed rather completely to live up to the predictions of the 1950s. In those days experts who were dazzled by the seemingly miraculous calculational ability of computers thought that if only the right software were written, computers could become the artificial brains of sophisticated autonomous robots. Within a decade or two, they believed, such robots would be cleaning our floors, mowing our lawns and, in general, eliminating drudgery from our lives.


Obviously, it hasn’t turned out that way. It is true that industrial robots have transformed the manufacture of automobiles, among other products. But that kind of automation is a far cry from the versatile, mobile, autonomous creations that so many scientists and engineers have hoped for. In pursuit of such robots, waves of researchers have grown disheartened and scores of start-up companies have gone out of business.

It is not the mechanical “body” that is unattainable; articulated arms and other moving mechanisms adequate for manual work already exist, as the industrial robots attest. Rather it is the computer-based artificial brain that is still well below the level of sophistication needed to build a humanlike robot.

Nevertheless, I am convinced that the decades-old dream of a useful, general-purpose autonomous robot will be realized in the not too distant future. By 2010 we will see mobile robots as big as people but with cognitive abilities similar in many respects to those of a lizard. The machines will be capable of carrying out simple chores, such as vacuuming, dusting, delivering packages and taking out the garbage. By 2040, I believe, we will finally achieve the original goal of robotics and a thematic mainstay of science fiction: a freely moving machine with the intellectual capabilities of a human being.

Reasons for Optimism
In light of what I have just described as a history of largely unfulfilled goals in robotics, why do I believe that rapid progress and stunning accomplishments are in the offing? My confidence is based on recent developments in electronics and software, as well as on my own observations of robots, computers and even insects, reptiles and other living things over the past 30 years.

The single best reason for optimism is the soaring performance in recent years of mass-produced computers. Through the 1970s and 1980s, the computers readily available to robotics researchers were capable of executing about one million instructions per second (MIPS). Each of these instructions represented a very basic task, like adding two 10-digit numbers or storing the result in a specified location in memory.


In the 1990s computer power suitable for controlling a research robot shot through 10 MIPS, 100 MIPS and has lately reached 50,000 MIPS in a few high-end desktop computers with multiple processors. Apple’s MacBook laptop computer, with a retail price at the time of this writing of $1,099, achieves about 10,000 MIPS. Thus, functions far beyond the capabilities of robots in the 1970s and 1980s are now coming close to commercial viability.

For example, in October 1995 an experimental vehicle called Navlab V crossed the U.S. from Washington, D.C., to San Diego, driving itself more than 95 percent of the time. The vehicle’s self-driving and navigational system was built around a 25-MIPS laptop based on a microprocessor by Sun Microsystems. The Navlab V was built by the Robotics Institute at Carnegie Mellon University, of which I am a member. Similar robotic vehicles, built by researchers elsewhere in the U.S. and in Germany, have logged thousands of highway kilometers under all kinds of weather and driving conditions. Dramatic progress in this field became evident in the DARPA Grand Challenge contests held in California. In October 2005 several fully autonomous cars successfully traversed a hazard-studded 132-mile desert course, and in 2007 several successfully drove for half a day in urban traffic conditions.

In other experiments within the past few years, mobile robots mapped and navigated unfamiliar office suites, and computer vision systems located textured objects and tracked and analyzed faces in real time. Meanwhile personal computers became much more adept at recognizing text and speech.

Still, computers are no match today for humans in such functions as recognition and navigation. This puzzled experts for many years, because computers are far superior to us in calculation. The explanation of this apparent paradox follows from the fact that the human brain, in its entirety, is not a true programmable, general-purpose computer (what computer scientists refer to as a universal machine; almost all computers nowadays are examples of such machines).

To understand why this is requires an evolutionary perspective. To survive, our early ancestors had to do several things repeatedly and very well: locate food, escape predators, mate and protect offspring. Those tasks depended strongly on the brain’s ability to recognize and navigate. Honed by hundreds of millions of years of evolution, the brain became a kind of ultrasophisticated—but special-purpose—computer.


The ability to do mathematical calculations, of course, was irrelevant for survival. Nevertheless, as language transformed human culture, at least a small part of our brains evolved into a universal machine of sorts. One of the hallmarks of such a machine is its ability to follow an arbitrary set of instructions, and with language, such instructions could be transmitted and carried out. But because we visualize numbers as complex shapes, write them down and perform other such functions, we process digits in a monumentally awkward and inefficient way. We use hundreds of billions of neurons to do in minutes what hundreds of them, specially “rewired” and arranged for calculation, could do in milliseconds.

A tiny minority of people are born with the ability to do seemingly amazing mental calculations. In absolute terms, it’s not so amazing: they calculate at a rate perhaps 100 times that of the average person. Computers, by comparison, are millions or billions of times faster.

Can Hardware Simulate Wetware?
The challenge facing roboticists is to take general-purpose computers and program them to match the largely special-purpose human brain, with its ultraoptimized perceptual inheritance and other peculiar evolutionary traits. Today’s robot-controlling computers are much too feeble to be applied successfully in that role, but it is only a matter of time before they are up to the task.

Implicit in my assertion that computers will eventually be capable of the same kind of perception, cognition and thought as humans is the idea that a sufficiently advanced and sophisticated artificial system—for example, an electronic one—can be made and programmed to do the same thing as the human nervous system, including the brain. This issue is controversial in some circles right now, and there is room for brilliant people to disagree.

At the crux of the matter is the question of whether biological structure and behavior arise entirely from physical law and whether, moreover, physical law is computable—that is to say, amenable to computer simulation. My view is that there is no good scientific evidence to negate either of these propositions. On the contrary, there are compelling indications that both are true.


Molecular biology and neuroscience are steadily uncovering the physical mechanisms underlying life and mind but so far have addressed mainly the simpler mechanisms. Evidence that simple functions can be composed to produce the higher capabilities of nervous systems comes from programs that read, recognize speech, guide robot arms to assemble tight components by feel, classify chemicals by artificial smell and taste, reason about abstract matters, and so on. Of course, computers and robots today fall far short of broad human or even animal competence. But that situation is understandable in light of an analysis, summarized in the next section, that concludes that today’s computers are only powerful enough to function like insect nervous systems. And, in my experience, robots do indeed perform like insects on simple tasks.

Ants, for instance, can follow scent trails but become disoriented when the trail is interrupted. Moths follow pheromone trails and also use the moon for guidance. Similarly, many commercial robots can follow guide wires installed below the surface they move over, and some orient themselves using lasers that read bar codes on walls.

If my assumption that greater computer power will eventually lead to human-level mental capabilities is true, we can expect robots to match and surpass the capacity of various animals and then finally humans as computer-processing rates rise sufficiently high. If on the other hand the assumption is wrong, we will someday find specific animal or human skills that elude implementation in robots even after they have enough computer power to match the whole brain. That would set the stage for a fascinating scientific challenge—to somehow isolate and identify the fundamental ability that brains have and that computers lack. But there is no evidence yet for such a missing principle.

The second proposition, that physical law is amenable to computer simulation, is increasingly beyond dispute. Scientists and engineers have already produced countless useful simulations, at various levels of abstraction and approximation, of everything from automobile crashes to the “color” forces that hold quarks and gluons together to make up protons and neutrons.

Nervous Tissue and Computation
If we accept that computers will eventually become powerful enough to simulate the mind, the question that naturally arises is: What processing rate will be necessary to yield performance on a par with the human brain? To explore this issue, I have considered the capabilities of the vertebrate retina, which is understood well enough to serve as a Rosetta stone roughly relating nervous tissue to computation. By comparing how fast the neural circuits in the retina perform image-processing operations with how many instructions per second it takes a computer to accomplish similar work, I believe it is possible to at least coarsely estimate the information-processing power of nervous tissue—and by extrapolation, that of the entire human nervous system.


The human retina is a patch of nervous tissue in the back of the eyeball half a millimeter thick and approximately two centimeters across. It consists mostly of light-sensing cells, but one tenth of a millimeter of its thickness is populated by image-processing circuitry that is capable of detecting edges (boundaries between light and dark) and motion for about a million tiny image regions. Each of these regions is associated with its own fiber in the optic nerve, and each performs about 10 detections of an edge or a motion each second. The results flow deeper into the brain along the associated fiber.

From long experience working on robot vision systems, I know that similar edge or motion detection, if performed by efficient software, requires the execution of at least 100 computer instructions. Therefore, to accomplish the retina’s 10 million detections per second would necessitate at least 1,000 MIPS.

The entire human brain is about 75,000 times heavier than the 0.02 gram of processing circuitry in the retina, which implies that it would take, in round numbers, 100 million MIPS (100 trillion instructions per second) to emulate the 1,500-gram human brain. Personal computers in 2008 are just about a match for the 0.1-gram brain of a guppy, but a typical PC would have to be at least 10,000 times more powerful to perform like a human brain.
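The arithmetic behind that estimate is easy to retrace. The Python sketch below simply restates the article’s numbers; note that the mass scaling itself gives about 75 million MIPS, which the article rounds to 100 million:

```python
# Retina: ~1 million image regions, each doing ~10 edge/motion detections
# per second, at ~100 computer instructions per detection.
detections_per_second = 1_000_000 * 10            # 10 million per second
instructions_per_detection = 100
retina_mips = detections_per_second * instructions_per_detection / 1e6
print(f"Retina: ~{retina_mips:,.0f} MIPS")        # 1,000 MIPS

# Whole brain: ~75,000 times the retina's 0.02 g of processing circuitry.
brain_mips = retina_mips * 75_000                 # 75 million MIPS,
print(f"Brain: ~{brain_mips:,.0f} MIPS")          # rounded to 100 million
print(f"Instructions/s: {brain_mips * 1e6:.0e}")  # on the order of 1e14
```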

Brainpower and Utility
Though dispiriting to artificial-intelligence experts, the huge deficit does not mean that the goal of a humanlike artificial brain is unreachable. Computer power for a given price doubled each year in the 1990s, after doubling every 18 months in the 1980s and every two years before that. Prior to 1990 this progress made possible a great decrease in the cost and size of robot-controlling computers. Cost went from many millions of dollars to a few thousand, and size went from room-filling to handheld. Power, meanwhile, held steady at about 1 MIPS. Since 1990 cost and size reductions have abated, but power has risen to about 10,000 MIPS for a home computer. At the present pace, only about 20 or 30 years will be needed to close the gap. Better yet, useful robots don’t need full human-scale brainpower.
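The “20 or 30 years” figure can be reconstructed by counting doublings: going from 10,000 MIPS to the 100-million-MIPS brain estimate is a factor of 10,000, or log2(10,000) ≈ 13.3 doublings. Here is a small sketch showing how the answer depends on the assumed doubling period:

```python
import math

gap = 100_000_000 / 10_000      # brain estimate vs. 2008 home computer
doublings = math.log2(gap)      # ~13.3 doublings needed

for years_per_doubling in (1.0, 1.5, 2.0):
    years = doublings * years_per_doubling
    print(f"Doubling every {years_per_doubling} yr -> ~{years:.0f} years")
# Doubling every 1.5-2 years lands in the article's 20-to-30-year window.
```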

Commercial and research experiences convince me that the mental power of a guppy—about 10,000 MIPS—will suffice to guide mobile utility robots reliably through unfamiliar surroundings, suiting them for jobs in hundreds of thousands of industrial locations and eventually hundreds of millions of homes. A few machines with 10,000 MIPS are here already, but most industrial robots still use processors with less than 1,000 MIPS.

Commercial mobile robots have found few jobs. A paltry 10,000 work worldwide, and the companies that made them are struggling or defunct. (Makers of robot manipulators are not doing much better.) The largest class of commercial mobile robots, known as automatic guided vehicles (AGVs), transport materials in factories and warehouses. Most follow buried signal-emitting wires and detect end points and collisions with switches, a technique developed in the 1960s.

It costs hundreds of thousands of dollars to install guide wires under concrete floors, and the routes are then fixed, making the robots economical only for large, exceptionally stable factories. Some robots made possible by the advent of microprocessors in the 1980s track softer cues, like magnets or optical patterns in tiled floors, and use ultrasonics and infrared proximity sensors to detect and negotiate their way around obstacles.

The most advanced industrial mobile robots, developed since the late 1980s, are guided by occasional navigational markers—for instance, laser-sensed bar codes—and by preexisting features such as walls, corners and doorways. The costly labor of laying guide wires is replaced by custom software that is carefully tuned for each route segment. The small companies that developed the robots discovered many industrial customers eager to automate transport, floor cleaning, security patrol and other routine jobs. Alas, most buyers lost interest as they realized that installation and route changing required time-consuming and expensive work by experienced route programmers of inconsistent availability. Technically successful, the robots fizzled commercially.

In failure, however, they revealed the essentials for success. First, the physical vehicles for various jobs must be reasonably priced. Fortunately, existing AGVs, forklift trucks, floor scrubbers and other industrial machines designed for accommodating human riders or for following guide wires can be adapted for autonomy. Second, the customer should not have to call in specialists to put a robot to work or to change its routine; floor cleaning and other mundane tasks cannot bear the cost, time and uncertainty of expert installation. Third, the robots must work reliably for at least six months before encountering a problem or a situation requiring downtime for reprogramming or other alterations. Customers routinely rejected robots that after a month of flawless operation wedged themselves in corners, wandered away lost, rolled over employees’ feet or fell down stairs. Six months, though, earned the machines a sick day.

Robots exist that have worked faultlessly for years, perfected by an iterative process that fixes the most frequent failures, revealing successively rarer problems that are corrected in turn. Unfortunately, that kind of reliability has been achieved only for prearranged routes. An insectlike 10 MIPS is just enough to track a few handpicked landmarks on each segment of a robot’s path. Such robots are easily confused by minor surprises such as shifted bar codes or blocked corridors (not unlike ants thrown off a scent trail or a moth that has mistaken a streetlight for the moon).

A Sense of Space
Robots that chart their own routes emerged from laboratories worldwide in the mid-1990s, as microprocessors reached 100 MIPS. Most build two-dimensional maps from sonar or laser rangefinder scans to locate and route themselves, and the best seem able to navigate office hallways for days before becoming disoriented. Of course, they still fall far short of the six-month commercial criterion. Too often different locations in the coarse maps resemble one another. Conversely, the same location, scanned at different heights, looks different, or small obstacles or awkward protrusions are overlooked. But sensors, computers and techniques are improving, and success is in sight.

My efforts are in the race. In the 1980s at Carnegie Mellon we devised a way to distill large amounts of noisy sensor data into reliable maps by accumulating statistical evidence of emptiness or occupancy in each cell of a grid representing the surroundings. The approach worked well in two dimensions and still guides many of the robots described above.
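A minimal sketch of that evidence-grid idea follows. It uses the log-odds formulation that is standard in modern treatments (the original Carnegie Mellon work used a closely related probabilistic update), and the sensor-model probabilities are illustrative assumptions:

```python
import numpy as np

grid = np.zeros((50, 50))   # log-odds per cell; 0 means unknown (p = 0.5)
P_HIT, P_MISS = 0.85, 0.30  # assumed sensor model, not measured values

def update_cell(row: int, col: int, occupied: bool) -> None:
    """Accumulate one reading's evidence of occupancy or emptiness."""
    p = P_HIT if occupied else P_MISS
    grid[row, col] += np.log(p / (1 - p))

def occupancy_probability(row: int, col: int) -> float:
    return 1 / (1 + np.exp(-grid[row, col]))

# Three noisy readings of one cell: two say "occupied", one says "empty".
for reading in (True, True, False):
    update_cell(10, 20, reading)
print(f"P(occupied) = {occupancy_probability(10, 20):.2f}")  # ~0.93
```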

Three-dimensional maps, 1,000 times richer, promised to be much better but for years seemed computationally out of reach. In 1992 we used economies of scale and other tricks to reduce the computational costs of three-dimensional maps 100-fold. Continued research led us to found a company, Seegrid, that sold its first dozen robots by late 2007. These are load-pulling warehouse and factory “tugger” robots that, on command, autonomously follow routes learned in a single human-guided walk-through. They navigate by three-dimensionally grid-mapping their route, as seen through four wide-angle stereoscopic cameras mounted on a “head,” and require no guide wires or other navigational markers.

Robot, Version 1.0
In 2008 desktop PCs offer more than 10,000 MIPS. Seegrid tuggers, using slightly older processors doing about 5,000 MIPS, distill about one visual “glimpse” per second. A few thousand visually distinctive patches in the surroundings are selected in each glimpse, and their 3-D positions are statistically estimated. When the machine is learning a new route, these 3-D patches are merged into a chain of 3-D grid maps describing a 30-meter “tunnel” around the route. When the tugger is automatically retracing a taught path, the patches are compared with the stored grid maps. With many thousands of 3-D fuzzy patches weighed statistically by a so-called sensor model, which is trained offline using calibrated example routes, the system is remarkably tolerant of poor sight, changes in lighting, movement of objects, mechanical inaccuracies and other perturbations.

Seegrid’s computers, perception programs and end products are being rapidly improved and will gain new functionalities such as the ability to find, pick up and drop loads. The potential market for materials-handling automation is large, but most of it has been inaccessible to older approaches involving buried guide wires or other path markers, which require extensive planning and installation costs and create inflexible routes. Vision-guided robots, on the other hand, can be easily installed and rerouted.

Fast Replay
Plans are afoot to improve, extend and miniaturize our techniques so that they can be used in other applications. On the short list are consumer robot vacuum cleaners. Externally these may resemble the widely available Roomba machines from iRobot. The Roomba, however, is a simple beast that moves randomly, senses only its immediate obstacles and can get trapped in clutter. A Seegrid robot would see, explore and map its premises and would run unattended, with a cleaning schedule minimizing owner disturbances. It would remember its recharging locations, allowing for frequent recharges to run a powerful vacuum motor, and also would be able to frequently empty its dust load into a larger container.

Commercial success will provoke competition and accelerate investment in manufacturing, engineering and research. Vacuuming robots ought to beget smarter cleaning robots with dusting, scrubbing and picking-up arms, followed by larger multifunction utility robots with stronger, more dexterous arms and better sensors. Programs will be written to make such machines pick up clutter, store, retrieve and deliver things, take inventory, guard homes, open doors, mow lawns, play games, and so on. New applications will expand the market and spur further advances when robots fall short in acuity, precision, strength, reach, dexterity, skill or processing power. Capability, numbers sold, engineering and manufacturing quality, and cost-effectiveness will increase in a mutually reinforcing spiral. Perhaps by 2010 the process will have produced the first broadly competent “universal robots,” as big as people but with lizardlike 20,000-MIPS minds that can be programmed for almost any simple chore.

Like competent but instinct-ruled reptiles, first-generation universal robots will handle only contingencies explicitly covered in their application programs. Unable to adapt to changing circumstances, they will often perform inefficiently or not at all. Still, so much physical work awaits them in businesses, streets, fields and homes that robotics could begin to overtake pure information technology commercially.

A second generation of universal robot with a mouselike 100,000 MIPS will adapt as the first generation does not and will even be trainable. Besides application programs, such robots would host a suite of software “conditioning modules” that would generate positive and negative reinforcement signals in predefined circumstances. For example, doing jobs fast and keeping its batteries charged will be positive; hitting or breaking something will be negative. There will be other ways to accomplish each stage of an application program, from the minutely specific (grasp the handle underhand or overhand) to the broadly general (work indoors or outdoors). As jobs are repeated, alternatives that result in positive reinforcement will be favored, those with negative outcomes shunned. Slowly but surely, second-generation robots will work increasingly well.
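That conditioning-module idea can be caricatured as a simple preference learner. Everything in the sketch below is illustrative rather than from the article: two hypothetical ways to grasp a handle, with reinforcement slowly steering the robot toward whichever one succeeds more often:

```python
import random

values = {"grasp_underhand": 0.0, "grasp_overhand": 0.0}
LEARNING_RATE = 0.1
EXPLORE_PROB = 0.1

def choose_action() -> str:
    if random.random() < EXPLORE_PROB:      # occasionally try alternatives
        return random.choice(list(values))
    return max(values, key=values.get)      # otherwise use the favorite

def reinforce(action: str, reward: float) -> None:
    # Nudge the running value estimate toward the latest reinforcement signal.
    values[action] += LEARNING_RATE * (reward - values[action])

for _ in range(200):
    action = choose_action()
    # Pretend the underhand grasp succeeds more often (made-up task).
    success_prob = 0.8 if action == "grasp_underhand" else 0.4
    reward = 1.0 if random.random() < success_prob else -1.0
    reinforce(action, reward)

print(values)  # grasp_underhand should end up with the higher value
```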

A monkeylike five million MIPS will permit a third generation of robots to learn very quickly from mental rehearsals in simulations that model physical, cultural and psychological factors. Physical properties include shape, weight, strength, texture and appearance of things, and ways to handle them. Cultural aspects include a thing’s name, value, proper location and purpose. Psychological factors, applied to humans and robots alike, include goals, beliefs, feelings and preferences. Developing the simulators will be a huge undertaking involving thousands of programmers and experience-gathering robots. The simulation would track external events and tune its models to keep them faithful to reality. It would let a robot learn a skill by imitation and afford a kind of consciousness. Asked why there are candles on the table, a third-generation robot might consult its simulation of house, owner and self to reply that it put them there because its owner likes candlelit dinners and it likes to please its owner. Further queries would elicit more details about a simple inner mental life concerned only with concrete situations and people in its work area.

Fourth-generation universal robots with a humanlike 100 million MIPS will be able to abstract and generalize. They will result from melding powerful reasoning programs to third-generation machines. These reasoning programs will be the far more sophisticated descendants of today’s theorem provers and expert systems, which mimic human reasoning to make medical diagnoses, schedule routes, make financial decisions, configure computer systems, analyze seismic data to locate oil deposits, and so on.

Properly educated, the resulting robots will become quite formidable. In fact, I am sure they will outperform us in any conceivable area of endeavor, intellectual or physical. Inevitably, such a development will lead to a fundamental restructuring of our society. Entire corporations will exist without any human employees or investors at all. Humans will play a pivotal role in formulating the intricate complex of laws that will govern corporate behavior. Ultimately, though, it is likely that our descendants will cease to work in the sense that we do now. They will probably occupy their days with a variety of social, recreational and artistic pursuits, not unlike today’s comfortable retirees or the wealthy leisure classes.

The path I’ve outlined roughly recapitulates the evolution of human intelligence—but 10 million times more rapidly. It suggests that robot intelligence will surpass our own well before 2050. In that case, mass-produced, fully educated robot scientists working diligently, cheaply, rapidly and increasingly effectively will ensure that most of what science knows in 2050 will have been discovered by our artificial progeny!


ABOUT THE AUTHOR(S)

HANS MORAVEC is an adjunct professor at Carnegie Mellon University. He constructed his first mobile robot–an assemblage of tin cans, batteries, lights and a motor–at age 10. His current work focuses on enabling robots to determine their position and to navigate by a three-dimensional awareness of their surroundings. Since 2004 Moravec has been chief scientist of Seegrid Corporation, founded to commercialize “tuggers” and other robots for warehouses and factories.


Robotic animals in the future

Bio-inspired robotic locomotion is a fairly new subcategory of bio-inspired design. It is about learning concepts from nature and applying them to the design of real-world engineered systems; more specifically, this field is about making robots that are inspired by biological systems. Biomimicry and bio-inspired design are sometimes confused. Biomimicry is copying nature, while bio-inspired design is learning from nature and making a mechanism that is simpler and more effective than the system observed in nature. Biomimicry has led to the development of a different branch of robotics called soft robotics. Biological systems have been optimized for specific tasks according to their habitat; however, they are multifunctional and are not designed for only one specific function. Bio-inspired robotics is about studying biological systems and looking for the mechanisms that may solve problems in the engineering field. The designer should then try to simplify and enhance that mechanism for the specific task of interest. Bio-inspired roboticists are usually interested in biosensors (e.g. eyes), bioactuators (e.g. muscles), or biomaterials (e.g. spider silk). Most robots have some type of locomotion system; thus, in this article different modes of animal locomotion and a few examples of the corresponding bio-inspired robots are introduced.

How to make an air cooler


Introduction: How to Make a Simple Air Cooler – Low Cost

Hello friends, welcome to my new instructable. This instructable is about how you can make a simple air cooler at low cost, to get temporary relief from the heat in this hot summer.

So let’s start…

Step 1: Simple Air Cooler (Low Cost)

So friends, watch the video till the end to learn, step by step, how to make an air cooler easily…

Step 2: Things Needed –

> Thermocol sheet (thick) – 1 pc

> 12 V DC motor – 1 pc

> 12 V DC water pump – 1 pc (you can easily make your own)

> Wire – 5 m length

> Small chunk of wood

> 1 m of small-diameter pipe (I used a saline set as the pipe)

> Some pieces of cardboard

> Fan blade – 1 pc

> Fevicol (adhesive)

> Hot glue gun

> Knife – for cutting

> A polythene sheet

After getting all these things, watch the video and learn easily…

Step 3: Working of Air Cooler

The working of the air cooler is very simple, as you can easily understand from the image above. Warm air enters the cooler through the ventilator side; because the ventilator pad is wet, the hot air cools down as it passes through, and the fan blows the cooled air out to us. That’s all about the working of the air cooler.
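For the curious, a rough energy balance shows why the wet pad cools the air: every gram of water that evaporates absorbs about 2,260 joules from the passing air. The flow numbers in the sketch below are illustrative guesses for a small DIY build, not measurements from this project; only the two physical constants are standard:

```python
LATENT_HEAT_J_PER_KG = 2_260_000  # heat absorbed per kg of water evaporated
CP_AIR_J_PER_KG_K = 1_005         # specific heat capacity of air

air_flow_kg_per_s = 0.02          # assumed small-fan airflow (~17 L/s)
evaporation_kg_per_s = 1e-5       # assumed 10 mg of water evaporated per second

cooling_watts = evaporation_kg_per_s * LATENT_HEAT_J_PER_KG
temp_drop_k = cooling_watts / (air_flow_kg_per_s * CP_AIR_J_PER_KG_K)
print(f"Cooling power: {cooling_watts:.1f} W")
print(f"Approximate temperature drop: {temp_drop_k:.1f} K")  # ~1.1 K
```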

If you have any questions regarding this, then ask me in the comment box…

Thanks…

