Thursday 22 November 2012

Facebook proposes to end voting on privacy issues


4:36 PM EST, November 21, 2012

NEW YORK (AP) — Facebook is proposing to end its practice of letting users vote on changes to its privacy policies, though it will continue to let users comment on proposed updates.

The world's biggest social media company said in a blog post Wednesday that its voting mechanism, which is triggered only if enough people comment on proposed changes, has become a system that emphasizes quantity of responses over quality of discussion. Users tend to leave one- or two-word comments objecting to changes instead of more in-depth responses.

Facebook said it will continue to inform users of "significant changes" to its privacy policy, called its data use policy, and to its statement of user rights and responsibilities. The company will keep its seven-day comment period and take users' feedback into consideration.


"We will also provide additional notification mechanisms, including email, for informing you of those changes," wrote Elliot Schrage, Facebook's vice president of communications, public policy and marketing, in the post.

Facebook began letting users vote on privacy changes in 2009. Since then, it has gone public and its user base has ballooned from around 200 million to more than 1 billion. As part of the 2009 policy, users' votes only count if more than 30% of all Facebook's active users partake. That did not happen during either of the two times users voted and it's unlikely that it will now, given that more than 300 million people would have to participate.

Jules Polonetsky, director of the Future of Privacy Forum, an industry-backed think tank in Washington, said the voting process was a "noble experiment" that didn't lead to informed debate.

Facebook said in June that it was reviewing how to get the best feedback from users on its policies.

Facebook is also proposing changes to its data use policy, such as making it clear that when users hide a post or photo from their profile page, the "timeline," those posts are not truly hidden and can be visible elsewhere, including on another person's page.

Polonetsky called Facebook's data use policy "kind of a good handbook" and a "reasonable read" on how to navigate the site's complex settings.

But most people don't read the privacy policies of websites they frequent, even Facebook's.

"I certainly recommend that people read it, but most users just want to poke someone and like someone and look at a picture," Polonetsky said.

Facebook's task, he added, will be to continue to evolve its user interface — the part of the site that its users interact with — so that answers to questions are obvious and people don't need to wade through the policy.

Thursday 1 November 2012





New Technology Allows Better Extreme Weather Forecasts

After the deafening roar of a thunderstorm, an eerie silence descends. Then the blackened sky over Joplin, Mo., releases the tentacles of an enormous, screaming multiple-vortex tornado. Winds exceeding 200 miles per hour tear a devastating path three quarters of a mile wide for six miles through the town, destroying schools, a hospital, businesses and homes and claiming roughly 160 lives.
Nearly 20 minutes before the twister struck on the Sunday evening of May 22, 2011, government forecasters had issued a warning. A tornado watch had been in effect for hours and a severe weather outlook for days. The warnings had come sooner than they typically do, but apparently not soon enough. Although emergency officials were on high alert, many local residents were not.
The Joplin tornado was only one of many twister tragedies in the spring of 2011. A month earlier a record-breaking swarm of tornadoes devastated parts of the South, killing more than 300 people. April was the busiest month ever recorded, with about 750 tornadoes.
At 550 fatalities, 2011 was the fourth-deadliest tornado year in U.S. history. The stormy year was also costly. Fourteen extreme weather and climate events in 2011—from the Joplin tornado to hurricane flooding and blizzards—each caused more than $1 billion in damages. The intensity continued early in 2012; on March 2, twisters killed more than 40 people across 11 Midwestern and Southern states.
Tools for forecasting extreme weather have advanced in recent decades, but researchers and engineers at the National Oceanic and Atmospheric Administration are working to enhance radars, satellites and supercomputers to further lengthen warning times for tornadoes and thunderstorms and to better determine hurricane intensity and forecast floods. If the efforts succeed, a decade from now residents will get an hour’s warning about a severe tornado, for example, giving them plenty of time to absorb the news, gather family and take shelter.
The Power of Radar
Meteorologist Doug Forsyth is heading up efforts to improve radar, which plays a role in forecasting most weather. Forsyth, who is chief of the Radar Research and Development division at NOAA’s National Severe Storms Laboratory in Norman, Okla., is most concerned about improving warning times for tornadoes because deadly twisters form quickly and radar is the forecaster’s primary tool for sensing a nascent tornado.
Radar works by sending out radio waves that reflect off particles in the atmosphere, such as raindrops or ice or even insects and dust. By measuring the strength of the waves that return to the radar and how long the round-trip takes, forecasters can see the location and intensity of precipitation. The Doppler radar currently used by the National Weather Service also measures the frequency change in returning waves, which provides the direction and speed at which the precipitation is moving. This key information allows forecasters to see rotation occurring inside thunderstorms before tornadoes form.
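The two measurements described above reduce to simple relationships: range follows from the round-trip travel time of the pulse, and the radial speed of the precipitation follows from the Doppler frequency shift. Below is a minimal sketch of both; the numeric values (an assumed S-band transmit frequency and a 0.4 ms echo delay) are illustrative assumptions, not figures from the article.

# Minimal sketch of the two radar relationships described above.
# All numbers are illustrative, not taken from any particular radar.

C = 3.0e8  # speed of light, m/s

def target_range_m(round_trip_time_s: float) -> float:
    """Range from the pulse's round-trip time: the wave travels out and back."""
    return C * round_trip_time_s / 2

def radial_velocity_ms(doppler_shift_hz: float, transmit_freq_hz: float) -> float:
    """Radial speed of the reflecting particles from the Doppler shift.
    The factor of 2 appears because the wave is shifted on reflection and return."""
    return doppler_shift_hz * C / (2 * transmit_freq_hz)

# Example: an echo returning after 0.4 ms, seen by an assumed 2.8 GHz radar
print(target_range_m(0.4e-3))          # ~60,000 m: precipitation about 60 km away
print(radial_velocity_ms(560, 2.8e9))  # ~30 m/s toward the radar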

Tuesday 9 October 2012


Sleeping Brain Behaves as If It's Remembering Something

UCLA researchers have for the first time measured the activity of a brain region known to be involved in learning, memory and Alzheimer's disease during sleep. They discovered that this part of the brain behaves as if it's remembering something, even under anesthesia, a finding that counters conventional theories about memory consolidation during sleep.
The research team simultaneously measured the activity of single neurons from multiple parts of the brain involved in memory formation. The technique allowed them to determine which brain region was activating other areas of the brain and how that activation was spreading.

The researchers looked at three connected brain regions in mice -- the new brain, or neocortex; the old brain, or hippocampus; and the entorhinal cortex, an intermediate region that connects the new and the old brains. While previous studies have suggested that the dialogue between the old and the new brain during sleep is critical for memory formation, researchers had not investigated the contribution of the entorhinal cortex to this conversation.
"When you go to sleep, you can make the room dark and quiet, and although there is no sensory input, the brain is still very active," Mehta said. "We wanted to know why this was happening and what different parts of the brain were saying to each other."
During sleep, the neocortex goes into a slow wave pattern for about 90 percent of that time. During this period, its activity slowly fluctuates between active and inactive states about once every second. Mehta and his team focused on the entorhinal cortex, which has many parts.
The outer part of the entorhinal cortex mirrored the neocortical activity. However, the inner part behaved differently. When the neocortex became inactive, the neurons in the inner entorhinal cortex persisted in the active state, as if they were remembering something the neocortex had recently "said," a phenomenon called spontaneous persistent activity. Further, they found that when the inner part of the entorhinal cortex became spontaneously persistent, it prompted the hippocampus neurons to become very active. On the other hand, when the neocortex was active, the hippocampus became quieter. This data provided a clear interpretation of the conversation.

Thursday 4 October 2012


4G
In telecommunications, 4G is the fourth generation of mobile phone communications standards and the successor to the third-generation (3G) standards. A 4G system provides mobile ultra-broadband Internet access, for example to laptops with USB wireless modems, to smartphones, and to other mobile devices. Conceivable applications include improved mobile web access, IP telephony, gaming services, high-definition mobile TV, video conferencing and 3D television. Recently, Android and Windows-enabled cellular devices have also fallen into the 4G category. One basic advantage of 4G is that it can provide, even while the user is travelling, an Internet data transfer rate higher than any existing cellular service (excluding fixed broadband and Wi-Fi connections).
Two 4G candidate systems are commercially deployed: the Mobile WiMAX standard (first in South Korea in 2006) and the first-release Long Term Evolution (LTE) standard (in Scandinavia since 2009). It has, however, been debated whether these first-release versions should be considered 4G, since they do not meet the technical requirements described below.
In the U.S., Sprint Nextel has deployed Mobile WiMAX networks since 2008, and MetroPCS was the first operator to offer LTE service in 2010. USB wireless modems have been available since the start, while WiMAX smartphones have been available since 2010 and LTE smartphones since 2011. Equipment made for different continents is not always compatible because of different frequency bands. Mobile WiMAX is currently (as of April 2012) not available for the European market.
In March 2008, the International Telecommunication Union Radiocommunication Sector (ITU-R) specified a set of requirements for 4G standards, named the International Mobile Telecommunications Advanced (IMT-Advanced) specification, setting peak speed requirements for 4G service at 100 megabits per second (Mbit/s) for high-mobility communication (such as from trains and cars) and 1 gigabit per second (Gbit/s) for low-mobility communication (such as pedestrians and stationary users).[4]
Since the first-release versions of Mobile WiMAX and LTE support much less than 1 Gbit/s peak bit rate, they are not fully IMT-Advanced compliant, but they are often branded 4G by service providers. On December 6, 2010, the ITU-R recognized that these two technologies, as well as other beyond-3G technologies that do not fulfill the IMT-Advanced requirements, could nevertheless be considered "4G", provided they represent forerunners to IMT-Advanced compliant versions and "a substantial level of improvement in performance and capabilities with respect to the initial third generation systems now deployed".
Data speeds of LTE
Peak download: 100 Mbit/s
Peak upload: 50 Mbit/s

Data speeds of WiMAX
Peak download: 128 Mbit/s
Peak upload: 56 Mbit/s
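As a rough illustration of what these peak rates mean, the sketch below computes the theoretical best-case time to transfer a file at each quoted peak download rate. The 700 MB file size is an arbitrary example, and real-world throughput is far below these peaks.

# Rough illustration only: theoretical best-case transfer time at the peak
# download rates quoted above. Actual throughput is much lower in practice.

def download_time_seconds(file_size_mb: float, peak_rate_mbit_s: float) -> float:
    """Time to move a file at a given peak link rate (8 bits per byte)."""
    return file_size_mb * 8 / peak_rate_mbit_s

# Example: a 700 MB file (an arbitrary illustrative size)
for name, rate in [("LTE, 100 Mbit/s", 100), ("WiMAX, 128 Mbit/s", 128)]:
    print(f"{name}: {download_time_seconds(700, rate):.1f} s")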

Tuesday 25 September 2012


Using Precisely-Targeted Lasers, Researchers Manipulate Neurons in Worms' Brains and Take Control of Their Behavior



In the quest to understand how the brain turns sensory input into behavior, Harvard scientists have crossed a major threshold. Using precisely-targeted lasers, researchers have been able to take over an animal's brain, instruct it to turn in any direction they choose, and even to implant false sensory information, fooling the animal into thinking food was nearby.
A team made up of Sharad Ramanathan, an Assistant Professor of Molecular and Cellular Biology and of Applied Physics; Askin Kocabas, a Post-Doctoral Fellow in Molecular and Cellular Biology; Ching-Han Shen, a Research Assistant in Molecular and Cellular Biology; and Zengcai V. Guo, from the Howard Hughes Medical Institute, was able to take control of Caenorhabditis elegans -- tiny, transparent worms -- by manipulating neurons in the worms' "brain."
The work, Ramanathan said, is important because, by taking control of complex behaviors in a relatively simple animal -- C. elegans has just 302 neurons -- we can understand how its nervous system functions.
"If we can understand simple nervous systems to the point of completely controlling them, then it may be a possibility that we can gain a comprehensive understanding of more complex systems," Ramanathan said. "This gives us a framework to think about neural circuits, how to manipulate them, which circuit to manipulate and what activity patterns to produce in them."
"Extremely important work in the literature has focused on ablating neurons, or studying mutants that affect neuronal function, and mapping out the connectivity of the entire nervous system," he added. "Most of these approaches have discovered neurons necessary for specific behavior by destroying them. The question we were trying to answer was: Instead of breaking the system to understand it, can we essentially hijack the key neurons that are sufficient to control behavior and use these neurons to force the animal to do what we want?"

Before Ramanathan and his team could begin to answer that question, however, they needed to overcome a number of technical challenges.
Using genetic tools, the researchers engineered worms whose neurons gave off fluorescent light, allowing them to be tracked during experiments. The researchers also altered genes in the worms to make their neurons sensitive to light, meaning the neurons could be activated with pulses of laser light.
The largest challenges, though, came in developing the hardware necessary to track the worms and target the correct neuron in a fraction of a second.
"The goal is to activate only one neuron," he explained. "That's challenging because the animal is moving, and the neurons are densely packed near its head, so the challenge is to acquire an image of the animal, process that image, identify the neuron, track the animal, position your laser and shoot the particularly neuron -- and do it all in 20 milliseconds, or about 50 times a second. The engineering challenges involved seemed insurmountable when we started. But Askin Kocabas found ways to overcome these challenges"

Friday 21 September 2012

Technical drawbacks due to Diesel hike 
ANALYSIS: Diesel vs Petrol

The diesel price hike would add Rs 4 to Rs 17 per kg to the cost of producing yarn, causing unrest in the handloom and powerloom segments of the textile industry, the Southern India Mills' Association (SIMA) said today. There would also be a substantial increase in indirect costs such as transportation.
Tamil Nadu accounts for one-third of the country's textile business, giving direct employment to 50 lakh people, fetching Rs 50,000 crore in forex earnings and accounting for 47 per cent of yarn production.
The state has to procure over 95 per cent of its raw materials, particularly cotton, from states like Gujarat and Maharashtra.
Over 6 million handloom and powerloom weavers across the country have been suffering from the abnormal cost of inputs over the last three years, and the price hike would lead to unrest in these sectors, the association cautioned.
SIMA also demanded immediate roll back of the hike to protect the industry and the people who depended on it.
Though the response to the bandh was mixed, a Confederation of Indian Industry (CII) release said that Thursday's bandh was disruptive for business and trade in many parts of the country. The release added that, though an exact figure for the entire economy is not known, the loss to the country from disruptions in production and trade can be estimated at almost Rs 12,500 crore.
"After a cut in our spending on buying vegetables and other essentials, we many now have cut down even the cooking," said Rama Devi, a housewife at a protest in the neighbouring state Andhra Pradesh, where effigies were also burned.
Can't the government bring about a change?