
PLASMA


Plasma is a phase of matter distinct from solids, liquids, and gases. It is the most abundant phase of matter in the universe; both stars and much of the interstellar medium consist of plasma. Although it is its own phase of matter, plasma is often described as an ionized gas: it is similar to a normal gas, except that electrons have been stripped from their respective nuclei and float freely within the plasma. Even if only 1% of the atoms have lost their electrons, a gas will display plasma-like behavior.
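
To make the "1% of the atoms" remark concrete, the degree of ionization is usually defined as the fraction of particles that have been stripped of their electrons; a minimal sketch with made-up densities:

```python
# Degree of ionization: fraction of atoms that have lost electrons.
# The densities below are hypothetical, chosen only to illustrate the ~1% case.

def ionization_degree(n_ions, n_neutrals):
    """Return n_i / (n_i + n_n), the standard definition of the ionization degree."""
    return n_ions / (n_ions + n_neutrals)

n_ions = 1.0e16      # ionized atoms per cubic metre (made-up value)
n_neutrals = 1.0e18  # neutral atoms per cubic metre (made-up value)

alpha = ionization_degree(n_ions, n_neutrals)
print(f"Degree of ionization: {alpha:.2%}")  # ~1%, already enough for plasma-like behaviour
```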

Plasma is electrically conductive and can be manipulated by magnetic fields. It can be found in a variety of everyday contexts, including plasma displays, fluorescent lamps, neon signs, plasma balls, photolithographic etching machines, flames, lightning, aurora borealis, tesla coils, and more.

Plasmas vary widely. Some parameters used to classify them are the degree of ionization, temperature, magnetic field strength, and particle density. For example, the gas in a candle flame is only very slightly ionized, whereas the air in the path of a lightning bolt is highly ionized. Some plasmas are extremely tenuous, like the intergalactic medium, while others are extremely hot and dense, like the center of a star.

Unlike gases, which are composed of neutral atoms, plasmas contain distinct charged constituents that behave of their own accord. Free electrons are negatively charged. The nuclei, lacking electrons, are positively charged ions. Most plasmas also still contain whole atoms, which are electrically neutral. Since each of these components can respond differently to changes in external and internal conditions, a variety of complex wave-like phenomena can emerge.

Liquid Crystal


At one time it was firmly believed that there were three, and only three, states of matter: solid, liquid, and gas. This was still the case in 1888, when Friedrich Reinitzer, an Austrian chemist working at the University of Prague, was studying a cholesterol-based substance that didn't seem to fit his expectations. As he tried to determine the melting point, he found that the substance, which was a solid crystal at room temperature, had two distinct melting points, at 293.9ºF (145.5ºC) and 353.3ºF (178.5ºC). Between those two points it was a cloudy liquid, and when heated above the second point it became transparent. Reinitzer consulted Otto Lehmann, an expert in crystal optics, who realized that the cloudy liquid was a previously unrecognized state of matter, for which he coined the name liquid crystal.
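
As a quick check of the two transition temperatures quoted above, a one-line Celsius-to-Fahrenheit conversion reproduces the figures:

```python
# Verify the two transition temperatures Reinitzer measured (values from the text).
def c_to_f(celsius):
    return celsius * 9 / 5 + 32

for c in (145.5, 178.5):
    print(f"{c} °C = {c_to_f(c):.1f} °F")
# 145.5 °C = 293.9 °F
# 178.5 °C = 353.3 °F
```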

A liquid crystal is a substance that is considered to be in between its solid and liquid phases. Often, its molecules are shaped like plates or rods — shapes that tend to align in a certain direction. The molecular order in liquid crystal can be altered by exposing it to electric, magnetic, or mechanical forces.

There are two main phases for a liquid crystal. In the nematic phase, which is close to being liquid, the molecules float about freely but remain aligned in a common direction. A special case of the nematic phase, called the cholesteric phase, forms a twisted structure that reflects visible light in a temperature-dependent color pattern. The link between temperature and color allows such liquid crystals to be used in thermometers.

The other phase is the smectic phase. In this phase, the liquid crystal is close to solid and is ordered in layers. The molecules move within their layers, but not between layers.

The liquid crystal display (LCD) was developed in Princeton, New Jersey, at the David Sarnoff Research Center in 1963. Monochrome LCD digital watches were first manufactured in the 1970s, and the first commercial LCD television was built in 1988. LCD color computer monitors began to be sold in the 1990s, and outsold CRT monitors for the first time in 2003. As prices for LCD technology fell, LCD televisions went on to outsell plasma sets as well.

TIBET – Historical & Contemporary


Self-immolation is the latest mode of protest adopted by the Tibetans. This is an act of desperation as there are no other viable avenues to express their grievances. This mode is not a Tibetan innovation as it has been resorted to by others before. However, unlike other cases where a single self-immolation captured international headlines and triggered major reactions, in the case of Tibet the effect has been different. Despite nearly a hundred persons having immolated themselves over the last few years, these events have passed by without much notice, let alone reaction.

This double standard of the international community is partly to be blamed on the Tibetans themselves. They failed to think and act like a nation according to the general trend in their neighbourhood and the rest of the world. They preoccupied themselves with religion and closed themselves to outside influence. Tibetan leaders bartered away their sovereignty for protection in the garb of a patron-priest relationship with China. Tibetans allowed their martial instincts, well known in their recorded history from the seventh century onwards, to be subdued. In short, Tibetans preoccupied themselves with the next life, forsaking the ways of living this present ‘conventional’ life.

Historical evidence suggests that Tibet as a nation had inadvertently committed major blunders, and that the people of Tibet, both inside and outside Tibet at the present time, are bearing the harsh consequences of those blunders. Tibet is an ancient country with a recorded history of its existence since the seventh century. Its foundation, basic characteristics and consolidation as a distinct country took shape under the reign of 42 ingenious kings, who ruled from around 127 BC up to 842 AD. In the seventh to ninth centuries, Tibet emerged as a formidable military power in Central Asia and adopted expansionist activities towards its neighbours. The King of Nepal and the Emperor of China had to offer their daughters to the Tibetan Emperor in marriage. However, when Lang Dharma, the last of the aforementioned kings, was assassinated in 842 AD, Tibet underwent a period of turmoil and fragmented into small principalities.

This period has been referred to by Tibetan historians as "Sil-bu-dus", a veritable dark age, but in reality it was a period of cultural renaissance in Tibet. During this period, Buddhism was transformed from a courtly interest into a social force which permeated every aspect of Tibetan life. Moreover, different schools of Tibetan Buddhism started flourishing during this period. It was the Mongols' invasion of Tibet and the handing over of the rule of Tibet to the Sakya Lama that eventually paved the way for a system of rulers in whose hands both earthly authority and the prestige of religious sanctity were united, and the whole of Tibet was once again brought under one central authority. The rule of the Lamas, first by the Sakyas (1247-1358) and later by the Dalai Lamas (1642-1959), brought about the historic transition from royal authority based on force to a lama-ist authority based on religious belief. The predominance of religion had the effect of neglecting statecraft and killing the martial spirit of the Tibetans.

In order to protect the lama-ist rule from external threats, a unique patron-priest relationship developed between the rulers of China and Tibet. Under this system, the Chinese rulers accepted the lama rulers of Tibet as their spiritual leaders and, in return, provided military protection to the latter. However, when the protector itself started posing a threat, after the Communist takeover in Beijing, it became necessary for the Tibetan government to interact with the rest of the world.

In 1950, the Tibetan government sent missions to India, Nepal, Britain and the United States to explain the crisis developing in its relationship with the new regime in Beijing, to inform them of the threat of China's action against Tibet, and to seek their assistance. It also sent an appeal to the United Nations (UN) on November 7, 1950. In a letter to the Secretary General, it explained:

Tibet recognize that she is in no position to resist (The Chinese advance)…. This unwarranted act of aggression…has created a grave situation in Tibet and may eventually deprive Tibet of her long cherished independence…. We therefore appeal through you to the nations of the world to intercede in our behalf and restrain Chinese aggression.

Neither these countries nor the UN responded positively to Tibet’s pleas for assistance. When a full-scale military attack was launched on Tibet on October 5, 1950, Tibetan soldiers fought bravely at Chamdo but were defeated. Tibet was also faced with a diplomatic set-back as the UN decided to defer the discussion on the Tibet issue mainly on the ground of its unclear status. With these setbacks, Tibet had no alternative but to sign the contentious Seventeen-Point Agreement on May 23, 1951. However, the simmering discontent and resistance to China’s policy in Tibet continued throughout the 1950s and finally erupted into a full-scale national uprising against Chinese rule in March 1959.

When, for the first time, a full-scale discussion on Tibet took place in the plenary session of the UN General Assembly, Tibet was discussed not as a nation subjected to aggression and colonial occupation but under the diluted term "human rights violation", thus evading any reference to the political situation. A resolution was passed in that august body calling for "respect for the fundamental human rights of the Tibetan people and their distinctive cultural and religious life". The draft resolution was passed by the General Assembly by 46 votes to nine, with 26 abstentions. Subsequently, two more resolutions of a similar nature were passed in the General Assembly in 1961 and 1965. Thus, according to the UN, the ancient nation of Tibet was not qualified to be treated as a nation-state; nor did the august international body consider Tibet to be an occupied territory. It simply denigrated the issue as merely a question of the denial of human rights of the Tibetan people by the Chinese state.

Despite these setbacks, Tibet did not become a lost cause. Nationalist feelings in Tibet remained uncontaminated even after persistent Chinese indoctrination, which was manifested for the first time in the spontaneous outpouring of emotion in greeting the fact-finding delegations of exiled Tibetans in the 1980s and 1990s. Thereafter, Tibetans in Tibet used every opportunity to protest against Chinese rule and express their aspiration for independence. They resorted to various innovative methods of protest (discussed earlier), leading to the imposition of martial law in Tibet in March 1989.

This was the period when western countries were celebrating their victory over communism with the end of the Cold War and the disintegration of the former Soviet Union, and when their people were expecting governments to promote their cherished values worldwide. In this context the governments and parliaments, especially in the West, needed to be seen doing something for Tibetans to placate their public. Their high-sounding resolutions and pronouncements proved to be futile as they lacked the political will to take concerted and coordinated action against China. In fact, as their stand on human rights clashed with their core national interests of markets, trade and investment opportunities, human rights became side-lined. In order to satisfy themselves and their general public, western leaders seem to have persuaded the exiled Tibetan leaders to engage in dialogue with China and induced them to give up Tibet's core issue of independence as a concession to start the dialogue. The series of dialogues that ensued between the exiled leaders and the Chinese government proved futile, mainly because the pressure exerted by the western governments was not compelling enough for China to yield ground. In hindsight, it is obvious that China engaged in these fruitless rounds of dialogue to buy time with the intention of finding a "final solution" by its own means.

Tibetans, both in Tibet and in the diaspora, are in deep despair due to the lack of progress in the dialogue process despite giving in on their most cherished goal of independence, as well as because of the lack of visible reaction to the loss of nearly a hundred precious lives to self-immolation. However, their decision to celebrate the centenary of the "Proclamation of Tibet Independence" issued on February 13, 1913, seems to indicate that they seek to resurrect the issue once again. This time round, the comity of nations should not blame the Tibetans for going back on their stand of independence or on their willingness to work within the Chinese constitution. It would not be a surprise if they do; after all, state authorities always take a stand on the basis of costs and benefits.

Tibetans should learn to focus on mobilizing the support of and relying on international civil society as this is an era of civil society activism the world over. For the international civil society, the Tibet issue is the test case to prove that they stand for what they claim to stand for.

FARADAY SHIELD – read only if Interested!


Courtesy: VISWAROOPAM

Imagine flying in an airplane that’s suddenly struck by lightning. This isn’t a rare occurrence — it actually happens regularly, yet the plane and its passengers aren’t affected. That’s because the aluminum hull of the plane creates a Faraday cage. The charge from the lightning can pass harmlessly over the surface of the plane without damaging the equipment or people inside.

Your car, for example, is basically a Faraday cage. It’s the cage’s effect, not the rubber tires, that protects you in case of a nearby lightning strike.

A lot of buildings act as Faraday cages, too, if only by accident. With their plaster or concrete walls strewn with metal rebar or wire mesh, they often wreak havoc with wireless Internet networks and cellphone signals.

But the shielding effect most often benefits humankind. Microwave ovens reverse the effect, trapping waves within a cage and quickly cooking your food. Screened TV cables help to maintain a crisp, clear image by reducing interference.

Power utility linemen often wear specially made suits that exploit the Faraday cage concept. Within these suits, the linemen can work on high-voltage power lines with a much-reduced risk of electrocution.

Governments can protect vital telecommunications equipment from lightning strikes and other electromagnetic interference by building Faraday cages around them. Science labs at universities and corporations employ advanced Faraday cages to completely exclude all external electric charges and electromagnetic radiation to create a totally neutral testing environment for all sorts of experiments and product development.

Swing by a hospital and you'll find Faraday cages in the form of MRI (magnetic resonance imaging) rooms. MRI scans rely on powerful magnetic fields to create medically useful images of the human body. MRI rooms must be shielded to prevent stray electromagnetic fields from affecting a patient's diagnostic images.

There are plenty of political and military uses for Faraday cages, too. Politicians may opt to discuss sensitive matters only in shielded rooms that can block out eavesdropping technologies. All modern armed forces depend on electronics for communications and weapons systems, but there's a catch: these systems are vulnerable to EMPs (electromagnetic pulses), which can result from a solar storm or even a man-made EMP attack. To safeguard critical systems, militaries sometimes use shielded bunkers and vehicles.

Electrostatic for the People

In order to understand how Faraday cages work, you need a basic understanding of how electricity operates in conductors. The process is simple: Metal objects, such as an aluminum mesh, are conductors, and have electrons (negatively charged particles) that move around in them. When no electrical charge is present, the conductor has roughly the same number of commingling positive and negative particles.

If an external object with an electrical charge approaches the conductor, the positive and negative charges within it separate. Free electrons are drawn towards an external positive charge, crowding the near side of the conductor; if the external charge is negative, the electrons are repelled and move to the far side, leaving the near side with a net positive charge. This redistribution of charges is called electrostatic induction.

With the external charged object present, the positive and negative particles wind up on opposite sides of the conductor. The result is an opposing electric field that cancels out the field of the external object’s charge inside the metal conductor. The net electric charge inside the aluminum mesh, then, is zero.

And here's the real kicker. Although there's no charge inside the conductor, the opposing electric field does have an important effect: it shields the interior from exterior static electric charges and also from electromagnetic radiation, like radio waves and microwaves. Therein lies the true value of Faraday cages.

The effectiveness of this shielding varies depending on the cage’s construction. Variations in the conductivity of different metals, such as copper or aluminum, affect the cage’s function. The size of the holes in the screen or mesh also changes the cage’s capabilities and can be adjusted depending on the frequency and wavelength of the electromagnetic radiation you want to exclude from the interior of the cage.
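
One common rule of thumb treats a hole as effectively "closed" once it is much smaller than the wavelength being blocked, and a rough shielding-effectiveness estimate for a single aperture is 20·log10(wavelength / (2 × hole size)) dB. The sketch below applies that approximation; the oven and Wi-Fi numbers are illustrative, and this is a back-of-the-envelope estimate rather than a proper EMC calculation.

```python
# Rough aperture-shielding estimate for a Faraday cage or mesh.
# Uses the common approximation SE ≈ 20*log10(wavelength / (2 * hole_size)) dB,
# valid only when the hole is much smaller than the wavelength.
import math

C = 3.0e8  # speed of light, m/s

def shielding_estimate_db(frequency_hz, hole_size_m):
    wavelength = C / frequency_hz
    if hole_size_m >= wavelength / 2:
        return 0.0  # hole comparable to the wavelength: essentially no shielding
    return 20 * math.log10(wavelength / (2 * hole_size_m))

# Microwave-oven door mesh: 2.45 GHz microwaves vs. ~2 mm holes
print(shielding_estimate_db(2.45e9, 2e-3))   # ~30 dB: the waves stay inside the oven
# The same mesh against 5 GHz Wi-Fi
print(shielding_estimate_db(5.0e9, 2e-3))    # ~24 dB
```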

Faraday cages sometimes go by other names. They can be called Faraday shields, RF (radio frequency) cages, or EMF (electromotive force) cages.

An Overview of Fast Track Courts


Recently, Delhi witnessed large scale protests by various groups demanding stricter punishment and speedier trial in cases of sexual assault against women.  In light of the protests, the central government has constituted a Commission (headed by Justice Verma) to suggest possible amendments in the criminal law to ensure speedier disposal of cases relating to sexual assault.  Though the Supreme Court, in 1986, had recognised speedy trial to be a fundamental right, India continues to have a high number of pending cases.

In 2012, the net pendency in High Courts and subordinate courts decreased by over 6 lakh cases. However, there is still a substantial backlog of cases across various courts in the country.  As per the latest information given by the Ministry of Law and Justice, there are 43.2 lakh cases pending in the High Courts and 2.69 crore cases pending in the district courts.[1]

After the recent gang-rape of a 23-year-old girl, the Delhi High Court directed the state government to establish five Fast Track Courts (FTCs) for the expeditious adjudication of cases relating to sexual assault.  According to a news report, other states such as Maharashtra and Tamil Nadu have also begun the process of establishing FTCs for rape cases.  In this blog, we look at the status of pending cases in various courts in the country, the number of vacancies of judges and the status of FTCs in the country.

Vacancies in the High Courts and the Subordinate Courts

One of the reasons for the long delay in the disposal of cases is the high number of vacant positions for judges in the High Courts and the District Courts of the country.  As of December 1, 2012, the working strength of High Court judges was 613 as against the sanctioned strength of 895 judges.  This reflects a 32% vacancy of judges across various High Courts in the country.  The highest number of vacancies is in the Allahabad High Court, with a working strength of 86 judges against the sanctioned strength of 160 judges (i.e. a vacancy of 74 judges).  The situation is not much better at the subordinate level.  As on September 30, 2011, the sanctioned strength of judges at the subordinate level was 18,123 judges as against a working strength of 14,287 judges (i.e. 21% vacancy).  The highest vacancy is in Gujarat with 794 vacancies of judges, followed by Bihar with 690 vacancies.
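
The vacancy percentages above follow directly from the sanctioned and working strengths; a small sketch using only the figures quoted in this paragraph:

```python
# Judge vacancy rates, computed from the sanctioned and working strengths quoted above.
def vacancy_rate(sanctioned, working):
    return (sanctioned - working) / sanctioned * 100

print(f"High Courts: {vacancy_rate(895, 613):.0f}% vacant")              # ~32%
print(f"Subordinate courts: {vacancy_rate(18123, 14287):.0f}% vacant")   # ~21%
print(f"Allahabad HC vacancies: {160 - 86}")                             # 74
```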

Fast Track Courts

The 11th Finance Commission had recommended a scheme for the establishment of 1734 FTCs for the expeditious disposal of cases pending in the lower courts.  In this regard, the Commission had allocated Rs 500 crore.  FTCs were to be established by the state governments in consultation with the respective High Courts.  An average of five FTCs were to be established in each district of the country.  The judges for these FTCs were appointed on an ad hoc basis.  The judges were selected by the High Courts of the respective states.  There are primarily three sources of recruitment: first, by promoting members from amongst eligible judicial officers; second, by appointing retired High Court judges; and third, from amongst members of the Bar of the respective state.

FTCs were initially established for a period of five years (2000-2005).  However, in 2005, the Supreme Court[2] directed the central government to continue with the FTC scheme, which was extended until 2010-2011.  The government discontinued the FTC scheme in March 2011.  Though the central government stopped giving financial assistance to the states for establishing FTCs, the state governments could establish FTCs from their own funds.  The decision of the central government not to finance the FTCs beyond 2011 was challenged in the Supreme Court.  In 2012, the Court upheld the decision of the central government.[3]  It held that the state governments have the liberty to decide whether they want to continue with the scheme or not.  However, if they decide to continue then the FTCs have to be made a permanent feature.

As of September 3, 2012, some states such as Arunachal Pradesh, Assam, Maharashtra, Tamil Nadu and Kerala decided to continue with the FTC scheme.  However, some states such as Haryana and Chhattisgarh decided to discontinue it. Other states such as Delhi and Karnataka have decided to continue the FTC scheme only till 2013.[4]

Reviewing regulations in the sugar sector


There have been some recent developments in the sugar sector, which pertain to the pricing of sugarcane and deregulation of the sector.  On January 31, the Cabinet approved the fair and remunerative price (FRP) of sugarcane for the 2013-14 season at Rs 210 per quintal, a 23.5% increase from last year’s FRP of Rs 170 per quintal.  The FRP of sugarcane is the minimum price set by the centre and is payable by mills to sugarcane farmers throughout the country.  However, states can also set a State Advised Price (SAP) that mills would have to pay farmers instead of the FRP.
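
The 23.5% figure quoted above is simply the year-on-year change in the FRP; a quick arithmetic check using the two prices from this paragraph:

```python
# Year-on-year increase in the fair and remunerative price (FRP) of sugarcane.
old_frp = 170  # Rs per quintal, last year's FRP (from the text)
new_frp = 210  # Rs per quintal, 2013-14 FRP (from the text)

increase = (new_frp - old_frp) / old_frp * 100
print(f"FRP increase: {increase:.1f}%")  # 23.5%
```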

In addition, a recent news report mentioned that the food ministry has decided to seek Cabinet approval to lift controls on sugar, particularly relating to levy sugar and the regulated release of non-levy sugar.

The Rangarajan Committee report, published in October 2012, highlighted challenges in the pricing policy for sugarcane.  The Committee recommended deregulating the sugar sector with respect to pricing and levy sugar.

In this blog, we discuss the current regulations related to the sugar sector and key recommendations for deregulation suggested by the Rangarajan Committee.

Current regulations in the sugar sector

A major step to liberate the sugar sector from controls was taken in 1998, when the licensing requirement for new sugar mills was abolished.  Delicensing caused the sugar sector to grow at almost 7% annually between 1998-99 and 2011-12, compared to 3.3% annually between 1990-91 and 1997-98.

Although delicensing removed some regulations in the sector, others still persist.  For instance, every designated mill is obligated to purchase sugarcane from farmers within a specified cane reservation area, and conversely, farmers are bound to sell to the mill.  Also, the central government has prescribed a minimum radial distance of 15 km between any two sugar mills.

However, the Committee found that existing regulations were stunting the growth of the industry and recommended that the sector be deregulated.  It was of the opinion that deregulation would enable the industry to leverage the expanding opportunities created by the rising demand for sugar and for sugarcane as a source of renewable energy.

Rangarajan Committee’s recommendations on deregulation of the sugar sector

Price of sugarcane: The central government fixes a minimum price, the FRP, that is paid by mills to farmers.  States can also intervene in sugarcane pricing with an SAP to strengthen farmers' interests.  States such as Uttar Pradesh and Tamil Nadu have set SAPs for the past few years, which have been higher than the FRP.

The Committee recommended that states should not declare an SAP because it imposes an additional cost on mills.  Farmers should be paid a uniform FRP.  It suggested determining cane prices according to scientifically sound and economically fair principles.  The Committee also felt that high SAPs, combined with other controls in the sector, would deter private investment in the sugar industry.

Levy sugar: Every sugar mill mandatorily surrenders 10% of its production to the central government at a price lower than the market price – this is known as levy sugar.  This enables the central government to get access to low cost sugar stocks for distribution through the Public Distribution System (PDS).  At present prices, the centre saves about Rs 3,000 crore on account of this policy, the burden of which is borne by the sugar sector.

The Committee recommended doing away with levy sugar.  States wanting to provide sugar under PDS would have to procure it directly from the market.

Regulated release of non-levy sugar: The central government allows the release of non-levy sugar into the market on a periodic basis.  Currently, release orders are given on a quarterly basis.  Thus, sugar produced over the four-to-six month sugar season is sold throughout the year by distributing the release of stock evenly across the year.  The regulated release of sugar imposes costs directly on mills (and hence indirectly on farmers).  Mills can neither take advantage of high prices to sell the maximum possible stock, nor dispose of their stock to raise cash for meeting various obligations.  This adversely impacts the ability of mills to pay sugarcane farmers in time.

The Committee recommended removing the regulations on release of non-levy sugar to address these problems.

Trade policy: The government has set controls on both the export and import of sugar that fluctuate depending on the domestic availability, demand and price of sugarcane.  As a result, India's share in the world trade of sugar is small.  Even though India contributes 17% of global sugar production (it is the second largest producer in the world), its share in exports is only 4%.  This has come at the cost of considerable instability for the sugarcane industry and its production.

The committee recommended removing existing restrictions on trade in sugar and converting them into tariffs.

Solar War – India – US


Domestic content requirements

Domestic content requirements compel firms to purchase a certain percentage of their inputs from domestic firms as a precondition for local market access or preferential policy treatment. In general, domestic content requirements act as a protectionist measure, since they usually improve the competitive position of domestic firms in relation to foreign firms. Nonetheless, the ultimate effect of domestic content requirements depends on the form of the requirements, the characteristics of demand, market structure, and the nature of the production process.

The US is trying to drag India to the WTO over the DCR. India's position is that it is not a signatory to the WTO Agreement on Government Procurement and hence there is no violation.

Economic Indicators: Purchasing Managers Index (PMI)


By Ryan Barnes

Release Date: The first business day of the month
Release Time: 10am Eastern Standard Time
Coverage: Previous month’s data
Released By: Institute for Supply Management (ISM)
Latest Release: http://www.ism.ws/ISMReport/

Background
The Institute for Supply Management (ISM) is responsible for maintaining the Purchasing Managers Index (PMI), which is the headline indicator in the monthly ISM Report on Business. The ISM is a non-profit group boasting more than 40,000 members engaged in the supply management and purchasing professions.

The PMI is a composite index of five "sub-indicators", which are derived from surveys of more than 400 purchasing managers around the country, chosen for their geographic and industry diversification. The five sub-indexes are given the following weightings:

  • Production level (.25)
  • New orders (from customers) (.30)
  • Supplier deliveries – (are they coming faster or slower?) (.15)
  • Inventories (.10)
  • Employment level (.20)

A diffusion index is calculated from the survey answers, which come in only three options; managers can respond with "better", "same", or "worse" to questions about the industry as they see it. The resulting PMI figure (which can range from 0 to 100) is calculated by taking the percentage of respondents that reported better conditions than the previous month and adding to that total half of the percentage of respondents that reported no change in conditions. For example, a PMI reading of 50 would indicate an equal number of respondents reporting "better conditions" and "worse conditions".
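
Putting that description into a small sketch: each sub-index is a diffusion index (percent answering "better" plus half the percent answering "same"), and the headline PMI is the weighted sum using the weights listed earlier in this article. The survey counts below are invented purely for illustration; the actual ISM methodology may differ in detail.

```python
# Hypothetical PMI calculation following the description in this article:
# sub-index = %better + 0.5 * %same, headline PMI = weighted sum of the sub-indexes.

WEIGHTS = {
    "new_orders": 0.30,
    "production": 0.25,
    "employment": 0.20,
    "supplier_deliveries": 0.15,
    "inventories": 0.10,
}

def diffusion_index(better, same, worse):
    total = better + same + worse
    return (better + 0.5 * same) / total * 100

# Made-up survey responses: counts of managers answering (better, same, worse)
responses = {
    "new_orders": (180, 160, 60),
    "production": (150, 200, 50),
    "employment": (120, 220, 60),
    "supplier_deliveries": (100, 250, 50),
    "inventories": (90, 240, 70),
}

pmi = sum(WEIGHTS[k] * diffusion_index(*responses[k]) for k in WEIGHTS)
print(f"Headline PMI: {pmi:.1f}")  # readings above 50 suggest the sector is expanding
```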

What it Means for Investors
PMI is a very important sentiment reading, not only for manufacturing, but also the economy as a whole. Although U.S. manufacturing is not the huge component of total gross domestic product (GDP) that it once was, this industry is still where recessions tend to begin and end. For this reason, the PMI is very closely watched, setting the tone for the upcoming month and other indicator releases.

The magic number for the PMI is 50. A reading of 50 or higher generally indicates that the industry is expanding. If manufacturing is expanding, the general economy should be doing likewise. As such, it is considered a good indicator of future GDP levels. Many economists will adjust their GDP estimates after reading the PMI report. Another useful figure to remember is 42. An index level higher than 42%, over time, is considered the benchmark for economic (GDP) expansion. The different levels between 42 and 50 speak to the strength of that expansion. If the number falls below 42%, recession could be just around the corner. (To learn more, read Recession: What Does It Mean To Investors?)

As with many other indicators, the rate of change from month to month is vital. A reading of 51 (expanding manufacturing industry) coming after a month with a reading of 56 would not be seen favorably by the markets, especially if the economy had been showing solid growth previously.

The PMI can be considered a hybrid indicator in that it has actual data elements but also a confidence element, like the Consumer Confidence Index. Answers are subjective and may reflect perceptions as much as actual events. Both elements can have value for investors looking to get a sense of managers' actual experiences as well as the headline PMI level itself.

Bond markets may look more intently at the growth in supplier deliveries and prices paid areas of the report, as these have been historical pivot points for inflationary concerns. Bond markets will usually move in advance of an anticipated interest rate move, sending yields lower if rate cuts are expected and vice versa. (For more insight, see Get Acquainted With Bond Price/Yield Duo.)

PMI is considered a leading indicator in the eyes of the Fed, as evidenced by its mention in the FOMC minutes that are publicly released after its closed-door meetings. The supplier deliveries component itself is an official variable in calculating the Conference Board’s U.S. Leading Index.

There are regional purchasing manager reports, some of which come out earlier than the PMI for a given month, but the PMI is the only national indicator.

Strengths:

  • Very timely, coming out on the first day of the month following the survey month
  • A good predictor of future releases, such as GDP and the Bureau of Labor Statistics (BLS) manufacturing reports
  • Anecdotal remarks within the release can provide a more complete perspective from actual professionals (like in the Beige Book).
  • Report displays point changes from the previous report, along with the length in months of any long-term trends shown for the “sub-indicators”, such as inventories or prices.
  • Commodities, such as silver, steel and copper are reported individually regarding the supply tightness and price levels noted in the previous month.

Weaknesses:

  • Only covers the manufacturing sector – the ISM Non-Manufacturing Report on Business covers many other industries in the same manner
  • Survey is very subjective in its data retrieval compared to other indicators.
  • Regional reports released earlier (Philly Fed, Chicago NAPM) may have high correlations and can take some of the steam out of this release.

The Closing Line
The PMI is a uniquely constructed, timely indicator with a lot of value on Wall Street.

It is most useful when taken in context with more data-driven indicators, such as the Producer Price Index and GDP, or in conjunction with the ISM Non-Manufacturing Report on Business.

Read more: http://www.investopedia.com/university/releases/napm.asp

India's 3-Stage Nuclear Programme


BARC

 

India has consciously proceeded to explore the possibility of tapping nuclear energy for power generation. The Atomic Energy Act was framed and implemented with the objective of using two naturally occurring elements, uranium and thorium, which have good potential to be utilised as nuclear fuel in Indian nuclear power reactors. The estimated natural deposits of these elements in India are:

 

  • Natural Uranium deposits – ~70,000 tonnes

  • Thorium deposits – ~ 3,60,000 tonnes

 

Indian Nuclear Power Generation: A Three-Stage Programme

 

  • STAGE 1 » Pressurised Heavy Water Reactor

  • STAGE 2 » Fast Breeder Reactor

  • STAGE 3 » Breeder Reactor

 

STAGE 1 » Pressurised Heavy Water Reactor using

 

  • Natural UO2 as fuel matrix

  • Heavy water as moderator and coolant

 

Natural uranium's isotopic composition is 0.7% fissile U-235; the rest is U-238. In the reactor, the U-235 fissions to generate power, while a part of the U-238 captures neutrons and is converted into Pu-239.

 

  • The first two plants were boiling water reactors based on imported technology. Subsequent plants are of the PHWR type, built through indigenous R&D efforts. India has achieved complete self-reliance in this technology, and this stage of the programme is in the industrial domain.

 

The future plan includes:

 

  • Setting up of VVER-type plants based on Russian technology is in progress to augment power generation.

  • MOX (mixed oxide) fuel has been developed and introduced at Tarapur to conserve fuel and to develop new fuel technology.

 


Reprocessing of spent fuel » By an Open Cycle or a Closed Cycle mode.

“Open cycle” refers to disposal of the entire spent fuel as waste after subjecting it to proper waste treatment.

This results in huge under-utilisation of the energy potential of uranium (only ~2% is exploited).

“Closed cycle” refers to chemical separation of the U-238 and Pu-239, which are then recycled, while the other radioactive fission products are separated, sorted according to their half-lives and activity, and appropriately disposed of with minimum environmental disturbance.

 

  • Both the options are in practice.

  • As a part of their long-term energy strategy, Japan and France have opted for the “closed cycle”.

  • India preferred a closed cycle mode in view of its phased expansion of nuclear power generation extending through the second and third stages.

  • India has developed indigenous technology for the reprocessing of spent fuel, as well as a waste management programme, through its own comprehensive R&D efforts; reprocessing plants have been set up and are in operation, thereby attaining self-reliance in this domain.

 

STAGE 2 » Fast Breeder Reactor

 

India's second stage of nuclear power generation envisages the use of Pu-239, obtained from the first stage reactor operation, as the fuel core in fast breeder reactors (FBRs). The main features of the FBR are:

 

  • Pu-239 serves as the main fissile element in the FBR

  • A blanket of U-238 surrounding the fuel core will undergo nuclear transmutation to produce fresh Pu-239 as more and more Pu-239 is consumed during the operation.

  • Besides a blanket of Th-232 around the FBR core also undergoes neutron capture reactions leading to the formation of U-233. U-233 is the nuclear reactor fuel for the third stage of India’s Nuclear Power Programme.

  • It is technically feasible to produce sustained energy output of 420 GWe from FBR.

  • The setting up of a Pu-239 fuelled Fast Breeder Reactor of 500 MWe capacity is at an advanced stage of completion. Concurrently, it is proposed to use thorium-based fuel, along with a small feed of plutonium-based fuel, in Advanced Heavy Water Reactors (AHWRs). The AHWRs are expected to shorten the period of reaching the stage of large-scale thorium utilization.

 

STAGE 3 » Breeder Reactor

 

The third phase of India's nuclear power generation programme is breeder reactors using U-233 fuel. India's vast thorium deposits permit the design and operation of U-233 fuelled breeder reactors.

 

  • U-233 is obtained from the nuclear transmutation of Th-232 used as a blanket in the second phase Pu-239 fuelled FBR.

  • Besides, U-233 fuelled breeder reactors will have a Th-232 blanket around the U-233 reactor core, which will generate more U-233 as the reactor operates. More and more U-233 fuel is thus produced from the Th-232 blanket as the U-233 in the fuel core is consumed, helping to sustain the long-term fuel requirement for power generation.

  • These U-233/Th-232 based breeder reactors are under development and would serve as the mainstay of the final thorium utilization stage of the Indian nuclear programme. The currently known Indian thorium reserves amount to 358,000 GWe-yr of electrical energy and can easily meet the energy requirements during the next century and beyond.

 

 

Fast Breeder Reactors, which use plutonium, are so called because they have no moderator (heavy water or light water), relying instead on fast neutrons, and because they breed more fuel than they consume.

Photonics vs Optonics


Both deal with the same subject, light, but they differ in their approach: photonics models light as a wave of massless photons and obeys the rules of quantum mechanics, while optonics models light as a beam (a ray) and obeys the rules of geometrical optics.

Infrastructure Status


The finance ministry has asked the Reserve Bank to consider giving infrastructure status to the housing sector, and relax provisioning norms for it so banks can extend attractive loans to buyers.

RBI has mandated that banks set aside from their profits an amount equal to 1% of total standard assets in commercial real estate, which also includes housing projects. This means that if a bank lends Rs 100 towards a commercial real estate project, it will have to keep aside Rs 1 to offset any loan to the sector turning bad. The provisioning rises to 15% of net investment in the case of a secured sub-standard asset.

How does a CT scan work?


A CT scanner emits a series of narrow beams through the human body as it moves through an arc, unlike an X-ray machine which sends just one radiation beam. The final picture is far more detailed than an X-ray one.

Inside the CT scanner there is an X-ray detector which can see hundreds of different levels of density. It can see tissues inside a solid organ. This data is transmitted to a computer, which builds up a 3-D cross-sectional picture of the part of the body and displays it on the screen.
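
The "many narrow beams from many angles" idea is what tomographic reconstruction exploits. Below is a toy sketch of the principle using scikit-image: the Radon transform simulates the projections a scanner would collect, and filtered back-projection rebuilds the cross-section. It is a simplified illustration, not how clinical scanner software actually works, and it assumes numpy and scikit-image are installed.

```python
# Toy demonstration of CT-style reconstruction: simulate projections of a test
# image from many angles (the Radon transform), then rebuild the cross-section
# with filtered back-projection.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                      # standard synthetic "body slice"
angles = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(image, theta=angles)              # one column of data per beam angle
reconstruction = iradon(sinogram, theta=angles)    # filtered back-projection (ramp filter)

error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"Reconstruction RMS error: {error:.4f}")    # small error: the slice is recovered
```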

Pulsar / Neutron Star


A neutron star is a stellar remnant–a super-compressed object left over when stars with a mass between 1.4 and about 3 times the mass of our Sun exhaust their nuclear fuel and collapse inwards. The result is a condensed sphere of matter about 20 km (12 miles) across, with a gravitational field approximately 2 x 10^11 times stronger than that of Earth’s.

The density of a neutron star is so great that the protons and electrons making up the atoms fuse to form electrically neutral neutrons, the primary particles making up the neutron star. Because they are electrically neutral, such particles can be packed very closely together, resulting in a celestial object with similar density to that of the atomic nucleus.

The neutron star is an exotic astronomical object whose existence was predicted by theory 35 years before one was actually discovered in 1968. The escape velocity for a neutron star is approximately half the speed of light. The tallest "mountains" on such a star measure in millimeters (fractions of an inch) rather than kilometers (miles). Because the rotation speed of the star accelerates as it collapses, tremendous rotation rates may be achieved, with equatorial speeds on the order of 30,000 km/sec (18,640 mi/sec), or one rotation every millisecond or two. When these rapidly rotating stars emit electromagnetic radiation that can be detected on Earth, it is received in regular pulses, prompting the name "pulsar."
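
The "half the speed of light" and "2 x 10^11 times Earth's gravity" figures can be sanity-checked with Newtonian formulas. The sketch below assumes a typical 1.4-solar-mass star with a 10 km radius (half the ~20 km diameter quoted above); general relativity would modify the numbers somewhat.

```python
# Back-of-the-envelope neutron star figures, assuming M = 1.4 solar masses and
# R = 10 km (half the ~20 km diameter quoted above). Newtonian formulas only.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
C = 2.998e8          # speed of light, m/s
G_EARTH = 9.81       # m/s^2

M = 1.4 * M_SUN
R = 10e3             # m

v_escape = math.sqrt(2 * G * M / R)
surface_gravity = G * M / R**2

print(f"Escape velocity: {v_escape:.2e} m/s (~{v_escape / C:.2f} c)")      # roughly half of c
print(f"Surface gravity: {surface_gravity / G_EARTH:.1e} times Earth's")   # ~2e11
```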

Formed from the cores of expired suns, the neutron star is home to exotic forms of matter found nowhere else in the universe: nuclei composed of huge numbers of neutrons with no orbiting electrons, free neutrons floating in a superdense "neutronium" soup, and possibly exotic forms of matter such as pions or kaons. These are particles composed of unusual configurations or types of quarks, the constituents of subatomic particles. Because conventional atomic forms of matter would be ripped to shreds by the immense gravity and pressure of a neutron star, we may never be able to perform experiments or observations on such objects directly. The primary types of neutron stars include the X-ray burster, pulsar, and magnetar.


Lithium-ion Battery


Lithium-ion (Li-ion) batteries pack high energy density into a tiny package, making them the ideal choice for devices such as laptops and cell phones. Commercialized in 1991 by Sony, lithium-ion batteries provided a superior alternative to the prevalent nickel-cadmium (Ni-Cad) batteries of the day.

Lithium has long been desirable for batteries because it is the lightest of all metals, making it a tantalizing choice for a portable energy source. In fact, ever since the 1970s, lithium-based batteries have been available in a non-rechargeable form. Watch batteries are one well-known example.

The relative instability of the lithium proved even more apparent during charging, leading to its slow adoption as a rechargeable battery. The end result is a compromise where the name says it all – lithium ion batteries use only the ions rather than the metal itself. The outcome is a much more stable though slightly less powerful energy source ideal for recharging. And even with the decrease in power, lithium ion based batteries still deliver more than double the voltage of nickel-cadmium.

Other than higher power and lower weight, Li-ion batteries are user friendly as well. Unlike their predecessor, the nickel-cadmium battery, lithium-ion batteries do not suffer from the "memory effect." That is, the battery does not have to be fully discharged before being recharged. On the other hand, earlier nickel-cadmium batteries would "remember" the point at which they were recharged, leading them to charge only to that point again. Later nickel-metal-hydride batteries also solved this problem.

Lithium-ion batteries do, however, have something of the opposite quirk that users should be wary of: they shouldn't be run all the way down before charging, and respond much better to constant recharges. Battery gauges, on the other hand, are often thrown off by this practice and display incorrect readings. This leads some people to believe a memory effect exists, when in fact it's the meter that needs to be reset. Draining the battery all the way down every 30 charges or so can recalibrate the gauge.

Eventually all rechargeable lithium-ion batteries will meet their end. After about two to three years, Li-ion batteries expire, whether or not they are being used. To prolong the battery when not in use, store it in a cool, dry place at approximately 40 percent capacity. Also, avoid exposing a lithium-ion battery to extreme temperatures for prolonged periods of time, and recharge constantly when in use. When it's time to eventually dispose of a battery, lithium-ion batteries are much safer than many other types of rechargeable batteries, allowing them to be safely placed in the trash. As with most other things, if recycling is an option, that is the best one of all.

Basic principles on which Kepler Telescope works to find Exoplanets


One of the great problems in the search for exoplanets is detecting the darn things. Most are simply too small and too far away to be observed directly. Our Earth-based telescopes can't resolve a faraway planet as a dot separate from its host star. Luckily, astronomers have other means at their disposal, and they all call for sophisticated telescopes armed with photometers (devices that measure light), spectrographs and infrared cameras.

The first method, known as the wobble method, looks for changes in a star’s relative velocity caused by the gravitational tug of a nearby planet. These tugs cause the star to surge toward Earth and then away, creating periodic variations that we can detect by analyzing the spectrum of light from the star. As it surges toward Earth, its light waves are compressed, shortening the wavelength and shifting the color to the blue side of the spectrum. As it surges away from Earth, its light waves spread out, increasing the wavelength and shifting the color to the red side of the spectrum. Larger planets intensify the wobble of their parent stars, which is why this technique has been so efficient at finding gas giants several times larger than Earth.
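
To get a feel for how small these spectral shifts are, here is a rough sketch for a Jupiter-like planet around a Sun-like star: the star's reflex velocity is roughly the planet's orbital speed scaled by the planet-to-star mass ratio, and the fractional wavelength shift is that velocity divided by the speed of light (non-relativistic Doppler). The input values are approximate illustrations, not mission data.

```python
# Rough size of the "wobble" signal for a Jupiter-like planet around a Sun-like star.
# Reflex velocity of the star ~ (planet mass / star mass) * planet orbital speed;
# fractional Doppler shift ~ v / c (non-relativistic approximation).

C = 3.0e8            # speed of light, m/s
M_RATIO = 1 / 1047   # Jupiter mass / Sun mass (approximate)
V_PLANET = 13.1e3    # Jupiter's orbital speed, m/s (approximate)

v_star = M_RATIO * V_PLANET
shift = v_star / C

print(f"Stellar reflex velocity: ~{v_star:.1f} m/s")   # ~12.5 m/s
print(f"Fractional wavelength shift: ~{shift:.1e}")     # ~4e-8, a tiny blue/red shift
```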

What’s one thing that all planets can do well? Block light. If a planet’s orbit crosses between its parent star and Earth, it will block some of the light and cause the star to dim. Astronomers call this a transit, and the related planet-hunting technique the transit method. Telescopes equipped with sensitive photometers can easily discern large planets, but they can also catch even the slight dimming caused by an Earth-sized object.
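
The size of the dip is set by the ratio of the planet's and star's cross-sectional areas, which is why the photometer has to be so sensitive; a quick sketch with approximate radii:

```python
# Transit depth: the fraction of starlight blocked is (planet radius / star radius)^2.
R_SUN = 696_000      # km (approximate)
R_JUPITER = 71_492   # km (approximate)
R_EARTH = 6_371      # km (approximate)

def transit_depth(r_planet_km, r_star_km=R_SUN):
    return (r_planet_km / r_star_km) ** 2

print(f"Jupiter-sized planet: {transit_depth(R_JUPITER):.2%} dip")  # ~1%
print(f"Earth-sized planet:   {transit_depth(R_EARTH):.4%} dip")    # ~0.008%
```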

Finally, some astronomers have been turning to a technique known as microlensing. Microlensing occurs when one star passes precisely in front of another star. When this happens, the gravity of the foreground star acts like a magnifying lens and amplifies the brightness of the background star. If a planet orbits the foreground star, its additional gravity intensifies the amplification effect. This handily reveals the planet, which would otherwise be invisible to other detection techniques.