Tuesday, 28 September 2010

Electronic warfare

Electronic warfare (EW) refers to any action involving the use of the electromagnetic spectrum or directed energy to control the spectrum, attack an enemy, or impede enemy assaults via the spectrum. The purpose of electronic warfare is to deny the opponent the advantage of, and ensure friendly unimpeded access to, the EM spectrum. EW can be applied from air, sea, land, and space by manned and unmanned systems, and can target communication, radar, or other services.[1] EW includes three major subdivisions: Electronic Attack (EA), Electronic Protection (EP), and Electronic Warfare Support (ES).

Electronic Support

Electronic Warfare Support (ES) is the subdivision of EW involving actions tasked by, or under direct control of, an operational commander to search for, intercept, identify, and locate or localize sources of intentional and unintentional radiated electromagnetic (EM) energy for the purpose of immediate threat recognition, targeting, planning, and conduct of future operations.[1]
An overlapping discipline, signals intelligence (SIGINT), is the related process of analyzing and identifying intercepted transmissions (e.g., determining whether a signal originates from a mobile phone or a radar). SIGINT is broken into three categories: ELINT, COMINT, and FISINT.
Where these activities are under the control of an operational commander and applied for the purpose of situational awareness, threat recognition, or EM targeting, they also serve the purpose of electronic warfare support (ES).

Electronic attack

Electronic attack (EA) or electronic countermeasures (ECM) involves the use of electromagnetic energy, directed energy, or anti-radiation weapons to attack personnel, facilities, or equipment with the intent of degrading, neutralizing, or destroying enemy combat capability, and is considered a form of fires (see Joint Publication [JP] 3-09, Joint Fire Support).[1]
EA operations can be detected by an adversary due to their active transmissions. Many modern EA techniques are considered highly classified. Examples of EA include communications jamming, suppression of integrated air defense systems (IADS), directed-energy/laser attack, expendable decoys (e.g., flares and chaff), and counter radio-controlled improvised explosive device (C-RCIED) systems.

Electronic Protection

A right front view of a USAF Boeing E-4 advanced airborne command post (AABNCP) on the electromagnetic pulse (EMP) simulator (HAGII-C) for testing.
Electronic Protection (EP) (previously known as electronic protective measures (EPM) or electronic counter-countermeasures (ECCM)) involves actions taken to protect personnel, facilities, and equipment from any effects of friendly or enemy use of the electromagnetic spectrum that degrade, neutralize, or destroy friendly combat capability. Jamming is not part of EP; it is an EA measure.
The use of flare rejection logic on an IR missile to counter an adversary's use of flares is EP. While defensive EA actions and EP both protect personnel, facilities, capabilities, and equipment, EP protects from the effects of EA (friendly and/or adversary). Other examples of EP include spread spectrum technologies, use of the Joint Restricted Frequency List (JRFL), emissions control (EMCON), and low observability or "stealth".[1]

( source: http://en.wikipedia.org/wiki/Electronic_Warfare )

Electronic countermeasures

Electronic countermeasures (ECM) are a subsection of electronic warfare which includes any sort of electrical or electronic device designed to trick or deceive radar, sonar, or other detection systems such as infrared (IR) and laser. It may be used both offensively and defensively to deny targeting information to an enemy. The system may make many separate targets appear to the enemy, or make the real target appear to disappear or move about randomly. It is used effectively to protect aircraft from guided missiles. Most air forces use ECM to protect their aircraft from attack; the same is true of military ships and, more recently, of some advanced tanks, which use it to fool laser/IR-guided missiles. ECM is frequently coupled with stealth advances so that the ECM system has an easier job. Offensive ECM often takes the form of jamming. Defensive ECM includes using blip enhancement and jamming of missile terminal homers.

History

One of the first examples of electronic countermeasures being applied in a combat situation took place during the Russo-Japanese War. On April 15, 1904, Russian wireless telegraphy stations installed in the Port Arthur fortress and on board Russian light cruisers successfully interrupted wireless communication between a group of Japanese battleships. The spark-gap transmitters in the Russian stations radiated senseless noise while the Japanese were attempting to coordinate their efforts in the bombardment of a Russian naval base. Germany and Great Britain interfered with enemy communications along the Western Front during World War I, while the Royal Navy tried to intercept German naval radio transmissions.[1] There were also efforts at sending false radio signals, having shore stations send transmissions using ships' call signs, and jamming enemy radio signals.[1] During World War II, ECM expanded to include jamming and spoofing of RADAR and navigation signals.[1] Cold War developments included missiles designed to home in on enemy RADAR transmitters.[1]

RADAR ECM

Basic RADAR ECM strategies are (1) RADAR interference, (2) target modifications, and (3) changing the electrical properties of air.[1] Interference techniques include jamming and deception. Jamming is accomplished by a friendly platform transmitting signals on the RADAR frequency to produce a noise level sufficient to hide echoes.[1] The jammer's continuous transmissions will provide a clear direction to the enemy RADAR, but no range information.[1] Deception may use a transponder to mimic the RADAR echo with a delay to indicate incorrect range.[1] Transponders may alternatively increase return echo strength to make a small decoy appear to be a larger target.[1] Target modifications include RADAR absorbing coatings and modifications of the surface shape to either "stealth" a high-value target or enhance reflections from a decoy.[1] Dispersal of small aluminum strips called chaff is a common method of changing the electromagnetic properties of air to provide confusing RADAR echoes.[1]
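The range-deception idea above follows directly from how radar measures range. A minimal sketch (illustrative values only, not modeled on any real system): range is derived from the echo's round-trip time as R = c·t/2, so a transponder that repeats the pulse after an added delay shifts the apparent range by c·delay/2.

```python
# Sketch of range deception: radar infers range from round-trip echo time,
# R = c * t / 2, so adding a transponder delay inflates the apparent range.
# All values are illustrative.

C = 299_792_458.0  # speed of light, m/s

def echo_range(round_trip_s: float) -> float:
    """True range implied by a round-trip echo time."""
    return C * round_trip_s / 2.0

def deceived_range(round_trip_s: float, transponder_delay_s: float) -> float:
    """Apparent range when a transponder repeats the pulse after a delay."""
    return C * (round_trip_s + transponder_delay_s) / 2.0

true_rt = 2 * 15_000.0 / C               # echo from a target 15 km away
print(echo_range(true_rt))               # ~15000 m
print(deceived_range(true_rt, 50e-6))    # a 50 us delay adds roughly 7.5 km
```

The same arithmetic explains why a noise jammer denies range but not bearing: noise destroys the timing measurement while still arriving from a measurable direction.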

Aircraft ECM

ECM is practiced by nearly all modern military units—land, sea or air. Aircraft, however, are the primary weapons in the ECM battle because they can "see" a larger patch of earth than a sea- or land-based unit. When employed effectively, ECM can keep aircraft from being tracked by search radars, or targeted by surface-to-air missiles or air-to-air missiles. On aircraft, ECM can take the form of an attachable underwing pod or can be embedded in the airframe. Active Electronically Scanned Array (AESA) radars like those mounted on the F-22, MiG-35, Su-35BM or the F-35 can also act as an ECM device to track, locate and eventually jam enemy radar. Previous radar types were not capable of performing these activities due to:
  • the inability of the antenna to use suboptimal frequencies
  • the processing power needed
  • the impracticality of intermixing or segmenting antenna usage

Future Airborne Jammers

The Next Generation Jammer will be carried on the EA-18G and F-35 fighters and will use AESA technologies in side-mounted pods to provide all-around coverage with highly selective directional jamming.
DARPA's Precision Electronic Warfare (PREW) project aims to combine AESA with synthetic aperture radar spread over multiple platforms for very tightly focused jamming.[2]
The Air Force Research Laboratory is exploring the concept of a Cognitive Jammer to deal with Dynamic Spectrum Access technologies.[3]


Heat and Sound Analogies

Infrared homing systems can be decoyed with flares.[1] Sound detection and homing systems used for ships are also susceptible to countermeasures. United States warships use Masker and PRAIRIE (PRopellor AIR Ingestion and Emission) systems to create small air bubbles around a ship's hull and wake to reduce sound transmission.[1] Surface ships tow noisemakers like the AN/SLQ-25 Nixie to decoy homing torpedoes.[1] Submarines can deploy similar acoustic device countermeasures (or ADCs) from a 3-inch (75-mm) signal launching tube.[1] United States ballistic missile submarines could deploy the Mark 70 MOSS (MObile Submarine Simulator) decoy from torpedo tubes to simulate a full size submarine.[1]

Shipboard ECM

The ULQ-6 deception transmitter was one of the earlier shipboard ECM installations.[4] The Raytheon SLQ-32 shipboard ECM package came in three versions providing warning, identification and bearing information about RADAR-guided cruise missiles.[4] The SLQ-32 V3 included quick reaction electronic countermeasures for cruisers and large amphibious ships and auxiliaries in addition to the RBOC (Rapid Blooming Off-board Chaff) launchers found on most surface ships.[4] The BLR-14 Submarine Acoustic Warfare System (or SAWS) provides an integrated receiver, processor, display, and countermeasures launch system for submarines.[4]

Electronics

Surface mount electronic components
Electronics is the branch of science and technology which makes use of the controlled motion of electrons through different media and vacuum. The ability to control electron flow is usually applied to information handling or device control. Electronics is distinct from electrical science and technology, which deals with the generation, distribution, control and application of electrical power. This distinction started around 1906 with the invention by Lee De Forest of the triode, which made electrical amplification possible with a non-mechanical device. Until 1950, this field was called "radio technology" because its principal application was the design and theory of radio transmitters, receivers and vacuum tubes.
Most electronic devices today use semiconductor components to perform electron control. The study of semiconductor devices and related technology is considered a branch of physics, whereas the design and construction of electronic circuits to solve practical problems come under electronics engineering. This article focuses on engineering aspects of electronics.

Electronic devices and components

An electronic component is any physical entity in an electronic system used to affect the electrons or their associated fields in a desired manner consistent with the intended function of the electronic system. Components are generally intended to be connected together, usually by being soldered to a printed circuit board (PCB), to create an electronic circuit with a particular function (for example an amplifier, radio receiver, or oscillator). Components may be packaged singly or in more complex groups as integrated circuits. Some common electronic components are capacitors, resistors, diodes, transistors, etc. Components are often categorized as active (e.g. transistors and thyristors) or passive (e.g. resistors and capacitors).

Types of circuits

Circuits and components can be divided into two groups: analog and digital. A particular device may consist of circuitry that has one or the other or a mix of the two types.

Analog circuits

Hitachi J100 adjustable frequency drive chassis.
Most analog electronic appliances, such as radio receivers, are constructed from combinations of a few types of basic circuits. Analog circuits use a continuous range of voltage as opposed to discrete levels as in digital circuits.
The number of different analog circuits so far devised is huge, especially because a 'circuit' can be defined as anything from a single component to systems containing thousands of components.
Analog circuits are sometimes called linear circuits although many non-linear effects are used in analog circuits such as mixers, modulators, etc. Good examples of analog circuits include vacuum tube and transistor amplifiers, operational amplifiers and oscillators.
One rarely finds modern circuits that are entirely analog. These days analog circuitry may use digital or even microprocessor techniques to improve performance. This type of circuit is usually called "mixed signal" rather than analog or digital.
Sometimes it may be difficult to differentiate between analog and digital circuits as they have elements of both linear and non-linear operation. An example is the comparator which takes in a continuous range of voltage but only outputs one of two levels as in a digital circuit. Similarly, an overdriven transistor amplifier can take on the characteristics of a controlled switch having essentially two levels of output.

Digital circuits

Digital circuits are electric circuits based on a number of discrete voltage levels. Digital circuits are the most common physical representation of Boolean algebra and are the basis of all digital computers. To most engineers, the terms "digital circuit", "digital system" and "logic" are interchangeable in the context of digital circuits. Most digital circuits use two voltage levels labeled "Low" (0) and "High" (1). Often "Low" will be near zero volts and "High" will be at a higher level depending on the supply voltage in use. Ternary (with three states) logic has been studied, and some prototype computers made.
Computers, electronic clocks, and programmable logic controllers (used to control industrial processes) are constructed of digital circuits. Digital Signal Processors are another example.

Heat dissipation and thermal management

Heat generated by electronic circuitry must be dissipated to prevent immediate failure and improve long term reliability. Techniques for heat dissipation can include heat sinks and fans for air cooling, and other forms of computer cooling such as water cooling. These techniques use convection, conduction, and radiation of heat energy.

Noise

Noise is associated with all electronic circuits. Noise is defined[1] as unwanted disturbances superposed on a useful signal that tend to obscure its information content. Noise is not the same as signal distortion caused by a circuit. Noise may be electromagnetically or thermally generated; the latter can be decreased by lowering the operating temperature of the circuit. Other types of noise, such as shot noise, cannot be removed, as they are due to limitations in physical properties.
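The temperature dependence of thermal noise has a standard closed form worth showing: the open-circuit Johnson-Nyquist noise voltage of a resistor is v_rms = sqrt(4·k·T·R·B). The component values below are illustrative.

```python
# Johnson-Nyquist (thermal) noise: v_rms = sqrt(4 * k * T * R * B).
# Illustrative values: a 1 kOhm resistor over a 10 kHz bandwidth.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_vrms(resistance_ohm: float, temp_k: float,
                       bandwidth_hz: float) -> float:
    """RMS open-circuit thermal noise voltage of a resistor."""
    return math.sqrt(4.0 * K_B * temp_k * resistance_ohm * bandwidth_hz)

room = thermal_noise_vrms(1e3, 290.0, 10e3)  # about room temperature
cold = thermal_noise_vrms(1e3, 77.0, 10e3)   # liquid-nitrogen temperature
print(f"{room*1e6:.3f} uV vs {cold*1e6:.3f} uV")  # cooling reduces the noise
```

The noise falls only as the square root of temperature, which is why cryogenic cooling is reserved for the most sensitive front-ends.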

Electronics theory

Mathematical methods are integral to the study of electronics. To become proficient in electronics it is also necessary to become proficient in the mathematics of circuit analysis.
Circuit analysis is the study of methods of solving generally linear systems for unknown variables such as the voltage at a certain node or the current through a certain branch of a network. A common analytical tool for this is the SPICE circuit simulator.
Also important to electronics is the study and understanding of electromagnetic field theory.
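Circuit analysis as described above can be shown in miniature. This sketch (illustrative component values, pure Python) does by hand what simulators like SPICE do at scale: write Kirchhoff's current law at each unknown node to get a linear system G·v = i, then solve it. The network assumed here is a 10 V source feeding node A through R1, with R2 grounding A, R3 linking A to B, and R4 grounding B.

```python
# Nodal analysis of a small resistor network: form the conductance
# matrix from Kirchhoff's current law and solve G * v = i.
# Component values are illustrative.

VS = 10.0
R1 = R2 = R3 = R4 = 1000.0

# KCL at node A: (Va - VS)/R1 + Va/R2 + (Va - Vb)/R3 = 0
# KCL at node B: (Vb - Va)/R3 + Vb/R4 = 0
g = [[1/R1 + 1/R2 + 1/R3, -1/R3],
     [-1/R3,               1/R3 + 1/R4]]
i = [VS / R1, 0.0]

# Cramer's rule suffices for a 2x2 system
det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
va = (i[0] * g[1][1] - i[1] * g[0][1]) / det
vb = (g[0][0] * i[1] - g[1][0] * i[0]) / det
print(va, vb)  # 4.0 V at node A, 2.0 V at node B
```

Real simulators build the same matrix automatically from a netlist and solve much larger (and nonlinear, iterated) versions of it.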

Computer aided design (CAD)

Today's electronics engineers have the ability to design circuits using premanufactured building blocks such as power supplies, semiconductors (such as transistors), and integrated circuits. Electronic design automation software programs include schematic capture programs and printed circuit board design programs. Popular names in the EDA software world are NI Multisim, Cadence (ORCAD), Eagle PCB and Schematic, Mentor (PADS PCB and LOGIC Schematic), Altium (Protel), LabCentre Electronics (Proteus), gEDA, KiCad and many others.

Construction methods

Many different methods of connecting components have been used over the years. For instance, early electronics often used point-to-point wiring with components attached to wooden breadboards to construct circuits. Cordwood construction and wire wrap were other methods used. Most modern day electronics now use printed circuit boards made of materials such as FR4, or the cheaper (and less hard-wearing) Synthetic Resin Bonded Paper (SRBP, also known as Paxoline/Paxolin (trade marks) and FR2), characterised by its light yellow-to-brown colour. Health and environmental concerns associated with electronics assembly have gained increased attention in recent years, especially for products destined for the European Union, with its Restriction of Hazardous Substances Directive (RoHS) and Waste Electrical and Electronic Equipment Directive (WEEE), which went into force in July 2006.

( source : http://en.wikipedia.org/wiki/Electronics )

Electronic stability control

Electronic stability control (ESC) is a computerized technology that improves the safety of a vehicle's stability by detecting and minimizing skids. When ESC detects loss of steering control, it automatically applies the brakes to help "steer" the vehicle where the driver intends to go. Braking is automatically applied to individual wheels, such as the outer front wheel to counter oversteer or the inner rear wheel to counter understeer. Some ESC systems also reduce engine power until control is regained. ESC does not improve a vehicle's cornering performance; rather, it helps minimize the loss of control. According to the IIHS and NHTSA, one-third of fatal accidents could have been prevented by the technology.[1][2]

History

In 1987, the earliest innovators of ESC, Mercedes-Benz and BMW, introduced their first traction control systems. Traction control works by applying individual wheel braking and throttle control to keep traction while accelerating, but, unlike ESC, it is not designed to aid in steering.
In 1990, Mitsubishi released a traction control system named simply TCL, which has since evolved into the company's modern Active Skid and Traction Control (ASTC) system. Developed to help the driver maintain the intended path through a corner, it used an onboard computer to monitor several vehicle operating parameters through various sensors. When too much throttle was applied while taking a curve, engine output and braking were automatically regulated to keep the vehicle on the proper path through the curve and to provide the proper amount of traction under various road-surface conditions. While conventional traction control systems at the time featured only a slip-control function, Mitsubishi's TCL added a preventive (active) safety feature: it improved course-tracing performance by automatically adjusting the traction force, thereby restraining the development of excessive lateral acceleration while turning. It was not a 'true' modern stability control system, however; trace control monitored steering angle, throttle position and individual wheel speeds, but had no yaw rate input. The TCL system's standard wheel-slip control function improved traction on slippery surfaces or during cornering. In addition, the TCL system worked together with the Diamante's electronically controlled suspension and four-wheel steering to improve total handling and performance.[3][4][5][6][7][8][9][10]
BMW, working with Robert Bosch GmbH and Continental Automotive Systems, developed a system to reduce engine torque to prevent loss of control and applied it to the entire BMW model line for 1992. From 1987 to 1992, Mercedes-Benz and Robert Bosch GmbH co-developed a lateral-slippage control system called Elektronisches Stabilitätsprogramm (German for "Electronic Stability Programme", trademarked as ESP), an electronic stability control (ESC) system.
GM worked with Delphi Corporation and introduced its version of ESC, called "StabiliTrak", in 1997 for select Cadillac models. StabiliTrak was made standard equipment on all GM SUVs and vans sold in the U.S. and Canada by 2007, except for certain commercial and fleet vehicles. While the "StabiliTrak" name is used on most General Motors vehicles for the U.S. market, the "Electronic Stability Control" identity is used for GM overseas brands, such as Opel, Holden and Saab, except in the case of Saab's 9-7X, which also uses the "StabiliTrak" name. Ford's version of ESC, called AdvanceTrac, was launched in 2000. Ford later added Roll Stability Control to AdvanceTrac,[11] which was first introduced in the Volvo XC90 in 2003, when Volvo Cars was fully owned by Ford; it is now implemented in many Ford vehicles.

Introduction

In 1995, automobile manufacturers introduced ESC systems. Mercedes-Benz, supplied by Bosch, was the first to implement the technology, on its W140 S-Class model. That same year, BMW, supplied by Bosch and ITT Automotive (later acquired by Continental Automotive Systems), and Volvo Cars[citation needed] began to offer ESC on some of their models, while Toyota's own Vehicle Stability Control (VSC) system appeared on the Crown Majesta (followed in 2004 by a preventive system called VDIM).[12] Meanwhile, other manufacturers investigated and developed their own systems.
In October 1997, during a moose test (swerving to avoid an obstacle), which became famous in Germany as "the elk test", the Swedish journalist Robert Collin of Teknikens Värld (World of Technology) rolled a Mercedes A-Class (without ESC) at 37 km/h.[13] Because Mercedes-Benz promotes a reputation for safety, it recalled and retrofitted 130,000 A-Class cars with ESC. This produced a significant reduction in crashes, and the number of vehicles with ESC rose. Today virtually all premium brands have made ESC standard on all vehicles, and the number of models with ESC continues to increase.[14] Ford and Toyota have announced that all their North American vehicles will be equipped with ESC as standard by the end of 2009 (Toyota made it standard on its SUVs in 2004 but has yet to fit the Scion tC).[15][16] However, as of 2010, both companies still sell models without ESC in North America.[17] General Motors has made a similar announcement for the end of 2010.[18] The NHTSA requires all passenger vehicles to be equipped with ESC by 2011 and estimates that this will prevent 5,300-9,600 annual fatalities once all passenger vehicles are equipped with the system.[19]

Operation

During normal driving, ESC works in the background and continuously monitors steering and vehicle direction. It compares the driver's intended direction (determined through the measured steering wheel angle) to the vehicle's actual direction (determined through measured lateral acceleration, vehicle rotation (yaw), and individual road wheel speeds).
ESC intervenes only when it detects loss of steering control, i.e. when the vehicle is not going where the driver is steering.[20] This may happen, for example, when skidding during emergency evasive swerves, understeer or oversteer during poorly judged turns on slippery roads, or hydroplaning. ESC estimates the direction of the skid, and then applies the brakes to individual wheels asymmetrically in order to create torque about the vehicle's vertical axis, opposing the skid and bringing the vehicle back in line with the driver's commanded direction. Additionally, the system may reduce engine power or operate the transmission to slow the vehicle down.
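The comparison described above can be sketched in a few lines. This is a deliberately simplified illustration, not any manufacturer's algorithm; the wheelbase, threshold, and decision strings are invented for the example. The desired yaw rate comes from a kinematic bicycle-model approximation, r = v·δ/L, and is compared against the measured yaw rate; the sign and size of the mismatch pick the wheel to brake, following the oversteer/understeer rule stated earlier.

```python
# Simplified sketch of an ESC decision: compare the yaw rate the driver is
# commanding (bicycle-model approximation) with the measured yaw rate, and
# brake one wheel to oppose the mismatch. All constants are assumptions.

WHEELBASE_M = 2.7        # assumed vehicle wheelbase
THRESHOLD_RAD_S = 0.05   # assumed intervention threshold

def esc_decision(speed_ms: float, steer_rad: float,
                 measured_yaw_rad_s: float) -> str:
    """Positive steer and yaw mean turning left in this convention."""
    desired = speed_ms * steer_rad / WHEELBASE_M
    error = measured_yaw_rad_s - desired
    if abs(error) <= THRESHOLD_RAD_S:
        return "no intervention"
    if desired >= 0:  # left turn
        # rotating faster than commanded = oversteer -> outer front wheel
        return "brake outer (right) front" if error > 0 else "brake inner (left) rear"
    else:             # right turn, mirrored
        return "brake outer (left) front" if error < 0 else "brake inner (right) rear"

# 20 m/s with gentle left steer, but the car rotates faster than commanded:
print(esc_decision(20.0, 0.05, 0.60))  # oversteer -> brake outer (right) front
```

A production controller replaces the kinematic model with a full vehicle dynamics model and modulates brake pressure continuously rather than making a one-shot choice.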
ESC can work on any surface, from dry pavement to frozen lakes.[21][22] It reacts to and corrects skidding much faster and more effectively than the typical human driver, often before the driver is even aware of any imminent loss of control.[23] In fact, this led to some concern that ESC could allow drivers to become overconfident in their vehicle's handling and/or their own driving skills. For this reason, ESC systems typically inform the driver when they intervene, so that the driver knows that the vehicle's handling limits have been approached. Most activate a dashboard indicator light and/or alert tone; some intentionally allow the vehicle's corrected course to deviate very slightly from the driver-commanded direction, even if it is possible to more precisely match it.[24]
Indeed, all ESC manufacturers emphasize that the system is not a performance enhancement nor a replacement for safe driving practices, but rather a safety technology to assist the driver in recovering from dangerous situations. ESC does not increase traction, so it does not enable faster cornering (although it can facilitate better-controlled cornering). More generally, ESC works within inherent limits of the vehicle's handling and available traction between the tires and road. A reckless maneuver can still exceed these limits, resulting in loss of control. For example, in a severe hydroplaning scenario, the wheel(s) that ESC would use to correct a skid may not even initially be in contact with the road, reducing its effectiveness.
In July 2004, on the Crown Majesta, Toyota offered a Vehicle Dynamics Integrated Management (VDIM) system that incorporated formerly independent systems, including ESC. It worked not only after a skid was detected but also to prevent the skid from occurring in the first place. Using electronically controlled variable-gear-ratio power steering, this more advanced system could also alter steering gear ratios and steering torque levels to assist the driver in evasive maneuvers.

Effectiveness

Numerous studies around the world confirm that ESC is highly effective in helping the driver maintain control of the car, thereby saving lives and reducing the severity of crashes.[25] In the fall of 2004, the U.S. National Highway Traffic Safety Administration confirmed the international studies, releasing results of a U.S. field study of ESC effectiveness. The NHTSA concluded that ESC reduces crashes by 35%. Additionally, sport utility vehicles (SUVs) with stability control are involved in 67% fewer accidents than SUVs without the system. The Insurance Institute for Highway Safety (IIHS) issued its own study in June 2006 showing that up to 10,000 fatal US crashes could be avoided annually if all vehicles were equipped with ESC.[26] The IIHS study concluded that ESC reduces the likelihood of all fatal crashes by 43%, fatal single-vehicle crashes by 56%, and fatal single-vehicle rollovers by 77-80%.
ESC is described by many experts as the most important advance in auto safety,[27] including Nicole Nason,[28] Administrator of the NHTSA,[29] Jim Guest and David Champion[30] of Consumers Union,[31] the Fédération Internationale de l'Automobile (FIA)'s E-Safety Aware project,[32] Csaba Csere, editor of Car and Driver,[33] and Jim Gill, long-time ESC proponent at Continental Automotive Systems.[29] The European New Car Assessment Programme (Euro NCAP) "strongly recommends" that people buy cars fitted with stability control.
The IIHS requires that a vehicle must have ESC as an available option in order for it to qualify for their Top Safety Pick award for occupant protection and accident avoidance.[34][35]

Components and design

ESC incorporates yaw rate control into the anti-lock braking system (ABS). Yaw is a rotation around the vertical axis; i.e. spinning left or right. Anti-lock brakes enable ESC to brake individual wheels. Many ESC systems also incorporate a traction control system (TCS or ASR), which senses drive-wheel slip under acceleration and individually brakes the slipping wheel or wheels and/or reduces excess engine power until control is regained. However, ESC achieves a different purpose than ABS or Traction Control.[22]
The ESC system uses several sensors to determine what the driver wants (input). Other sensors indicate the actual state of the vehicle (response). The control algorithm compares driver input to vehicle response and decides, when necessary, to apply brakes and/or reduce throttle by the amounts calculated through the state space (set of equations used to model the dynamics of the vehicle).[36] The ESC controller can also receive data from and issue commands to other controllers on the vehicle such as an all wheel drive system or an active suspension system to improve vehicle stability and controllability.
The sensors used for ESC have to send data at all times in order to detect possible defects as soon as possible. They have to be resistant to possible forms of interference (rain, holes in the road, etc.). The most important sensors are:
  • Steering wheel angle sensor: determines the driver's intended rotation, i.e. where the driver wants to steer. This kind of sensor is often based on anisotropic magnetoresistive (AMR) elements.
  • Yaw rate sensor: measures the rotation rate of the car, i.e. how much the car is actually turning. The data from the yaw sensor is compared with the data from the steering wheel angle sensor to determine the regulating action.
  • Lateral acceleration sensor: often based on the Hall effect; measures the lateral acceleration of the vehicle.
  • Wheel speed sensor: measures the wheel speed.
Other sensors can include:
  • Longitudinal acceleration sensor: similar to the lateral acceleration sensor in design but can offer additional information about road pitch and also provide another source of vehicle acceleration and speed.
  • Roll rate sensor: similar to the yaw rate sensor in design but improves the fidelity of the controller's vehicle model and correct for errors when estimating vehicle behavior from the other sensors alone.
ESC uses a hydraulic modulator to ensure that each wheel receives the correct brake force. A similar modulator is used in ABS, but ABS only needs to reduce pressure during braking. ESC additionally needs to increase pressure in certain situations, and an active vacuum brake booster unit may be used in addition to the hydraulic pump to meet these demanding pressure gradients.
The heart of the ESC system is the Electronic Control Unit (ECU). The various control techniques are embedded in it. Often, the same ECU is used for diverse systems at the same time (ABS, traction control system, climate control, etc.). The input signals are sent through the input circuit to the digital controller. The desired vehicle state is determined based upon the steering wheel angle, its gradient, and the wheel speeds. Simultaneously, the yaw sensor measures the actual state. The controller computes the needed brake or acceleration force for each wheel and directs the valves of the hydraulic modulator via the driver circuits. The ECU is connected with other systems (ABS, etc.) via a CAN interface in order to avoid giving contradictory commands.
Many ESC systems have an "off" override switch so the driver can disable ESC, which may be desirable when badly stuck in mud or snow, or driving on a beach, or if using a smaller-sized spare tire which would interfere with the sensors. Some systems also offer an additional mode with raised thresholds so that a driver can utilize the limits of adhesion with less electronic intervention. However, ESC defaults to "On" when the ignition is re-started. Some ESC systems that lack an "off switch", such as on many recent Toyota and Lexus vehicles, can be temporarily disabled through an undocumented series of brake pedal and handbrake operations.[37] Furthermore, unplugging a wheel speed sensor is another method of disabling most ESC systems. The ESC implementation on newer Ford vehicles cannot be completely disabled even through the use of the "off switch". The ESC will automatically reactivate at highway speeds, and below that if it detects a skid with the brake pedal depressed.

Availability and cost

ESC is built on top of an anti-lock braking (ABS) system, and all ESC-equipped vehicles are fitted with traction control. The ESC components include a yaw rate sensor, a lateral acceleration sensor, a steering wheel sensor, and an upgraded integrated control unit. According to National Highway Traffic Safety Administration research, ABS in 2005 cost an estimated US$368; ESC cost a further US$111. The retail price of ESC varies; as a stand-alone option it retails for as little as US$250.[38] However, ESC is rarely offered as a sole option and is generally not available for aftermarket installation. Instead, it is frequently bundled with other features or more expensive trims, so the cost of a package that includes ESC can be several thousand dollars. Nonetheless, ESC is considered highly cost-effective,[39] and it might pay for itself in reduced insurance premiums.[40] When federal regulations requiring electronic stability control take effect in 2012, all new cars will be equipped with it. (source: http://redtape.msnbc.com/2010/03/toyota-woes-raise-ghost-in-the-machine-fears.html)
Availability of ESC in passenger vehicles varies between manufacturers and countries. In 2007, ESC was available in roughly 50% of new North American models compared to about 75% in Sweden. However, consumer awareness affects buying patterns, so that roughly 45% of vehicles sold in North America and the UK are purchased with ESC,[41] contrasting with 78-96% in other European countries such as Germany, Denmark, and Sweden. While few vehicles had ESC prior to 2004, increasing fitment rates should make ESC more common on the used car market.
ESC is available on cars, SUVs and pickup trucks from all major auto makers. Luxury cars, sports cars, SUVs, and crossovers are usually equipped with ESC. Midsize cars are also gradually catching on, though among 2008 models the Nissan Altima and Ford Fusion offered ESC only on their V6 engine-equipped cars. While ESC includes traction control, there are vehicles such as the 2008 Chevrolet Malibu LS and 2008 Mazda6 that have traction control but not ESC. ESC is rare among subcompact cars as of 2008. The 2009 Toyota Corolla in the United States (but not Canada) has stability control as a $250 option on all trims below the XRS, which has it as standard.[38] In Canada, for the 2010 Mazda3, ESC is offered as an option on the midrange GS trim as part of the moonroof package, and is standard on the top-of-the-line GT version.[42] The 2009 Ford Focus has ESC as an option for the S and SE models, and standard on the SEL and SES models.[43]
ESC is also available on some motor homes. Elaborate ESC and ESP systems (including Roll Stability Control (RSC)[44]) are available for many commercial vehicles,[45] including transport trucks, trailers, and buses from manufacturers such as Bendix Corporation,[46] WABCO,[47] Daimler, Scania AB,[48] and Prevost.[49]
The ChooseESC! campaign, run by the EU's eSafetyAware! project, provides a global perspective on ESC. One ChooseESC! publication shows the availability of ESC in EU member countries.
In the US, the Insurance Institute for Highway Safety (IIHS) website[50] shows availability of ESC in individual US models and the National Highway Traffic Safety Administration (NHTSA website)[14] lists US models with ESC.
In Australia, the National Roads and Motorists' Association NRMA shows the availability of ESC in Australian models.[51]

Future

The market for ESC is growing quickly, especially in European countries such as Sweden, Denmark, and Germany. For example, in 2003 in Sweden the purchase rate on new cars with ESC was 15%. The Swedish road safety administration issued a strong ESC recommendation and in September 2004, 16 months later, the purchase rate was 58%. A stronger ESC recommendation was then given and in December 2004, the purchase rate on new cars had reached 69%[52] and by 2008 it had grown to 96%. ESC advocates around the world are promoting increased ESC use through legislation and public awareness campaigns and by 2012, most new vehicles should be equipped with ESC.
Just as ESC is founded on the Anti-lock braking system (ABS), ESC is the foundation for new advances such as Roll Stability Control (RSC) [53][54] that works in the vertical plane much like ESC works in the horizontal plane. When RSC detects impending rollover (usually on transport trucks[47] or SUVs[55]), RSC applies brakes, reduces throttle, induces understeer, and/or slows down the vehicle.
The computing power of ESC facilitates the networking of active and passive safety systems, addressing other causes of crashes. For example, sensors may detect when a vehicle is following too closely and slow down the vehicle, straighten up seat backs, and tighten seat belts, avoiding and/or preparing for a crash.

Regulation

While Sweden used public awareness campaigns to promote ESC use,[56] others implemented or proposed legislation.
The Canadian province of Quebec was the first jurisdiction to implement an ESC law, making it compulsory for carriers of dangerous goods (without data recorders) in 2005.[57]
The United States was next, requiring ESC for all passenger vehicles under 10,000 pounds (4536 kg), phasing in the regulation starting with 55% of 2009 models (effective 1 September 2008), 75% of 2010 models, 95% of 2011 models, and all 2012 models.[14]
Canada[58][59] will require all new passenger vehicles to have ESC from 1 September 2011.[60]
The Australian Government announced on 23 June 2009 that ESC would be compulsory from 1 November 2011 for all new passenger vehicles sold in Australia, and for all new vehicles from November 2013.[61]
The European Parliament has also called for the accelerated introduction of ESC.[62] The European Commission has confirmed a proposal for the mandatory introduction of ESC on all new cars and commercial vehicle models sold in the EU from 2012, with all new cars being equipped by 2014.[63]
The United Nations Economic Commission for Europe has passed a Global Technical Regulation to harmonize ESC standards.[64]

(source: http://en.wikipedia.org/wiki/Electronic_stability_control)

Data storage device

A data storage device is a device for recording (storing) information (data). Recording can be done using virtually any form of energy, spanning from manual muscle power in handwriting, to acoustic vibrations in phonographic recording, to electromagnetic energy modulating magnetic tape and optical discs.
A storage device may hold information, process information, or both. A device that only holds information is a recording medium. Devices that process information (data storage equipment) may either access a separate portable (removable) recording medium or a permanent component to store and retrieve information.
Electronic data storage is storage that requires electrical power to store and retrieve data. Most storage devices that do not require vision and a brain to read data fall into this category. Electromagnetic data may be stored in either an analog or digital format on a variety of media. Such data is considered electronically encoded, whether or not it is stored in a semiconductor device, since a semiconductor device was used to record it on its medium. Most electronically processed data storage media (including some forms of computer data storage) are considered permanent (non-volatile) storage; that is, the data will remain stored when power is removed from the device. In contrast, most electronically stored information within semiconductor microcircuits (computer chips) is volatile memory, as it vanishes when power is removed.
With the exception of barcodes and OCR data, electronic data storage is easier to revise and may be more cost effective than alternative methods due to smaller physical space requirements and the ease of replacing (rewriting) data on the same medium. However, the durability of methods such as printed data is still superior to that of most electronic storage media. The durability limitations may be overcome with the ease of duplicating (backing-up) electronic data.

Terminology

Things that are not used exclusively for recording (e.g. hands, mouths, musical instruments) and devices that are intermediate in the storing/retrieving process (e.g. eyes, ears, cameras, scanners, microphones, speakers, monitors, video projectors) are not usually considered storage devices. Devices that are exclusively for recording (e.g. printers), exclusively for reading (e.g. barcode readers), or that process only one form of information (e.g. phonographs) may or may not be considered storage devices. In computing, these are known as input/output devices.
An organic brain may or may not be considered a data storage device.[2]
All information is data. However, not all data is information.
Many data storage devices are also media players. Any device that can store and play back multimedia may be considered a media player, as in the case of the HDD media player. Designated hard drives are used to play saved or streaming media on home entertainment systems.

Trends

International Data Corporation estimated that the total amount of digital data was 281 billion gigabytes in 2007, and that it had for the first time exceeded the amount of available storage.[3]

Data storage equipment

Any input/output equipment may be considered data storage equipment if it writes to and reads from a data storage medium. Data storage equipment uses either:
  • portable methods (easily replaced),
  • semi-portable methods requiring mechanical disassembly tools and/or opening a chassis, or
  • inseparable methods meaning loss of memory if disconnected from the unit.

Recording medium

A recording medium is a physical material that holds data expressed in any of the existing recording formats. With electronic media, the data and the recording medium are sometimes referred to as "software", despite the more common use of the word to describe computer software. With (traditional art) static media, art materials such as crayons may be considered both equipment and medium, as the wax, charcoal or chalk material from the equipment becomes part of the surface of the medium.
Some recording media may be temporary either by design or by nature. Volatile organic compounds may be used to preserve the environment or to purposely make data expire over time. Data such as smoke signals or skywriting are temporary by nature. Depending on the volatility, a gas (e.g. atmosphere, smoke) or a liquid surface such as a lake would be considered a temporary recording medium if at all.

Weight and volume

When data must be carried around, the weight and volume per MB become relevant. They are quite large for written and printed paper compared with modern electronic media. On the other hand, written and printed paper does not require (the weight and volume of) reading equipment, and handwritten edits require only simple writing equipment, such as a pen.
With mobile data connections the data need not be carried around to have them available.

Telecommunication

Telecommunication is the transmission of messages over significant distances for the purpose of communication. In earlier times, telecommunications involved the use of visual signals, such as smoke, semaphore telegraphs, signal flags, and optical heliographs, or audio messages via coded drumbeats, lung-blown horns, or loud whistles, for example.
In the modern age of electricity and electronics, telecommunications has typically involved the use of electric means such as the telegraph, the telephone, and the teletype, the use of microwave communications, the use of fiber optics and their associated electronics, and/or the use of the Internet. The first breakthrough into modern electrical telecommunications came with the development of the telegraph during the 1830s and 1840s. These electrical means of communication spread rapidly across all of the continents of the world during the 19th century, and they also connected the continents via cables on the ocean floors. All three of these systems (telegraph, telephone, and teletype) required conducting metal wires.
A revolution in wireless telecommunications began in the first decade of the 20th century, with Guglielmo Marconi winning the Nobel Prize in Physics in 1909 for his pioneering developments in wireless radio communications. Other early inventors and developers in the field of electrical and electronic telecommunications included Samuel F.B. Morse, Edwin Armstrong, Joseph Henry, and Lee de Forest (who invented the triode) of the United States, as well as John Logie Baird of Scotland, Nikola Tesla, a Serbian emigrant to the United States, and Alexander Graham Bell of Scotland, who lived in Canada and then invented the telephone in the United States.
Telecommunications play an important role in the world economy, and the worldwide telecommunication industry's revenue was estimated to be $3.85 trillion in 2008.[1] The service revenue of the global telecommunications industry was estimated to be $1.7 trillion in 2008, and is expected to reach $2.7 trillion by 2013.[1]

History

Early telecommunications

A replica of one of Chappe's semaphore towers in Nalbach
During the Middle Ages, chains of beacons were commonly used on hilltops as a means of relaying a signal. Beacon chains suffered the drawback that they could only pass a single bit of information, so the meaning of the message such as "the enemy has been sighted" had to be agreed upon in advance. One notable instance of their use was during the Spanish Armada, when a beacon chain relayed a signal from Plymouth to London that signaled the arrival of the Spanish warships.[2]
In 1792, Claude Chappe, a French engineer, built the first fixed visual telegraphy system (or semaphore line) between Lille and Paris.[3] However semaphore systems suffered from the need for skilled operators and the expensive towers at intervals of ten to thirty kilometers (six to twenty miles). As a result of competition from the electrical telegraph, the last commercial semaphore line was abandoned in 1880.[4]

The telegraph and the telephone

The first commercial electrical telegraph was constructed by Sir Charles Wheatstone and Sir William Fothergill Cooke, and its use began on April 9, 1839. Both Wheatstone and Cooke viewed their device as "an improvement to the [already-existing, so-called] electromagnetic telegraph" not as a new device.[5]
The businessman Samuel F.B. Morse and the physicist Joseph Henry of the United States independently developed their own, simpler version of the electrical telegraph. Morse successfully demonstrated this system on September 2, 1837. Morse's most important technical contribution to this telegraph was the rather simple and highly efficient Morse Code, which was an important advance over Wheatstone's complicated telegraph system. The communications efficiency of the Morse Code anticipated that of the Huffman code in digital communications by over 100 years, but Morse had developed his code purely empirically, unlike Huffman, who gave a detailed theoretical explanation of how his method worked.
The first permanent transatlantic telegraph cable was successfully completed on 27 July 1866, allowing transatlantic electrical communication for the first time.[6] An earlier transatlantic cable had operated for a few months in 1859, and among other things, it carried messages of greeting back and forth between President James Buchanan of the United States and Queen Victoria of the United Kingdom.
However, that transatlantic cable soon failed, and the project to lay a replacement line was delayed for five years by the American Civil War. Also, these transatlantic cables would have been completely incapable of carrying telephone calls even had the telephone already been invented. The first transatlantic telephone cable (which incorporated hundreds of electronic amplifiers) was not operational until 1956.[7]
The conventional telephone now in use worldwide was first patented by Alexander Graham Bell in March 1876.[8] That first patent by Bell was the master patent of the telephone, from which all other patents for electric telephone devices and features flowed. Credit for the invention of the electric telephone has been frequently disputed, and new controversies over the issue have arisen from time-to-time. As with other great inventions such as radio, television, the light bulb, and the digital computer, there were several inventors who did pioneering experimental work on voice transmission over a wire, and then they improved on each other's ideas. However, the key innovators were Alexander Graham Bell and Gardiner Greene Hubbard, who created the first telephone company, the Bell Telephone Company of the United States, which later evolved into American Telephone & Telegraph (AT&T).
The first commercial telephone services were set up in 1878 and 1879 on both sides of the Atlantic in the cities of New Haven, Connecticut, and London, England.[9][10]

Radio and television

In 1832, James Lindsay gave a classroom demonstration of wireless telegraphy via conductive water to his students. By 1854, he was able to demonstrate a transmission across the Firth of Tay from Dundee, Scotland, to Woodhaven, a distance of about two miles (3 km), again using water as the transmission medium.[11] In December 1901, Guglielmo Marconi established wireless communication between St. John's, Newfoundland and Poldhu, Cornwall (England), earning him the Nobel Prize in Physics for 1909, one which he shared with Karl Braun.[12] However, small-scale radio communication had already been demonstrated in 1893 by Nikola Tesla in a presentation before the National Electric Light Association.[13]
On March 25, 1925, John Logie Baird of Scotland was able to demonstrate the transmission of moving pictures at the Selfridge's department store in London, England. Baird's system relied upon the fast-rotating Nipkow disk, and thus it became known as the mechanical television. It formed the basis of experimental broadcasts done by the British Broadcasting Corporation beginning September 30, 1929.[14] However, for most of the 20th century, television systems were designed around the cathode ray tube, invented by Karl Braun. The first version of such an electronic television to show promise was produced by Philo Farnsworth of the United States, and it was demonstrated to his family in Idaho on September 7, 1927.[15]

Computer networks and the Internet

On 11 September 1940, George Stibitz was able to transmit problems using teletype to his Complex Number Calculator in New York and receive the computed results back at Dartmouth College in New Hampshire.[16] This configuration of a centralized computer or mainframe with remote "dumb terminals" remained popular throughout the 1950s and into the 1960s. However, it was not until the 1960s that researchers started to investigate packet switching, a technology that allows chunks of data to be sent between different computers without first passing through a centralized mainframe. A four-node network emerged on December 5, 1969. This network soon became the ARPANET, which by 1981 would consist of 213 nodes.[17]
ARPANET's development centred around the Request for Comment process and on 7 April 1969, RFC 1 was published. This process is important because ARPANET would eventually merge with other networks to form the Internet, and many of the communication protocols that the Internet relies upon today were specified through the Request for Comment process. In September 1981, RFC 791 introduced the Internet Protocol version 4 (IPv4) and RFC 793 introduced the Transmission Control Protocol (TCP) — thus creating the TCP/IP protocol that much of the Internet relies upon today.
However, not all important developments were made through the Request for Comment process. Two popular link protocols for local area networks (LANs) also appeared in the 1970s. A patent for the token ring protocol was filed by Olof Soderblom on October 29, 1974, and a paper on the Ethernet protocol was published by Robert Metcalfe and David Boggs in the July 1976 issue of Communications of the ACM.[18][19] The Ethernet protocol had been inspired by the ALOHAnet protocol which had been developed by electrical engineering researchers at the University of Hawaii.

Key concepts

Etymology
The word telecommunication was adapted from the French word télécommunication. It is a compound of the Greek prefix tele- (τηλε-), meaning "far off", and the Latin communicare, meaning "to share".[20] The French word télécommunication was coined in 1904 by the French engineer and novelist Édouard Estaunié.[21]
A number of key concepts reoccur throughout the literature on modern telecommunication systems. Some of these concepts are discussed below.

Basic elements

A basic telecommunication system consists of three primary units that are always present in some form: a transmitter that takes information and converts it to a signal, a transmission medium (the "channel") that carries the signal, and a receiver that takes the signal from the channel and converts it back into usable information.
For example, in a radio broadcasting station the station's large power amplifier is the transmitter; and the broadcasting antenna is the interface between the power amplifier and the "free space channel". The free space channel is the transmission medium; and the receiver's antenna is the interface between the free space channel and the receiver. Next, the radio receiver is the destination of the radio signal, and this is where it is converted from electricity to sound for people to listen to.
Sometimes, telecommunication systems are "duplex" (two-way systems) with a single box of electronics working as both a transmitter and a receiver, or a transceiver. For example, a cellular telephone is a transceiver.[22] The transmission electronics and the receiver electronics in a transceiver are actually quite independent of each other. This can be readily explained by the fact that radio transmitters contain power amplifiers that operate with electrical powers measured in watts or kilowatts, but radio receivers deal with radio powers measured in microwatts or nanowatts. Hence, transceivers have to be carefully designed and built to isolate their high-power circuitry and their low-power circuitry from each other.
Telecommunication over telephone lines is called point-to-point communication because it is between one transmitter and one receiver. Telecommunication through radio broadcasts is called broadcast communication because it is between one powerful transmitter and numerous low-power but sensitive radio receivers.[22]
Telecommunications in which multiple transmitters and multiple receivers have been designed to cooperate and to share the same physical channel are called multiplex systems.

Analog or digital communications?

Communications signals can be either analog or digital, and there are correspondingly analog communication systems and digital communication systems. In an analog signal, the signal is varied continuously with respect to the information. In a digital signal, the information is encoded as a set of discrete values (for example, a set of ones and zeros). During propagation and reception, the information contained in analog signals will inevitably be degraded by undesirable physical noise. (The output of a transmitter is noise-free for all practical purposes.) Commonly, the noise in a communication system can be expressed as adding to or subtracting from the desirable signal in a completely random way. This form of noise is called "additive noise", with the understanding that the noise can be negative or positive at different instants of time. Noise that is not additive is much more difficult to describe or analyze, and such other kinds of noise will be omitted here.
On the other hand, unless the additive noise disturbance exceeds a certain threshold, the information contained in digital signals will remain intact. Their resistance to noise represents a key advantage of digital signals over analog signals.[23]
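This threshold behaviour can be illustrated with a short simulation (a minimal sketch; the voltage levels, 0.5 V decision threshold, and noise amplitude are invented for the example):

```python
import random

random.seed(42)

# A digital message encoded as voltage levels: 0 -> 0.0 V, 1 -> 1.0 V.
bits = [1, 0, 1, 1, 0, 0, 1, 0]
tx = [float(b) for b in bits]

# Additive noise: a random positive or negative deviation at each instant.
# As long as each deviation stays below the 0.5 V decision threshold,
# the receiver recovers the original bits exactly.
noise_amplitude = 0.4  # strictly below the 0.5 V threshold
rx = [v + random.uniform(-noise_amplitude, noise_amplitude) for v in tx]

# Receiver: a simple threshold detector.
decoded = [1 if v > 0.5 else 0 for v in rx]
print(decoded == bits)  # True: the information survives the noise intact
```

An analog signal subjected to the same noise would carry the degradation through to the output, since there is no threshold to snap the received values back to.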

Communications networks

A communications network is a collection of transmitters, receivers, and communications channels that send messages to one another. Some digital communications networks contain one or more routers that work together to transmit information to the correct user. An analog communications network consists of one or more switches that establish a connection between two or more users. For both types of network, repeaters may be necessary to amplify or recreate the signal when it is being transmitted over long distances. This is to combat attenuation that can render the signal indistinguishable from the noise.[24]

Communication channels

The term "channel" has two different meanings. In one meaning, a channel is the physical medium that carries a signal between the transmitter and the receiver. Examples of this include the atmosphere for sound communications, glass optical fibers for some kinds of optical communications, coaxial cables for communications by way of the voltages and electric currents in them, and free space for communications using visible light, infrared waves, ultraviolet light, and radio waves. This last channel is called the "free space channel". The sending of radio waves from one place to another has nothing to do with the presence or absence of an atmosphere between the two. Radio waves travel through a perfect vacuum just as easily as they travel through air, fog, clouds, or any other kind of gas besides air.
The other meaning of the term "channel" in telecommunications is seen in the phrase communications channel, which is a subdivision of a transmission medium so that it can be used to send multiple streams of information simultaneously. For example, one radio station can broadcast radio waves into free space at frequencies in the neighborhood of 94.5 MHz (megahertz) while another radio station can simultaneously broadcast radio waves at frequencies in the neighborhood of 96.1 MHz. Each radio station would transmit radio waves over a frequency bandwidth of about 180 kHz (kilohertz), centered at frequencies such as the above, which are called the "carrier frequencies". Each station in this example is separated from its adjacent stations by 200 kHz, and the difference between 200 kHz and 180 kHz (20 kHz) is an engineering allowance for the imperfections in the communication system.
In the example above, the "free space channel" has been divided into communications channels according to frequencies, and each channel is assigned a separate frequency bandwidth in which to broadcast radio waves. This system of dividing the medium into channels according to frequency is called "frequency-division multiplexing" (FDM).
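Using the figures from the broadcast example (180 kHz channel bandwidth, 200 kHz carrier spacing), a short sketch can check the 20 kHz engineering allowance between adjacent channels; the number of stations and the starting carrier frequency are arbitrary choices for the illustration:

```python
# FDM channel plan with the figures from the text: carriers every 200 kHz,
# each station occupying 180 kHz, leaving a 20 kHz allowance between channels.
carrier_spacing_khz = 200
bandwidth_khz = 180
first_carrier_khz = 94_500  # 94.5 MHz

carriers = [first_carrier_khz + i * carrier_spacing_khz for i in range(4)]
channels = [(f - bandwidth_khz // 2, f + bandwidth_khz // 2) for f in carriers]

for (lo1, hi1), (lo2, hi2) in zip(channels, channels[1:]):
    guard = lo2 - hi1
    print(f"guard band between adjacent channels: {guard} kHz")  # 20 kHz each
```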
Another way of dividing a communications medium into channels is to allocate each sender a recurring segment of time (a "time slot", for example, 20 milliseconds out of each second), and to allow each sender to send messages only within its own time slot. This method of dividing the medium into communication channels is called "time-division multiplexing" (TDM), and is used in optical fiber communication.[24][25] Some radio communication systems use TDM within an allocated FDM channel. Hence, these systems use a hybrid of TDM and FDM.

Modulation

The shaping of a signal to convey information is known as modulation. Modulation can be used to represent a digital message as an analog waveform. This is commonly called "keying" - a term derived from the older use of Morse Code in telecommunications - and several keying techniques exist (these include phase-shift keying, frequency-shift keying, and amplitude-shift keying). The "Bluetooth" system, for example, uses phase-shift keying to exchange information between various devices.[26][27] In addition, combinations of phase-shift keying and amplitude-shift keying, called (in the jargon of the field) "quadrature amplitude modulation" (QAM), are used in high-capacity digital radio communication systems.
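The simplest phase-shift scheme, binary phase-shift keying (BPSK), illustrates the keying idea: each bit selects one of two carrier phases, 0 or pi radians. This is a minimal sketch; the carrier frequency and the bit-to-phase mapping are arbitrary choices for the example:

```python
import math

# Binary phase-shift keying (BPSK): each bit is sent as a burst of the
# carrier whose phase is either 0 radians (bit 1) or pi radians (bit 0).
carrier_freq = 10.0  # Hz, illustrative

def bpsk_phase(bit):
    return 0.0 if bit == 1 else math.pi

def bpsk_sample(bit, t):
    return math.sin(2 * math.pi * carrier_freq * t + bpsk_phase(bit))

# The two symbols are exact opposites at every instant, which is what
# makes them easy for a receiver to tell apart:
t = 0.012
print(abs(bpsk_sample(1, t) + bpsk_sample(0, t)) < 1e-12)  # True
```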
Modulation can also be used to transmit the information of low-frequency analog signals at higher frequencies. This is helpful because low-frequency analog signals cannot be effectively transmitted over free space. Hence the information from a low-frequency analog signal must be impressed into a higher-frequency signal (known as the "carrier wave") before transmission. There are several different modulation schemes available to achieve this [two of the most basic being amplitude modulation (AM) and frequency modulation (FM)]. An example of this process is a disc jockey's voice being impressed into a 96 MHz carrier wave using frequency modulation (the voice would then be received on a radio as the channel "96 FM").[28] In addition, modulation has the advantage of permitting frequency-division multiplexing (FDM).
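The carrier-wave idea can be sketched numerically. The following minimal example impresses a low-frequency "message" onto a higher-frequency carrier via amplitude modulation; the frequencies are scaled down for readability and do not represent real broadcast values:

```python
import math

# Amplitude modulation (AM): vary the carrier's amplitude in step with
# the low-frequency message signal.
carrier_freq = 100.0    # Hz (stands in for a ~96 MHz broadcast carrier)
message_freq = 5.0      # Hz (stands in for the low-frequency voice signal)
modulation_index = 0.5  # how strongly the message sways the amplitude

def am_sample(t):
    message = math.sin(2 * math.pi * message_freq * t)
    envelope = 1.0 + modulation_index * message
    return envelope * math.sin(2 * math.pi * carrier_freq * t)

# While the modulation index stays below 1, the envelope never collapses
# to zero, so a simple envelope detector can recover the message.
samples = [am_sample(n / 1000.0) for n in range(1000)]
print(max(samples) <= 1.5 and min(samples) >= -1.5)  # bounded by 1 + index
```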

Society and telecommunication

Telecommunication has a significant social, cultural, and economic impact on modern society. In 2008, estimates placed the telecommunication industry's revenue at $3.85 trillion (USD), or just under 3.0 percent of the gross world product (official exchange rate).[1] The following sections discuss the impact of telecommunication on society.

Economic impact

Microeconomics

On the microeconomic scale, companies have used telecommunications to help build global business empires. This is self-evident in the case of online retailer Amazon.com but, according to academic Edward Lenert, even the conventional retailer Wal-Mart has benefited from better telecommunication infrastructure compared to its competitors.[29] In cities throughout the world, homeowners use their telephones to organize many home services ranging from pizza deliveries to electricians. Even relatively poor communities have been noted to use telecommunication to their advantage. In Bangladesh's Narshingdi district, isolated villagers use cellular phones to speak directly to wholesalers and arrange a better price for their goods. In Côte d'Ivoire, coffee growers share mobile phones to follow hourly variations in coffee prices and sell at the best price.[30]

Macroeconomics

On the macroeconomic scale, Lars-Hendrik Röller and Leonard Waverman suggested a causal link between good telecommunication infrastructure and economic growth.[31] Few dispute the existence of a correlation although some argue it is wrong to view the relationship as causal.[32]
Because of the economic benefits of good telecommunication infrastructure, there is increasing worry about the inequitable access to telecommunication services amongst various countries of the world—this is known as the digital divide. A 2003 survey by the International Telecommunication Union (ITU) revealed that roughly one-third of countries have fewer than one mobile subscription for every 20 people and one-third of countries have fewer than one land-line telephone subscription for every 20 people. In terms of Internet access, roughly half of all countries have fewer than one out of 20 people with Internet access. From this information, as well as educational data, the ITU was able to compile an index that measures the overall ability of citizens to access and use information and communication technologies.[33] Using this measure, Sweden, Denmark and Iceland received the highest ranking while the African countries Nigeria, Burkina Faso and Mali received the lowest.[34]

Social impact

Telecommunication has played a significant role in social relationships. Nevertheless, devices like the telephone system were originally advertised with an emphasis on the practical dimensions of the device (such as the ability to conduct business or order home services) as opposed to the social dimensions. It was not until the late 1920s and 1930s that the social dimensions of the device became a prominent theme in telephone advertisements. New promotions started appealing to consumers' emotions, stressing the importance of social conversations and staying connected to family and friends.[35]
Since then the role that telecommunications has played in social relations has become increasingly important. In recent years, the popularity of social networking sites has increased dramatically. These sites allow users to communicate with each other as well as post photographs, events and profiles for others to see. The profiles can list a person's age, interests, sexuality and relationship status. In this way, these sites can play an important role in everything from organising social engagements to courtship.[36]
Prior to social networking sites, technologies like SMS and the telephone also had a significant impact on social interactions. In 2000, market research group Ipsos MORI reported that 81% of 15 to 24 year-old SMS users in the United Kingdom had used the service to coordinate social arrangements and 42% to flirt.[37]

Other impacts

In cultural terms, telecommunication has increased the public's ability to access music and film. With television, people can watch films they have not seen before in their own home without having to travel to the video store or cinema. With radio and the Internet, people can listen to music they have not heard before without having to travel to the music store.
Telecommunication has also transformed the way people receive their news. A survey by the non-profit Pew Internet and American Life Project found that when just over 3,000 people living in the United States were asked where they got their news "yesterday", more people said television or radio than newspapers. The results are summarised in the following table (the percentages add up to more than 100% because people were able to specify more than one source).[38]
Local TV   National TV   Radio   Local paper   Internet   National paper
59%        47%           44%     38%           23%        12%
Telecommunication has had an equally significant impact on advertising. TNS Media Intelligence reported that in 2007, 58% of advertising expenditure in the United States was spent on media that depend upon telecommunication.[39] The results are summarised in the following table.

Medium          Percent   Dollars
Internet          7.6%    $11.31 billion
Radio             7.2%    $10.69 billion
Cable TV         12.1%    $18.02 billion
Syndicated TV     2.8%    $4.17 billion
Spot TV          11.3%    $16.82 billion
Network TV       17.1%    $25.42 billion
Newspaper        18.9%    $28.22 billion
Magazine         20.4%    $30.33 billion
Outdoor           2.7%    $4.02 billion
Total           100%      $149 billion

Telecommunication and government

Many countries have enacted legislation that conforms to the International Telecommunication Regulations established by the International Telecommunication Union (ITU), which is the "leading United Nations agency for information and communication technology issues."[40] In 1947, at the Atlantic City Conference, the ITU decided to "afford international protection to all frequencies registered in a new international frequency list and used in conformity with the Radio Regulation." According to the ITU's Radio Regulations adopted in Atlantic City, all frequencies referenced in the International Frequency Registration Board, examined by the board and registered on the International Frequency List "shall have the right to international protection from harmful interference."[41]
From a global perspective, there have been political debates and legislation regarding the management of telecommunication and broadcasting. The history of broadcasting discusses some of the debates over balancing conventional communication, such as printing, with telecommunication, such as radio broadcasting.[42] The onset of World War II brought on the first explosion of international broadcasting propaganda.[42] Countries, their governments, insurgents, terrorists, and militiamen have all used telecommunication and broadcasting techniques to promote propaganda.[42][43] Patriotic propaganda for political movements and colonization began in the mid-1930s. In 1936, the BBC broadcast propaganda to the Arab world, partly to counter similar broadcasts from Italy, which also had colonial interests in North Africa.[42]
Modern insurgents, such as those in the latest Iraq war, often use intimidating telephone calls and SMS messages, and distribute sophisticated videos of attacks on coalition troops within hours of the operation. "The Sunni insurgents even have their own television station, Al-Zawraa, which while banned by the Iraqi government, still broadcasts from Erbil, Iraqi Kurdistan, even as coalition pressure has forced it to switch satellite hosts several times."[43]

Modern operation

Telephone

Optical fibre provides cheaper bandwidth for long distance communication
In an analog telephone network, the caller is connected to the person he wants to talk to by switches at various telephone exchanges. The switches form an electrical connection between the two users and the setting of these switches is determined electronically when the caller dials the number. Once the connection is made, the caller's voice is transformed to an electrical signal using a small microphone in the caller's handset. This electrical signal is then sent through the network to the user at the other end where it is transformed back into sound by a small speaker in that person's handset. There is a separate electrical connection that works in reverse, allowing the users to converse.[44][45]
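The switching described above can be sketched as a toy model (the class, methods and numbers below are invented for illustration, not any real exchange's interface): dialling records a dedicated two-way circuit between two endpoints, which both parties then share for the duration of the call.

```python
# Toy model of a circuit-switched exchange (invented names, for
# illustration only): dialling "sets the switches" by creating a
# dedicated two-way circuit between caller and callee.

class Exchange:
    def __init__(self):
        self.circuits = {}

    def dial(self, caller, number):
        """Establish a dedicated circuit between the two endpoints."""
        self.circuits[frozenset((caller, number))] = True

    def connected(self, a, b):
        """A circuit is symmetric: either party can use it."""
        return self.circuits.get(frozenset((a, b)), False)

exchange = Exchange()
exchange.dial("555-0100", "555-0199")
assert exchange.connected("555-0100", "555-0199")
assert exchange.connected("555-0199", "555-0100")  # two-way
```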
The fixed-line telephones in most residential homes are analog — that is, the speaker's voice directly determines the signal's voltage. Although short-distance calls may be handled end-to-end as analog signals, telephone service providers are increasingly and transparently converting the signals to digital for transmission before converting them back to analog for reception. The advantage of this is that digitized voice data can travel side-by-side with data from the Internet and can be perfectly reproduced in long-distance communication (as opposed to analog signals, which are inevitably degraded by noise).
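The digitization step can be illustrated with a minimal quantizer sketch (the parameters are assumptions for illustration; real telephone networks typically use 8-bit companded PCM such as mu-law or A-law at 8,000 samples per second):

```python
# Illustrative sketch, not any carrier's actual codec: quantize an
# "analog" voltage in [-1.0, 1.0] to an 8-bit sample and back. Once
# digitized, a sample survives transmission exactly; only this initial
# quantization step introduces (bounded) error.

def quantize(voltage, bits=8):
    """Map a voltage in [-1.0, 1.0] to an integer code 0..2**bits - 1."""
    levels = 2 ** bits
    code = int(round((voltage + 1.0) / 2.0 * (levels - 1)))
    return max(0, min(levels - 1, code))

def reconstruct(code, bits=8):
    """Map an integer code back to an approximate voltage."""
    levels = 2 ** bits
    return code / (levels - 1) * 2.0 - 1.0

v = 0.437
c = quantize(v)
assert abs(reconstruct(c) - v) < 2.0 / 255  # error bounded by one step
```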
Mobile phones have had a significant impact on telephone networks. Mobile phone subscriptions now outnumber fixed-line subscriptions in many markets. Sales of mobile phones in 2005 totalled 816.6 million with that figure being almost equally shared amongst the markets of Asia/Pacific (204 m), Western Europe (164 m), CEMEA (Central Europe, the Middle East and Africa) (153.5 m), North America (148 m) and Latin America (102 m).[46] In terms of new subscriptions over the five years from 1999, Africa has outpaced other markets with 58.2% growth.[47] Increasingly these phones are being serviced by systems where the voice content is transmitted digitally such as GSM or W-CDMA with many markets choosing to deprecate analog systems such as AMPS.[48]
There have also been dramatic changes in telephone communication behind the scenes. Starting with the operation of TAT-8 in 1988, the 1990s saw the widespread adoption of systems based on optic fibres. The benefit of communicating with optic fibres is that they offer a drastic increase in data capacity. TAT-8 itself was able to carry 10 times as many telephone calls as the last copper cable laid at that time and today's optic fibre cables are able to carry 25 times as many telephone calls as TAT-8.[49] This increase in data capacity is due to several factors: First, optic fibres are physically much smaller than competing technologies. Second, they do not suffer from crosstalk, which means several hundred of them can be easily bundled together in a single cable.[50] Lastly, improvements in multiplexing have led to an exponential growth in the data capacity of a single fibre.[51][52]
Assisting communication across many modern optic fibre networks is a protocol known as Asynchronous Transfer Mode (ATM). The ATM protocol allows for the side-by-side data transmission mentioned above. It is suitable for public telephone networks because it establishes a pathway for data through the network and associates a traffic contract with that pathway. The traffic contract is essentially an agreement between the client and the network about how the network is to handle the data; if the network cannot meet the conditions of the traffic contract it does not accept the connection. This is important because telephone calls can negotiate a contract so as to guarantee themselves a constant bit rate, something that will ensure a caller's voice is not delayed in parts or cut off completely.[53] There are competitors to ATM, such as Multiprotocol Label Switching (MPLS), that perform a similar task and are expected to supplant ATM in the future.[54]
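The traffic-contract idea can be sketched as a toy admission-control check (the class and method names are invented for illustration, not the actual ATM signalling interface): a connection is accepted only if its requested constant bit rate still fits within the link's remaining capacity.

```python
# Minimal sketch of admission control in the spirit of an ATM traffic
# contract (hypothetical names): the network accepts a new connection
# only if it can still honour the requested constant bit rate
# alongside its existing commitments.

class Link:
    def __init__(self, capacity_kbps):
        self.capacity_kbps = capacity_kbps
        self.committed_kbps = 0

    def request_cbr(self, rate_kbps):
        """Accept a constant-bit-rate contract, or refuse it outright."""
        if self.committed_kbps + rate_kbps > self.capacity_kbps:
            return False  # contract refused: conditions cannot be met
        self.committed_kbps += rate_kbps
        return True  # contract accepted: the rate is now guaranteed

link = Link(capacity_kbps=256)
assert link.request_cbr(64)       # a 64 kbps voice call fits
assert link.request_cbr(128)      # still within capacity
assert not link.request_cbr(128)  # would exceed capacity: refused
```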

Radio and television

Digital television standards and their adoption worldwide.
In a broadcast system, the central high-powered broadcast tower transmits a high-frequency electromagnetic wave to numerous low-powered receivers. The high-frequency wave sent by the tower is modulated with a signal containing visual or audio information. The receiver is then tuned so as to pick up the high-frequency wave and a demodulator is used to retrieve the signal containing the visual or audio information. The broadcast signal can be either analog (signal is varied continuously with respect to the information) or digital (information is encoded as a set of discrete values).[22][55]
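Modulation and demodulation can be illustrated with a toy numerical sketch (the frequencies are invented for illustration, and the receiver uses coherent detection with a crude moving-average filter, not any broadcast standard's actual circuitry): a low-frequency message is multiplied onto a high-frequency carrier, and the receiver recovers it by mixing with the same carrier and filtering out the double-frequency component.

```python
import math

# Toy sketch of modulation and coherent demodulation (illustrative
# parameters only, not a real broadcast standard).

FS = 10000   # samples per second
FC = 1000    # carrier frequency, Hz
FM = 10      # message frequency, Hz
N = FS       # one second of samples

message = [math.cos(2 * math.pi * FM * n / FS) for n in range(N)]
carrier = [math.cos(2 * math.pi * FC * n / FS) for n in range(N)]
transmitted = [m * c for m, c in zip(message, carrier)]  # modulation

# Demodulation: mix with the carrier again, then average over one
# carrier period to remove the double-frequency component.
mixed = [2 * s * c for s, c in zip(transmitted, carrier)]
period = FS // FC
recovered = [sum(mixed[n:n + period]) / period for n in range(N - period)]

# The recovered waveform tracks the unit-amplitude message closely.
err = max(abs(r - m) for r, m in zip(recovered, message))
```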
The broadcast media industry is at a critical turning point in its development, with many countries moving from analog to digital broadcasts. This move is made possible by the production of cheaper, faster and more capable integrated circuits. The chief advantage of digital broadcasts is that they avoid a number of the problems that plague traditional analog broadcasts. For television, this includes the elimination of problems such as snowy pictures, ghosting and other distortion. These occur because of the nature of analog transmission, which means that perturbations due to noise will be evident in the final output. Digital transmission overcomes this problem because digital signals are reduced to discrete values upon reception and hence small perturbations do not affect the final output. In a simplified example, if a binary message 1011 was transmitted with signal amplitudes [1.0 0.0 1.0 1.0] and received with signal amplitudes [0.9 0.2 1.1 0.9] it would still decode to the binary message 1011 — a perfect reproduction of what was sent. From this example, a problem with digital transmissions can also be seen in that if the noise is great enough it can significantly alter the decoded message. Using forward error correction a receiver can correct a handful of bit errors in the resulting message but too much noise will lead to incomprehensible output and hence a breakdown of the transmission.[56][57]
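The thresholding in the example above can be written out directly (a minimal sketch of hard-decision decoding): amplitudes are snapped to the nearest of the two discrete values, so small perturbations vanish entirely, while large ones flip a bit.

```python
# Hard-decision decoding of the example from the text: received
# amplitudes are compared against a midpoint threshold.

def decode(amplitudes, threshold=0.5):
    return [1 if a >= threshold else 0 for a in amplitudes]

sent = [1, 0, 1, 1]                # binary message 1011
received = [0.9, 0.2, 1.1, 0.9]    # after mild channel noise
assert decode(received) == sent    # perfect reproduction

# With enough noise, a bit crosses the threshold and the decoded
# message is altered:
assert decode([0.9, 0.6, 1.1, 0.9]) != sent
```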
In digital television broadcasting, there are three competing standards that are likely to be adopted worldwide. These are the ATSC, DVB and ISDB standards; the adoption of these standards thus far is presented in the captioned map. All three standards use MPEG-2 for video compression. ATSC uses Dolby Digital AC-3 for audio compression, ISDB uses Advanced Audio Coding (MPEG-2 Part 7) and DVB has no standard for audio compression but typically uses MPEG-1 Part 3 Layer 2.[58][59] The choice of modulation also varies between the schemes. In digital audio broadcasting, standards are much more unified with practically all countries choosing to adopt the Digital Audio Broadcasting standard (also known as the Eureka 147 standard). The exception is the United States, which has chosen to adopt HD Radio. HD Radio, unlike Eureka 147, is based upon a transmission method known as in-band on-channel transmission that allows digital information to "piggyback" on normal AM or FM analog transmissions.[60]
However, despite the pending switch to digital, analog television is still transmitted in most countries. An exception is the United States, which ended analog television transmission (by all but the very low-power TV stations) on 12 June 2009[61] after twice delaying the switchover deadline. For analog television, there are three standards in use for broadcasting color TV. These are known as PAL (British designed), NTSC (North American designed), and SECAM (French designed). (These standards concern only the encoding of color; the standards for black-and-white TV, which also vary from country to country, are separate.) For analog radio, the switch to digital radio is made more difficult by the fact that analog receivers are sold at a small fraction of the price of digital receivers.[62][63] The choice of modulation for analog radio is typically between amplitude modulation (AM) and frequency modulation (FM). To achieve stereo playback, an amplitude modulated subcarrier is used for stereo FM.

The Internet

The Internet is a worldwide network of computers and computer networks that can communicate with each other using the Internet Protocol.[64] Any computer on the Internet has a unique IP address that can be used by other computers to route information to it. Hence, any computer on the Internet can send a message to any other computer using its IP address. These messages carry with them the originating computer's IP address allowing for two-way communication. The Internet is thus an exchange of messages between computers.[65]
As of 2008, an estimated 21.9% of the world population has access to the Internet with the highest access rates (measured as a percentage of the population) in North America (73.6%), Oceania/Australia (59.5%) and Europe (48.1%).[66] In terms of broadband access, Iceland (26.7%), South Korea (25.4%) and the Netherlands (25.3%) led the world.[67]
The Internet works in part because of protocols that govern how the computers and routers communicate with each other. The nature of computer network communication lends itself to a layered approach where individual protocols in the protocol stack run more-or-less independently of other protocols. This allows lower-level protocols to be customized for the network situation while not changing the way higher-level protocols operate. A practical consequence is that an Internet browser can run the same code regardless of whether the computer it runs on is connected to the Internet through an Ethernet or Wi-Fi connection. Protocols are often talked about in terms of their place in the OSI reference model, which emerged in 1983 as the first step in an unsuccessful attempt to build a universally adopted networking protocol suite.[68]
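Why this independence matters can be sketched with two interchangeable toy "link layers" (all names and message formats below are invented for illustration): the application-level function is identical whichever one is plugged in underneath.

```python
# Toy illustration of layering: the "application" calls the same
# send path no matter which lower-layer transport it sits on.

class EthernetLink:
    def transmit(self, payload):
        return f"[eth]{payload}"

class WifiLink:
    def transmit(self, payload):
        return f"[wifi]{payload}"

def fetch_page(link, url):
    """Application-layer code: identical regardless of the link layer."""
    return link.transmit(f"GET {url}")

assert fetch_page(EthernetLink(), "/index.html") == "[eth]GET /index.html"
assert fetch_page(WifiLink(), "/index.html") == "[wifi]GET /index.html"
```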
For the Internet, the physical medium and data link protocol can vary several times as packets traverse the globe. This is because the Internet places no constraints on what physical medium or data link protocol is used. This leads to the adoption of media and protocols that best suit the local network situation. In practice, most intercontinental communication will use the Asynchronous Transfer Mode (ATM) protocol (or a modern equivalent) on top of optic fibre. This is because for most intercontinental communication the Internet shares the same infrastructure as the public switched telephone network.
At the network layer, things become standardized with the Internet Protocol (IP) being adopted for logical addressing. For the World Wide Web, these "IP addresses" are derived from the human readable form using the Domain Name System (e.g. 72.14.207.99 is derived from www.google.com). At the moment, the most widely used version of the Internet Protocol is version four but a move to version six is imminent.[69]
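Conceptually, the Domain Name System is a distributed lookup from human-readable names to IP addresses; a toy, in-memory version (the table entries below are invented and use RFC 5737 documentation addresses) looks like this:

```python
# Toy name-to-address lookup in the spirit of DNS. Real resolution
# walks a hierarchy of name servers rather than one local table; in
# Python the real query is performed by socket.getaddrinfo().

DNS_TABLE = {
    "www.example.net": "192.0.2.10",   # RFC 5737 documentation address
    "mail.example.net": "192.0.2.25",
}

def resolve(hostname):
    """Return the IP address registered for a hostname, if any."""
    return DNS_TABLE.get(hostname)

assert resolve("www.example.net") == "192.0.2.10"
assert resolve("nosuch.example.net") is None
```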
At the transport layer, most communication adopts either the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). TCP is used when it is essential that every message sent is received by the other computer, whereas UDP is used when it is merely desirable. With TCP, packets are retransmitted if they are lost and placed in order before they are presented to higher layers. With UDP, packets are not ordered or retransmitted if lost. Both TCP and UDP packets carry port numbers with them to specify what application or process the packet should be handled by.[70] Because certain application-level protocols use certain ports, network administrators can manipulate traffic to suit particular requirements. Examples are to restrict Internet access by blocking the traffic destined for a particular port or to affect the performance of certain applications by assigning priority.
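UDP's connectionless, port-addressed delivery can be demonstrated over the loopback interface with Python's standard socket module (a minimal sketch; a real application needing ordering or retransmission would either add it itself or use TCP instead):

```python
import socket

# Minimal loopback sketch of UDP's port-based demultiplexing: a
# datagram is addressed to an (IP, port) pair and delivered with no
# connection setup, no ordering and no retransmission.

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # let the OS pick a free port
receiver.settimeout(5)
port = receiver.getsockname()[1]     # the port that identifies us

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))  # fire-and-forget datagram

data, addr = receiver.recvfrom(1024)
sender.close()
receiver.close()
```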
Above the transport layer are certain protocols that are sometimes used and that loosely fit in the session and presentation layers, most notably the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. These protocols ensure that the data transferred between two parties remains completely confidential, and one or the other is in use when a padlock appears in the address bar of a web browser.[71] Finally, at the application layer are many of the protocols Internet users would be familiar with, such as HTTP (web browsing), POP3 (e-mail), FTP (file transfer), IRC (Internet chat), BitTorrent (file sharing) and OSCAR (instant messaging).
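With Python's standard ssl module, layering TLS on top of a TCP socket amounts to wrapping the socket in a context; the sketch below builds the default client context (the machinery behind the browser padlock) without opening a real connection.

```python
import ssl

# The default client context authenticates the server's certificate
# against the system trust store and checks the hostname against it.
context = ssl.create_default_context()

assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname

# Wrapping a TCP socket would then look like (not executed here):
# with socket.create_connection(("example.org", 443)) as tcp:
#     with context.wrap_socket(tcp, server_hostname="example.org") as tls:
#         tls.sendall(b"GET / HTTP/1.1\r\nHost: example.org\r\n\r\n")
```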

Local Area Networks and Wide Area Networks

Despite the growth of the Internet, the characteristics of local area networks ("LANs" - computer networks that do not extend beyond a few kilometers in size) remain distinct. This is because networks on this scale do not require all the features associated with larger networks and are often more cost-effective and efficient without them. When they are not connected to the Internet, they also have the advantages of privacy and security. However, purposefully lacking a direct connection to the Internet does not guarantee protection of the LAN from hackers, military forces, or economic powers. These threats exist if there are any methods for connecting remotely to the LAN.
There are also independent wide area networks ("WANs" - private computer networks that can and do extend for thousands of kilometers). Once again, some of their advantages include privacy, security, and immunity to remote attackers, who cannot "touch" them at all. Of course, prime users of private LANs and WANs include armed forces and intelligence agencies that must keep their information completely secure and secret.
In the mid-1980s, several sets of communication protocols emerged to fill the gaps between the data-link layer and the application layer of the OSI reference model. These included AppleTalk, IPX, and NetBIOS, with the dominant protocol set during the early 1990s being IPX due to its popularity with MS-DOS users. TCP/IP existed at this point, but it was typically only used by large government and research facilities.[72]
As the Internet grew in popularity and a larger percentage of traffic became Internet-related, LANs and WANs gradually moved towards the TCP/IP protocols, and today networks mostly dedicated to TCP/IP traffic are common. The move to TCP/IP was helped by technologies such as DHCP that allowed TCP/IP clients to discover their own network address — a function that came standard with the AppleTalk/IPX/NetBIOS protocol sets.[73]
It is at the data-link layer, though, that most modern LANs diverge from the Internet. Whereas Asynchronous Transfer Mode (ATM) or Multiprotocol Label Switching (MPLS) are typical data-link protocols for larger networks such as WANs, Ethernet and Token Ring are typical data-link protocols for LANs. These protocols differ from the former in that they are simpler (e.g. they omit features such as Quality of Service guarantees) and offer collision prevention. Both of these differences allow for more economical systems.[74] Despite the modest popularity of IBM Token Ring in the 1980s and 1990s, virtually all LANs now use wired or wireless Ethernet. At the physical layer, most wired Ethernet implementations use copper twisted-pair cables (including the common 10BASE-T networks). However, some early implementations used heavier coaxial cables and some recent implementations (especially high-speed ones) use optical fibres.[75] When optic fibres are used, a distinction must be made between multimode and single-mode fibres. Multimode fibres can be thought of as thicker optical fibres that are cheaper to manufacture devices for, but that suffer from lower usable bandwidth and worse attenuation, implying poorer long-distance performance.[76]

( source : http://en.wikipedia.org/wiki/Electronic_communications )