
Thursday, June 18, 2009

Multi-degree Reconfigurable Optical Add/Drop Multiplexer (ROADM) functionality extends the flexibility of the CN 4200 to the optical layer

Available for deployment on the CN 4200 RS platform, ROADM allows the same control over the optical path that network operators enjoy over individual services. A ROADM architecture enables networks to maximize available system bandwidth by adding dynamic reconfigurability at the individual wavelength level, ideal for network applications in which wavelength planning is difficult due to uncertain traffic projections. As a result, changes in the network can happen on demand without affecting other wavelengths and services.

CN 4200 ROADM is provided using four key modules:

  • Dynamic Wavelength Router (DWR)
  • Optical Channel Monitor (OCM-8)
  • Variable Gain (Rx) Amplifier (OAV-VS-U-C)
  • Fixed Gain (Tx) Amplifier (OAF-BC-B)

The nine-port Wavelength Selectable Switch (WSS)-based DWR module performs the primary multi-degree optical switching functionality at each ROADM node. Each DWR module contains a WSS capable of dynamically adding, dropping, or expressing any of 44 wavelengths to any of nine ports, in any combination, and can support 10G and 40G wavelengths simultaneously.

The DWR module also incorporates a passive wavelength combiner that can add or multiplex optical signals from up to nine tributary ports into an aggregate signal. Network reconfiguration using the DWR module allows flexible, remote provisioning of any demand, and simplifies network planning by safeguarding upgrade capacity and extending network life—resulting in operational and capital savings and faster revenue capture.

In addition to remotely provisioned add/drop routing, CN 4200 ROADM supports automatic optical power control, which automatically adjusts optical power levels for add/drop and express traffic. To achieve power equalization across all wavelengths, each ROADM node requires one OCM module (OCM-8) to monitor the optical power levels of up to 44 different wavelengths on eight inputs.

The OAV-VS-U-C is a variable gain amplifier typically used as the receiving amplifier to offset fiber loss in the preceding span. The OAV-VS-U-C uses SmartGain™ dynamic gain control, transient suppression and FlexSpan™ auto span loss compensation, and provides mid-stage access for dispersion compensation.

The OAF-BC-B is a booster-combiner amplifier that produces a fixed gain of 20.5 dB on the incoming Dense Wavelength Division Multiplexing (DWDM) signal. The booster acts as an aggregation point for multiple degrees and add/drops, with the capability to support two local add DWR modules. Each OAF-BC-B supports four degrees and is expandable to eight degrees.


Source: www.ciena.com

Monday, June 1, 2009

Digital Filters

Digital filters are used for two general purposes: (1) separation of signals that have been combined, and (2) restoration of signals that have been distorted in some way. Analog (electronic) filters can be used for these same tasks; however, digital filters can achieve far superior results. The most popular digital filters are described and compared in the next seven chapters. This introductory chapter describes the parameters you want to look for when learning about each of these filters.


Filter Basics
Digital filters are a very important part of DSP. In fact, their extraordinary performance is one of the key reasons that DSP has become so popular. As mentioned in the introduction, filters have two uses: signal separation and signal restoration. Signal separation is needed when a signal has been contaminated with interference, noise, or other signals. For example, imagine a device for measuring the electrical activity of a baby's heart (EKG) while still in the womb. The raw signal will likely be corrupted by the breathing and heartbeat of the mother. A filter might be used to separate these signals so that they can be individually analyzed.
Signal restoration is used when a signal has been distorted in some way. For example, an audio recording made with poor equipment may be filtered to better represent the sound as it actually occurred. Another example is the deblurring of an image acquired with an improperly focused lens, or a shaky camera.
These problems can be attacked with either analog or digital filters. Which is better? Analog filters are cheap, fast, and have a large dynamic range in both amplitude and frequency. Digital filters, in comparison, are vastly superior in the level of performance that can be achieved. For example, a low-pass digital filter presented in Chapter 16 has a gain of 1 +/- 0.0002 from DC to 1000 hertz, and a gain of less than 0.0002 for frequencies above 1001 hertz. The entire transition occurs within only 1 hertz. Don't expect this from an op amp circuit! Digital filters can achieve thousands of times better performance than analog filters. This makes a dramatic difference in how filtering problems are approached. With analog filters, the emphasis is on handling limitations of the electronics, such as the accuracy and stability of the resistors and capacitors. In comparison, digital filters are so good that the performance of the filter is frequently ignored. The emphasis shifts to the limitations of the signals, and the theoretical issues regarding their processing.
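To give a concrete sense of how such a sharp cutoff can be achieved, here is a minimal NumPy sketch of a windowed-sinc low-pass filter, the general technique behind filters of this kind; the sampling rate, cutoff frequency, and kernel length below are arbitrary illustration values, not the exact Chapter 16 design.

```python
import numpy as np

def windowed_sinc_lowpass(cutoff_hz, fs_hz, num_taps=1001):
    """Design a low-pass FIR kernel: a truncated sinc shaped by a Blackman window."""
    fc = cutoff_hz / fs_hz                  # cutoff as a fraction of the sampling rate
    m = num_taps - 1
    n = np.arange(num_taps)
    h = np.sinc(2 * fc * (n - m / 2))       # shifted ideal (sinc) impulse response, truncated
    h *= np.blackman(num_taps)              # window tapers the truncation, reducing ripple
    h /= h.sum()                            # normalize so the gain at DC is exactly 1
    return h

kernel = windowed_sinc_lowpass(cutoff_hz=1000, fs_hz=10000)
# Inspect the response with, e.g., 20*np.log10(np.abs(np.fft.rfft(kernel, 8192)))
```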
It is common in DSP to say that a filter's input and output signals are in the time domain. This is because signals are usually created by sampling at regular intervals of time. But this is not the only way sampling can take place. The second most common way of sampling is at equal intervals in space. For example, imagine taking simultaneous readings from an array of strain sensors mounted at one centimeter increments along the length of an aircraft wing. Many other domains are possible; however, time and space are by far the most common. When you see the term time domain in DSP, remember that it may actually refer to samples taken over time, or it may be a general reference to any domain that the samples are taken in.
As shown in Fig. 14-1, every linear filter has an impulse response, a step response and a frequency response. Each of these responses contains complete information about the filter, but in a different form. If one of the three is specified, the other two are fixed and can be directly calculated. All three of these representations are important, because they describe how the filter will react under different circumstances.
The most straightforward way to implement a digital filter is by convolving the input signal with the digital filter's impulse response. All possible linear filters can be made in this manner. (This should be obvious. If it isn't, you probably don't have the background to understand this section on filter design. Try reviewing the previous section on DSP fundamentals). When the impulse response is used in this way, filter designers give it a special name: the filter kernel.
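As a minimal illustration of filtering by convolution (the input signal and the three-point averaging kernel below are made up purely for this example):

```python
import numpy as np

# A made-up input signal and a simple 3-point moving-average filter kernel
x = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])
kernel = np.array([1/3, 1/3, 1/3])   # the impulse response, used here as the filter kernel

# Each output sample is a weighted sum of input samples (convolution)
y = np.convolve(x, kernel)
print(y)
```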
There is also another way to make digital filters, called recursion. When a filter is implemented by convolution, each sample in the output is calculated by weighting the samples in the input, and adding them together. Recursive filters are an extension of this, using previously calculated values from the output, besides points from the input. Instead of using a filter kernel, recursive filters are defined by a set of recursion coefficients. This method will be discussed in detail in Chapter 19. For now, the important point is that all linear filters have an impulse response, even if you don't use it to implement the filter. To find the impulse response of a recursive filter, simply feed in an impulse, and see what comes out. The impulse responses of recursive filters are composed of sinusoids that exponentially decay in amplitude. In principle, this makes their impulse responses infinitely long. However, the amplitude eventually drops below the round-off noise of the system, and the remaining samples can be ignored. Because of this characteristic, recursive filters are also called Infinite Impulse Response or IIR filters. In comparison, filters carried out by convolution are called Finite Impulse Response or FIR filters.
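As a sketch of the idea (a generic single-pole low-pass recursion with arbitrary coefficients, not any particular filter from Chapter 19), the snippet below also shows how feeding in an impulse reveals the exponentially decaying impulse response that gives IIR filters their name:

```python
import numpy as np

def recursive_filter(x, a0=0.15, b1=0.85):
    """Single-pole recursive (IIR) low-pass filter: y[n] = a0*x[n] + b1*y[n-1]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = a0 * x[n] + (b1 * y[n - 1] if n > 0 else 0.0)
    return y

# Find the impulse response by feeding in an impulse and seeing what comes out
impulse = np.zeros(20)
impulse[0] = 1.0
print(recursive_filter(impulse))   # 0.15, 0.1275, 0.1084, ... decaying toward zero
```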
As you know, the impulse response is the output of a system when the input is an impulse. In this same manner, the step response is the output when the input is a step (also called an edge, and an edge response). Since the step is the integral of the impulse, the step response is the integral of the impulse response. This provides two ways to find the step response: (1) feed a step waveform into the filter and see what comes out, or (2) integrate the impulse response. (To be mathematically correct: integration is used with continuous signals, while discrete integration, i.e., a running sum, is used with discrete signals). The frequency response can be found by taking the DFT (using the FFT algorithm) of the impulse response. This will be reviewed later in this chapter. The frequency response can be plotted on a linear vertical axis, such as in (c), or on a logarithmic scale (decibels), as shown in (d). The linear scale is best at showing the passband ripple and roll-off, while the decibel scale is needed to show the stopband attenuation.
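A minimal sketch of these relationships in NumPy (using a made-up exponentially decaying impulse response; any impulse response would do):

```python
import numpy as np

# A made-up impulse response (exponentially decaying, as for a simple recursive filter)
h = 0.15 * 0.85 ** np.arange(64)

# Step response: the running sum (discrete integral) of the impulse response
step_response = np.cumsum(h)

# Frequency response: the DFT of the impulse response (zero-padded FFT for resolution)
H = np.fft.rfft(h, 1024)
magnitude = np.abs(H)                    # linear scale, good for showing passband ripple
magnitude_db = 20 * np.log10(magnitude)  # decibel scale, needed for stopband attenuation
```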
Don't remember decibels? Here is a quick review. A bel (in honor of Alexander Graham Bell) means that the power is changed by a factor of ten. For example, an electronic circuit that has 3 bels of amplification produces an output signal with 10 × 10 × 10 = 1000 times the power of the input. A decibel (dB) is one-tenth of a bel. Therefore, the decibel values of: -20dB, -10dB, 0dB, 10dB & 20dB, mean the power ratios: 0.01, 0.1, 1, 10, & 100, respectively. In other words, every ten decibels mean that the power has changed by a factor of ten.
Here's the catch: you usually want to work with a signal's amplitude, not its power. For example, imagine an amplifier with 20dB of gain. By definition, this means that the power in the signal has increased by a factor of 100. Since amplitude is proportional to the square-root of power, the amplitude of the output is 10 times the amplitude of the input. While 20dB means a factor of 100 in power, it only means a factor of 10 in amplitude. Every twenty decibels mean that the amplitude has changed by a factor of ten. In equation form:
dB = 10 log10(P2/P1) for power ratios, and dB = 20 log10(A2/A1) for amplitude ratios. These equations use the base 10 logarithm; however, many computer languages only provide a function for the base e logarithm (the natural log, written loge x or ln x). The natural log can be used by modifying the above equations: dB = 4.342945 loge(P2/P1) and dB = 8.685890 loge(A2/A1).
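For a quick numerical check of these relations (the power and amplitude values below are chosen only for illustration):

```python
import math

P1, P2 = 1.0, 100.0   # a 100x increase in power
A1, A2 = 1.0, 10.0    # the corresponding 10x increase in amplitude

print(10 * math.log10(P2 / P1))       # 20.0 dB from the power ratio
print(20 * math.log10(A2 / A1))       # 20.0 dB from the amplitude ratio
print(4.342945 * math.log(P2 / P1))   # same result using the natural log
print(8.685890 * math.log(A2 / A1))   # same result using the natural log
```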
Since decibels are a way of expressing the ratio between two signals, they are ideal for describing the gain of a system, i.e., the ratio between the output and the input signal. However, engineers also use decibels to specify the amplitude (or power) of a single signal, by referencing it to some standard. For example, the term dBV means that the signal is being referenced to a 1 volt rms signal. Likewise, dBm indicates a reference signal producing 1 mW into a 600 ohm load (about 0.78 volts rms).

New telecommunications technologies

While many new technologies might be considered in this review of new telecommunications technologies, this paper will be restricted to a consideration of those likely to have a major impact on corporate networks and communications.
OPTICAL NETWORKS, ATM TECHNOLOGY AND GIGABIT ETHERNET PRIVATE NETWORKS
Optical Networks
A small band of researchers expect that optical fiber amplifiers will revolutionize communications. Optical amplifiers and optical switches are now being installed in many networks, providing an opportunity to utilize the latent fiber bandwidth and provide an enormous increase in backbone capacity. All-optical, or lightwave-to-lightwave networks will become a reality in the near future. The need to transmit using lightwaves over fiber optic backbones and convert lightwaves to electrical signals for switching and transfers to end points in the network will soon become a thing of the past.
Today, major communications networks already make substantial use of lightwave devices in their high-speed networks. The managers of these networks foresee a day in the not-too-distant future when network links will go "lightwave to lightwave" with little conversion to other forms. In addition, these networks can use new compression and modulation technologies to add more capacity to their fiber optic backbones, raising capacity by a factor of nearly ten over the next few years.
As a consequence, telephone companies are likely to offer large corporations substantially greater speed connections to their networks, with 2.4 gigabits per second being the most likely. In addition, the price of such connections, now $200,000 per month or more, will probably decline to about $10,000 by early in the next decade, a twenty-fold decline in price over just four or five years. Such a change would certainly be revolutionary. It could also herald a shift in network connection devices from electronic to lightwave, making communications even cheaper and less complex.
The main issue these carriers face in the coming years is how costly it will be to divide high-speed drops into links that bring communications to phones and desktops. At present, this conversion in office buildings is likely to be quite expensive, slowing the economic impact of the advantages of lightwave technology. If rapid technological innovation takes place in this arena, it could accelerate the creation of very inexpensive broadband services, first to businesses and later to consumers.
Asynchronous Transfer Mode (ATM) Technology
Prior to the recent interest in optical networks, Asynchronous Transfer Mode (ATM) technology was considered to be the major technology that would shape the future of communications. Many long distance firms, including AT&T, Sprint, and MCI, plus Internet Service Providers UUNet and PSINet, have adopted ATM technology for high speed communications that can offer direct access to customers. The advent of optical networks, plus competition from Gigabit Ethernet and less expensive frame relay services, may result in ATM losing out as a viable technology. Many vendors of ATM equipment have begun to shift to optical switching technologies.
ATM, when first proposed, seemed to be a remedy for many communications problems. It could alleviate bottlenecks at the desktop and deliver vast amounts of information across long distances. ATM is a switched, connection-oriented technology with a fixed cell size, making it an inherently reliable, scalable technology that can speed data from one desktop to another or across national networks. The Achilles heel of ATM has been the high cost of its deployment and the slow speed it brings to the desktop, only twenty-five Megabits per second (Mb/s) at a time when Gigabit Ethernet will soon offer users 1000 Mb/s; this has retarded its adoption.
Nevertheless, due to its high 155 Mb/s speed, many corporate network backbones have adopted ATM technology. In addition, long-distance phone companies have favored using the technology, with some using it at 622 Mb/s. These networks have met users' immediate demand for speed.
Now, the future of ATM seems more unpredictable. In some environments, such as corporate networks, it may continue a rapid pace of deployment. In national long-distance networks, ATM no longer seems to be the technology of choice, given the significant advantages of optical switched networks.
Gigabit Ethernet
Gigabit Ethernet will relieve congestion problems in the communications networks used for corporate data, Local Area Networks (LANs). The bottlenecks exist because the speed of external network connections and desktops has increased dramatically, while the ability to send complex graphics documents and use videoconferencing within LANs has not improved at a comparable pace. Thus, there is a growing speed bottleneck in data networks.
Gigabit Ethernet is a new standard that will let corporate network users send packets of data at 1,000 Megabits per second (Mb/s) through campus networks. These networks originally functioned using the Ethernet standard at ten Mb/s. The new standard, which should be official in March 1998, builds on the Fiber Channel, which offers 800 Mb/s throughput that can be accelerated by boosting the signaling rate to 1.25 Gigabits per second (Gb/s). Commercial Gigabit Ethernet products are likely to be adopted in large numbers in 1999.

Telecom's hottest technologies

Future tech: telecom's hottest technologies - Cover Story - a look at the critical technologies that will drive the telecom industry over the next five years and beyond

These are tough times in telecoms; capex is out, ROI is in. CFOs stalk telco boardrooms, striking down new projects.
In this environment it's sometimes hard to remember that telecom thrives on technology advances. In the last 30 years the introduction of optical fiber, digital switching and cellular radio alone has revolutionized this industry and the way the planet communicates.
And while the telecom downturn raises the bar on carrier network spending, that doesn't stop technology competition. In fact, the demand for increased efficiency and ROI is driving CTOs and, yes, CFOs to acquire smarter, more efficient and more robust technologies before rivals do.
This special report by Telecom Asia staff analyzes the critical technologies most likely to drive the industry over the next five years and beyond.
Some are core technologies, like nanotech and mesh networking, that could change the entire cost structure of hardware and networks. Others are major enhancements of existing tech, such as Wi-Fi and optical Ethernet, that also have serious disruptive potential. Others still, like UWB and Powerline, are new innovations altogether.
Though many of these are still in early stage R&D, some are already making their way into carrier business plans. Yet, as ever, it will be some time before we can truly assess how these will impact the way operators build networks and deliver and price services; it is this unpredictability that is the core challenge of telecommunications.
Despite the uncertainty, of one thing there is no doubt: the slowdown may have hit growth, but it certainly hasn't halted innovation.
NANOTECHNOLOGY: Let's get small
Recent advances in nanotech promise great things for telecoms. In the strictest sense, nanotechnology operates at a scale a thousand times smaller than microtechnology, roughly 1/80,000 the diameter of a human hair. The MEMS (microelectromechanical systems) technology used in tunable lasers, tunable filters, variable optical attenuators, dynamic gain equalizers, and the micromirrors in all-optical switches is microtech, not nanotech.
The same goes for the next-gen 90-nm transistors that Motorola, STMicroelectronics and Philips plan to start manufacturing by the end of 2002, as well as the 90-nm "strained silicon" transistors that Intel says it will produce next year.
Regardless of the (sorry) hair-splitting, however, the components of telecoms technology are getting smaller, and there are over 900 start-ups working to make it happen. In the optical space, MEMS is already an inherent part of all-optical switches currently on sale from Ciena, Corvis, Sycamore, and Tellium, but that's just the start.
New Jersey-based start-up NanoOpto has been generating considerable buzz with its "subwavelength optical elements", which essentially do the same things as simple passive optical components like filters and couplers, but on the nanometer level.
One reason for the buzz is that NanoOpto's components aren't vaporware. The company introduced its first components in March this year. This past September, NanoOpto began shipping trial samples of its SubWave Phase Management components called waveplates, which help optical subsystems compensate for dispersion by slowing down light.
Meanwhile, companies like Canada's Galian Photonics and University of Southampton offshoot Mesophotonics are developing so-called "photonic crystals" which guide light along a path on a micron distance scale, overcoming design barriers for all-optical components, although actual products are a few years away.
MEMS/nanotech isn't just about fiber optics. In July, for instance, US semiconductor company Kopin announced a new range of LED chips called CyberLite that use nanotech to get around the natural atomic-level defects in LED chips, which occur roughly every 100 nanometers and usually prevent the chips from operating below 3 volts. Kopin's "NanoPockets" technique essentially keeps light away from the defects, resulting in a brighter, low-power LED at 2.8 or 2.9 volts that can be used as backlighting for cell phone or PDA screens and keypads, among other things.
Another company, Discera, uses MEMS technology to make a receiver called a "vibrating mechanical resonator" which could give radio devices like cell phones better frequency selectivity and improved battery life.
Nanotech is also promising to change the rules on things like data storage. The University of Arizona Optical Data Storage Center is working on a technique for using MEMS probe devices to read and write on cheap nanotech organic films, allowing data to be written, read and stored in clusters of molecules. The result: storage measured in terabits per square inch. In lay terms, that's 1,540 CDs packed onto a single CD.
Nanotech will even have its own markup language in the near future. Virginia-based NanoTitan has already written one--an open source software code called nanoML that's intended to do for nanocomputing what HTML did for the Web by helping engineers define the elements needed to build integrated nanocomputing devices and nanosystems.

Taiwan Launches Its First WiMax Network


The first commercial WiMax broadband wireless network in Taiwan opened for business on Monday.
Tatung InfoComm formally launched WiMax services on Penghu, Taiwan's largest outlying island, which is famous for windsurfing and will soon be home to several casinos under a new gambling initiative.
The company is offering several specials to entice Penghu's 93,000 citizens to sign up for the new high speed wireless Internet access service.
Anyone who signs up between now and June 30 for unlimited monthly WiMax service, including both a WiMax card for their laptop PC and a WiMax modem for their home, will pay NT$1,680 (US$50) per month.
People who just want a WiMax data card for their laptop can sign up for NT$699 per month unlimited service, and people who want WiMax at home can opt for an NT$649 per month plan.
The minimum price for monthly service will be NT$1,000 per month after the special introductory rates expire.
Taiwan has been at the forefront of investing in and promoting WiMax technology, both for domestic use and to boost production of WiMax gear among the island's manufacturers.
The government handed out WiMax network licenses to several companies in Taiwan, some licensed to build WiMax networks in Northern Taiwan and others in the South.
WiMax is part of the government's M-Taiwan (Mobile Taiwan) program aimed at ensuring people all over the island, including remote mountain villages and offshore islands, will be able to access the Internet wirelessly. The high speed wireless technology has been promoted globally as a speedier replacement for the Wi-Fi technology found in coffee shops and elsewhere.
As part of M-Taiwan, the government has offered generous research grants and co-investment to companies on the island to help jump start WiMax services. The hope is that by being an early adopter and producer of WiMax products, Taiwanese manufacturers will benefit from the global deployment of WiMax.
Source: www.pcworld.com

Don't overhype WiMAX speeds, analyst warns operators

WiMAX operators who overhype their networks' peak speeds risk angering early adopters who feel their expectations haven't been met, an analyst from Pyramid Research said Wednesday.
During a Webinar on best practices for deploying WiMAX, Pyramid Research analyst Özgür Aytar said that one of the biggest potential pitfalls for WiMAX operators is emphasizing network speed too much in their pitches to customers. Although she acknowledged that overhyping peak network speeds is a bad practice for any network operator, she said it is particularly bad for those offering new technologies such as WiMAX because early adopters could quickly grow disillusioned if they find themselves frustrated by slower-than-expected speeds.
"The factors that lead to success for all kinds of broadband deployments come when companies haven't just been the first to market but have been the best to market," said Aytar, who recently completed a study of WiMAX deployment practices that involved interviewing executives from 17 WiMAX operators worldwide. "Early adopters are likely to generate backlash against ISPs if their expectations are not being met… a number of operators that we looked at shy away from promoting actual data speeds to avoid customer backlash."
To demonstrate how to properly market WiMAX services, Aytar used the example of Curaçao-based ISP Scarlet, which she said marketed its plans simply as "basic," "fast," "faster," and "fastest" rather than emphasizing their peak data rates. Aytar said that because Scarlet isn't "promising to over-deliver on bandwidth," it has "established a balance" between its marketing goals and its ability to deliver strong services. Moises Abadi, CEO of Panamanian ISP Liberty Technologies, agreed with Aytar and said it was important for operators to "understand what WiMAX can and can't do" and to set "realistic expectations" among customers. He said the most important part of setting expectations is conducting extensive propagation studies and understanding the geographic topologies of the areas that operators wish to cover with their WiMAX services.