Project Member: Dusan Stepanovic
First demonstrations of functional wireless communication systems operating in the recently opened 60GHz frequency band use very simple modulation schemes such as BPSK. An alluring way to further increase data throughput in these systems is to increase the complexity of the modulation schemes. One of the main obstacles encountered in practical implementations of such systems is the power consumption of the baseband analog-to-digital (A/D) converters.
The goal of this research is to build a power-efficient A/D converter in the 2-3GHz sampling frequency range with a resolution of 8 effective bits. The recent resurgence of successive approximation (SAR) A/D converters has demonstrated extreme power efficiency. Although the speed of SAR converters is steadily increasing thanks to faster transistors in new process technologies, it is still far from our target frequency range. An attractive approach to shifting the efficiency of SAR converters toward higher sampling rates is time-interleaving of multiple channels. For ultimate power efficiency, the A/D converter should operate in the thermal-noise-limited regime. For moderate-resolution SAR converters this requires the use of sub-fF capacitors, which raises the issue of matching due to both layout effects and random fluctuations. Therefore, in addition to standard channel mismatch effects, the nonlinearities of the individual channels need to be corrected. In this project we propose a novel deterministic calibration algorithm with rapid convergence that unifies the correction of channel timing, gain, and offset mismatch with the correction of the nonlinearities of the individual SAR converters.
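A minimal numerical sketch of the interleaving problem, assuming foreground calibration against a known reference tone. All parameters (channel count, error magnitudes, tone frequency) are illustrative assumptions; the sketch covers only static gain and offset correction, not the proposed deterministic algorithm, which additionally handles timing mismatch and per-channel nonlinearity:

```python
import numpy as np

M = 4      # number of interleaved channels (illustrative choice)
N = 4096   # calibration samples per channel

rng = np.random.default_rng(0)
offsets = rng.normal(0.0, 0.01, M)      # per-channel offset errors
gains = 1.0 + rng.normal(0.0, 0.02, M)  # per-channel gain errors

# Known calibration tone, sampled round-robin by the M channels.
t = np.arange(M * N)
x = np.sin(2 * np.pi * 0.1234 * t)

# Each channel digitizes every M-th sample through its own gain and offset.
y = x.copy()
for m in range(M):
    y[m::M] = gains[m] * x[m::M] + offsets[m]

# Foreground correction: the channel mean estimates the offset, and the
# ratio of channel RMS to reference RMS estimates the gain; invert both.
corrected = y.copy()
for m in range(M):
    ch = y[m::M]
    off_est = ch.mean()
    gain_est = np.sqrt(np.mean((ch - off_est) ** 2) / np.mean(x[m::M] ** 2))
    corrected[m::M] = (ch - off_est) / gain_est

err_before = np.sqrt(np.mean((y - x) ** 2))
err_after = np.sqrt(np.mean((corrected - x) ** 2))
```

Uncorrected, the per-channel gain and offset errors show up as spurious tones in the interleaved output spectrum; after this correction the residual error drops by more than an order of magnitude in this toy setting.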
Project Member: Katerina Papadopoulou
The rapid technology developments in the metal-oxide-semiconductor industry have led to CMOS scaling into the sub-30nm regime, and according to the 2009 ITRS projections, printed gate lengths will scale down to approximately 12nm by 2020. As CMOS technology enters the deep-submicron regime, variability presents a major challenge in analog and digital circuit design, driving significant changes in device manufacturing technology.
The focus of this research is the design of test circuits to characterize both systematic and random variability in a modern Fully Depleted Silicon on Insulator (FDSOI) process. The first testchip was taped out in FDSOI 22/16nm technology and contains an array of test structures designed in an attempt to decouple and characterize different sources of random variation. The variation analysis performed on data from these test structures, along with SRAM and capacitorless DRAM data also included in the testchip, can help optimize circuit performance, power and yield by improving the statistical model for variability and by minimizing its effects through process and design optimization.
Project Member: Matthew Weiner
Low-density parity-check (LDPC) codes have become popular in high-performance wireless systems because of their excellent error-correcting performance. LDPC codes are a type of linear block code, characterized by a parity-check matrix H. The decoding algorithm uses soft information to iteratively decode a received message by passing messages back and forth between variable and check nodes via a routing network. Fixed decoder designs implement the decoding algorithm for a single H matrix, allowing the use of a simple routing scheme. Flexible decoders, on the other hand, can switch between different H matrices at the cost of a more complicated routing system, which usually limits their maximum performance and minimum power dissipation compared to fixed designs. In this project, we aim to develop a flexible serial-parallel stream architecture suitable for 60GHz baseband applications. Our goal is a throughput of over 1Gb/s for each code rate and a power dissipation of approximately 10mW, which improves on both the power and performance specifications previously reported for flexible decoders. This will be achieved by (1) using a pipelined architecture that requires no large memories to store check or variable messages, (2) exploiting the structure of the matrices to increase the number of check nodes available for lower code rates using the same hardware, and (3) shortening the length of the pipeline for lower-rate codes.
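The variable-to-check message passing described above can be sketched with a minimal min-sum decoder. The (7,4) Hamming parity-check matrix below is a small hypothetical stand-in for the structured H matrices the project targets, and min-sum is the common hardware-friendly approximation of sum-product decoding:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (illustrative stand-in
# for the structured H matrices used in the project).
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])

def min_sum_decode(llr, H, iters=10):
    m, n = H.shape
    # Variable-to-check messages, initialized to the channel LLRs.
    v2c = np.where(H, llr, 0.0)
    hard = (llr < 0).astype(int)
    for _ in range(iters):
        # Check-node update: sign product and minimum magnitude over
        # all other incoming edges (the min-sum approximation).
        c2v = np.zeros_like(v2c)
        for i in range(m):
            idx = np.flatnonzero(H[i])
            msgs = v2c[i, idx]
            for k, j in enumerate(idx):
                others = np.delete(msgs, k)
                c2v[i, j] = np.prod(np.sign(others)) * np.min(np.abs(others))
        # Variable-node update: channel LLR plus all other check messages.
        total = llr + c2v.sum(axis=0)
        v2c = np.where(H, total[None, :] - c2v, 0.0)
        hard = (total < 0).astype(int)
        if not np.any((H @ hard) % 2):  # all parity checks satisfied
            break
    return hard

# All-zeros codeword sent; one bit corrupted (negative LLR looks like a 1).
llr = np.full(7, 2.0)
llr[4] = -1.0
decoded = min_sum_decode(llr, H)
```

A fixed decoder hard-wires the routing implied by the inner loops for one H; a flexible decoder must realize those connection patterns for several matrices on the same hardware.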
Project Member: Ji-Hoon Park
The recent advances in CMOS RF technology have paved the way to commercially viable wireless communication systems operating at multi-Gbps rates. However, the design of baseband circuits for these high-speed systems faces unique challenges. First, (1) minimizing power consumption is critical, especially for mobile devices, because power is proportional to the data rate. Also, (2) as the symbol rate goes up, the propagation channel exhibits a larger delay spread, which makes the channel harder to equalize. Finally, (3) the recovery of frequency and timing errors between a transmitter and a receiver becomes a challenging problem.
To build a system that meets a performance target with reduced power consumption, in addition to (1) finding power-efficient architectures and algorithms for equalization and synchronization, we investigate (2) the power-optimal partitioning between analog and digital circuits, and (3) the dynamic reconfiguration of receiver parameters. Specifically, we derived an analytic expression for the receiver performance given the impulse response of the propagation channel and used it to find optimal parameters for the mixed-signal equalizer, so that its structure can be adjusted to the channel conditions and the characteristics of the signal. We are also working on channel estimation and synchronization circuit structures that achieve their target performance with minimum power consumption.
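For reference, the equalization step itself can be sketched with a purely digital LMS adaptive FIR filter. The two-tap channel, tap count, and step size below are illustrative assumptions; this is not the project's mixed-signal equalizer or its analytic parameter-selection method:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000
symbols = rng.choice([-1.0, 1.0], N)   # BPSK training sequence

# Hypothetical two-tap multipath channel with mild delay spread.
channel = np.array([1.0, 0.3])
received = np.convolve(symbols, channel)[:N]

# 5-tap LMS adaptive FIR equalizer; step size mu chosen well inside
# the stability bound for this input power.
taps, mu = 5, 0.01
w = np.zeros(taps)
errs = 0
for n in range(taps - 1, N):
    xvec = received[n - taps + 1:n + 1][::-1]  # newest sample first
    d = symbols[n]            # channel is causal and minimum phase
    e = d - w @ xvec          # error against the known training symbol
    w += mu * e * xvec        # LMS weight update
    if n >= N - 1000 and np.sign(w @ xvec) != d:
        errs += 1             # decision errors after convergence
```

As the delay spread grows, the required filter length (and hence power) grows with it, which is what motivates adapting the equalizer structure to the channel.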
The goal of this research is to develop a mixed-signal baseband chip to demonstrate the feasibility of this methodology and the architecture.
Continued increase in process variability is perceived to be a major challenge to future technology scaling. These effects are most pronounced in the minimum-geometry devices used in SRAM cells and seriously limit the scalability of SRAM circuits beyond the 65nm node. Recent advances in robust optimization provide an efficient framework for optimizing memory under uncertainty. Using this framework, the design of the memory is expressed as a robust geometric program (GP). We have been given an opportunity for a 45nm shuttle with ST Micro. We are designing a large (~1Mb) SRAM test-chip to investigate the effects of variations on SRAM functionality at the 45nm node and beyond. In particular, we wish to investigate and measure systematic and random variations in large SRAM arrays and correlate them with measured single-device as well as single-cell (test structure) variations. In addition, we hope to use the test-chip to help us extract the parameter variations in 45nm SRAM and fine-tune the probabilistic models in this optimization framework.
Project Members: Lauren Jones, Zheng Guo, Seng-Oon Toh, Jason Tsai
As transistor dimensions continue to scale into the deep-submicron regime, process variability is significantly impacting yield and performance, threatening future scaling. The impact of this variation on static random access memory (SRAM) is of particular interest, due to the large percentage of die area dominated by memory cells. With cache memories consisting of millions of cells, functionality depends on up to six standard deviations of margin to variation. Fluctuations in transistor parameters such as threshold voltage (VTH), gate length, and effective width shift read and write margins and degrade cell stability. While new processing techniques attempt to compensate for growing variations, their high cost motivates circuit-based solutions for continued scaling.
Recent studies in 45nm technology have shown systematic SRAM variation in mean read and write margins between alternating columns and rows. These studies suggest that layout variations due to processing effects cause structures that are physically mirrored across an axis to have different DC characteristics. It is likely that future scaling will increase this deviation. This presents a new challenge in variability compensation of SRAM arrays. We are exploring new compensation techniques that can adapt to variations dynamically and that address asymmetries in read and write margins within memory arrays. Test structures will be realized in a 45nm CMOS process.
Project Member: Renaldi Winoto
The goal of this research is to develop and implement an RF receiver architecture that is amenable to integration in a standard digital CMOS process. Specifically, our focus in this project is processing an RF signal using a discrete-time ΣΔ modulator. The RF signal is first downconverted using a current-commutating mixer with a single capacitor as the output load. This capacitor forms the first stage of a two-stage passive switched-capacitor filter that makes up the ΣΔ modulator loop filter. The switched-capacitor filter is run at radio frequencies, which gives rise to a large oversampling ratio. The availability of very good switches is one of the advantages of scaling: for a given on-resistance, the parasitic capacitance of a MOS switch becomes increasingly smaller.
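A behavioral model helps make the loop concrete. The sketch below is a first-order discrete-time ΣΔ modulator with an ideal accumulator and 1-bit quantizer, a deliberate simplification of the two-stage passive switched-capacitor loop filter described above; the tone frequency and oversampling ratio are illustrative:

```python
import numpy as np

def first_order_sigma_delta(x):
    """Behavioral first-order discrete-time sigma-delta modulator:
    accumulate the difference between input and 1-bit feedback."""
    v = 0.0   # integrator (loop-filter) state
    y = 0.0   # previous 1-bit feedback value
    out = np.empty_like(x)
    for n, xn in enumerate(x):
        v += xn - y                      # integrate the feedback error
        y = 1.0 if v >= 0.0 else -1.0    # 1-bit quantizer
        out[n] = y
    return out

# A slow tone with a large oversampling ratio, loosely mirroring the
# switched-capacitor loop being run at radio frequencies.
N = 8192
t = np.arange(N)
x = 0.5 * np.sin(2 * np.pi * t / 512)
bits = first_order_sigma_delta(x)

# A simple 32-tap moving-average decimation filter recovers the tone,
# since the quantization noise is shaped away from low frequencies.
recovered = np.convolve(bits, np.ones(32) / 32, mode="same")
```

The high clock rate buys oversampling, so even a crude 1-bit quantizer followed by digital filtering yields a clean baseband signal.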
This receiver architecture also provides a very good platform for implementing an interference cancellation scheme. First, the receiver captures the entire spectrum up to the sampling frequency, along with the desired RF signal. This means that large, potentially blocking signals are also available at the digital output, albeit with compromised signal-to-noise ratio. Furthermore, the ΣΔ modulator by default has a feedback path that can be placed very close to the antenna. A digital signal processor can then synthesize a cancelling signal for large interferers based on the receiver's digital output. This scheme would significantly reduce the dynamic-range requirement of the receiver.
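The cancellation idea can be illustrated behaviorally: estimate a dominant interferer from the digitized spectrum and subtract a synthesized replica. The tone placements and the simple FFT-peak estimator below are illustrative assumptions, not the project's cancellation circuit:

```python
import numpy as np

N = 4096
t = np.arange(N)
# Hypothetical scenario: a weak desired tone plus a strong in-band
# blocker, both placed on exact FFT bins for clarity.
desired = 0.01 * np.sin(2 * np.pi * 205 * t / N)
blocker = 1.0 * np.sin(2 * np.pi * 820 * t / N + 0.3)
rx = desired + blocker

# Estimate the blocker's frequency, amplitude, and phase from the FFT
# peak of the digitized output, then synthesize and subtract a replica.
spec = np.fft.rfft(rx)
k = int(np.argmax(np.abs(spec[1:]))) + 1   # skip the DC bin
amp = 2.0 * np.abs(spec[k]) / N
phase = np.angle(spec[k])
cancel = amp * np.cos(2 * np.pi * k * t / N + phase)
residual = rx - cancel
```

In the actual architecture the synthesized signal would be injected through the ΣΔ feedback path near the antenna, relaxing the dynamic range the rest of the receiver must handle.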
Project Members: Zhengya Zhang, Pamela Lee, Lara Dolecek (MIT), Professors Borivoje Nikolic, Venkat Anantharam, Martin Wainwright
Funding Sources: National Science Foundation, Marvell Semiconductor, Intel Corporation, Infineon Technologies, UC MICRO
Low-density parity-check (LDPC) codes have been demonstrated to perform very close to the Shannon limit when decoded iteratively. Sometimes this excellent performance is observed only down to a moderate bit error rate (BER); at lower BERs, the error curve often changes its slope, manifesting a so-called error floor. Such error floors are a major factor limiting the deployment of LDPC codes in high-throughput applications.
We design a parallel-serial architecture to map decoders of structured LDPC codes onto a hardware emulation platform. Experiments in the low-BER region provide statistics of the error traces, which are used to investigate the causes of the error floors. Different classes of errors cause error floors, but even with an optimal implementation, error floors are inevitable due to certain combinatorial structures of the LDPC code, termed absorbing sets. How strongly absorbing sets determine the error floor level depends on the implementation. Conventional decoder implementations tend to induce low-weight, weak absorbing sets and, as a result, elevate the error floor. We propose alternative quantization schemes and demonstrate that seemingly inferior algorithms alleviate the effects of weak absorbing sets. Furthermore, we can exploit the structure of absorbing sets with a redesigned message-passing decoder to escape such local minimum states. The investigative and ASIC design approaches are unified in a Simulink-based design flow. Rapid prototyping allows us to concurrently explore the algorithmic, architectural, and implementation spaces in order to optimize the decoder design.
Z. Zhang, L. Dolecek, B. Nikolic, V. Anantharam, M. J. Wainwright, “Investigation of error floors of structured low-density parity-check codes by hardware emulation,” in Proceedings of IEEE Global Communications Conference (GLOBECOM), San Francisco, CA, November 2006.
L. Dolecek, Z. Zhang, V. Anantharam, M. J. Wainwright, B. Nikolic, “Analysis of absorbing sets for array-based LDPC codes,” in Proceedings of IEEE International Conference on Communications, Glasgow, UK, June 2007.
Z. Zhang, L. Dolecek, M. J. Wainwright, V. Anantharam, B. Nikolic, “Quantization effects of low-density parity-check decoders,” in Proceedings of IEEE International Conference on Communications, Glasgow, UK, June 2007.
Z. Zhang, L. Dolecek, B. Nikolic, V. Anantharam, M. J. Wainwright, “Lowering LDPC error floors by postprocessing,” in Proceedings of IEEE Global Communications Conference (GLOBECOM), New Orleans, LA, November 2008.