Area-efficient high speed decoding schemes for Turbo/MAP decoders

Z. Wang, Z. Chi, K. K. Parhi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Turbo decoders inherently have large latency and low throughput due to iterative decoding. To increase throughput and reduce latency, high-speed decoding schemes must be employed. In this paper, following a discussion of basic parallel decoding architectures, two types of area-efficient parallel decoding schemes are proposed. A detailed comparison of storage requirements, number of computation units, and overall decoding latency is provided for various decoding schemes with different levels of parallelism. Hybrid parallel decoding schemes are proposed as an attractive solution for implementations with very high levels of parallelism. Simulation results demonstrate that the proposed area-efficient parallel decoding schemes introduce no performance degradation in general. The application of the pipeline-interleaving technique to parallel Turbo decoding architectures is presented at the end.
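
The abstract compares decoding schemes by storage, number of computation units, and latency. As a rough illustration of why sub-block parallelism shortens iterative decoding, the following Python sketch estimates per-codeword latency when a data block is split across several concurrently running MAP units, each paying an acquisition (warm-up) overhead at its sub-block boundaries. The model, the 32-step acquisition window, the 6 iterations, and the block length 5114 are illustrative assumptions, not figures from the paper.

# Hypothetical latency/area estimator -- a minimal sketch, not the paper's exact model.

def parallel_map_estimate(block_len, parallelism, acq_len=32, iterations=6):
    """Estimate per-codeword decoding latency (in trellis steps) and the number
    of MAP computation units when a length-`block_len` block is split into
    `parallelism` sub-blocks decoded concurrently. Each sub-block pays an
    acquisition overhead of `acq_len` steps so that the forward/backward state
    metrics converge at its boundaries (assumed overhead, for illustration only).
    """
    sub_block = -(-block_len // parallelism)             # ceiling division
    half_iter_latency = sub_block + acq_len              # one SISO (MAP) pass over a sub-block
    total_latency = 2 * iterations * half_iter_latency   # two SISO passes per turbo iteration
    return {"map_units": parallelism, "latency_steps": total_latency}

if __name__ == "__main__":
    # Latency drops roughly as 1/parallelism, while the number of MAP units
    # (and hence area) grows linearly -- the trade-off the paper's schemes address.
    for p in (1, 2, 4, 8):
        print(p, parallel_map_estimate(block_len=5114, parallelism=p))
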

Original language: English (US)
Title of host publication: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Pages: 2633-2636
Number of pages: 4
Volume: 4
State: Published - 2001
Event: 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing - Salt Lake City, UT, United States
Duration: May 7 2001 - May 11 2001

Other

Other: 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing
Country/Territory: United States
City: Salt Lake City, UT
Period: 5/7/01 - 5/11/01
