Area-efficient high-speed decoding schemes for turbo decoders

Zhongfeng Wang, Zhipei Chi, Keshab K. Parhi

Research output: Contribution to journal › Article

47 Scopus citations

Abstract

Turbo decoders inherently have large decoding latency and low throughput due to iterative decoding. To increase the throughput and reduce the latency, high-speed decoding schemes have to be employed. In this paper, following a discussion of basic parallel decoding architectures, the segmented sliding window approach and two other types of area-efficient parallel decoding schemes are proposed. A detailed comparison of storage requirements, number of computation units, and overall decoding latency is provided for various decoding schemes with different levels of parallelism. Hybrid parallel decoding schemes are proposed as an attractive solution for implementations with very high levels of parallelism. To reduce the storage bottleneck for each subdecoder, a modified version of the partial storage of state metrics approach is presented. The new approach achieves a better tradeoff between storage and recomputation in general. The application of the pipeline-interleaving technique to parallel turbo decoding architectures is also presented. Simulation results demonstrate that the proposed area-efficient parallel decoding schemes do not cause performance degradation.
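The segmented sliding-window idea described in the abstract can be illustrated with a minimal index-partitioning sketch. This is an assumption-laden illustration, not the paper's scheme or notation: the function, its parameters (`n_bits`, `n_workers`, `window_len`, `warmup`), and the acquisition-region convention are all hypothetical, chosen only to show how a trellis might be split across parallel sub-decoders, each working over sliding windows with a warm-up region for the backward recursion.

```python
# Hypothetical sketch (not from the paper): partition a length-N trellis
# into sub-blocks for parallel sub-decoders, then split each sub-block
# into sliding windows.  Each window's backward (beta) recursion starts
# `warmup` trellis steps past the window end so the state metrics are
# reliable at the boundary.

def segment_trellis(n_bits, n_workers, window_len, warmup):
    """Return (start, end, acq_end) triples: a sub-decoder produces
    outputs for [start, end) and runs its backward recursion from
    acq_end down to start."""
    block = -(-n_bits // n_workers)  # ceiling division: bits per worker
    segments = []
    for w in range(n_workers):
        start = w * block
        end = min(start + block, n_bits)
        if start >= end:
            break  # more workers than blocks
        for s in range(start, end, window_len):
            e = min(s + window_len, end)
            acq_end = min(e + warmup, n_bits)  # acquisition region
            segments.append((s, e, acq_end))
    return segments

# Example: 32-step trellis, 2 sub-decoders, window length 8, warm-up 4
print(segment_trellis(32, 2, 8, 4))
```

The storage/recomputation tradeoff the abstract mentions shows up here as the choice of `window_len`: shorter windows need less state-metric storage per window but spend proportionally more computation on the warm-up (acquisition) passes.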

Original language: English (US)
Pages (from-to): 902-912
Number of pages: 11
Journal: IEEE Transactions on Very Large Scale Integration (VLSI) Systems
Volume: 10
Issue number: 6
DOIs
State: Published - Dec 1 2002

Keywords

  • Area efficient
  • High speed
  • Maximum a posteriori (MAP) algorithm
  • Parallel decoding
  • Turbo code
  • Very large scale integration (VLSI)

