Proformer: a hybrid macaron transformer model predicts expression values from promoter sequences

Il Youp Kwak, Byeong Chan Kim, Juhyun Lee, Taein Kang, Daniel J. Garry, Jianyi Zhang, Wuming Gong

Research output: Contribution to journal › Article › peer-review


The breakthrough high-throughput measurement of the cis-regulatory activity of millions of randomly generated promoters provides an unprecedented opportunity to systematically decode the cis-regulatory logic that determines expression values. We developed an end-to-end transformer encoder architecture named Proformer to predict expression values from DNA sequences. Proformer used a Macaron-like Transformer encoder architecture, where two half-step feed-forward network (FFN) layers were placed at the beginning and the end of each encoder block, and a separable 1D convolution layer was inserted after the first FFN layer and in front of the multi-head attention layer. The sliding k-mers from one-hot encoded sequences were mapped onto a continuous embedding, combined with the learned positional embedding and strand embedding (forward strand vs. reverse complemented strand) as the sequence input. Moreover, Proformer introduced multiple expression heads with mask filling to prevent the transformer models from collapsing when training on relatively small amounts of data. We empirically determined that this design had significantly better performance than conventional designs, such as using a global pooling layer as the output layer for the regression task. These analyses support the notion that Proformer provides a novel method of learning and enhances our understanding of how cis-regulatory sequences determine expression values.
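The architecture described above can be sketched in PyTorch. This is a minimal illustration, not the authors' implementation: the hyperparameters (embedding size, k-mer length, kernel size) and the normalization/residual placement are assumptions, while the layer ordering — half-step FFN, separable 1D convolution, multi-head attention, half-step FFN — and the k-mer + positional + strand input embedding follow the abstract.

```python
import torch
import torch.nn as nn

class KmerStrandEmbedding(nn.Module):
    """Input embedding sketch: sliding k-mers (integer-encoded) mapped to a
    continuous embedding, summed with learned positional and strand
    (forward vs. reverse-complement) embeddings. The 4**k vocabulary assumes
    DNA k-mers over {A, C, G, T}; sizes here are illustrative."""
    def __init__(self, k=6, d_model=128, max_len=512):
        super().__init__()
        self.kmer = nn.Embedding(4 ** k, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        self.strand = nn.Embedding(2, d_model)  # 0 = forward, 1 = reverse complement

    def forward(self, kmer_ids, strand_id):
        # kmer_ids: (B, L) sliding k-mer indices; strand_id: (B,)
        pos_ids = torch.arange(kmer_ids.size(1), device=kmer_ids.device)
        return self.kmer(kmer_ids) + self.pos(pos_ids) + self.strand(strand_id)[:, None, :]

class MacaronEncoderBlock(nn.Module):
    """One Macaron-style encoder block: half-step FFN at the start and end,
    with a separable 1D convolution inserted after the first FFN and before
    multi-head attention. Pre-norm residual connections are an assumption."""
    def __init__(self, d_model=128, n_heads=4, d_ff=512, kernel_size=9):
        super().__init__()
        self.ffn1 = nn.Sequential(nn.LayerNorm(d_model), nn.Linear(d_model, d_ff),
                                  nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm_conv = nn.LayerNorm(d_model)
        # depthwise + pointwise convolutions = separable 1D convolution
        self.depthwise = nn.Conv1d(d_model, d_model, kernel_size,
                                   padding=kernel_size // 2, groups=d_model)
        self.pointwise = nn.Conv1d(d_model, d_model, 1)
        self.norm_attn = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn2 = nn.Sequential(nn.LayerNorm(d_model), nn.Linear(d_model, d_ff),
                                  nn.ReLU(), nn.Linear(d_ff, d_model))

    def forward(self, x):
        x = x + 0.5 * self.ffn1(x)                  # first half-step FFN
        h = self.norm_conv(x).transpose(1, 2)       # (B, d_model, L) for conv
        x = x + self.pointwise(self.depthwise(h)).transpose(1, 2)
        h = self.norm_attn(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + 0.5 * self.ffn2(x)                  # second half-step FFN
        return x
```

The half-step (0.5-weighted) FFN residuals follow the Macaron design, in which one full FFN is split into two halves sandwiching the attention sublayer; the paper's multiple expression heads with mask filling would sit on top of a stack of such blocks in place of a single global-pooling regression head.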

Original language: English (US)
Article number: 81
Journal: BMC Bioinformatics
Issue number: 1
State: Published - Dec 2024

Bibliographical note

Publisher Copyright:
© The Author(s) 2024.


Keywords

  • Enhancer
  • Expression prediction
  • Macaron Transformer
  • Massively Parallel Reporter Assay (MPRA)
  • Sequence model


