## Abstract

This article describes a connectionist method for refining algorithms represented as generalized finite-state automata. The method translates the rule-like knowledge in an automaton into a corresponding artificial neural network, and then refines the reformulated automaton by applying backpropagation to a set of examples. This technique for translating an automaton into a network extends the KBANN algorithm, a system that translates a set of propositional rules into a corresponding neural network. The extended system, FSKBANN, allows one to refine the large class of algorithms that can be represented as state-based processes. As a test, FSKBANN is used to improve the Chou–Fasman algorithm, a method for predicting how globular proteins fold. Empirical evidence shows that the multistrategy approach of FSKBANN leads to a statistically significantly more accurate solution than both the original Chou–Fasman algorithm and a neural network trained using the standard approach. Extensive statistics report the types of errors made by the Chou–Fasman algorithm, the standard neural network, and the FSKBANN network.
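The core idea of the KBANN-style translation the abstract mentions can be sketched with a toy example. In KBANN, each propositional rule becomes a sigmoid unit: antecedents map to strongly weighted links (positive for unnegated antecedents, negative for negated ones), and the bias is set so the unit activates only when all antecedents hold. The minimal sketch below is illustrative, not the paper's implementation; the weight magnitude `OMEGA` and the example rule are assumptions for demonstration.

```python
import math

OMEGA = 4.0  # illustrative link-weight magnitude (an assumption, not from the article)

def rule_to_unit(n_pos, n_neg):
    """Translate one propositional rule into (weights, bias) for a sigmoid unit.

    Unnegated antecedents get weight +OMEGA, negated ones -OMEGA; the bias is
    chosen so the unit fires only when every antecedent is satisfied.
    """
    weights = [OMEGA] * n_pos + [-OMEGA] * n_neg
    bias = -(n_pos - 0.5) * OMEGA
    return weights, bias

def unit_output(weights, bias, inputs):
    """Standard sigmoid activation of the translated unit."""
    net = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-net))

# Hypothetical rule with two unnegated antecedents: conclusion :- a, b.
w, b = rule_to_unit(n_pos=2, n_neg=0)
print(unit_output(w, b, [1, 1]))  # both antecedents true: output well above 0.5
print(unit_output(w, b, [1, 0]))  # one antecedent false: output well below 0.5
```

Because the translated network starts from this rule-encoding initialization rather than random weights, subsequent backpropagation refines the prior knowledge instead of learning from scratch, which is the multistrategy aspect the abstract credits for the accuracy gain.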

| Original language | English (US) |
|---|---|
| Pages (from-to) | 195-215 |
| Number of pages | 21 |
| Journal | Machine Learning |
| Volume | 11 |
| Issue number | 2 |
| DOIs | |
| State | Published - May 1993 |

## Keywords

- Chou–Fasman algorithm
- multistrategy learning
- finite-state automata
- neural networks
- protein folding
- theory refinement