AI Expert Retro Talk Part IV: Sequence Models

Lucas Martin Calderon
Aug 8, 2023


A deep dive into how ChatGPT and other LLMs work under the hood


What are Sequence Models?

Sequence models are specialized models for processing sequences of data in which the order of the elements matters. They are widely used in applications such as climate science, natural language processing, and financial markets. Here is a summary of sequence models, the main types, and how they differ.

Working with Sequences:

  • Sequences are ordered lists of feature vectors indexed by time steps.
  • Sequences can be massive, such as continuous streams of sensor readings, or come as collections of documents or patient hospital stays.
  • Unlike independent, identically distributed inputs, the elements of a sequence are related: each element depends on the elements that precede it.
  • Sequence models can predict fixed targets, predict sequentially structured targets, or perform unsupervised density estimation over whole sequences (the notation below makes the setup concrete).
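
To pin down notation (a minimal sketch; the symbols T and d are illustrative, not from the original post), a sequence of T time steps, each carrying a d-dimensional feature vector, is written

```latex
x_1, x_2, \ldots, x_T, \qquad x_t \in \mathbb{R}^d
```

Supervised tasks map such a sequence to a target; unsupervised density modeling estimates the probability of the sequence itself.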

Types of Sequence Models

Autoregressive Models:

  • These models use previous values of the same signal to predict the next value.
  • They can be used for stock price prediction, where the price at each time step is observed.
  • Challenges include the ever-growing number of inputs as time passes and the difficulty of handling long sequences.
  • Common strategies are conditioning only on a fixed-length window of recent observations, and using latent autoregressive models that summarize the past in a hidden state (a minimal sketch follows this list).
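
Here is a minimal sketch of the fixed-window strategy, assuming a synthetic noisy sine signal and a window length of tau = 4 (all names and values are illustrative, not from the post):

```python
import numpy as np

# Synthetic signal: a noisy sine wave standing in for, e.g., a price series.
rng = np.random.default_rng(0)
T = 1000
x = np.sin(0.01 * np.arange(T)) + 0.2 * rng.standard_normal(T)

# Fixed-window autoregression: predict x_t from the previous tau values.
tau = 4
features = np.stack([x[i : T - tau + i] for i in range(tau)], axis=1)
labels = x[tau:]

# Fit the linear AR(tau) model with ordinary least squares.
w, *_ = np.linalg.lstsq(features, labels, rcond=None)
one_step_preds = features @ w
print("one-step-ahead MSE:", np.mean((one_step_preds - labels) ** 2))
```

The window turns a variable-length history into a fixed-size input; a latent autoregressive model instead keeps a hidden state that summarizes everything seen so far.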

Sequence Models:

  • These models estimate the joint probability of an entire sequence, often used in natural language processing.
  • They can be used to evaluate the likelihood of a sequence, to sample new sequences, and to search for the most likely ones (the factorization is shown below).
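
The joint probability estimate rests on the chain rule of probability, which factorizes a sequence into one-step conditionals:

```latex
P(x_1, \ldots, x_T) = \prod_{t=1}^{T} P(x_t \mid x_1, \ldots, x_{t-1})
```

Evaluating likelihood multiplies these conditionals together; sampling draws one element at a time from them.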

Markov Models:

  • These models condition only on the previous time steps, rather than the entire sequence history.
  • They are characterized by their order: a first-order model conditions on one previous step, an nth-order model on the previous n steps.
  • Markov models remain useful even when the Markov assumption holds only approximately (a toy counting example follows this list).
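
A first-order Markov model over tokens can be estimated simply by counting consecutive pairs. A toy sketch (the corpus and function name are made up for illustration):

```python
from collections import Counter, defaultdict

# First-order (bigram) Markov model: P(x_t | x_{t-1}) is estimated by
# counting how often each token follows each other token.
corpus = "the cat sat on the mat the cat ran".split()

pair_counts = defaultdict(Counter)
for prev, curr in zip(corpus, corpus[1:]):
    pair_counts[prev][curr] += 1

def next_token_probs(prev):
    counts = pair_counts[prev]
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

print(next_token_probs("the"))  # {'cat': 0.67, 'mat': 0.33}, approximately
```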

The Order of Decoding:

  • Sequences can be factorized in different orders, such as left-to-right or right-to-left.
  • Left-to-right is preferred for language modeling: it matches reading direction, lets us assign probabilities to arbitrarily long sequences, and usually yields an easier predictive modeling problem (both factorizations are written out below).
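
Both orders are mathematically valid factorizations of the same joint distribution; they differ only in which conditionals the model has to learn:

```latex
% Left-to-right (the usual choice for language modeling)
P(x_1, \ldots, x_T) = \prod_{t=1}^{T} P(x_t \mid x_1, \ldots, x_{t-1})

% Right-to-left (equally valid, but usually a harder modeling problem)
P(x_1, \ldots, x_T) = \prod_{t=1}^{T} P(x_t \mid x_{t+1}, \ldots, x_T)
```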

Training and Prediction:

  • Training on synthetic data, such as a noisy sine wave, can be done with standard linear regression over a fixed window of past values.
  • One-step-ahead predictions often look good, but multi-step-ahead predictions can fail spectacularly because errors accumulate (the rollout sketch after this list demonstrates why).
  • The quality of the prediction degrades the further into the future it is made.
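
To see the error accumulation concretely, here is a sketch that fits the same AR(tau) model as above and then rolls it forward by feeding its own predictions back in as inputs (the start index and horizon are arbitrary illustrative choices):

```python
import numpy as np

# Fit an AR(tau) model on a noisy sine, then compare one-step-ahead
# predictions with a k-step rollout that consumes its own outputs.
rng = np.random.default_rng(0)
T, tau = 1000, 4
x = np.sin(0.01 * np.arange(T)) + 0.2 * rng.standard_normal(T)

features = np.stack([x[i : T - tau + i] for i in range(tau)], axis=1)
labels = x[tau:]
w, *_ = np.linalg.lstsq(features, labels, rcond=None)

# Multi-step rollout from step 600: each prediction becomes an input.
start, k = 600, 64
window = list(x[start - tau : start])
rollout = []
for _ in range(k):
    pred = float(np.dot(window[-tau:], w))
    rollout.append(pred)
    window.append(pred)  # errors feed forward here and compound

one_step = features[start - tau : start - tau + k] @ w
truth = x[start : start + k]
print("one-step MSE:  ", np.mean((one_step - truth) ** 2))
print("multi-step MSE:", np.mean((np.array(rollout) - truth) ** 2))
```

The multi-step error is typically much larger: each step is predicted from inputs that already contain earlier mistakes.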

Summary of Key Points:

  • Interpolation is generally easier than extrapolation.
  • Temporal order must be respected when training on sequence data.
  • Autoregressive models and latent-variable autoregressive models are popular choices.
  • Predicting the forward direction is often easier than the reverse direction.
  • Errors accumulate in multi-step-ahead predictions, leading to degradation in prediction quality.

Conclusion

Sequence models are powerful tools for handling ordered data, with applications ranging from language processing to financial forecasting. The right type of model and training strategy depends on the specific requirements and characteristics of the data, and understanding the underlying principles is essential for effective implementation and prediction.
