The Complete Patient Picture: Gaining Insights from MIMIC-IV's Multi-modal EHR

Working with Encoder, Decoder, and LLMs

This Jupyter Notebook provides a hands-on guide to text generation, progressing from foundational concepts to modern Large Language Models (LLMs). It begins by building a classic sequence-to-sequence (Seq2Seq) translation model from scratch with LSTMs, comparing versions with and without an attention mechanism. The tutorial then moves to the pre-trained T5 model, which handles tasks such as summarization and translation through simple input prefixes. Finally, it explores prompt engineering techniques for LLMs, demonstrating prefix prompts with Facebook's OPT and instruction-following with the Qwen3 model, and comparing the results with ChatGPT, tracing the evolution from building models to prompting them. A rough sketch of the prefix-based T5 usage follows.
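
The sketch below illustrates the prefix-based approach mentioned above: a single pre-trained T5 checkpoint is switched between translation and summarization purely by changing the input prefix, using the Hugging Face Transformers library. The checkpoint name, helper function, and example texts are illustrative assumptions, not the notebook's actual code.

# Minimal sketch, assuming the Hugging Face Transformers library (with sentencepiece)
# and the public "t5-small" checkpoint; names and example texts are illustrative,
# not taken from the notebook.
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = "t5-small"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

def run_t5(prefixed_text, max_new_tokens=60):
    # Encode the prefixed input, generate, and decode the output text.
    inputs = tokenizer(prefixed_text, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Translation: the task is selected entirely by the "translate English to German:" prefix.
print(run_t5("translate English to German: The model was trained on clinical notes."))

# Summarization: same model and weights, different prefix.
print(run_t5("summarize: Sequence-to-sequence models map an input sequence to an "
             "output sequence and are widely used for translation and summarization."))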

MIMIC-IV EDA (under construction)
