Jonathan Chang - Personal Website

Jonathan Chang, machine learning engineer focused on LLMs and generative models

Jonathan Chang

I'm an experienced machine learning engineer with a focus on LLMs and generative image models. I'm passionate about AI research and open-source projects.

email: contact at jonathanc dot net

Projects

Featured

LLMProc

Tool, 2025

A Unix-inspired framework for building robust, scalable LLM applications

Additive Rotary Embedding

Research, 2024

A competitive variant of rotary position embedding (RoPE)
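For context, vanilla RoPE encodes position by rotating pairs of query/key features through position-dependent angles; a variant like this one changes how position enters that computation. A minimal NumPy sketch of standard RoPE only, using the split-half rotation convention (the additive variant itself is not reproduced here):

```python
import numpy as np

def rope(x, base=10000.0):
    """Vanilla RoPE: rotate feature pairs (x1_i, x2_i) by angles that
    grow linearly with token position, at per-pair frequencies."""
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)     # rotation frequency per pair
    angles = np.outer(np.arange(seq_len), freqs)  # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

q = np.ones((4, 8))
out = rope(q)
```

Because each pair is rotated, position 0 (zero angle) is left unchanged and every row keeps its norm.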

minLoRA

Code, 2023

A minimal library for LoRA (200 LoC!), supports any model in PyTorch
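The core LoRA trick is small enough to show inline: keep the pretrained weight W frozen and learn a low-rank update BA that is added to it. A hedged NumPy sketch of the math (not minLoRA's actual API, which wraps PyTorch modules):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """y = x W^T + (alpha/r) * x (BA)^T, where W is frozen
    and only the low-rank factors A (r x d_in) and B (d_out x r) train."""
    r = A.shape[0]
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))          # frozen pretrained weight
A = rng.standard_normal((2, 16)) * 0.01   # rank r = 2 factors
B = np.zeros((8, 2))                      # zero-init B: update starts at 0
x = rng.standard_normal((4, 16))

y = lora_forward(x, W, A, B)
```

Zero-initializing B means the adapted layer starts out exactly equal to the frozen layer, so fine-tuning begins from the pretrained behavior.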

Notable

Santa Hat AI

Web App, 2024

A web app that uses MediaPipe face detection to automatically place festive Santa hats on profile pictures

vFLUX

Code, 2024

An optimized FLUX model inference engine

AI Shell

Tool, 2024

A transparent shell wrapper for building context-aware AI tools

Anim·E

Model, 2023

A state-of-the-art anime image generator at the time, released before fine-tuned Stable Diffusion models became widespread

More

npx github:cccntu/wikimcp

WikiMCP

Tool, 2025

An MCP server that lets Claude explore random Wikipedia pages

uvx llmcp serve

LLMCP

Tool, 2025

A minimal MCP server that lets an LLM query other LLMs via LiteLLM.

fork()

Forking an AI Agent

Experiment, 2025

An MVP exploring the fork() pattern for AI agents
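The pattern borrows Unix process semantics: a child agent starts from a full copy of the parent's conversation state, then the two histories diverge independently. A toy sketch of the idea (class and field names are mine, not the project's API):

```python
import copy

class ToyAgent:
    """Toy agent whose entire state is a list of chat messages."""
    def __init__(self, messages=None):
        self.messages = list(messages or [])

    def fork(self):
        # Unix fork() analogy: the child gets a deep copy of the parent's
        # conversation, so mutations on either side stay independent.
        return ToyAgent(copy.deepcopy(self.messages))

parent = ToyAgent([{"role": "user", "content": "Summarize this repo."}])
child = parent.fork()
child.messages.append({"role": "assistant", "content": "subtask result"})
```

This lets a parent agent delegate a subtask to a child that has full context at fork time, without the child's work polluting the parent's history.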

T5 FlexAttention

Code, 2024

T5 model optimized with FlexAttention

Multi-head Latent Attention

Implementation, 2024

I implemented Multi-head Latent Attention from DeepSeek-V2

Mixture of Depths

Implementation, 2024

I implemented Mixture of Depths from Google DeepMind's paper
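In Mixture of Depths, a learned router sends only the top-scoring fraction of tokens through each block; the rest skip it on the residual path. A hedged NumPy sketch of the routing step (heavily simplified from the paper; a real implementation routes per layer inside a transformer and trains the router end to end):

```python
import numpy as np

def mixture_of_depths_block(x, router_w, block_fn, capacity=0.5):
    """Route only the top-k scoring tokens through block_fn; scale the
    block output by the router score to keep routing differentiable."""
    seq_len = x.shape[0]
    k = max(1, int(seq_len * capacity))
    scores = x @ router_w                 # (seq_len,) router logits
    topk = np.argsort(scores)[-k:]        # indices of the routed tokens
    out = x.copy()
    out[topk] = x[topk] + scores[topk, None] * block_fn(x[topk])
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4))
w = rng.standard_normal(4)
out = mixture_of_depths_block(x, w, block_fn=lambda h: 2.0 * h, capacity=0.25)
```

With capacity 0.25, only 2 of the 8 tokens pay for the block's compute; the other 6 pass through untouched.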

Flex Diffusion

Model, 2023

I fine-tuned Stable Diffusion 2 for dynamic aspect ratio generation

DDIM inversion notebook

Demo, 2022

My popular notebook demonstrating DDIM inversion using Stable Diffusion
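DDIM's update is deterministic, so with the model's noise prediction held fixed, the same formula can be run "backwards" to map an image to the latent that regenerates it. A toy NumPy sketch of one inversion step (scalar schedule values and a fixed `eps` stand in for the real noise-prediction network; this is an illustration, not the notebook's code):

```python
import numpy as np

def ddim_invert_step(x_t, eps, abar_t, abar_next):
    """One deterministic DDIM step: estimate x0 from the current latent,
    then re-noise it to the next cumulative-alpha level."""
    x0_hat = (x_t - np.sqrt(1 - abar_t) * eps) / np.sqrt(abar_t)
    return np.sqrt(abar_next) * x0_hat + np.sqrt(1 - abar_next) * eps

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
eps = rng.standard_normal(4)

# Invert toward noise (abar 0.9 -> 0.5), then step back (0.5 -> 0.9).
x_noisier = ddim_invert_step(x, eps, abar_t=0.9, abar_next=0.5)
x_back = ddim_invert_step(x_noisier, eps, abar_t=0.5, abar_next=0.9)
```

Because the update is deterministic, stepping up and back down the schedule with the same `eps` is an exact round trip, which is what makes inversion-based editing possible.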

Publications

Timeline

2022-2024 · Taboola

I spent time on the algorithm team, working on feature engineering and experiment design. Later I joined the Generative AI team, where I integrated and optimized SoTA image models into our product.

2021-2022 · BigScience Project

I contributed to the BigScience project, mainly in the metadata working group. I worked on the training codebase and conducted research experiments on using metadata to improve language model performance.

2021-2022 · ASUS AICS

I spent 6 months in the AICS department, where we collaborated with local hospitals. I worked on medication recommendation models using BERT.

2020-2021 · NTU MiuLab

I worked with Prof. Yun-Nung Chen on a SoTA dialogue system based on GPT-2. We participated in the Situated Interactive MultiModal Conversations (SIMMC) Challenge (DSTC9) and achieved 3rd place.

2020 · Google

I spent a summer at Google; the internship was converted to a remote project due to COVID-19. I worked on a project using NLP to recommend relevant articles to users.

2017-2021 · National Taiwan University

I completed my undergraduate studies in Computer Science, focusing on machine learning and artificial intelligence, and was a TA for the Applied Deep Learning course.