From prototyping models to orchestrating data, services, and infra.
Model to cloud, observability included
Streams, ROS2 & fast backends
Docker, CI/CD, infra as code
Recent work & research
Hey! I’m Akilan. I’m a junior at the University of Texas at Dallas studying Computer Science. I love building where autonomy, audio, and ML meet, and then pushing those ideas to cloud scale so they actually ship.
On the autonomy side I’ve built ROS2 sensor-fusion components (GNSS/IMU/LiDAR) and tooling that turns experiments into stable services. On the backend side I’ve consolidated microservices into cleaner pipelines (Flask + Leaflet + MQTT) and stood up FastAPI + Postgres stacks with solid DX and observability. In audio, I’ve shipped a web stem separator (Demucs + Celery/Redis) and a melody-prediction RNN built with response-time budgets in mind, plus a CLAP study exploring data-scarce training.
Research: I presented a poster on using synthetic data to train a CLAP model, showing that in low-data settings synthetic examples can help the model learn useful audio-text alignments.
Right now: sketching agentic workflows for customer support—triage, suggested replies, and routing—so teams can do more with fewer human agents while keeping quality and feedback loops tight.
How I work: prototype fast, productionize intentionally, ship behind flags, add traces/metrics from day one, and write docs that future-me actually thanks me for.
Outside of work it’s music, soccer, and TV shows (Ozark is peak). I also watch football—nobody’s matching the Detroit Lions’ work ethic and team culture. Music-wise I bounce between old ’80s and modern picks like Lola Young and Twenty One Pilots—I switch genres like clothes.
When I need a reset, I speed-solve Rubik’s cubes and then get back to building.
A few focused builds
Web-based stem separation + two-deck mixer. Demucs backend with Flask/Celery/Redis; React/Vite UI with Tone.js.
RNN melody generator (TensorFlow) with a simple web demo; composes short sequences from prior notes.
Unbeatable Tic-Tac-Toe using MiniMax with alpha-beta pruning; clean reference implementation.
Touch-free volume & brightness using webcam hand gestures. MediaPipe + OpenCV with a lightweight Flask bridge.
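To give a flavor of the Tic-Tac-Toe project above: the core is MiniMax with alpha-beta pruning. Below is a minimal sketch of that idea in negamax form; the function names (`minimax`, `best_move`) and board encoding are illustrative assumptions, not the project's actual API.

```python
# Win conditions for a 9-cell board indexed 0..8, row-major.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player, alpha=-2, beta=2):
    """Best achievable score for the side to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if ' ' not in board:
        return 0  # board full, draw
    opponent = 'O' if player == 'X' else 'X'
    best = -2
    for i, cell in enumerate(board):
        if cell == ' ':
            board[i] = player
            # Opponent's best score is the negation of ours (negamax form).
            score = -minimax(board, opponent, -beta, -alpha)
            board[i] = ' '
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break  # prune: the opponent will never allow this line
    return best

def best_move(board, player):
    """Pick the empty cell with the highest minimax score."""
    opponent = 'O' if player == 'X' else 'X'
    def score(i):
        board[i] = player
        s = -minimax(board, opponent)
        board[i] = ' '
        return s
    return max((i for i, c in enumerate(board) if c == ' '), key=score)
```

The pruning step is what makes this cheap: once a move scores at least as well as something the opponent can already force elsewhere (`alpha >= beta`), the remaining siblings cannot change the result and are skipped.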