AI/ML archive / selective public proof

Models only matter when they change the operating path.

I completed Texas McCombs / UT Austin's Post Graduate Program in AI & Machine Learning: Business Applications with project work across forecasting, classification, deep learning, computer vision, RAG, deployment, and business recommendations.

forecasting system

model: tuned Random Forest
R²: 0.932 (test set)
MAPE: 3.8% (business view)
deploy: HF Docker Spaces

sales forecasting / model to API to UI
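A minimal sketch of that training-and-evaluation path, on synthetic stand-in data; the grid values and feature shape are illustrative, and only the metrics and model family match the card above.

import joblib
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error, r2_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the sales table; the project's real features differ.
X, y = make_regression(n_samples=2000, n_features=8, noise=20.0, random_state=42)
y = y - y.min() + 1000.0  # shift positive so MAPE is meaningful for "sales"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Tune the Random Forest; these grid values are illustrative.
grid = GridSearchCV(
    RandomForestRegressor(random_state=42),
    {"n_estimators": [200, 400], "max_depth": [8, 12, None]},
    cv=5,
    scoring="r2",
)
grid.fit(X_train, y_train)

pred = grid.predict(X_test)
print("R2  :", round(r2_score(y_test, pred), 3))                        # fit quality
print("MAPE:", round(mean_absolute_percentage_error(y_test, pred), 3))  # business view

# Serialize the fitted model for serving (filename from the ship record below).
joblib.dump(grid.best_estimator_, "superkart_sales_model.joblib")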

Lead artifact / SuperKart

Forecasting with an operator path.

SuperKart is the strongest public artifact because it does not stop at a notebook. It compares models, selects a tuned Random Forest, serializes the pipeline, exposes a Flask API, wraps it with Streamlit, and ships through Docker-based Hugging Face Spaces.

ship record

$ train tuned Random Forest pipeline
$ serialize superkart_sales_model.joblib
$ serve predictions with Flask
$ wrap inputs in Streamlit
$ deploy backend + frontend spaces
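A minimal serving sketch for that record, assuming a JSON payload of raw feature values; the route and payload schema are assumptions, and only the model filename comes from the record above.

import joblib
import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("superkart_sales_model.joblib")  # the serialized pipeline

@app.route("/predict", methods=["POST"])
def predict():
    # Accept one JSON record or a list of records of raw feature values;
    # the pipeline handles its own preprocessing.
    payload = request.get_json()
    X = pd.DataFrame(payload if isinstance(payload, list) else [payload])
    return jsonify({"predictions": model.predict(X).tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=7860)  # Spaces-style bind: all interfaces, fixed port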

Selective archive

The range matters because each problem changes which metric carries the decision.

These are sanitized summaries. Raw notebook exports, local file paths, and noisy install output stay out of the public site.

retrieval system

Medical RAG assistant

01 chunk medical manuals
02 embed with sentence transformers
03 retrieve top context
04 judge grounding
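A minimal sketch of steps 01 through 03, with toy stand-in chunks; the embedding model name is an assumption, and step 04 is only noted as a comment.

import numpy as np
from sentence_transformers import SentenceTransformer

# 01: chunk medical manuals -- toy stand-ins for real manual passages.
chunks = [
    "Apply a cold compress to reduce swelling after a sprain.",
    "Seek emergency care for chest pain lasting more than a few minutes.",
    "Clean minor cuts with water and cover with a sterile bandage.",
]

# 02: embed with sentence transformers (this model choice is an assumption).
model = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

# 03: retrieve top context by cosine similarity (dot product of unit vectors).
query = "How should I treat a small cut?"
q_vec = model.encode([query], normalize_embeddings=True)[0]
scores = chunk_vecs @ q_vec
for i in np.argsort(scores)[::-1][:2]:
    print(f"{scores[i]:.3f}  {chunks[i]}")

# 04: judge grounding -- the project compares generated answers against the
# retrieved context; a minimum retrieval score is only a crude first gate.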

sanitized proof

Medical RAG assistant

A healthcare question-answering prototype grounded in medical manuals rather than standalone model memory.

pressure
A model answer is risky when it is detached from source material and review.
installed
Chunked manuals, embeddings, retrieval, answer comparison, and grounding checks.
evidence
RAG notebook artifact with retrieved context and relevance judgments.
handoff
AI answers need source material, safeguards, and human review.

neural network

test recall: 0.837 (failure class)
val recall: 0.919 (selection signal)
metric: F1 tradeoff view
stack: Keras neural network

recall-first model

ReneWind maintenance

A failure-prediction project where missing a failure mattered more than producing a tidy accuracy score.

pressure
Unplanned generator failure is the expensive miss, so tidy accuracy is not enough.
installed
Neural-network comparison with recall-first evaluation and false-alarm tradeoff review.
evidence
Final selected model reached 0.837 test recall for failures.
handoff
Maintenance recommendations are framed as prioritization, not blind automation.
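A minimal sketch of recall-first threshold selection, assuming validation probabilities from any fitted classifier; the false-alarm budget and the toy scores are illustrative placeholders.

import numpy as np
from sklearn.metrics import recall_score

def pick_threshold(probs, y_val, max_false_alarm=0.10):
    # Scan ascending thresholds; false alarms fall as the threshold rises,
    # so the first threshold inside the budget has the highest recall.
    for t in np.linspace(0.05, 0.95, 19):
        preds = (probs >= t).astype(int)
        false_alarm = preds[y_val == 0].mean()  # false-positive rate on healthy units
        if false_alarm <= max_false_alarm:
            return t, recall_score(y_val, preds), false_alarm
    return None

# Toy demo: skewed random scores standing in for model predictions.
rng = np.random.default_rng(0)
y_val = (rng.random(1000) < 0.1).astype(int)
probs = np.clip(rng.normal(0.3 + 0.4 * y_val, 0.15), 0, 1)
print(pick_threshold(probs, y_val))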

decision support

01 prioritize cases
02 flag borderline files
03 monitor geography bias
04 retrain with policy drift

responsible framing

EasyVisa classification

A visa certification model framed as prioritization support, not automated decision-making.

pressure
Classification support can become irresponsible if it hides imbalance, bias, or review boundaries.
installed
F1 and ROC-AUC evaluation with imbalance notes and human-review framing.
evidence
Runbook-style recommendations for prioritization, fairness monitoring, and retraining.
handoff
Keep borderline and policy-sensitive cases in human hands.
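A minimal sketch of that evaluation framing on synthetic imbalanced data; the classifier and the review band are illustrative, not the project's.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced stand-in for the visa data: roughly 20% positive class.
X, y = make_classification(n_samples=2000, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
probs = clf.predict_proba(X_te)[:, 1]

print("F1     :", round(f1_score(y_te, (probs >= 0.5).astype(int)), 3))  # imbalance-sensitive
print("ROC-AUC:", round(roc_auc_score(y_te, probs), 3))                  # threshold-free ranking

# Human-review boundary: borderline scores get flagged, not auto-decided.
borderline = (probs > 0.4) & (probs < 0.6)
print("flagged for review:", int(borderline.sum()))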

marketing model

ROC-AUC: ~0.95 (campaign scoring)
signal: CD relationship depth
segment: >$100K income threshold
use: target, not spam

segmentation proof

AllLife Bank targeting

A personal-loan targeting project focused on customer segments, model interpretation, and outreach economics.

pressure
Marketing outreach wastes money when targeting ignores segment economics and customer fit.
installed
Model comparison, interpretation, segment review, and business-impact framing.
evidence
Around 0.95 ROC-AUC with income, education, CD accounts, and card spend as signals.
handoff
Use targeting to prioritize useful outreach, not spam a list.
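A minimal sketch of the outreach economics; the scores, contact cost, and conversion value are illustrative placeholders.

import numpy as np

# Stand-in acceptance scores for 5,000 customers, skewed low like real campaigns.
probs = np.random.default_rng(0).beta(2, 8, size=5000)
contact_cost, conversion_value = 5.0, 400.0  # illustrative economics

order = np.argsort(probs)[::-1]                  # best prospects first
expected = probs[order] * conversion_value - contact_cost
k = int(np.cumsum(expected).argmax()) + 1        # stop where expected profit peaks
print(f"contact the top {k} of {len(probs)} customers, not the whole list")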

computer vision

helmnet-safety-vision
01 load image classes
02 train CNN baseline
03 compare VGG16 head
04 review metric table
05 clean final claim

visual model exploration

HelmNet safety vision

A helmet-detection exploration using CNN and VGG16 transfer-learning paths for safety monitoring.

pressure
Computer-vision safety claims need clean metrics before they become public proof.
installed
Scratch CNN, VGG16 head, and data-augmentation comparisons.
evidence
Visual model artifact presented as exploration until the final claim is cleaned.
handoff
Do not publish a best-model claim until the metric table and conclusion agree.
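A minimal sketch of the VGG16 transfer head in Keras, assuming 224x224 RGB inputs and a binary helmet label; the head layers and hyperparameters are illustrative.

from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.VGG16(include_top=False, weights="imagenet",
                                input_shape=(224, 224, 3))
base.trainable = False  # freeze pretrained features; train only the new head

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # helmet probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", keras.metrics.Recall(name="recall")])
model.summary()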

Model handoff

The useful pattern is problem, model, metric, ship, handoff.

That pattern is the bridge from coursework to consulting: AI that has source material, evaluation, deployment thinking, and human ownership.

01

problem

Start with business use

Forecast revenue, prioritize outreach, flag risk, answer from manuals, or detect safety state.

business context
02

model

Choose the right shape

Regression, classification, neural network, computer vision, and RAG solve different problems.

model comparison
03

metric

Make the tradeoff visible

RMSE, MAPE, F1, recall, ROC-AUC, and grounding scores tell different stories.

evaluation table
04

ship

Move toward operation

The strongest projects connect model output to an API, a UI, a deployment, or a clear business recommendation.

API / UI / recommendation
05

handoff

Keep humans in control

The responsible pattern is decision support, monitoring, retraining, and clear ownership.

runbook logic