# Uses
A comprehensive list of tools, libraries, frameworks, and platforms I use daily for machine learning, AI development, and data science work.
## ML & AI Frameworks
- PyTorch is my go-to deep learning framework for building and training neural networks — I use it for NLP models, classification tasks, and custom architectures.
- TensorFlow / Keras for rapid prototyping and deploying production-ready models, especially for image and text classification pipelines.
- Scikit-learn is my standard toolkit for classical ML — random forests, SVMs, logistic regression, clustering, and preprocessing pipelines.
- XGBoost for gradient boosting on tabular data, particularly useful in financial risk forecasting and disease prediction models.
- Hugging Face Transformers for fine-tuning pre-trained language models on domain-specific NLP tasks such as summarisation and sentiment analysis.
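As a flavour of the classical-ML workflow mentioned above, here is a minimal scikit-learn sketch: a `Pipeline` chaining standardisation and a random forest, fitted on synthetic data (the dataset and parameters are illustrative, not from a real project).

```python
# Minimal sketch: preprocessing + model in one scikit-learn Pipeline.
# Synthetic data stands in for a real dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))              # 200 samples, 4 numeric features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # simple, learnable target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),                      # standardise features
    ("model", RandomForestClassifier(random_state=0)),
])
pipe.fit(X_train, y_train)
accuracy = pipe.score(X_test, y_test)
```

Keeping the scaler inside the pipeline ensures the same transformation is applied at train and predict time, which avoids a common source of leakage.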
## Data Science & Visualisation
- Pandas and NumPy form the backbone of every data pipeline I build — from cleaning and transforming raw datasets to feature engineering at scale.
- Matplotlib and Seaborn for exploratory data analysis, statistical plots, and presenting insights in research papers and reports.
- Plotly and Streamlit for building interactive dashboards and deploying data apps quickly — my career aspirations predictor is live on Streamlit Cloud.
- Power BI and Tableau for business intelligence reporting and creating executive-level dashboards from structured datasets.
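A tiny example of the pandas cleaning and feature-engineering steps described above: impute a missing value, then aggregate per group (the DataFrame contents and column names are made up for illustration).

```python
# Minimal pandas sketch: clean a column, then build per-group features.
import pandas as pd

raw = pd.DataFrame({
    "user": ["a", "a", "b", "b", "b"],
    "amount": [10.0, None, 5.0, 7.0, 3.0],
})

# Fill the missing amount with the column median (6.0 here)
clean = raw.assign(amount=raw["amount"].fillna(raw["amount"].median()))

# One row per user: total spend and transaction count
features = clean.groupby("user", as_index=False).agg(
    total=("amount", "sum"),
    n_txn=("amount", "count"),
)
```

Named aggregations (`total=("amount", "sum")`) keep the output columns readable, which matters once these features feed a model.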
## Development & APIs
- VS Code is my primary editor. I rely heavily on the Python, Pylance, Jupyter, and GitLens extensions for day-to-day ML development.
- Jupyter Notebooks for iterative experimentation, EDA, and sharing reproducible research — every ML project starts in a notebook before moving to a module structure.
- FastAPI is my preferred framework for wrapping ML models into production REST APIs — fast, async-friendly, and auto-generates OpenAPI docs.
- Flask and Django for lightweight APIs and full-stack web apps respectively, including the document summariser backend.
- Git and GitHub for version control, collaborative development, and CI/CD pipelines across all projects.
## Currently Exploring
- Computer Vision with OpenCV, YOLO, and CNN-based architectures for object detection, image segmentation, and super-resolution — building on the deep learning work from my third internship.
- Retrieval-Augmented Generation (RAG) systems that pair large language models with vector stores (FAISS, Pinecone) to build context-aware, knowledge-grounded AI applications.
- LLM fine-tuning using Hugging Face PEFT / LoRA to adapt foundation models to domain-specific tasks efficiently and at low cost.
- Advanced Deep Learning — Transformers, diffusion models, and multi-modal architectures — through hands-on projects and the fast.ai and DeepLearning.AI curriculum.
- MLOps practices: experiment tracking with MLflow, containerisation with Docker, and CI/CD pipelines to move models from notebook to production reliably and reproducibly.
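The retrieval step at the heart of a RAG system can be sketched in a few lines of NumPy. This is a toy: the hashed bag-of-words `embed` stands in for a real sentence-embedding model, and the brute-force dot-product scan is what FAISS or Pinecone would replace with an approximate-nearest-neighbour index at scale.

```python
# Toy RAG retrieval: embed documents, rank by cosine similarity to a query.
import zlib
import numpy as np

def embed(texts, dim=64):
    # Stand-in embedding: deterministic bag-of-words hashing, L2-normalised.
    # A real system would call a sentence-embedding model here.
    vecs = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        for word in text.lower().split():
            vecs[i, zlib.crc32(word.encode()) % dim] += 1.0
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs / np.where(norms == 0, 1.0, norms)

docs = [
    "PyTorch is a deep learning framework",
    "FAISS performs fast vector similarity search",
    "Pandas is used for tabular data wrangling",
]
doc_vecs = embed(docs)

def retrieve(query, k=2):
    # On unit vectors, cosine similarity reduces to a dot product;
    # a vector store replaces this linear scan in production.
    scores = doc_vecs @ embed([query])[0]
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]
```

The retrieved passages would then be concatenated into the LLM prompt, which is what grounds the generation in the indexed knowledge.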
## Languages & Stack
| Category | Details |
|---|---|
| Primary Language | Python |
| Other Languages | SQL, Java, JavaScript (with React and Node.js) |
| Relational Databases | MySQL, PostgreSQL, SQLite |
| NoSQL | MongoDB |
| Cloud | AWS, Azure (fundamentals) |
| Education | BSc Computer Science — University of Sindh (Dec 2025) |
| Experience | 3x Intern (ML, Data Science, AI/DL) at ITSOLERA PVT LTD |
