91-DOU : Day #5

Course : Python for Machine Learning and Data Science Masterclass

Video Mins completed : 29 mins

Last Video # completed : 209

Hierarchical clustering : Clusters points that are similar to each other, with similarity measured by a distance metric.

Can be used to figure out the number of clusters.

Types of Hierarchical Clustering:

  • Agglomerative : Starts with each point as its own cluster; the closest clusters are then merged repeatedly to form bigger clusters.
  • Divisive : Starts with all points in a single cluster; the cluster is then split repeatedly into smaller clusters.
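A minimal sketch of the agglomerative approach using scikit-learn's AgglomerativeClustering (the blobs are synthetic data invented for illustration):

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import AgglomerativeClustering

# Synthetic 2-D data with 3 natural groups
X, _ = make_blobs(n_samples=60, centers=3, random_state=42)

# Agglomerative: each point starts as its own cluster, then the
# closest pairs of clusters are merged until n_clusters remain.
model = AgglomerativeClustering(n_clusters=3, linkage="ward")
labels = model.fit_predict(X)
```

Dropping the `n_clusters` argument and plotting a dendrogram (e.g. via `scipy.cluster.hierarchy`) is the usual way to eyeball the number of clusters mentioned above.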

Data scaling methods

https://scikit-learn.org/stable/auto_examples/preprocessing/plot_all_scaling.html

https://medium.com/@onersarpnalcin/standardscaler-vs-minmaxscaler-vs-robustscaler-which-one-to-use-for-your-next-ml-project-ae5b44f571b9

https://medium.com/@hhuseyincosgun/which-data-scaling-technique-should-i-use-a1615292061e
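A quick sketch contrasting the three scalers discussed in the links above (the toy column with one outlier is invented for illustration):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler

X = np.array([[1.0], [2.0], [3.0], [100.0]])  # one outlier at 100

std = StandardScaler().fit_transform(X)   # zero mean, unit variance
mm = MinMaxScaler().fit_transform(X)      # squashed into [0, 1]
rob = RobustScaler().fit_transform(X)     # centers on median, scales by IQR,
                                          # so the outlier distorts it least
```

With MinMaxScaler the single outlier compresses the three normal values toward 0, which is the usual argument for RobustScaler on outlier-heavy features.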

91-DOU : Day #4

Course : Python for Machine Learning and Data Science Masterclass

Video Mins completed : 126 mins

Last Video # completed : 206

K-Means clustering – Scale the data if a mix of numeric and encoded features is present, so that the distance measures (and hence the clustering) are not skewed by feature scale.

Find the correlation between the features and labels to know which features have the highest bearing on the clustering.

To figure out the ideal K value, check the SSD (sum of squared distances), exposed as the model.inertia_ attribute, for a range of K's. The K beyond which the value no longer drops significantly marks a cutoff that can be taken as the cluster count.
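A sketch of that elbow heuristic: fit KMeans over a range of K and collect `inertia_` (the blobs are synthetic data for illustration):

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, _ = make_blobs(n_samples=200, centers=4, random_state=0)

# inertia_ = sum of squared distances of points to their cluster center
ssd = {}
for k in range(1, 8):
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    ssd[k] = model.inertia_

# inertia always decreases as K grows; the ideal K is where the
# drop flattens out (the "elbow"), not where inertia is smallest.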

Intro to choropleth maps, which allow clusters to be represented on a map.

https://plotly.com/python/choropleth-maps
https://medium.com/@nirmalsankalana/k-means-clustering-choosing-optimal-k-process-and-evaluation-methods-2c69377a7ee4

91-DOU : Day #3

Course : Cluster Analysis & Unsupervised ML in Python

Video Mins completed : 27 mins

Last Video # completed : 29

Notes

K-Means cost functions –

Current metric : Distance from Cluster mean or center.

  • Does not scale well with default values : if features have varied scales, the distance metric varies wildly, so the algorithm will not work unless the data is scaled.
  • Works with large datasets.
  • Sensitive to the choice of K.
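An illustration of the scaling caveat above, using hypothetical age/salary rows: on raw features, Euclidean distance is dominated by the larger-scale feature.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[25.0, 20000.0],   # hypothetical [age, salary] rows
              [26.0, 90000.0],
              [60.0, 21000.0]])

# Raw distance between rows 0 and 1 is effectively just the salary gap;
# the age difference of 1 year contributes almost nothing.
raw = np.linalg.norm(X[0] - X[1])

# After standardizing, age and salary contribute comparably.
X_scaled = StandardScaler().fit_transform(X)
d = np.linalg.norm(X_scaled[0] - X_scaled[1])
```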

Another metric : Purity

Requires labels. Such methods are called “external validation” methods. Examples:

  • Rand Measure
  • F-measure
  • Jaccard Index
  • Normalized Mutual Info
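Several of the external-validation metrics listed above ship with scikit-learn; they compare predicted cluster labels against ground truth. The toy labels below are invented for illustration:

```python
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

truth = [0, 0, 1, 1, 2, 2]
pred = [1, 1, 0, 0, 2, 2]  # same grouping, different cluster ids

# Both metrics are permutation-invariant: relabelling the clusters
# does not change the score, only the grouping matters.
ari = adjusted_rand_score(truth, pred)
nmi = normalized_mutual_info_score(truth, pred)
```

Both scores come out at their maximum here because the two partitions group the points identically.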

Metric on unlabeled data : Davies Bouldin Index (DBI)

Lower DBI == better
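Davies-Bouldin needs only the data and the cluster assignment, so it works without labels. A sketch with well-separated synthetic blobs (centers invented for illustration):

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

# Three clearly separated blobs -> DBI should be small
X, _ = make_blobs(n_samples=150, centers=[[0, 0], [10, 10], [-10, 10]],
                  cluster_std=1.0, random_state=1)
labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)

score = davies_bouldin_score(X, labels)  # lower is better; 0 is the floor
```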

How to choose ‘K’?

The value of K beyond which there is no significant change in cost is the ideal value of K.

Course : Python for Machine Learning and Data Science Masterclass

Video Mins completed : 25 mins

Last Video # completed : 197

ACM Talk # June 27th : From ML Engineering to AI Engineering

Points captured from the talk by Chip Huyen.

AI Engg -> the process of building apps with foundation models
Foundation models -> term coined at Stanford

AI Engg
1) Model as a service -> anyone can use AI
2) Open-ended evaluations -> open-ended responses, harder to evaluate
How to evaluate -> comparative evals (Chatbot Arena), AI-as-a-judge, 5 prompts to evaluate

Evaluation is a big challenge.

Feature engg -> context construction
Problem -> hallucination; models hallucinate less when given lots of context
Retrieval (RAG) -> BM25, ElasticSearch, Dense Passage Retrieval, vector DBs (compute intensive)
Future -> trying to build embeddings for tables
Agentic
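The talk contrasts sparse keyword retrieval (BM25/ElasticSearch) with dense vector retrieval. As a rough stand-in sketch for the sparse side, TF-IDF plus cosine similarity ranks documents against a query (the tiny corpus below is invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["k-means clusters points by distance",
        "choropleth maps color regions by value",
        "RAG retrieves context for the model"]

# Build sparse term-weight vectors for the corpus
vec = TfidfVectorizer()
doc_vecs = vec.fit_transform(docs)

# Score the query against every document and pick the best match
query_vec = vec.transform(["retrieve context for a model"])
scores = cosine_similarity(query_vec, doc_vecs)[0]
best = int(scores.argmax())
```

BM25 proper weights terms differently (length normalization, saturation), but the retrieve-by-lexical-overlap shape is the same; dense retrieval swaps the TF-IDF vectors for learned embeddings.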

Bigger size -> higher latency, more expensive, requires more expertise to host
Check this -> ????

Inference optimization -> hardware, algorithms, model architecture
caching
parameter-efficient fine-tuning

Topics to check

Apache Arrow
Debugging gen AI apps
Distributed systems for LLMs

Some snapshots of the presentation made by Chip.