Having access to a consistent set of dataset features across different phases of the ML lifecycle is becoming critical. Companies that build and deploy machine learning models may need to manage hundreds of features, and they may also require the latest feature values for real-time prediction. Feast (Feature Store) tackles these problems by providing a standard, high-performance Go-based SDK for retrieving the features needed for distributed model serving. In this talk, attendees will learn how to build a production-ready feature store on Kubernetes using Feast, which will then be used to serve features to a model. Attendees will also see how Feast can be used with KServe, a serverless model inference platform, to retrieve stored features in real time. We hope to show how users can get started with Feast on Kubernetes to meet mission-critical, high-performance inference needs. Finally, we walk through an end-to-end demo using the Feast KServe transformer on Kubernetes, demonstrating how online features can be served to KServe for real-time inference.
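To make the demo setup concrete, the sketch below shows roughly what wiring a Feast transformer into a KServe `InferenceService` can look like. This is a hedged illustration, not the exact manifest from the talk: the transformer image name, model storage URI, Feast serving address, and feature references are all placeholder assumptions, and the transformer's command-line flags may differ depending on the Feast and KServe versions used.

```yaml
# Hypothetical InferenceService pairing a Feast transformer with a predictor.
# All names, images, and feature references below are illustrative placeholders.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: driver-ranking          # placeholder service name
spec:
  transformer:
    containers:
      - name: feast-transformer
        image: example/feast-kserve-transformer:latest  # placeholder image
        args:
          # Address of the Feast online feature server (placeholder)
          - --feast_serving_url
          - feast-feature-server.default.svc.cluster.local:6566
          # Entity key used to look up online features (placeholder)
          - --entity_ids
          - driver_id
          # Feature references to fetch and append to the request (placeholder)
          - --feature_refs
          - driver_hourly_stats:conv_rate
  predictor:
    sklearn:
      storageUri: gs://example-bucket/driver-model  # placeholder model location
```

At request time, the transformer intercepts each prediction call, fetches the referenced online features from Feast using the entity IDs in the payload, enriches the request, and forwards it to the predictor, so clients only need to send entity keys rather than full feature vectors.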