Deploying TensorFlow Models Locally

By Bill Sharlow

Day 8 of our DIY TensorFlow Deep Learning Framework Setup

Welcome to Day 8 of our 10-Day DIY TensorFlow Deep Learning Framework Setup series! Today, we’re delving into the exciting realm of deploying TensorFlow models for local inference. Deploying your models allows you to make predictions on new data, turning your trained model into a practical tool.

Exporting a TensorFlow Model

Before deploying a model, you need to export it in a format suitable for inference. TensorFlow provides the SavedModel format for this purpose. Let’s adapt our previous script to export the trained model:

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.regularizers import l2

# Load and preprocess the CIFAR-10 dataset
(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()
train_images = train_images.astype('float32') / 255
test_images = test_images.astype('float32') / 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

# Build a CNN model with hyperparameter tuning and regularization
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(128, activation='relu', kernel_regularizer=l2(0.01)))
model.add(layers.Dropout(0.3))
model.add(layers.Dense(10, activation='softmax'))

# Compile the model with hyperparameters
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(train_images, train_labels, epochs=20, batch_size=64, validation_split=0.2)

# Export the model as a SavedModel
model.save('my_model')

In this script, we added the model.save('my_model') call, which writes the trained model to a directory named my_model in the SavedModel format, bundling the architecture, weights, and training configuration. (Note: recent TensorFlow releases that ship Keras 3 expect a .keras file extension with model.save, and use model.export('my_model') to produce a SavedModel; the call above targets the classic TensorFlow 2.x saving behavior.)
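If you want to verify the export before moving on, one option is to load the directory with the lower-level tf.saved_model API and list its serving signatures. This is a quick sanity check, not part of the training script:

import tensorflow as tf

# Load the raw SavedModel (without the Keras wrapper) and inspect it
reloaded = tf.saved_model.load('my_model')
print(list(reloaded.signatures.keys()))  # typically ['serving_default']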

Loading and Making Predictions

Now, let’s load the SavedModel and use it for making predictions on new data:

import tensorflow as tf
import numpy as np

# Load the SavedModel
loaded_model = tf.keras.models.load_model('my_model')

# Create a sample input for prediction (random values in [0, 1),
# matching the 0-255 -> 0-1 scaling applied during training)
sample_input = np.random.rand(1, 32, 32, 3).astype('float32')

# Make predictions
predictions = loaded_model.predict(sample_input)

print('Predictions:', predictions)

Here, we use tf.keras.models.load_model to load the SavedModel back as a Keras model, then create a sample input and call the predict method to obtain predictions. With real images, apply the same preprocessing as at training time: cast to float32 and scale pixel values into the [0, 1] range.
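Since the model ends in a 10-way softmax, predictions is a (1, 10) array of class probabilities. As a small follow-on sketch (not in the original script), here is one way to map the output to a human-readable label using the standard CIFAR-10 class order:

import numpy as np

# Standard CIFAR-10 label order
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

# Pick the most probable class and report its probability
predicted_index = int(np.argmax(predictions, axis=1)[0])
print('Predicted class:', class_names[predicted_index])
print('Confidence:', float(predictions[0, predicted_index]))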

Web Application Deployment

To make your model accessible via a web application, you can use frameworks like Flask or FastAPI. Here’s a simple Flask example:

from flask import Flask, request, jsonify
import tensorflow as tf
import numpy as np

app = Flask(__name__)

# Load the SavedModel
loaded_model = tf.keras.models.load_model('my_model')

@app.route('/predict', methods=['POST'])
def predict():
    data = request.get_json()
    # Convert the JSON payload to a float32 batch of shape (N, 32, 32, 3)
    input_data = np.array(data['input'], dtype='float32')
    predictions = loaded_model.predict(input_data)
    return jsonify({'predictions': predictions.tolist()})

if __name__ == '__main__':
    app.run(port=5000)

In this example, we create a Flask app with a '/predict' endpoint that accepts POST requests carrying JSON. The handler converts the 'input' field to a NumPy array, runs it through the model, and returns the predictions as JSON. The input is expected to have the same shape and scaling as the training data: a batch of 32x32x3 images with pixel values in [0, 1].
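To try the endpoint while the server is running, you can send a request from another terminal. The sketch below assumes the third-party requests package is installed and that the server is listening on localhost port 5000:

import numpy as np
import requests

# Build a random batch of one 32x32x3 image, scaled to [0, 1]
sample = np.random.rand(1, 32, 32, 3).astype('float32')

response = requests.post('http://localhost:5000/predict',
                         json={'input': sample.tolist()})
print(response.json())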

What’s Next?

You’ve successfully deployed your TensorFlow model for local inference! In the final days of our series, we’ll explore cloud deployment and serving models at scale.

Stay tuned for Day 9: Deploying TensorFlow Models on the Cloud, where we’ll guide you through deploying your model on a cloud platform. Happy coding!
