Implementing Deep Learning Models with TensorFlow, Keras, and PyTorch

Deep learning is a subset of machine learning that uses neural networks to solve complex problems such as image recognition and natural language processing. TensorFlow, Keras, and PyTorch are the most popular frameworks for building deep learning models. This guide will help you get started with each framework, compare their approaches, and walk through practical code examples.
Prerequisites
Before diving in, you should have:
- Basic Python programming knowledge
- Familiarity with NumPy
- Some understanding of linear algebra and matrix operations
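If you want a quick self-check of the NumPy and linear-algebra background assumed here, you should be able to read a small snippet like this (purely illustrative):
import numpy as np
# A 2x3 matrix times a 3x1 vector gives a 2x1 result
W = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
x = np.array([[1.0], [0.0], [2.0]])
print(W @ x)  # matrix multiplication, result has shape (2, 1)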
Setting Up a Python Virtual Environment
1. Choose a Python Version
Make sure you have the desired Python version installed. You can check with python3 --version; on macOS (with python.org installers), you can also list installed versions with:
ls /Library/Frameworks/Python.framework/Versions/
2. Create a Virtual Environment
Run the following command to create a virtual environment:
python -m venv myenv
This creates a new folder named myenv containing the virtual environment.
3. Activate the Virtual Environment
On macOS or Linux, run:
source myenv/bin/activate
On Windows, run:
myenv\Scripts\activate
Your terminal prompt should now show (myenv).
4. Install Required Libraries
Use pip to install libraries inside the virtual environment. For example, to install Matplotlib:
pip install matplotlib
5. Deactivate the Environment
When finished, deactivate with:
deactivate
TensorFlow
For most recent versions of TensorFlow (2.x), Python 3.8, 3.9, 3.10, or 3.11 are recommended and officially supported.
Check the TensorFlow installation guide for the latest compatibility details, as support for newer Python versions may be added over time.
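If TensorFlow is not installed yet, a typical CPU install into the active virtual environment looks like this (a minimal example; see the official guide for GPU-enabled setups):
pip install tensorflow
python -c "import tensorflow as tf; print(tf.__version__)"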
Getting Started with TensorFlow: Code Examples and Explanations
1. Importing TensorFlow
import tensorflow as tf
print(tf.__version__)
Why?
You need to import TensorFlow to access its deep learning tools and check your installed version.
Alternatives:
- Keras: import keras (but usually use tf.keras with TensorFlow)
- PyTorch: import torch
2. Creating Tensors
import tensorflow as tf
# Create a constant tensor
a = tf.constant([[1, 2], [3, 4]])
print(a)
Why?
Tensors are the basic data structure in TensorFlow, representing multi-dimensional arrays for computation.
Alternatives:
- Keras: Uses tensors internally, but you rarely create them directly.
- PyTorch: torch.tensor([[1, 2], [3, 4]])
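Beyond creating tensors, you will constantly combine them. Two operations you will see everywhere, element-wise addition and matrix multiplication, look like this (a small illustrative sketch):
import tensorflow as tf
a = tf.constant([[1, 2], [3, 4]])
b = tf.constant([[5, 6], [7, 8]])
print(a + b)            # element-wise addition
print(tf.matmul(a, b))  # matrix multiplication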
3. Building a Simple Neural Network (Sequential Model)
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax')
])
Why?
The Sequential model is the simplest way to stack layers for most problems.
How does this help?
It allows you to quickly prototype and build feedforward neural networks.
Alternatives:
- Keras: Same as above (tf.keras.Sequential)
- PyTorch: Use torch.nn.Sequential
4. Compiling the Model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
Why?
Compiling configures the model for training by specifying the optimizer, loss function, and metrics.
Alternatives:
- Keras: Same as above.
- PyTorch: You define the optimizer and loss separately, e.g.,
import torch.optim as optim
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.CrossEntropyLoss()
5. Training the Model
import numpy as np
# Dummy data
X_train = np.random.rand(100, 4)
y_train = np.random.randint(0, 3, 100)
model.fit(X_train, y_train, epochs=5, batch_size=8)
Why?
fit trains the model on your data.
Alternatives:
- Keras: Same as above.
- PyTorch: Use a training loop with optimizer.step() and loss.backward().
6. Evaluating the Model
loss, accuracy = model.evaluate(X_train, y_train)
print(f"Loss: {loss}, Accuracy: {accuracy}")
Why?
To measure how well your model performs on data.
Alternatives:
- Keras: Same as above.
- PyTorch: Calculate loss and accuracy manually in a loop.
7. Making Predictions
predictions = model.predict(X_train[:5])
print(predictions)
Why?
To use your trained model to make predictions on new data.
Alternatives:
- Keras: Same as above.
- PyTorch: Use model(input_tensor).
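The output of model.predict here is one row of class probabilities per sample (because the last layer uses softmax). If you need hard class labels instead, a common follow-up step, shown as a small illustrative sketch, is to take the argmax over the class axis:
import numpy as np
# Convert each row of class probabilities into a predicted class index
predicted_classes = np.argmax(predictions, axis=1)
print(predicted_classes)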
Summary Table
| Task | TensorFlow / Keras | PyTorch Equivalent |
| --- | --- | --- |
| Import | import tensorflow as tf | import torch |
| Create Tensor | tf.constant() | torch.tensor() |
| Build Model | tf.keras.Sequential([...]) | torch.nn.Sequential(...) |
| Compile Model | model.compile(...) | Define optimizer/loss manually |
| Train Model | model.fit(...) | Custom training loop |
| Evaluate Model | model.evaluate(...) | Custom evaluation loop |
| Predict | model.predict(...) | model(input) |
Keras
Keras is included as part of TensorFlow (tf.keras) and follows TensorFlow’s Python version requirements.
For standalone Keras, Python 3.8, 3.9, 3.10, or 3.11 are generally supported.
Always check the Keras release notes or TensorFlow installation guide for the latest compatibility details.
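If you already have TensorFlow installed, tf.keras is available with no extra steps. A typical standalone install looks like this (note that recent standalone Keras releases also require a backend such as TensorFlow to be installed):
pip install keras
python -c "import keras; print(keras.__version__)"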
Getting Started with Keras: Code Examples and Explanations
1. Importing Keras
import keras
print(keras.__version__)
Why?
You need to import Keras to access its deep learning tools and check your installed version.
Alternatives:
- TensorFlow: import tensorflow as tf (and use tf.keras)
- PyTorch: import torch
2. Creating Tensors (Standalone Keras uses NumPy arrays)
import numpy as np
# Create a NumPy array (Keras models accept NumPy arrays as input)
a = np.array([[1, 2], [3, 4]])
print(a)
Why?
Keras models use NumPy arrays for input data, which are easy to create and manipulate.
Alternatives:
- TensorFlow: tf.constant([[1, 2], [3, 4]])
- PyTorch: torch.tensor([[1, 2], [3, 4]])
3. Building a Simple Neural Network (Sequential Model)
from keras.models import Sequential
from keras.layers import Dense
model = Sequential([
    Dense(16, activation='relu', input_shape=(4,)),
    Dense(3, activation='softmax')
])
Why?
The Sequential model is the simplest way to stack layers for most problems.
How does this help?
It allows you to quickly prototype and build feedforward neural networks.
Alternatives:
- TensorFlow: tf.keras.Sequential([...])
- PyTorch: torch.nn.Sequential(...)
4. Compiling the Model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
Why?
Compiling configures the model for training by specifying the optimizer, loss function, and metrics.
Alternatives:
- TensorFlow: Same as above (model.compile(...))
- PyTorch: You define the optimizer and loss separately, e.g.,
import torch.optim as optim
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.CrossEntropyLoss()
5. Training the Model
import numpy as np
# Dummy data
X_train = np.random.rand(100, 4)
y_train = np.random.randint(0, 3, 100)
model.fit(X_train, y_train, epochs=5, batch_size=8)
Why?
fit trains the model on your data.
Alternatives:
- TensorFlow: Same as above (model.fit(...))
- PyTorch: Use a training loop with optimizer.step() and loss.backward()
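In practice you usually want to monitor performance on held-out data while training. Keras supports this directly through fit's validation_split argument, for example:
model.fit(X_train, y_train, epochs=5, batch_size=8, validation_split=0.2)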
6. Evaluating the Model
loss, accuracy = model.evaluate(X_train, y_train)
print(f"Loss: {loss}, Accuracy: {accuracy}")
Why?
To measure how well your model performs on data.
Alternatives:
- TensorFlow: Same as above (model.evaluate(...))
- PyTorch: Calculate loss and accuracy manually in a loop
7. Making Predictions
predictions = model.predict(X_train[:5])
print(predictions)
Why?
To use your trained model to make predictions on new data.
Alternatives:
- TensorFlow: Same as above (model.predict(...))
- PyTorch: Use model(input_tensor)
Summary Table
| Task | Keras (Standalone) | TensorFlow / tf.keras | PyTorch Equivalent |
| --- | --- | --- | --- |
| Import | import keras | import tensorflow as tf | import torch |
| Create Tensor/Input | np.array() | tf.constant() | torch.tensor() |
| Build Model | Sequential([...]) | tf.keras.Sequential([...]) | torch.nn.Sequential(...) |
| Compile Model | model.compile(...) | model.compile(...) | Define optimizer/loss manually |
| Train Model | model.fit(...) | model.fit(...) | Custom training loop |
| Evaluate Model | model.evaluate(...) | model.evaluate(...) | Custom evaluation loop |
| Predict | model.predict(...) | model.predict(...) | model(input) |
PyTorch
For recent versions of PyTorch, Python 3.8, 3.9, 3.10, or 3.11 are officially supported.
Always check the PyTorch installation guide for the latest compatibility details, as support for newer Python versions may be added over time.
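A typical CPU-only install looks like this; the exact command varies by operating system and CUDA version, so use the selector on pytorch.org for your setup:
pip install torch
python -c "import torch; print(torch.__version__)"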
Getting Started with PyTorch: Code Examples and Explanations
1. Importing PyTorch
import torch
print(torch.__version__)
Why?
You need to import PyTorch to access its deep learning tools and check your installed version.
Alternatives:
- TensorFlow: import tensorflow as tf
- Keras: import keras
2. Creating Tensors
import torch
# Create a tensor
a = torch.tensor([[1, 2], [3, 4]])
print(a)
Why?
Tensors are the basic data structure in PyTorch, representing multi-dimensional arrays for computation.
Alternatives:
- TensorFlow: tf.constant([[1, 2], [3, 4]])
- Keras: Uses NumPy arrays as input (np.array([[1, 2], [3, 4]]))
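As with TensorFlow, you will mostly combine tensors with element-wise and matrix operations, for example (illustrative only):
import torch
a = torch.tensor([[1, 2], [3, 4]])
b = torch.tensor([[5, 6], [7, 8]])
print(a + b)  # element-wise addition
print(a @ b)  # matrix multiplication (equivalent to torch.matmul(a, b))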
3. Building a Simple Neural Network (Sequential Model)
import torch.nn as nn
model = nn.Sequential(
    nn.Linear(4, 16),
    nn.ReLU(),
    nn.Linear(16, 3)
)
# Note: no softmax layer at the end. nn.CrossEntropyLoss (defined in the next
# step) expects raw scores (logits) and applies log-softmax internally, so
# adding nn.Softmax here would hurt training.
Why?
nn.Sequential is the simplest way to stack layers for most problems.
How does this help?
It allows you to quickly prototype and build feedforward neural networks.
Alternatives:
- TensorFlow: tf.keras.Sequential([...])
- Keras: Sequential([...])
4. Defining Loss and Optimizer
import torch.optim as optim
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters())
Why?
You need to specify how the model learns (optimizer) and how to measure error (loss function).
Alternatives:
- TensorFlow/Keras: Use model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
5. Training the Model (Manual Training Loop)
import numpy as np
# Dummy data
X_train = np.random.rand(100, 4).astype(np.float32)
y_train = np.random.randint(0, 3, 100)
X_train_tensor = torch.tensor(X_train)
y_train_tensor = torch.tensor(y_train)
for epoch in range(5):
    optimizer.zero_grad()
    outputs = model(X_train_tensor)
    loss = loss_fn(outputs, y_train_tensor)
    loss.backward()
    optimizer.step()
    print(f"Epoch {epoch+1}, Loss: {loss.item()}")
Why?
PyTorch gives you full control over the training process with manual loops.
How does this help?
You can customize every aspect of training, which is useful for research and advanced use cases.
Alternatives:
- TensorFlow/Keras: Use model.fit(X_train, y_train, epochs=5, batch_size=8)
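The loop above runs on the entire dataset in a single batch. In practice you would usually iterate over mini-batches, which is what batch_size=8 does in model.fit; here is a hedged sketch using the standard torch.utils.data utilities:
from torch.utils.data import TensorDataset, DataLoader
# Wrap the existing tensors in a dataset and draw shuffled mini-batches of 8
dataset = TensorDataset(X_train_tensor, y_train_tensor)
loader = DataLoader(dataset, batch_size=8, shuffle=True)
for epoch in range(5):
    for X_batch, y_batch in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(X_batch), y_batch)
        loss.backward()
        optimizer.step()
    print(f"Epoch {epoch+1}, Loss: {loss.item()}")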
6. Evaluating the Model
with torch.no_grad():
    outputs = model(X_train_tensor)
    predicted = torch.argmax(outputs, dim=1)
    accuracy = (predicted == y_train_tensor).float().mean().item()
print(f"Accuracy: {accuracy}")
Why?
To measure how well your model performs on data.
Alternatives:
- TensorFlow/Keras: Use model.evaluate(X_train, y_train)
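One more detail worth knowing: if your model contains layers that behave differently at inference time (dropout, batch normalization), you should also switch it into evaluation mode before evaluating. This particular model has no such layers, so the calls below are shown only as the usual pattern:
model.eval()   # put layers like dropout/batchnorm into inference mode
with torch.no_grad():
    outputs = model(X_train_tensor)
model.train()  # switch back before further training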
7. Making Predictions
with torch.no_grad():
    logits = model(X_train_tensor[:5])
    predictions = torch.softmax(logits, dim=1)  # convert logits to class probabilities
print(predictions)
Why?
To use your trained model to make predictions on new data.
Alternatives:
- TensorFlow/Keras: Use model.predict(X_train[:5])
Summary Table
| Task | PyTorch | TensorFlow / tf.keras | Keras (Standalone) |
| --- | --- | --- | --- |
| Import | import torch | import tensorflow as tf | import keras |
| Create Tensor | torch.tensor() | tf.constant() | np.array() |
| Build Model | nn.Sequential(...) | tf.keras.Sequential([...]) | Sequential([...]) |
| Compile Model | Define optimizer/loss manually | model.compile(...) | model.compile(...) |
| Train Model | Manual training loop | model.fit(...) | model.fit(...) |
| Evaluate Model | Manual evaluation | model.evaluate(...) | model.evaluate(...) |
| Predict | model(input) | model.predict(...) | model.predict(...) |
Which Deep Learning Library Should You Learn First?
If you are beginning your deep learning journey, you do not need to master all three libraries at once. Here’s a recommended approach:
- Start with TensorFlow (using the Keras API):
  - Keras (as tf.keras) is user-friendly and widely used for prototyping and learning deep learning concepts.
  - TensorFlow’s integration with Keras allows you to build, train, and deploy models easily.
  - Most tutorials and courses use TensorFlow/Keras, making it a great starting point.
- Move to PyTorch:
  - Once you are comfortable with the basics, learning PyTorch will help you understand deep learning at a lower level.
  - PyTorch is popular in research and offers more flexibility for custom models and training loops.
- Standalone Keras:
  - Standalone Keras is less commonly used now, as most development has shifted to tf.keras.
  - You may explore it for legacy projects or to understand its differences, but it is not essential for most new learners.
In Summary
- Begin with TensorFlow/Keras for a gentle introduction and practical experience.
- Advance to PyTorch for deeper understanding and flexibility.
- Explore standalone Keras only if needed.
Focus on understanding the core concepts (tensors, models, training loops, evaluation, prediction) in one library first—these skills will transfer easily to the others.