Deep Learning with Keras — Classifying Cats and Dogs (Part 1)

Ferhat Culfaz
Jul 19, 2018

Here is a simple example of supervised learning with Keras: classifying images of cats and dogs. This first part implements a simple CNN (Convolutional Neural Network), trained from scratch, which achieves roughly 80% accuracy on the test data. Training takes about 5.5 hours on an Intel i7-6700K quad core at 4 GHz with no GPU; it should take under 10 minutes on a GTX 1070 card.

The training data is 4000 images of cats and 4000 images of dogs. The test set is 1000 images of cats and 1000 images of dogs. Data augmentation is used to artificially boost the effective training set size.
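The images are assumed to be organised on disk with one sub-folder per class, since that is what Keras's flow_from_directory (used below) expects. This layout is inferred from the paths in the code; the folder names are illustrative.

dataset/
    training_set/
        cats/    (4000 images)
        dogs/    (4000 images)
    test_set/
        cats/    (1000 images)
        dogs/    (1000 images)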

Sample input images from the training set, cats and dogs:

[Image: Cats]

[Image: Dogs]

# Import the Keras libraries

from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense

# Initialising the CNN

classifier = Sequential()

# Step 1 — Convolution

classifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))

# Step 2 — Pooling

classifier.add(MaxPooling2D(pool_size = (2, 2)))

# Adding a second convolutional layer

classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))

# Step 3 — Flattening

classifier.add(Flatten())

# Step 4 — Full connection

classifier.add(Dense(units = 128, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))

# Compiling the CNN: use the Adam stochastic optimisation method and the binary cross-entropy loss function

classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
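Before training, the architecture can be sanity-checked with Keras's built-in summary. This is an optional extra step, not part of the original script.

# Optional: print each layer with its output shape and parameter count
classifier.summary()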

# Part 2 — Fitting the CNN to the images

First, augment the data to increase the effective training set size.

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)

# Rescale the test images, then load both sets from disk, resizing to 64x64

test_datagen = ImageDataGenerator(rescale = 1./255)

training_set = train_datagen.flow_from_directory('dataset/training_set',
                                                 target_size = (64, 64),
                                                 batch_size = 32,
                                                 class_mode = 'binary')

test_set = test_datagen.flow_from_directory('dataset/test_set',
                                            target_size = (64, 64),
                                            batch_size = 32,
                                            class_mode = 'binary')

Binary class labels (class_mode = 'binary') must be used because the model was compiled with the binary_crossentropy loss function earlier.
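Each generator infers an image's label from its sub-folder name; the resulting mapping can be inspected via the class_indices attribute. With the cats/dogs folder names assumed above, the output would look like the comment below.

# Check which folder maps to which numeric label
print(training_set.class_indices)  # e.g. {'cats': 0, 'dogs': 1}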

# Train the classifier

# Note: steps_per_epoch and validation_steps count batches, not images;
# with batch_size 32 each epoch therefore cycles the augmented training set many times
classifier.fit_generator(training_set,
                         steps_per_epoch = 8000,
                         epochs = 25,
                         validation_data = test_set,
                         validation_steps = 2000)

# The result

Roughly 80% validation accuracy is achieved.
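As a quick check, the trained model can classify a single new image. This is a minimal sketch: the file path is illustrative, and the threshold assumes the mapping {'cats': 0, 'dogs': 1}, so check class_indices as shown above.

# Classify one new image with the trained model
import numpy as np
from keras.preprocessing import image

test_image = image.load_img('path/to/some_image.jpg', target_size = (64, 64))  # illustrative path
test_image = image.img_to_array(test_image) / 255.   # apply the same rescaling as the generators
test_image = np.expand_dims(test_image, axis = 0)    # add a batch dimension

result = classifier.predict(test_image)
print('dog' if result[0][0] > 0.5 else 'cat')        # assumes {'cats': 0, 'dogs': 1}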

Not bad for a first attempt. In the next part I will discuss how this can be improved.
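If the model is to be reused later, for instance in the next part, it can be saved to disk with Keras's built-in save method; the filename here is illustrative.

# Save the trained model (architecture + weights) to an HDF5 file
classifier.save('cats_dogs_cnn.h5')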

Bibliography

Deep Learning with Python, Francois Chollet, Manning, 2018

Written by Ferhat Culfaz

Dabbling with machine learning and data science. Feel free to connect on LinkedIn https://www.linkedin.com/in/ferhat-culfaz/
