
Customer reviews are a goldmine of information for businesses. Analyzing sentiment in customer reviews helps in understanding customer satisfaction, identifying areas for improvement, and making data-driven business decisions.
In the following example, we delve into sentiment analysis using a neural network model. The code utilizes TensorFlow and Keras to create a simple neural network architecture with an embedding layer, a flatten layer, and a dense layer. The model is trained on a small labeled dataset for sentiment classification, distinguishing between positive and negative sentiments. Following training, the model is employed to classify new sentences. The provided Python code demonstrates each step, from tokenizing and padding sequences to compiling, training, and making predictions.
The following dataset is used for training on sentiment analysis:
sentences = ["I love this movie", "This movie is terrible", "The acting was amazing", "The plot was confusing"]
labels = [1, 0, 1, 0] # 1 for positive, 0 for negative
We then use a tokenizer to convert the text into sequences of numbers and pad the sequences so that they all have the same length. Next, we define a simple neural network model with an embedding layer, a flatten layer, and a dense layer, and we compile and train it on the training data. Finally, we use the trained model to classify a new sentence as either positive or negative.
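Before walking through the full listing, here is a minimal sketch of the architecture just described, assuming a sigmoid output unit and binary cross-entropy loss; the vocabulary size and embedding dimension below are illustrative placeholders rather than values taken from the listing that follows:
from tensorflow import keras
# Minimal sketch of the architecture: embedding -> flatten -> single sigmoid unit
model = keras.Sequential([
    keras.layers.Embedding(input_dim=1000, output_dim=16),  # vocabulary size and dimension are assumed values
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation="sigmoid")  # probability that the sentiment is positive
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])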
Here is a complete Python code example with a dataset of four sentences labeled as positive or negative. We begin by importing libraries:
import numpy as np
from tensorflow import keras
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
NumPy is imported as np for numerical computations. From TensorFlow, the Tokenizer class and the pad_sequences function are imported for text preprocessing, and the keras module is imported for building the model. Then we define the labeled dataset:
sentences = ["I love this movie", "This movie is terrible", "The acting was amazing", "The plot was confusing"]
labels = [1, 0, 1, 0]
The sentences list contains textual sentences. The labels list contains corresponding labels where 1 represents a positive sentiment and 0 represents a negative sentiment. Next, we tokenize the text and convert it to sequences:
tokenizer = Tokenizer()
tokenizer.fit_on_texts(sentences)
sequences = tokenizer.texts_to_sequences(sentences)
A Tokenizer object is created to tokenize the text. The fit_on_texts method is used to fit the tokenizer on the provided sentences. The texts_to_sequences method is used to convert the sentences into sequences of tokens. Now we need to pad the sequences so they are the same length:
max_sequence_length = max([len(seq) for seq in sequences])
padded_sequences = pad_sequences(sequences, maxlen=max_sequence_length)
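At this point it can be helpful to inspect what the tokenizer and the padding step produced. The following check is not part of the original listing, and the exact integer indices depend on the tokenizer's frequency-based ordering of the vocabulary:
print(tokenizer.word_index)     # word-to-index mapping, e.g. {'this': 1, 'movie': 2, ...}
print(sequences)                # each sentence as a list of integer indices
print(padded_sequences.shape)   # (4, max_sequence_length); pad_sequences returns a NumPy array
Because every sentence in this toy dataset happens to be four words long, padding does not actually change anything here; with variable-length sentences, pad_sequences prepends zeros by default so that every row ends up with the same length.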