The machine learning paradigm is one where you have data, that data is labeled, and you want to figure out the rules that match the data to the labels. The simplest possible scenario to show this in code is as follows. Consider these two sets of numbers:
X = –1, 0, 1, 2, 3, 4
Y = –3, –1, 1, 3, 5, 7
There’s a relationship between the X and Y values (for
example, if X is –1 then Y is –3, if X is 3 then Y is 5, and so on). Can you
see it?
Here’s the full code, using the TensorFlow Keras
APIs. Don’t worry if it doesn’t make sense yet; we’ll go through it line by
line:
import tensorflow as tf
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

model.fit(xs, ys, epochs=500)

print(model.predict([10.0]))
If we look back at our code and look at just the line that defines the model, we’ll see that we’re defining the simplest possible neural network. There’s only one layer, and it contains only one neuron:
model = Sequential([Dense(units=1, input_shape=[1])])
When using
TensorFlow, you define your layers using Sequential
. Inside the Sequential
, you then specify what
each layer looks like. We only have one line inside our Sequential
, so we have only one layer.
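To make that concrete, the list passed to Sequential can hold more than one layer definition. The following sketch is purely illustrative and isn’t needed for this example; it shows what a hypothetical two-layer version might look like:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# Hypothetical illustration only: a Sequential holding two Dense layers.
# The model in this chapter uses just the single Dense layer shown above.
bigger_model = Sequential([
    Dense(units=4, input_shape=[1]),  # first layer: four neurons
    Dense(units=1)                    # second layer: one neuron
])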
You then define what the layer looks like using the keras.layers API. There are lots of different layer types, but here we’re using a Dense layer. “Dense” means a set of fully (or densely) connected neurons. For the first (and, here, only) layer, you also have to tell it what the shape of the input data is. In this case our input data is our X, which is just a single value, so we specify that as its shape.
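If you want to confirm what was built, Keras can describe the model for you. A minimal sketch, assuming the model and imports from the listing above:

# Describe the layers and parameter counts of the model we just built.
# With one neuron and one input there are exactly two trainable
# parameters: a single weight and a single bias ("Total params: 2").
model.summary()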
The next line is where the fun really begins. Let’s look at it
again:
model.compile(optimizer='sgd', loss='mean_squared_error')
If you’ve done anything with machine learning before, you’ve probably seen that it involves a lot of mathematics. With TensorFlow, much of that is handled for you by the loss function and the optimizer. The loss function (here, mean squared error) measures how good or how bad the network’s current guess at the relationship is. Armed with this knowledge, the computer can then make another guess. That’s the job of the optimizer. This is where the heavy calculus is used, but with TensorFlow, that can be hidden from you. You just pick the appropriate optimizer to use for different scenarios. In this case we picked one called sgd, which stands for stochastic gradient descent.
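The strings 'sgd' and 'mean_squared_error' are convenient shortcuts. If it helps to see what they stand for, here’s an equivalent sketch using the explicit Keras classes, assuming default settings for SGD and the model defined above:

import tensorflow as tf

# Equivalent to model.compile(optimizer='sgd', loss='mean_squared_error'),
# but spelling out the objects behind the string shortcuts.
model.compile(
    optimizer=tf.keras.optimizers.SGD(),      # stochastic gradient descent
    loss=tf.keras.losses.MeanSquaredError()   # mean squared error loss
)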
Next, we simply format our numbers into the data format that the layers expect. In Python, there’s a library called NumPy that TensorFlow can use, and here we put our numbers into a NumPy array to make it easy to process them:
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
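As a quick sanity check (not part of the original listing, and assuming the xs and ys arrays above), you can inspect the arrays to see the format the layer will receive; each entry in xs is paired with the entry at the same index in ys:

# Both arrays are one-dimensional with six values each; the Dense layer
# treats each value as a single-feature training example.
print(xs.shape, xs.dtype)   # (6,) float64
print(ys.shape, ys.dtype)   # (6,) float64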
The learning process will then begin with the model.fit
command, like this:
model.fit(xs, ys, epochs=500)
You can read this as “fit the Xs to the Ys, and try it 500 times.” So, on the first try, the computer will guess the relationship (i.e., something like Y = 10X + 10), measure how good or how bad that guess is using the loss function, and then use the optimizer to make a better guess. This process repeats across all 500 epochs.
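To see what the loss is measuring, here’s a small sketch (not part of the original code) that computes the mean squared error of that hypothetical first guess, Y = 10X + 10, against our real Y values:

import numpy as np

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

# A hypothetical first guess at the relationship: Y = 10X + 10
guess = 10 * xs + 10

# Mean squared error: the average of the squared differences between
# the guessed values and the real values. The optimizer's job is to
# drive this number down on each subsequent guess.
mse = np.mean((guess - ys) ** 2)
print(mse)   # about 715.67 for this particular guess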
Our last line of code then uses the trained model to get a prediction, like this:
print(model.predict([10.0]))
Run the code for yourself to see what you get.
I got 18.977888 when I ran it, but your answer may differ slightly because when
the neural network is first initialized there’s a random element: your initial
guess will be slightly different from mine, and from a third person’s.
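If you want your runs to be repeatable while you experiment, you can seed the random number generators before the model is created. A minimal sketch, assuming an arbitrary seed value of 42 (the exact prediction you get may still vary across TensorFlow versions):

import tensorflow as tf
import numpy as np

# Fix the random seeds so the initial weights (the network's "first guess")
# are the same on every run. Do this before building the model.
tf.random.set_seed(42)
np.random.seed(42)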
