In this figure, we have used circles to also denote the inputs to the network. The circles labeled "+1" are called bias units, and correspond to the intercept term. The leftmost layer of the network is called the input layer, and the rightmost layer the output layer (which, in this example, has only one node). The middle layer of nodes is called the hidden layer, because its values are not observed in the training set. We also say that our example neural network has 3 input units (not counting the bias unit), 3 hidden units, and 1 output unit.
We will let $n_l$ denote the number of layers in our network; thus $n_l = 3$ in our example. We label layer $l$ as $L_l$, so layer $L_1$ is the input layer, and layer $L_{n_l}$ the output layer. Our neural network has parameters $(W, b) = (W^{(1)}, b^{(1)}, W^{(2)}, b^{(2)})$, where we write $W^{(l)}_{ij}$ to denote the parameter (or weight) associated with the connection between unit $j$ in layer $l$, and unit $i$ in layer $l + 1$. (Note the order of the indices.) Also, $b^{(l)}_i$ is the bias associated with unit $i$ in layer $l + 1$. Thus, in our example, we have $W^{(1)} \in \mathbb{R}^{3 \times 3}$ and $W^{(2)} \in \mathbb{R}^{1 \times 3}$. Note that bias units don't have inputs or connections going into them, since they always output the value +1. We also let $s_l$ denote the number of nodes in layer $l$ (not counting the bias unit).
We will write $a^{(l)}_i$ to denote the activation (meaning output value) of unit $i$ in layer $l$. For $l = 1$, we also use $a^{(1)}_i = x_i$ to denote the $i$-th input. Given a fixed setting of the parameters $W, b$, our neural network defines a hypothesis $h_{W,b}(x)$ that outputs a real number. Specifically, the computation that this neural network represents is given by:

$$
\begin{aligned}
a^{(2)}_1 &= f(W^{(1)}_{11} x_1 + W^{(1)}_{12} x_2 + W^{(1)}_{13} x_3 + b^{(1)}_1) \\
a^{(2)}_2 &= f(W^{(1)}_{21} x_1 + W^{(1)}_{22} x_2 + W^{(1)}_{23} x_3 + b^{(1)}_2) \\
a^{(2)}_3 &= f(W^{(1)}_{31} x_1 + W^{(1)}_{32} x_2 + W^{(1)}_{33} x_3 + b^{(1)}_3) \\
h_{W,b}(x) &= a^{(3)}_1 = f(W^{(2)}_{11} a^{(2)}_1 + W^{(2)}_{12} a^{(2)}_2 + W^{(2)}_{13} a^{(2)}_3 + b^{(2)}_1)
\end{aligned}
$$
In the sequel, we also let $z^{(l)}_i$ denote the total weighted sum of inputs to unit $i$ in layer $l$, including the bias term (e.g., $z^{(2)}_i = \sum_{j=1}^{3} W^{(1)}_{ij} x_j + b^{(1)}_i$), so that $a^{(l)}_i = f(z^{(l)}_i)$.
Note that this easily lends itself to a more compact notation. Specifically, if we extend the activation function $f(\cdot)$ to apply to vectors in an element-wise fashion (i.e., $f([z_1, z_2, z_3]) = [f(z_1), f(z_2), f(z_3)]$), then we can write the equations above more compactly as:

$$
\begin{aligned}
z^{(2)} &= W^{(1)} x + b^{(1)} \\
a^{(2)} &= f(z^{(2)}) \\
z^{(3)} &= W^{(2)} a^{(2)} + b^{(2)} \\
h_{W,b}(x) &= a^{(3)} = f(z^{(3)})
\end{aligned}
$$
We call this step forward propagation. More generally, recalling that we also use $a^{(1)} = x$ to denote the values from the input layer, then given layer $l$'s activations $a^{(l)}$, we can compute layer $l + 1$'s activations $a^{(l+1)}$ as:

$$
\begin{aligned}
z^{(l+1)} &= W^{(l)} a^{(l)} + b^{(l)} \\
a^{(l+1)} &= f(z^{(l+1)})
\end{aligned}
$$
By organizing our parameters in matrices and using matrix-vector operations, we can take advantage of fast linear algebra routines to quickly perform calculations in our network.
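As an illustration, here is a minimal NumPy sketch of the forward pass for the 3-3-1 example network above. The sigmoid activation and the random parameter values are assumptions made for the sake of a runnable example; the text has not fixed a particular choice of $f$.

```python
import numpy as np

def f(z):
    # Element-wise activation; sigmoid is assumed here for illustration.
    return 1.0 / (1.0 + np.exp(-z))

# Parameters for the example 3-3-1 network: W1 is 3x3 and W2 is 1x3,
# matching W(1) in R^{3x3} and W(2) in R^{1x3} above.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((3, 3)), np.zeros(3)
W2, b2 = rng.standard_normal((1, 3)), np.zeros(1)

x = np.array([0.5, -1.0, 2.0])  # one input example

z2 = W1 @ x + b1      # z(2) = W(1) x + b(1)
a2 = f(z2)            # a(2) = f(z(2))
z3 = W2 @ a2 + b2     # z(3) = W(2) a(2) + b(2)
h = f(z3)             # hW,b(x) = a(3) = f(z(3))
```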
We have so far focused on one example neural network, but one can also build neural networks with other architectures (meaning patterns of connectivity between neurons), including ones with multiple hidden layers. The most common choice is an $n_l$-layered network where layer 1 is the input layer, layer $n_l$ is the output layer, and each layer $l$ is densely connected to layer $l + 1$. In this setting, to compute the output of the network, we can successively compute all the activations in layer $L_2$, then layer $L_3$, and so on, up to layer $L_{n_l}$, using the equations above that describe the forward propagation step. This is one example of a feedforward neural network, since the connectivity graph does not have any directed loops or cycles.
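This layer-by-layer computation can be written as a short loop. The following sketch (the function name `forward` and the sigmoid choice are ours, not the text's) applies the forward propagation equations to an arbitrary stack of densely connected layers:

```python
import numpy as np

def f(z):
    # Element-wise activation; sigmoid is assumed here for illustration.
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    # a(1) = x; then repeatedly apply z(l+1) = W(l) a(l) + b(l) and
    # a(l+1) = f(z(l+1)) until the output layer is reached.
    a = x
    for W, b in zip(weights, biases):
        a = f(W @ a + b)
    return a
```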
Neural networks can also have multiple output units. For example, here is a network with two hidden layers $L_2$ and $L_3$ and two output units in layer $L_4$:
To train this network, we would need training examples $(x^{(i)}, y^{(i)})$ where $y^{(i)} \in \mathbb{R}^2$. This sort of network is useful if there are multiple outputs that you're interested in predicting. (For example, in a medical diagnosis application, the vector $x$ might give the input features of a patient, and the different outputs $y_i$'s might indicate presence or absence of different diseases.)
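To make the shapes concrete, here is a hypothetical instantiation of such a network using the `forward` function sketched above. The hidden-layer sizes (4 and 4) are our assumption purely for illustration, since the figure is not reproduced here; only the 3 inputs and 2 outputs come from the text.

```python
import numpy as np  # forward() is reused from the previous sketch

rng = np.random.default_rng(1)
sizes = [3, 4, 4, 2]  # s1..s4: input layer, two assumed hidden layers, two output units
weights = [rng.standard_normal((sizes[l + 1], sizes[l])) for l in range(3)]
biases = [np.zeros(sizes[l + 1]) for l in range(3)]

x = np.array([0.2, -0.7, 1.5])       # input features for one example
y_hat = forward(x, weights, biases)  # two outputs, one per unit in layer L4
```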
Chapter 3 Data Preprocessing
3.1 Overview
Data preprocessing plays a very important role in many deep learning algorithms. In practice, many methods work best after the data has been normalized and whitened. However, the exact parameters for data preprocessing are usually not immediately apparent unless one has much experience working with the algorithms. In this chapter, we hope to demystify some of the preprocessing methods and also provide tips (and a "standard pipeline") for preprocessing data.