ROLL NO. ____________    NAME ____________

CS 537 – Neural Networks
Quiz 1 Solution (Time limit: 10 minutes)

1. (6 points) Given the input x = [1, 2, 1, 2]^T, weights w = [0.5, 0.5, 0.5, 0.5]^T, and bias b = 1, compute the outputs of the following neurons:

a. McCulloch-Pitts neuron
Incorporating the bias into the formulation: x = [1, 1, 2, 1, 2]^T and w = [1, 0.5, 0.5, 0.5, 0.5]^T.
Output y = signum(w^T x) = signum(4) = 1

b. A linear neuron (adder + linear activation function)
Output y = w^T x = 4

c. A nonlinear neuron (adder + sigmoidal activation function with unity constants)
Output y = 1 / (1 + exp(-w^T x)) = 1 / (1 + exp(-4)) = 0.9820

2. (2 points) Define machine learning in the context of a neural network. List the free parameters that may be adapted during learning.

A neural network is said to learn if its free parameters are adapted in response to experience so as to improve its performance on an input-output mapping task. The free parameters may include:
- the weights
- activation function parameters
- architectural parameters (e.g., number of layers, number of neurons per layer, connectivity)

3. (2 points) Given a linearly separable pattern, the perceptron will always find the same unique hyperplane that discriminates the pattern. True or false? Explain briefly.

False. The exact hyperplane learned depends on the initial weights, the order in which the training examples are presented, and the learning-rate parameter. In general, if a pattern is linearly separable, there can be multiple (possibly infinitely many) hyperplanes that correctly separate the two classes.

CS 537 (Sp 06-07) – Dr. Asim Karim
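As a numerical cross-check of Question 1, the three neuron outputs can be computed directly with NumPy. This is a minimal sketch, not part of the original quiz; the variable names are illustrative, and the bias is folded into the weight vector exactly as in the solution above.

```python
import numpy as np

# Input and weights from the quiz, with the bias folded in:
# x0 = 1 multiplies w0 = b = 1.
x = np.array([1, 1, 2, 1, 2], dtype=float)
w = np.array([1, 0.5, 0.5, 0.5, 0.5])

v = w @ x  # induced local field, w^T x = 4

# (a) McCulloch-Pitts neuron: signum activation
y_mp = np.sign(v)

# (b) Linear neuron: identity activation
y_lin = v

# (c) Sigmoidal neuron with unity constants
y_sig = 1.0 / (1.0 + np.exp(-v))

print(y_mp, y_lin, round(y_sig, 4))  # 1.0 4.0 0.982
```

All three neurons share the same adder; only the activation function applied to v differs.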
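The claim in Question 3 can also be demonstrated empirically: training the same perceptron on the same linearly separable data from two different initial weight vectors can end at two different separating hyperplanes. The toy dataset and initial weights below are invented for illustration only.

```python
import numpy as np

def train_perceptron(X, d, w0, eta=1.0, epochs=20):
    """Perceptron learning rule: w <- w + eta * 0.5 * (d - y) * x,
    which updates only on misclassified examples."""
    w = w0.astype(float).copy()
    for _ in range(epochs):
        for x, t in zip(X, d):
            y = 1.0 if w @ x >= 0 else -1.0
            w += eta * 0.5 * (t - y) * x
    return w

# Toy linearly separable set; first component is the bias input, fixed at 1.
X = np.array([[1, 2, 2], [1, 1, 3], [1, -1, -2], [1, -2, -1]], dtype=float)
d = np.array([1, 1, -1, -1], dtype=float)

w_a = train_perceptron(X, d, w0=np.zeros(3))
w_b = train_perceptron(X, d, w0=np.array([5.0, -3.0, 1.0]))

# Both weight vectors classify the data perfectly ...
print(np.sign(X @ w_a), np.sign(X @ w_b))
# ... yet they define different hyperplanes.
print(w_a, w_b)
```

Both runs converge (the perceptron convergence theorem guarantees this for separable data), but to different weight vectors, confirming that the solution hyperplane is not unique.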