Viscoplastic Models for Polymeric Composite
Mentee
Chris Rogan
Department of Physics
Princeton University
Princeton, NJ 08544
Mentors
Marwan Al-Haik & M.Y. Hussaini
School of Computational Science
Florida State University
Tallahassee, FL 32306
Part 1: Explicit Model
Micromechanical Viscoplastic Model
Explicit Model
Viscoplastic Model Proposed by Gates and Sun
$\varepsilon_t = \varepsilon_e + \varepsilon_p$

$\varepsilon_e = \sigma/E$

The elastic portion of the strain is determined by Hooke's law, where E is Young's modulus.

$\varepsilon_p = A\sigma^n$

The plastic portion of the strain is represented by this nonlinear equation, where A and n are material constants found from experimental data.
Gates, T.S., Sun, C.T., 1991. An elastic/viscoplastic constitutive model for fiber reinforced
thermoplastic composites. AIAA Journal 29 (3), 457–463.
Explicit Model
The total strain rate is composed of elastic and viscoplastic components:

$d\varepsilon_t/dt = d\varepsilon_e/dt + d\varepsilon_{vp}/dt$

$d\varepsilon_e/dt = (d\sigma/dt)/E$

$d\varepsilon_{vp}/dt = d\varepsilon_{vp}'/dt + d\varepsilon_{vp}''/dt$

The elastic portion of the strain rate is the elastic component of the strain differentiated with respect to time. The viscoplastic component of the strain rate is further divided into two terms.
Explicit Model
The first component of the viscoplastic strain rate is the plastic strain differentiated with respect to time:

$d\varepsilon_{vp}'/dt = An\sigma^{n-1}(d\sigma/dt)$

The second component utilizes the concept of 'overstress', $\sigma - \sigma^*$, where $\sigma^*$ is the quasistatic stress and $\sigma$ is the dynamic stress. K and m are material constants found from experimental data:

$d\varepsilon_{vp}''/dt = ((\sigma - \sigma^*)/K)^{1/m}$
Tensile Tests
Figure 1. Tensile tests at different temperatures: stress (MPa) vs. strain (mm/mm) at T = 25, 35, 45, 50, 55, 60, 65, and 75 °C, each curve with a polynomial trendline.
Methodology
Firstly, the tensile test data (above) was used to determine the material constants A, n, and E for each temperature. E was calculated first by fitting the linear portion of the tensile test curve to reflect the elastic component of the equation

$\varepsilon = \sigma/E + A\sigma^n$

as shown in Figure 3. Next, the constants A and n were calculated by plotting $\log(\varepsilon - \sigma/E)$ vs. $\log(\sigma)$ and extracting n and A from the linear fit as the slope and y-intercept, respectively (Figure 2). Figure 4 displays the resulting model's fit to the experimental data.
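A sketch of this two-step fit, assuming the tensile data for one temperature is available as NumPy arrays; the file name and the 0.001 elastic cutoff are illustrative assumptions:

```python
import numpy as np

# Hypothetical tensile-test data for one temperature: stress (MPa), strain (mm/mm).
stress, strain = np.loadtxt("tensile_45C.csv", delimiter=",", unpack=True)

# Step 1: E from a zero-intercept least-squares fit over the linear (elastic) region.
lin = strain < 0.001                       # assumed cutoff for the linear portion
E = np.sum(stress[lin] * strain[lin]) / np.sum(strain[lin]**2)

# Step 2: A and n from the log-log fit of the plastic strain eps_p = eps - sigma/E.
eps_p = strain - stress / E
ok = eps_p > 0                             # logarithm needs positive plastic strain
n, logA = np.polyfit(np.log10(stress[ok]), np.log10(eps_p[ok]), 1)
A = 10**logA                               # Log(eps_p) = n*Log(sigma) + Log(A)
```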
Figure 2. $\log(\varepsilon - \sigma/E)$ vs. $\log(\sigma)$ at 45 °C. The linear fit y = 7.1791x - 21.826 (R² = 0.925) reflects $\log(\varepsilon - \sigma/E) = n\log(\sigma) + \log A$.
Figure 3. Stress vs. strain at 45 °C: linear fit of the elastic region, y = 65922x (R² = 0.9989), used to determine E.

Figure 4. Stress vs. strain at 45 °C: experimental data compared with the modeled curve.
0.006
Table 1. Quasistatic stress σ* (MPa)

T (°C) \ %Strain      30%       40%       50%       60%       70%       80%
25                124.054    160.19    200.82    242.14    285.19    322.12
35                 101.93    140.73     177.8    229.92    276.68    316.46
45                 85.878    135.66    173.91    201.93    224.98    277.78
50                 81.821    117.35    154.97    181.73    196.57    245.75
55                 69.657    110.49     132.5    173.12    189.18    235.28
60                  68.61    103.32    138.24     145.5    187.69    215.91
65                  63.11     94.13    102.65    135.77    150.44    178.86
75                 58.604    76.479    101.95    125.25    136.44    167.97
Load Relaxation Tests
Figure: Stress relaxation at 45 °C. Stress (MPa) vs. time (s, 0-8000) at load levels of 30%, 40%, 50%, 60%, 70%, and 80% of strength.
The data from the load relaxation tests was used to determine the temperature-dependent material constants K and m. For each temperature, the load relaxation test was conducted at six different stress levels, as shown in the plot above.
Curve Fitting of Load Relaxation
Figure 5
Firstly, the data from each strain level at each temperature was isolated. The noise in the data was eliminated to ensure that the stress is monotonically decreasing, as dictated by the physical model (Figure 5). The data was then fit to two different trends: an exponential function and a ninth-order polynomial (Figures 6 and 7).
Figure 6
Figure 7
0 = d/dt = (d/dt)/E + (( - *)/K)1/m =>
Log(-(d/dt)/E) = (1/m)(Log( - *) – (1/m)Log K
From the exponential fits the constants K and m were calculated by plotting Log((d/dt)/E) vs. Log( - *), and calculating the linear fit, as shown in Figures 8 and 9. The
tabulated material constants for each temperature are pictured below.
Table 2. Material constants for each temperature.

Temp (°C)       25          35          45          50          55          60          65          75
A (MPa)   10^-12.479  10^-6.8025  10^-12.156  10^-41.478    10^-19.4  10^-19.257  10^-28.563  10^-9.1163
n             3.6026       1.296      3.2313      15.305      6.3367      6.6161      10.165      2.5115
K           1.44E+07    9.21E+06    1.80E+13    3.50E+11    2.24E+07    4.39E+07    3.61E+06    4.00E+07
m            0.64654     0.74623      1.1965      1.0403     0.58915     0.71173     0.54771      0.8108
E (MPa)        81081       72514       65922       65224       62014       60527       58331       46611
Figure 8
Figure 9
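A sketch of this K-m fit for one temperature and load level, with σ* = 173.91 MPa and E = 65922 MPa taken from Tables 1 and 2 (45 °C, 50% level); the file name is an illustrative assumption:

```python
import numpy as np

# Hypothetical smoothed relaxation data: time (s) and stress (MPa).
t, sigma = np.loadtxt("relax_45C_50pct.csv", delimiter=",", unpack=True)
sigma_star, E = 173.91, 65922.0            # from Table 1 and Table 2 (45 C)

dsigma_dt = np.gradient(sigma, t)          # stress rate of the fitted curve

# Keep points where both logarithms are defined.
ok = (dsigma_dt < 0) & (sigma > sigma_star)
x = np.log10(sigma[ok] - sigma_star)
y = np.log10(-dsigma_dt[ok] / E)

# Log(-(dsigma/dt)/E) = (1/m)*Log(sigma - sigma*) - (1/m)*Log(K)
slope, intercept = np.polyfit(x, y, 1)
m = 1.0 / slope
K = 10**(-intercept * m)                   # intercept = -(1/m)*Log(K)
```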
$\varepsilon = \sigma^*/E + A(\sigma^*)^n$

For each temperature and strain level, the quasistatic stress was found by solving the above nonlinear equation using Newton's method. The quasistatic stress values are displayed in Table 1.
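A minimal sketch of that root-finding step; the initial guess, tolerance, and iteration cap are assumptions:

```python
def quasistatic_stress(eps, E, A, n, s0=100.0, tol=1e-8, max_iter=100):
    """Solve s/E + A*s**n - eps = 0 for s = sigma* by Newton's method."""
    s = s0
    for _ in range(max_iter):
        f = s / E + A * s**n - eps
        fprime = 1.0 / E + A * n * s**(n - 1)  # df/ds
        step = f / fprime
        s -= step
        if abs(step) < tol:
            return s
    raise RuntimeError("Newton iteration did not converge")
```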
Simulation of Explicit Model
-(d/dt)/E = (( -
1/m
*)/K)
The total strain rate is zero
during the load relaxation test,
leading to the differential equation
above. The explicit model solution
was generated by solving this
differential equation using the
fourth order Runge-Kutta method.
Different step-sizes were
experimented with, and an
example solution is shown in
Figure 10.
Figure 10
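A minimal sketch of that integration; the step size h and the clamping of the overstress at zero are assumptions:

```python
def relax_rk4(sigma0, sigma_star, E, K, m, h=1.0, t_end=8000.0):
    """Integrate dsigma/dt = -E*((sigma - sigma*)/K)**(1/m) with classic RK4."""
    def f(s):
        over = max(s - sigma_star, 0.0)        # no further relaxation below sigma*
        return -E * (over / K)**(1.0 / m)
    ts, ss = [0.0], [sigma0]
    while ts[-1] < t_end:
        s = ss[-1]
        k1 = f(s)
        k2 = f(s + 0.5 * h * k1)
        k3 = f(s + 0.5 * h * k2)
        k4 = f(s + h * k3)
        ss.append(s + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4))
        ts.append(ts[-1] + h)
    return ts, ss
```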
Part 2: Implicit Model
Generalizing an Implicit Stress Function Using Neural Networks
Neural Networks (NN)
The Implicit Model consists of
creating an implicit, generalized stress function,
dependent on vectors of temperature, strain
level and time data. A generalized neural
network and one specific to this model are
shown in Figure 11. A neural network consists
of nodes connected by links. Each node is a
processing element which takes weighted
inputs from other nodes, sums them, and then
maps this sum with an activation function, the
result of which becomes the neuron’s output.
This output is then propagated along all the
links exiting the neuron to subsequent neurons.
Each link has a weight value to which traveling
outputs are multiplied.
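A minimal sketch of this forward pass, assuming tanh hidden activations and a linear output neuron (the deck does not name its activation function):

```python
import numpy as np

def forward(x, weights, biases):
    """Propagate an input vector (temperature, strain level, time)
    through fully connected layers to a single stress output."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(W @ x + b)                 # weighted sum, then activation
    return weights[-1] @ x + biases[-1]        # linear output layer

# Example: a [10 31 1] network on 3 inputs (the architecture used later).
rng = np.random.default_rng(0)
sizes = [3, 10, 31, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]
y = forward(np.array([45.0, 0.5, 100.0]), weights, biases)
```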
Procedures for NN
Based on the three phases of neural network functionality (training, validation, and testing), the data sets from the load relaxation tests were split into three parts. The data sets for three temperatures were set aside for testing. The other five temperatures were used for training, excluding five specific combinations of temperature and strain level that were used for validation.
Pre-processing
Before training, the data vectors were put into random order and normalized.
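A sketch of this pre-processing, assuming min-max scaling to [-1, 1], a common choice for tanh-activated networks; the deck's exact normalization equation is not shown here:

```python
import numpy as np

# Hypothetical training matrix: rows are (temperature, strain level, time).
X = np.array([[25.0, 0.30, 0.0],
              [45.0, 0.50, 1000.0],
              [60.0, 0.80, 5000.0]])

# Column-wise min-max scaling to [-1, 1] (assumed form of the normalization).
lo, hi = X.min(axis=0), X.max(axis=0)
X_scaled = 2.0 * (X - lo) / (hi - lo) - 1.0

# Randomize the presentation order before training.
rng = np.random.default_rng(0)
X_scaled = X_scaled[rng.permutation(len(X_scaled))]
```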
Training NN
Training a feed-forward backpropagating neural network consists of giving the network the vectorized training data set each epoch. Each individual vector's inputs (temperature, strain level, time) are propagated through the network, and the output is compared with the vector's experimental output in the error function. Training the network consists of minimizing this error function in weight space, adjusting the network's weights using unconstrained local optimization methods. An example of a training session's graph is shown in Figure 12, in this case using a gradient descent method with variable learning rate and momentum terms to minimize the error function.
Figure 12
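A minimal sketch of the two ingredients this describes, assuming a mean-squared error and the standard momentum update; the learning rate and momentum values are illustrative, and the variable-learning-rate heuristic is omitted:

```python
import numpy as np

def mse(y_pred, y_true):
    """Error function: mean squared difference from the experimental outputs."""
    return np.mean((y_pred - y_true)**2)

def momentum_step(W, grad, velocity, lr=0.01, mu=0.9):
    """One gradient-descent-with-momentum update of a weight array;
    grad is dE/dW from backpropagation (not derived here)."""
    velocity = mu * velocity - lr * grad
    return W + velocity, velocity
```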
2 Hidden Layers NN
The architecture of the neural network is difficult to choose. Research by Hornik et al. (1989) suggests that a network with two hidden layers can approximate any function, although there is no indication as to how many neurons to put in each of the hidden layers. Too many neurons cause 'overfitting': the network essentially memorizes the training data and becomes a look-up table, causing it to perform poorly on the validation and testing data it has not seen before. Too few neurons lead to poor performance on all of the data.
Hornik, K., Stinchcombe, M., White, H., 1989. Multilayer feedforward networks are universal approximators. Neural Networks 2 (5), 359–366.
Error Surface
Figure 13 shows the resulting mean
squared error performance values for
neural networks with different
numbers of neurons in each hidden
layer after 1000 epochs of training.
Figure 13
Figure 14
Figure 15
Figures 14 and 15 display similar data, except that only random data points are used in the neuron space and a cubic interpolation is employed in order to distinguish trends. As Figure 15 shows, there appears to be a minimum in the area of about 10 neurons in the first hidden layer and 30 in the second. A minimum did in fact occur with a [10 31 1] network.
Genetic Algorithm (GA) Pruning
A genetic algorithm was used to try to
determine an optimal network
architecture. Based on the results of
earlier exhaustive methods, a domain
from 1 to 15 and 1 to 35 was used for
the number of neurons in the first and
second hidden layers respectively.
A population of random networks in this
domain was generated, each network
encoded as a binary chromosome. The
probability of a particular network’s
survival is a linear function of its rank in
the population.
Stochastic remainder selection without
replacement was used in population
selection. For crossovers, a two-point
crossover of chromosomes’ reduced
surrogates was used as shown in Figure
16.
Figure 16
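A minimal sketch of this crossover, with each network architecture encoded as a list of bits; restricting crossover points to positions where the parents differ is one way to realize the reduced-surrogate idea:

```python
import random

def two_point_crossover(p1, p2):
    """Two-point crossover of binary chromosomes, restricted to the
    positions where the parents differ (reduced surrogates)."""
    diff = [i for i in range(len(p1)) if p1[i] != p2[i]]
    if len(diff) < 2:
        return p1[:], p2[:]                # parents (nearly) identical
    a, b = sorted(random.sample(diff, 2))
    c1 = p1[:a] + p2[a:b] + p1[b:]
    c2 = p2[:a] + p1[a:b] + p2[b:]
    return c1, c2
```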
GA-Pruning
This method allows pruning not only of neurons but of links, as each layer of neurons is not necessarily fully connected to the next, and connections between non-adjacent layers are permitted. The genetic algorithm was run with varying parameter values and two different objective functions: one seeking to minimize only the training performance error of the networks, and another minimizing both the performance error and the number of neurons and links. Figure 17 displays an optimal network when only the performance error is considered. Figure 18 shows an optimal network when the number of neurons and links was taken into account.
Figure 17
Figure 18
GA-Performance
Figure 19 shows the results of an exhaustive architecture search in a smaller domain than before; the first arrow points to a minimum that coincides with the network architecture displayed in Figure 17.
Figure 19
Results of NN Implicit Model
Figure 20
Figure 21
A network architecture of [10 31 1] was used for the training and testing of the
neural networks. Several different minimization algorithms were tested and compared
for the training of the network and are listed in Figures 20 and 21. These two figures
display the training performance error and gradient over 1000 epochs.
Training, Validation & Testing of the Final NN Structure
Figure 22 shows the testing, validation, and training performance for the gradient descent algorithm, while Figure 23 shows the plot of a linear least-squares regression between the experimental data and network outputs for the Polak-Ribière conjugate gradient method.
Figure 22
Figure 23
Comparing Explicit and Implicit Models
Figure 24
Figure 24 displays the final performance of both models compared to the experimental data. The quasi-Newton BFGS algorithm was used for the Implicit model, as it performed the best. The Implicit model ultimately outperformed the Explicit model and required only the load relaxation data to generate its solution.
Conclusion
The Implicit model (NN + GA) ultimately outperformed the Explicit model (Gates-Sun), and required only the load relaxation data to generate the solution.