Posts for the month of April 2011

Open TURNS module "Integral Compound Poisson Distribution"

Thanks to Anne Dutfoy, a new Open TURNS module is now available. It implements the compound Poisson distribution: the distribution of the sum of a Poisson-distributed number of independent, identically distributed random variables.
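
For intuition, here is a minimal pure-Python sketch of this definition, for illustration only (use the module itself for real studies); the Poisson draw uses Knuth's multiplication algorithm:

import math, random

def poisson_draw(lam):
    # Knuth's algorithm: multiply uniforms until the product drops below exp(-lam)
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

def compound_poisson_draw(lam, draw_x):
    # S = X_1 + ... + X_N with N ~ Poisson(lam) and X_i i.i.d. draws of draw_x()
    return sum(draw_x() for _ in range(poisson_draw(lam)))

# example: integer-valued atoms uniform on {1, ..., 6}, Poisson intensity 3
sample = [compound_poisson_draw(3.0, lambda: random.randint(1, 6)) for _ in range(1000)]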

I invite you to read the documentation provided with the sources of the module to understand the theory and how to use it.

The sources can be browsed here: http://trac.openturns.org/browser/openturns-modules/integralcompoundpoisson/trunk

The SVN repository is here: https://svn.openturns.org/openturns-modules/integralcompoundpoisson/trunk

Chaos improvements - Postprocessing - "Pretty printing" of the results

This work provides a readable presentation of the main results of a polynomial chaos computation: display of the statistical moments of the response, of the expansion coefficients indexed by the polynomial multi-indices, and of the Sobol' indices.

The script printchaos.py (see attachments) produces a clear display of the various chaos results. It contains the following functions:

  • print_moments: takes a FunctionalChaosResult object as argument and prints the values of the moments.
import printchaos as pc
...
polynomialChaosAlgorithm.run()
polynomialChaosResult = polynomialChaosAlgorithm.getResult()
pc.print_moments(polynomialChaosResult)
  • print_coefficients: takes a FunctionalChaosResult object as argument. Optionally, the ComposedDistribution used for the chaos computation can be passed as well. If it is, the function prints the polynomial family associated with each variable:
    • He: Hermite polynomials
    • Le: Legendre polynomials
    • Ja: Jacobi polynomials
    • La: Laguerre polynomials

The function then prints, for every term of the chaos expansion basis, the degrees of the one-dimensional polynomials of each variable together with the value of the associated coefficient.

import printchaos as pc
...
polynomialChaosAlgorithm.run()
polynomialChaosResult = polynomialChaosAlgorithm.getResult()
pc.print_coefficients(polynomialChaosResult, PM = ProbaModel)
  • print_plot_sensitivities: takes a FunctionalChaosResult object as argument. Optionally, the ComposedDistribution used for the chaos computation can be passed as well, provided the random variables were named (setName method). The function prints the list of all Sobol' indices grouped by order, stating which variables they are associated with. Finally, it draws a pie chart of the normalized total Sobol' indices.
import printchaos as pc
...
polynomialChaosAlgorithm.run()
polynomialChaosResult = polynomialChaosAlgorithm.getResult()
pc.print_plot_sensitivities(polynomialChaosResult, PM = ProbaModel)

Sobol.png (Jan. 2011): /extra/oldshare/AmeliorationChaos/Sobol.png

  • print_plot_regression: takes as arguments the output sample (NumericalSample), obtained either by hand (OT 0.13.2) or through the getProjectionStrategy().getOutputSample() method (later OT versions), and the FunctionalChaosResult object associated with the design of experiments. The function draws a scatter plot of the design-of-experiments results, with the values of the true model on the x-axis and the values of the metamodel on the y-axis. The determination coefficients R2 and Q2 are printed in the terminal.
import printchaos as pc
...
polynomialChaosAlgorithm.run()
polynomialChaosResult = polynomialChaosAlgorithm.getResult()
out = polynomialChaosAlgorithm.getProjectionStrategy().getOutputSample()
pc.print_plot_regression(out, polynomialChaosResult)

Model_vs_MetaModel.png (Jan. 2011): /extra/oldshare/AmeliorationChaos/Model_vs_MetaModel.png
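
For reference, the usual definitions behind these coefficients (printchaos.py may differ in details): R2 = 1 - sum_i (y_i - yhat_i)^2 / sum_i (y_i - ybar)^2, where y_i are the true model values, yhat_i the metamodel values and ybar the sample mean; Q2 is the analogous coefficient computed by leave-one-out cross-validation, which penalizes overfitting of the design of experiments.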

Chaos improvements - Postprocessing - Obtaining the distribution of the response of a polynomial chaos expansion

It would be useful to obtain a Distribution object for the chaos response, either from a FunctionalChaosResult or from a FunctionalChaosRandomVector. The goal is to be able to use the output distribution as an input of another chaos expansion, or of any other computation. For now, this is only possible through kernel smoothing (KernelSmoothing).

The script kernelchaos.py (see attachments) builds the distribution of the output variable expanded on the chaos. It contains the function getOutputDistribution, whose argument is a FunctionalChaosResult object. The function returns a list of distributions:

  • the distribution smoothed with a Gaussian kernel,
  • the best usual distribution according to the Kolmogorov test, if the test passes.
import kernelchaos as kc
...
polynomialChaosAlgorithm.run()
polynomialChaosResult = polynomialChaosAlgorithm.getResult()
outputDistributions = kc.getOutputDistribution(polynomialChaosResult)
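
The underlying idea is simple; here is a minimal sketch of it (untested, assuming result is the FunctionalChaosResult and inputDistribution is the ComposedDistribution used in the study, with the 0.13.2-era API used elsewhere on this page):

from openturns import *

# evaluate the chaos metamodel on a large sample of the input distribution
metaModel = result.getMetaModel()
outputSample = metaModel(inputDistribution.getNumericalSample(10000))
# smooth the output sample with a kernel (Gaussian by default)
kernel = KernelSmoothing()
smoothedDistribution = Distribution(kernel.buildImplementation(outputSample))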

In addition, getOutputDistribution creates 3 graphs:

  • Output_distribution_kernel_smoothing.png: a sample of the output variable shown as a histogram, together with the kernel smoothing of the sample

Output_distribution_kernel_smoothing.png (Jan. 2011): /extra/oldshare/AmeliorationChaos/Output_distribution_kernel_smoothing.png

  • Output_distribution_Kolmogorov_test.png: a sample of the output variable shown as a histogram, together with the usual distribution that best passes the Kolmogorov test, when the test passes for at least one factory.

Output_distribution_Kolmogorov_test.png (Jan. 2011): /extra/oldshare/AmeliorationChaos/Output_distribution_Kolmogorov_test.png

  • Output_distribution_KK.png: a comparison between the kernel-smoothed distribution and the usual distribution resulting from the Kolmogorov test, when the latter passes for at least one factory.

Output_distribution_KK.png (Jan. 2011): /extra/oldshare/AmeliorationChaos/Output_distribution_KK.png

Presentations from user day #3

Hello,

Here are the slides of the presentations given during the third user day:

  • main presentation
  • presentation on Bayesian networks
  • presentation of the mixmod module

The PDF file of the Bayesian networks presentation was split into 6 parts. Run cat Expose_LIP6_ot2010_BN.pdf.* > Expose_LIP6_ot2010_BN.pdf under Linux, or use hjsplit under Windows, to rebuild the file.

Happy reading!

Anne

Edit by François: here is the link to a light version of the presentation on Bayesian networks:

openturns-agrum: a new module for OpenTURNS, that allows to create a distribution from a Bayesian Network using the aGrUM library

Hi, in collaboration with Pierre-Henri WUILLEMIN (LIP6, aGrUM team member) and Anne DUTFOY (EDF R&D, OpenTURNS team member), I have written a new OpenTURNS module that allows one to create 1D OpenTURNS distributions as conditional marginal distributions of a Bayesian network built with aGrUM (http://agrum.lip6.fr/doku.php). It is very useful when one has conditional information about an uncertain quantity, but no statistical data directly related to this quantity.

This module is compatible with OpenTURNS 0.13.2 and aGrUM revision 382.

Here is an example of a Python script using this module:

from openturns import *
from pyAgrum import *
from otagrum import *

# 1. Create an empty BN
myBN = BayesNet()

# 2. Create the light variable
light = LabelizedVar("Light", "Light", 0)
light.addLabel("Dim")
light.addLabel("Bright")
lightSize = len(light)

# 3. Create the moisture variable
moisture = LabelizedVar("Moisture", "Moisture", 0)
moisture.addLabel("Dry")
moisture.addLabel("Wet")
moistureSize = len(moisture)

# 4. Create the height variable
height = DiscretizedVar("Height", "Height")
data = NumericalPoint(range(10, 100, 10))
heightWhenDimAndDry = Uniform(0.0, 40.0)
heightWhenDimAndWet = Uniform(10.0, 60.0)
heightWhenBrightAndDry = Uniform(0.0, 20.0)
heightWhenBrightAndWet = Uniform(40.0, 100.0)
data = BayesNetAgrum.AdaptGrid(DistributionCollection(heightWhenDimAndDry, heightWhenDimAndWet, heightWhenBrightAndDry, heightWhenBrightAndWet), data)
heightSize = data.getSize() - 1
for i in list(data):
    height.addTick(i)

# 5. Add the variables to the net
indexLight = myBN.add(light)
indexMoisture = myBN.add(moisture)
indexHeight = myBN.add(height)

# 6. Create the arcs
myBN.insertArc(indexLight, indexMoisture)
myBN.insertArc(indexLight, indexHeight)
myBN.insertArc(indexMoisture, indexHeight)

# 7. Create the conditional probability tables
myBN.cpt(indexLight)[:] = [0.5, 0.5]
myBN.cpt(indexMoisture)[{'Light': 'Dim'}] = [0.2, 0.8]
myBN.cpt(indexMoisture)[{'Light': 'Bright'}] = [0.6, 0.4]
myBN.cpt(indexHeight)[{'Light': 'Dim', 'Moisture': 'Dry'}] = BayesNetAgrum().Discretize(heightWhenDimAndDry, data)
myBN.cpt(indexHeight)[{'Light': 'Bright', 'Moisture': 'Dry'}] = BayesNetAgrum().Discretize(heightWhenBrightAndDry, data)
myBN.cpt(indexHeight)[{'Light': 'Dim', 'Moisture': 'Wet'}] = BayesNetAgrum().Discretize(heightWhenDimAndWet, data)
myBN.cpt(indexHeight)[{'Light': 'Bright', 'Moisture': 'Wet'}] = BayesNetAgrum().Discretize(heightWhenBrightAndWet, data)

# 8. Create a BayesNetAgrum object
otbn = BayesNetAgrum(myBN)

# 9. Export to a BIF file
otbn.exportToBIFFile("test.bif")

# 10. Perform inference
for evidenceValue in "Bright", "Dim":
   # First, set the evidence
   otbn.setEvidence("Light", evidenceValue)
   # Second, get a marginal distribution of interest
   heightDistribution = otbn.getMarginal("Height")
   heightDistribution.setName("Height_Light_" + evidenceValue)
   # Use the distribution the way you want
   print "heightDistribution=", heightDistribution
   heightDistribution.drawPDF(-10.0, 110.0, 512).draw("PDF_" + heightDistribution.getName())
   heightDistribution.drawCDF(-10.0, 110.0, 512).draw("CDF_" + heightDistribution.getName())

New version of Eficas for Open TURNS 0.13.2

Here are tarballs of Eficas for use with Open TURNS 0.13.2 or later:

This tool will help you write Python scripts to run uncertainty studies with Open TURNS 0.13.2 or later.

You can install it this way:

  • decompress your tarball:

either tar -xzvf EficasV2.0.1.1_OpenTURNS-0.13.2-source.tar.gz

or tar -xjvf EficasV2.0.1.1_OpenTURNS-0.13.2-source.tar.bz2

  • change to the newly created directory EficasV2.0.1.1_OpenTURNS-0.13.2

cd EficasV2.0.1.1_OpenTURNS-0.13.2

  • configure the installation (we assume that you plan to install Eficas in <EficasDIR>, to use Python 2.4, and that OpenTURNS is installed in <OpenTURNSDIR>):

cmake . -DCMAKE_INSTALL_PREFIX=<EficasDIR> -DWITH_OPENTURNS=ON -DOPENTURNS_DIR=<OpenTURNSDIR> -DPYTHON_EXECUTABLE=/usr/bin/python2.4

  • build and install:

make
make install

  • run Eficas to make probabilistic studies:

python2.4 <EficasDIR>/Openturns_Study/qtEficas_openturns_study.py

  • run Eficas to make wrapper description files:

python2.4 <EficasDIR>/Openturns_Wrapper/qtEficas_openturns_wrapper.py

See the README file for more installation instructions (in particular if you don't have cmake).

Ivan DUTKA-MALEN

Building a metamodel with Polynomial Chaos Expansion using a database

Hello all,

Recently, a colleague of mine asked me whether it was possible to build a Polynomial Chaos Expansion metamodel in OpenTURNS from only a database of inputs versus outputs. After scratching the top of my head for a bit, I reached Regis, and after a short chat with him I obtained the following script. I have not tested it with this exact decoration (file names, etc.), so it may not work straightforwardly, but the core of the problem is treated here.

Enjoy!

# -*- coding: iso-8859-15 -*-
from openturns import *
from math import *
 
#This script builds a metamodel by Polynomial Chaos Expansion, using a database as input.
#Y = f(X)
#For the moment, it is mandatory to go through an intermediate variable Z
#The hypothesis is that each variable varies in an interval [a,b]
#Z = phi(X), such that each Zj is Uniform(0.,1.). phi is affine
#Y = g(Z)
#We do the PCE on g and finally f = metamodel o phi
 
#The input file is InputFile.txt
 
 
 
#Euclidean distance between two points, used to look up an input point in the database Z
def norm2(X,Y):
  norm = 0.
  for i in range(Y.getDimension()):
    norm+=(X[i]-Y[i])**2
  return sqrt(norm)
 
 
#this function takes an input point from the database Z and returns the associated output
class statModel(OpenTURNSPythonFunction):
    def __init__(self):
        OpenTURNSPythonFunction.__init__(self, globals()["dim"], 1)
    def f(self,X):
        # find the row of Z that matches X (up to a small tolerance)
        for i in range(globals()["siz"]):
            if norm2(X,globals()["Z"][i])<1.e-6:
                v=i
                break
        try:
            # outVar is the training output sample read below
            Y = globals()["outVar"][v][0]
        except:
            Y = 0.
        return [Y]
 
 
 
#this function is the affine function that transforms X into Z
class affineTransform(OpenTURNSPythonFunction):
    def __init__(self):
        OpenTURNSPythonFunction.__init__(self, globals()["dim"], globals()["dim"])
    def f(self,X):
        interval = globals()["interv"]
        Z = []
        for j in range(len(X)):
            # map X[j] affinely from [a_j, b_j] onto [0, 1]
            Z.append((X[j] - interval[j][0])/(interval[j][1]-interval[j][0]))
        return Z
 
 
 
 
# Get training input from InputFile.txt
inVar = NumericalSample.ImportFromCSVFile("InputFile.txt")
inVar.setName("Input Variable")
 
 
#get the dimension and size of the sample
dim = inVar.getDimension()
siz = inVar.getSize()
 
# Define the intervals where the input variables are defined
#list of min values for each variable, to be specified by user
a1 = 0.
a2 = 0.
a3 = 0.
a4 = 0.
Min = [a1,a2,a3,a4]
#list of max values for each variable, to be specified by user
b1 = 1.
b2 = 2.
b3 = 3.
b4 = 4.
Max = [b1,b2,b3,b4]
 
interv = []
for i in range(len(Min)):
  interv.append([Min[i],Max[i]])
 
# Create Z the transformed input vector
Z = NumericalSample(siz,dim)
for i in range(siz):
  for j in range(dim):
    Z[i][j] = (inVar[i][j] - interv[j][0])/(interv[j][1]-interv[j][0])
 
# Get training output from OutputFile.txt
outVar = NumericalSample.ImportFromCSVFile("OutputFile.txt")
outVar.setName("Output Variable")
 
# Create Polynomial Chaos Expansion of outVar = g(Z)
polyColl = PolynomialFamilyCollection(dim)
 
legendreFamily = LegendreFactory()
for i in range(dim):
  polyColl[i] = OrthogonalUniVariatePolynomialFamily(legendreFamily)
 
multivariateBasis = OrthogonalProductPolynomialFactory(polyColl,EnumerateFunction(dim))
maximumConsideredTerms = 500
mostSignificant = 10
significanceFactor = 1.e-6
truncatureBasisStrategy = CleaningStrategy(OrthogonalBasis(multivariateBasis),maximumConsideredTerms, mostSignificant, significanceFactor, True)
evaluationCoeffStrategy = LeastSquaresStrategy(FixedExperiment(Z))
model = NumericalMathFunction(statModel())
# Z lives in [0,1]^dim, so the chaos measure is a product of Uniform(0,1) marginals
distColl = DistributionCollection(dim)
for i in range(dim):
  distColl[i] = Distribution(Uniform(0.0, 1.0))
dist = ComposedDistribution(distColl, Copula(IndependentCopula(dim)))
polynomialChaosAlgorithm = FunctionalChaosAlgorithm(model, Distribution(dist), AdaptiveStrategy(truncatureBasisStrategy), ProjectionStrategy(evaluationCoeffStrategy))
 
polynomialChaosAlgorithm.run()
polynomialChaosResult = polynomialChaosAlgorithm.getResult()
 
coefficients = polynomialChaosResult.getCoefficients()
metaModel = polynomialChaosResult.getMetaModel()
 
# Get Response Surface : composition between Y = g(Z) and Z = aff(X) => our model is (g o aff)
proj = NumericalMathFunction(affineTransform())
modelfull = NumericalMathFunction(metaModel,proj)
 
# Get estimation input from InputCheck.txt
inpCheck = NumericalSample.ImportFromCSVFile("InputCheck.txt")
inpCheck.setName("Input variable for checking")
 
 
# Get estimation output
outCheck = modelfull(inpCheck)
outCheck.exportToCSVFile("OutputCheck.csv"," ")
 
fOutputCheck = open("OutputCheck.txt", 'w')
 
for i in range(outCheck.getSize()):
    fOutputCheck.write("%f\n" % (outCheck[i])[0])
 
fOutputCheck.close()

ot-mixmod: a new OT-module to create a Mixture Factory composed of Normal Distributions

Hi, I have written a small OpenTURNS module to create a MixtureFactory composed of Normal distributions from a NumericalSample. The sources of the module are available in the openturns-modules repository. You can also download version 0.1.0 here.

This module uses MixMod with its default parameters. It requires version 0.13.2 of OpenTURNS and the MixMod executable.

Here is an example of a Python script using this module:

from openturns import *
from otmixmod import *

draw = True
dim = 2
size = 2000
coll = DistributionCollection(0)
R = CorrelationMatrix(dim)
for i in range(dim-1):
    R[i,i+1] = 0.1
coll.add(Distribution(Normal(NumericalPoint(dim, -2.0), NumericalPoint(dim, 1.2), R)))
for i in range(dim-1):
    R[i,i+1] = -0.2
coll.add(Distribution(Normal(NumericalPoint(dim, 2.0), NumericalPoint(dim, 0.8), R)))
mean = NumericalPoint(dim, 5.0)
mean[0] = -5.0
coll.add(Distribution(Normal(mean, NumericalPoint(dim, 1.4), CorrelationMatrix(dim))))
mixture = Mixture(coll)
sample = mixture.getNumericalSample(size)
factory = MixtureFactory(3)
estimatedDistribution = factory.buildImplementation(sample)
print "Estimated distribution=", estimatedDistribution
if draw:
    g = estimatedDistribution.drawPDF()
    c = Cloud(sample)
    c.setColor("red")
    c.setPointStyle("bullet")
    ctmp = g.getDrawable(0)
    g.setDrawable(Drawable(c), 0)
    g.addDrawable(ctmp)
    g.draw("testMixMod")

Automatic quantification of a marginal based on a 1D sample

Hello all,

I propose the following Python script. It makes it simple to quantify a 1D variable from a sample. First, you create the family of parametric distributions against which you want to test your sample. If the best distribution does not fulfill the quality criterion, you fall back to a non-parametric quantification by kernel smoothing. The outputs of the function are the distribution and the result of the test.

# -*- coding: iso-8859-15 -*-
from openturns import *
 
def quantification(sample1D):
  # Creation of a factory collection
  factoryCollection = FactoryCollection()
  factoryCollection.add(DistributionFactory(UniformFactory()))
  factoryCollection.add(DistributionFactory(NormalFactory()))
  factoryCollection.add(DistributionFactory(BetaFactory()))
  #factoryCollection.add(DistributionFactory(WeibullFactory()))
  #factoryCollection.add(DistributionFactory(LogNormalFactory()))
  #factoryCollection.add(DistributionFactory(GammaFactory()))
  #factoryCollection.add(DistributionFactory(ExponentialFactory()))
  #factoryCollection.add(DistributionFactory(GumbelFactory()))
  #factoryCollection.add(DistributionFactory(TruncatedNormalFactory()))
  #factoryCollection.add(DistributionFactory(StudentFactory()))
 
  try:
    BestDistributionMarg = Distribution(FittingTest().BestModelKolmogorov(sample1D, factoryCollection))
    print BestDistributionMarg
  except:
    print "impossible to do the parametric quantification"
  ## Compare the empirical distribution of the sample with the fitted parametric distribution:
  ## if the test passes, keep the parametric distribution; if not, switch to the non-parametric quantification
  try:
    TestMarg = FittingTest_Kolmogorov(sample1D,BestDistributionMarg).getBinaryQualityMeasure()
    print "Parametric quantification"
  except:
    TestMarg = False
  ## test failed
  if TestMarg == False :
    kernel=KernelSmoothing()
    BestDistributionMarg = Distribution(kernel.buildImplementation(sample1D))
    print "Quantification by kernel smoothing"
  ## optional visual tests
  QQPlot = VisualTest().DrawQQplot(sample1D,BestDistributionMarg,1000)
  QQPlot.draw("QQPlotMargin", 640, 480)
  QQPlot.getBitmap()
 
  return BestDistributionMarg, TestMarg

Announcement: OPUS workshop on November 25, 2009 in Suresnes

Within the ANR OPUS project, a workshop entitled "Spectral methods and polynomial chaos" will take place on November 25, 2009 at EADS in Suresnes. Both theoretical aspects (in the morning) and software implementation issues (in the afternoon) will be covered during the day.

From 9:30 am, Olivier Lemaitre (LIMSI) and Fabio Nobile (Politecnico di Milano) will give two presentations on the theory. In the afternoon, Géraud Blatman and Thierry Crestaux, two PhD students, will present their thesis work related to the topic. Finally, on software implementation issues, Michael Bodin (Scilab) and Jean-Marc Martinez (CEA) will present their Scilab toolbox NISP; Marc Berveiller (EDF) and Régis Lebrun (EADS) will then present the developments on functional chaos in OpenTURNS.

A map with directions to EADS Suresnes is available. For security reasons inherent to the workshop host, pre-registration with Jayant Sen Gupta (jayant.sengupta_at_eads_dot_net) is mandatory, stating your last name, first name and company. Moreover, on the day of the workshop, an identity document (identity card or passport) will be required.

Eficas is a Python/QT4 GUI to ease writing uncertainty studies and wrapper description files

Here is a tarball of Eficas

This tool will help you write Python scripts to run uncertainty studies with Open TURNS 0.13.2 or later. See the README file for installation instructions.

Ivan

How to use LHSExperiment?

I want to use the LHSExperiment method from OpenTURNS version 0.13.1.

I call it as LHSExperiment(distribution, size), but I get an error (wrong number of arguments) and I don't understand my mistake.
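
For what it is worth, the two-argument form above matches later OpenTURNS versions; here is a hedged sketch of it (untested on 0.13.1, whose bindings may expose a different signature):

from openturns import *

# any 2D distribution, wrapped as a generic Distribution
distribution = Distribution(Normal(NumericalPoint(2, 0.0), NumericalPoint(2, 1.0), CorrelationMatrix(2)))
# a Latin Hypercube design of 100 points drawn from it
experiment = LHSExperiment(distribution, 100)
sample = experiment.generate()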

How to get back a model from a saved study?

I want to retrieve a model that was saved in a Study object and I do not know how to do it. The use case guide indicates how to do it for a NumericalPoint, but not for a model, and I cannot manage to adapt it to a NumericalMathFunction. Thanks for your help.
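
For reference, here is a hedged sketch of the usual retrieval pattern, as documented for later OpenTURNS versions (untested here; the file name "myStudy.xml" and the label "myModel" are assumptions):

from openturns import *

myStudy = Study()
myStudy.setStorageManager(XMLStorageManager("myStudy.xml"))
myStudy.load()
# create an empty shell, then let the study fill it from the stored object
model = NumericalMathFunction()
myStudy.fillObject("myModel", model)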

Fabien

Présentation "Chaos Polynômial" de la deuxième journée utilisateur

A short introduction to polynomial chaos and to its implementation in Open TURNS

Slides of user day #2 at EADS Innovation Works

Convergence of the statistical moments of the Ishigami function as a function of the design-of-experiments size with polynomial chaos

Tested with OpenTURNS version 0.12.3

# Load the libraries
from openturns import *
from math import *
from openturns_viewer import ViewImage
 
# Initialize the seed for the Monte Carlo simulations
RandomGenerator().SetSeed(0)
 
# Define the factorial function
def factorielle(x):
	if x==0:
		return 1
	else:
		return x*factorielle(x-1)
 
# Define the constants of the Ishigami function
A = 7.0
B = 0.1 
 
# Physical model: the Ishigami function
class myfunction(OpenTURNSPythonFunction):
	def __init__(self):
		OpenTURNSPythonFunction.__init__(self, 3, 1)
	def f(self, X):
		H = sin(X[0]) + A*(sin(X[1])**2) + B*(X[2]**4)*sin(X[0])
		Y=[H]
		return Y
 
function=myfunction()
g = NumericalMathFunction(function)
 
# Number of input random variables
dim = 3
 
# Probabilistic model
mean=NumericalPoint(3)
mean[0]=0.0
mean[1]=0.0
mean[2]=0.0
 
L1=Uniform(-pi,pi)
L2=Uniform(-pi,pi)
L3=Uniform(-pi,pi)
 
Collection=DistributionCollection(dim)
Collection[0]=Distribution(L1)
Collection[1]=Distribution(L2)
Collection[2]=Distribution(L3)
 
Copule=IndependentCopula(dim)
Modelproba=ComposedDistribution(Collection,Copula(Copule))
 
# Define the one-dimensional polynomials of the polynomial chaos
legendre = OrthogonalUniVariatePolynomialFamily(LegendreFactory())
polyColl = PolynomialFamilyCollection(dim)
polyColl[0]=legendre
polyColl[1]=legendre
polyColl[2]=legendre
 
# Display one polynomial (multi-index) of the basis
enumFunc = EnumerateFunction(dim)
MultiIndice_1 = enumFunc(0)
print "Basis polynomial 1: ", MultiIndice_1
 
productPoly = OrthogonalProductPolynomialFactory(polyColl,enumFunc)
basis = OrthogonalBasis(productPoly)
 
mesure = basis.getMeasure()
################################################################
# Compute the number of terms of the polynomial chaos expansion
# Fix the degree of the polynomial chaos
p=5
P=factorielle(dim+p)/(factorielle(dim)*factorielle(p))
print "Size of the polynomial chaos for p=%d: " % p, P
 
 
################################################################	
nb_calculs=range(0,3)
 
moyenne=[0]
variance=[0]
taille_plan=[0]
taille_base_finale=[0]
for i in nb_calculs:
	sampleSize = P+10*i
	taille_plan.append(sampleSize)
	print "Size of the design of experiments: ", sampleSize
	regressionStrategy = RegressionStrategy(MonteCarloExperiment(sampleSize))
	#regressionStrategy = RegressionStrategy(LHSExperiment(sampleSize))

	#Fixed Strategy
	basisSize = P
	#print "Basis size: ", basisSize
	fixedStrategy = FixedStrategy(basis,basisSize)
	#polynomialChaosAlgorithm = FunctionalChaosAlgorithm(g,Distribution(Modelproba),AdaptiveStrategy(fixedStrategy),ProjectionStrategy(regressionStrategy))

	#Sequential Strategy
	sequentialStrategy = SequentialStrategy(basis,basisSize)
	#polynomialChaosAlgorithm = FunctionalChaosAlgorithm(g,Distribution(Modelproba),AdaptiveStrategy(sequentialStrategy),ProjectionStrategy(regressionStrategy))

	#Cleaning Strategy
	maxIndex = 3*P #index of the last polynomial Psi that may be used
	basisSize_cleaning = P #here basisSize_cleaning is the maximum tolerated size of the basis
	significanceFactor = 1.0e-3
	cleaningStrategy = CleaningStrategy(basis,maxIndex,basisSize_cleaning,significanceFactor,True)
	polynomialChaosAlgorithm = FunctionalChaosAlgorithm(g,Distribution(Modelproba),AdaptiveStrategy(cleaningStrategy),ProjectionStrategy(regressionStrategy))
 
 
	polynomialChaosAlgorithm.run()
 
	#result	
	polynomialChaosResult = polynomialChaosAlgorithm.getResult()	
 
	#Get the polynomial chaos coefficients	
	coefficients = polynomialChaosResult.getCoefficients()	
 
	#Get the metamodel as a NumericalMathFunction	
	metaModel = polynomialChaosResult.getMetaModel()	
 
	#Get the basis		
	basisCollection = polynomialChaosResult.getBasis()	
	print "Taille de la base finale : ", polynomialChaosResult.getBasis().getSize()
 
	taille_base_finale.append(polynomialChaosResult.getBasis().getSize())
	#Get the distribution of variables U	
	mesure = polynomialChaosResult.getMeasure()	
 
	#Get the composed model, i.e. the model of the reduced variables U
	composedModel = polynomialChaosResult.getComposedModel()


	# Compute the moments from the coefficients
	print "Mean from the coefficients: " , polynomialChaosResult.getCoefficients()[0]
	print "Error on the mean: " , (polynomialChaosResult.getCoefficients()[0]-3.5)/3.5
	moyenne.append(polynomialChaosResult.getCoefficients()[0])
	# Compute the variance from the coefficients
	variance_coeffs = polynomialChaosResult.getCoefficients().norm2()-polynomialChaosResult.getCoefficients()[0]**2
	print "variance from the coefficients: " , variance_coeffs
	VarianceTheorique = (A**2)/8.0 + (B)*(pi**4)/5.0 + (B**2)*(pi**8)/18.0 +1.0/2.0
	print "Error on the variance: " , (variance_coeffs-VarianceTheorique)/VarianceTheorique,"\n"
	#print "standard deviation from the coefficients: " , sqrt(variance_coeffs)
	variance.append(variance_coeffs)
 
fic_cv=open('cv_Ishigami_cleaning_5','w')
for i in nb_calculs:
	fic_cv.write(str(taille_plan[i+1])+' '+str(taille_base_finale[i+1])+' '+str(moyenne[i+1])+' '+str(variance[i+1])+'\n')
fic_cv.close()
 
 
print moyenne
print variance
# Theoretical variance
VarianceTheorique = (A**2)/8.0 + (B)*(pi**4)/5.0 + (B**2)*(pi**8)/18.0 +1.0/2.0
print "theoretical variance: ", VarianceTheorique
 
fic_coeff=open('coefficients_Ishigami_cleaning_5','w')
nb_coeffs =  polynomialChaosResult.getCoefficients().getDimension()
affichage_coefficients=range(0,nb_coeffs)
for i in affichage_coefficients:
	#print enumFunc(i), polynomialChaosResult.getCoefficients()[i]
	fic_coeff.write(str(enumFunc(i))+' '+str(polynomialChaosResult.getCoefficients()[i])+'\n')
fic_coeff.close()

Trapezoidal distribution using truncature and mixture

# this is a script to create a trapezoid like distribution
# ========================================================
# version OT 0.12.3


from openturns import *
from openturns_viewer import *
from math import *


# The four next lines are the only ones the user has to fill in (change the values).
# warning: the values must respect the following order: a<b<c<d

a=1.0 # lower bound
b=2.0 # lower bound of the uniform distribution 
c=5.0 # upper bound of the uniform distribution 
d=7.0 # upper bound

######################################################
####      Trapezoid shape distribution       #########
######################################################

#Creation of the distributions:
triang=Triangular(a,(a+d)/2,d) # regular triangular distribution
myTruncatedTriang1 = TruncatedDistribution(Distribution(triang),b,TruncatedDistribution.UPPER) # truncated dist. on the right
myTruncatedTriang2=TruncatedDistribution(Distribution(triang),c,TruncatedDistribution.LOWER) # truncated dist. on the left
unif=Uniform(b,c) # uniform distribution

aCollection=DistributionCollection(3)  # mixture of these distribution : see UseCasesGuide manual from OT documentation

aCollection[0]=Distribution(myTruncatedTriang1)
aCollection[1]=Distribution(unif)
aCollection[2]=Distribution(myTruncatedTriang2)


aCollection[0].setWeight((b-a)/2) # weight of each distribution, chosen so that the final density is continuous (see the note below)
aCollection[1].setWeight(c-b)
aCollection[2].setWeight((d-c)/2)

myTrapezoid=Mixture(aCollection) #final distribution

realizations = myTrapezoid.getNumericalSample(1000000) # large sample used to draw the obtained distribution
sampleHistOpt = VisualTest.DrawHistogram(realizations)
sampleHistOpt.draw("sampleHistOpt")
Show(sampleHistOpt)
ViewImage(sampleHistOpt.getBitmap())
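
Note on the weights: each mixture component is normalized on its own support, so its weight must be proportional to the area of the corresponding piece of the target trapezoid. With plateau height h, these areas are h*(b-a)/2 for the left ramp, h*(c-b) for the plateau and h*(d-c)/2 for the right ramp; dropping the common factor h gives the weights used above, and Mixture renormalizes them, which makes the resulting density continuous at b and c. This construction implicitly assumes b <= (a+d)/2 <= c, so that both truncated triangulars are monotone ramps.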

Uncertainty propagation and reliability analysis calculation functions

I made general calculation scripts based on a generic model of input data. The syntax of the input data has to follow the script template below (Attachments: model.py):

Script template for input models

  1. The user has to define several model parameters:
  • the number of input random variables;
  • the number of output random variables;
  • the finite difference step factor: in Open TURNS 0.12.3 the finite difference step is the same for every variable. In order to take the units of the random variables into account, I use a finite difference step based on the mean value of each random variable and define the step as dx = k_fd*mean(x).

Other parameters can be defined.

...
# --------------------------------------------------------------
# Model parameters
# --------------------------------------------------------------

#Number of input random variables (NbVA)
NbVA=4
#Number of output variables of the physical model (NbVS)
NbVS=1
#Finite difference step factor
k_fd=0.001
#...


#Other parameters
binf=0.0
#...
...
  2. The user has to define his physical model thanks to:
  • wrappers;
  • an OpenTURNSPythonFunction, which can use wrappers.

The user has to name his function: PhysModel.

...
# --------------------------------------------------------------
# Physical model
# --------------------------------------------------------------

#Declaration of wrapper(s)
#PhysModel=NumericalMathFunction("my_wrapper")

#Use of OpenTURNSPythonFunction
class myfunction(OpenTURNSPythonFunction):

	def __init__(self):
		OpenTURNSPythonFunction.__init__(self, NbVA, NbVS)

	def f(self, X):
		d=5.0/384.0*X[3]*X[0]**4/(X[1]*X[2])
		return [d]

PhysModel = NumericalMathFunction(myfunction())
...
  3. The user has to define his probabilistic model following several recommendations:
  • each marginal has to be named thanks to the setName() method;
  • I only use the independent copula.

The user has to name his composed distribution: ProbaModel.

...
# --------------------------------------------------------------
# Probabilistic model
# --------------------------------------------------------------

# 1. definition of the marginals (1D distribution)

L0=LogNormal(5000.0,50.0,binf,LogNormal.MUSIGMA)
L0.setName('L')
L1=LogNormal(30000.0,4500.0,binf,LogNormal.MUSIGMA)
L1.setName("E")
L2=LogNormal(1.0e9,1.0e8,binf,LogNormal.MUSIGMA)
L2.setName("I")
L3=LogNormal(10.0,3.0,binf,LogNormal.MUSIGMA)
L3.setName("p")
#...

# 2. definition of the distribution collection

CollectionD=DistributionCollection(NbVA)
CollectionD[0]=Distribution(L0)
CollectionD[1]=Distribution(L1)
CollectionD[2]=Distribution(L2)
CollectionD[3]=Distribution(L3)
#...

# 3. Copula definition

IC = IndependentCopula(NbVA)

# 4. Composed distribution definition

ProbaModel = ComposedDistribution(CollectionD,Copula(IC))

# 5. Mean point definition

mean=ProbaModel.getMean()
...
  4. The user can select the types of calculation to perform at the beginning of the script (Calculation 1/2). The corresponding functions are then run by the second part of the calculation definition (Calculation 2/2). The user has to list the character strings corresponding to the calculation numbers.
...
# --------------------------------------------------------------
# Calculation 1/2
# --------------------------------------------------------------
# 0: Single evaluation of the physical model at mean point

# 11: Quadratic cumul analysis
# 12: Uncertainty propagation using Monte Carlo simulations

# 121: Basic statistics on Monte Carlo results

# in the future
# 13: Polynomial chaos expansion

# 21: Reliability analysis using FORM
# 22: Reliability analysis using SORM

# 31: Reliability analysis using Monte Carlo simulations
# 32: Reliability analysis using latin hypercube sampling
# 33: Reliability analysis using directional simulations

# 211: Reliability analysis using importance sampling


Calculation=["0","12","121"]

for i in range(len(Calculation)):
	if (Calculation[i]=="11"):
		from fc_CQ import Calcul_CQ
	if (Calculation[i]=="12"):
		from fc_MC import Calcul_MC

	if (Calculation[i]=="121"):
		from fc_BasicStat import Calcul_BasicStat

...

# --------------------------------------------------------------
# Calculation 2/2
# --------------------------------------------------------------

#Modification of the finite difference step value
step=NumericalPoint(NbVA)
for i in range(NbVA):
	step[i]=mean[i]*k_fd
myGradient = NonCenteredFiniteDifferenceGradient(step,PhysModel.getEvaluationImplementation())
myHessian = CenteredFiniteDifferenceHessian(step,PhysModel.getEvaluationImplementation())
PhysModel.setGradientImplementation(myGradient)
PhysModel.setHessianImplementation(myHessian)

for i in range(len(Calculation)):
	if (Calculation[i]=="0"):
		Mean_evaluation=PhysModel(mean)
		for j in range(NbVS):
			print ""
			print "Evaluation of the physical model at mean point - output",j+1,"/",NbVS,":", Mean_evaluation[j]
	if (Calculation[i]=="11"):
		resCQ=Calcul_CQ(PhysModel,ProbaModel)
	if (Calculation[i]=="12"):
		resMC=Calcul_MC(PhysModel,ProbaModel)

	if (Calculation[i]=="121"):
		resBasicStat=Calcul_BasicStat(resMC)
...

The calculation functions are provided as independent files, except Calcul_BasicStat. Each file contains a Python function based on Open TURNS. Each function can be divided into 2 parts:

  • the calculation definition and run;
  • the display of the results.

Function Quadratic cumul analysis

Attachments: fc_CQ.py

The function requires:

  • a NumericalMathFunction (the physical model)
  • a ComposedDistribution (the probabilistic model)

Note: In Open TURNS 0.12.3, the calculation of the importance factors is only available for a scalar NumericalMathFunction.

...
# Quadratic cumul analysis
	RV_ProbabilisticModel = RandomVector(Distribution(ProbabilisticModel))
	Function = RandomVector(PhysicalModel,RV_ProbabilisticModel)
	QuadraticC = QuadraticCumul(Function)

	if (NbVS==1):
		IF=NumericalPoint(NbVA)
		for i in range(NbVA):
			IF[i] = QuadraticC.getImportanceFactors()[i]

	Mean1=QuadraticC.getMeanFirstOrder()
	Mean2=QuadraticC.getMeanSecondOrder()

	CovG=QuadraticC.getCovariance()

	stdev=NumericalPoint(NbVS)
	for i in range(NbVS):
		stdev[i] = sqrt(CovG[i,i])
...

Function Uncertainty propagation using Monte Carlo simulations

Attachments: fc_MC.py

The function requires:

  • a NumericalMathFunction (the physical model)
  • a ComposedDistribution (the probabilistic model)
...
# Monte Carlo
	In = ProbabilisticModel.getNumericalSample(10000)
	Out = PhysicalModel(In)

	Moyenne = Out.computeMean()
	standarddeviation=Out.computeStandardDeviationPerComponent()
	Skewness = Out.computeSkewnessPerComponent()
	Kurtosis = Out.computeKurtosisPerComponent()
...

Function Basic statistics

Attachments: fc_BasicStat.py

The function requires:

  • a NumericalSample (the result of an uncertainty propagation using Monte Carlo simulations).

This kind of calculation has to be performed just after an uncertainty propagation using Monte Carlo simulations. This function can still be improved.
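
For illustration only, here is a minimal sketch of what such a function might look like (the attached fc_BasicStat.py is the reference; the body below is an assumption that only reuses NumericalSample methods appearing elsewhere on this page):

def Calcul_BasicStat(Out):
	# Out is the NumericalSample produced by the Monte Carlo propagation
	print "Mean     :", Out.computeMean()
	print "Std dev  :", Out.computeStandardDeviationPerComponent()
	print "Skewness :", Out.computeSkewnessPerComponent()
	print "Kurtosis :", Out.computeKurtosisPerComponent()
	return Out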

Last modifications: 12/06/2009

First script

You can test Open TURNS with this simple script:

from openturns import *
 
dist = Normal( 0, 1 )
sample = dist.getNumericalSample(10)
print sample

Now you have printed a sample!

Enjoy.