Recent posts

User day #10

User day #10 will be held on 6 June 2017 at EDF Chatou, near Paris.

This year's theme will be scikit-learn/OpenTURNS.

To register, send an email to anne.dutfoy[at]edf.fr.

  • Posted: 2017-03-21 09:18 (Updated: 2017-03-24 11:28)
  • Author: schueller
  • Categories: (none)
  • Comments (0)

OpenTURNS User's Day Number 9

Hi,

Here are all the presentations of the OpenTURNS User's Day #9.

Anne

  • Posted: 2016-06-28 15:28 (Updated: 2016-06-28 15:33)
  • Author: anne.dutfoy
  • Categories: (none)
  • Comments (0)

Radar-shaped statistic to check the uniformity of a design of experiments

Here is a Python translation of an R function for checking the uniformity of a design of experiments generated with OpenTURNS (stored as a NumericalSample object). It is an implementation of the "radar-shaped statistic" method detailed in the PhD thesis of J. Franco (2008).

The program requires a C shared library 'unif_radar_lib.so', which can be compiled from the source file 'C_rss.c' displayed below. For example, the following commands can be used in a Linux console with the gcc compiler:

gcc -c -Wall -Werror -fpic -std=c99 C_rss.c
gcc -shared -o unif_radar_lib.so C_rss.o

Then the 'unif_radar.py' file can be executed directly or called from another user-defined script. Running the enclosed version directly will execute an example with a design based on a low-discrepancy sequence.
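For instance, a call from a user script could look like the following sketch (the Sobol' design is only an illustration, and the compiled library is assumed to sit in the working directory):

import openturns as ot
from unif_radar import check_design

# any design stored as a NumericalSample can be checked
design = ot.SobolSequence(5).generate(100)
check_design(design, n_theta=180)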

File 'unif_radar.py'

#!/usr/bin/env python
# -*- coding: utf-8 -*-

"""
Check the uniformity of a design of experiments using the radar-shaped 
statistic.

The uniformity of a multidimensional design is checked in each plane defined 
by a couple (Xi, Xj) of input parameters. In every plane, the distribution of 
the orthogonal projections of the design points onto an axis with an angle 
'theta' with respect to the Xi and Xj axes is studied. The projection 
distribution is compared to the uniform distribution by means of a given 
distance, e.g. the Kolmogorov-Smirnov one. The greatest discrepancy is 
retained and a statistical test is performed to challenge the uniformity 
assumption. Moreover, the design is shown in the "worst" plane (Xi, Xj).

The method was designed by J. Franco (2008). This code is a Python translation 
of an R function from the R package DiceDesign (Franco et al., 2015). 

References
----------
Franco J. (2008). Planification d'experiences numeriques en phase exploratoire 
pour la simulation des phenomenes complexes, PhD Thesis, Ecole Nationale 
Superieure des Mines de Saint-Etienne.

Franco J., Dupuy D., Roustant O. (2015). DiceDesign: Designs of Computer 
Experiments. R package version 1.7, 
URL http://CRAN.R-project.org/package=DiceDesign.
"""

import numpy as np
import ctypes
import matplotlib.pyplot as plt
import openturns as ot


def scale_data(x):
    """Transform the data into uniformly distributed values over [-1,1]"""
    x = ot.NumericalSample(x)
    xs = []
    for i in range(x.getDimension()):
        xi = x[:,i]
        xs.append( [2.*xi.computeEmpiricalCDF(xij)-1. for xij in xi] )
    return ot.NumericalSample(np.array(xs).T)


def ks_stat(x):
    """Evaluate the Kolmogorov-Smirnov statistic"""
    n = float(len(x))
    ordered_sample = np.sort(x)
    Dplus = np.max( (1.+np.arange(n))/n - ordered_sample )
    Dminus = np.max( ordered_sample - np.arange(n)/n )
    stat = np.max([Dplus, Dminus])
    return stat
    
    
def ks_quantile(n, alpha=0.05, eps=np.finfo(float).eps):
    """Evaluate the Kolmogorov-Smirnov alpha-quantile"""
    # Asymptotic quantile coefficients for the usual significance levels
    levels = np.array([0.1, 0.05, 0.025, 0.01])
    coefs = [1.224, 1.358, 1.48, 1.628]
    index = int(np.where(np.abs(levels - alpha) < eps)[0][0])
    q = coefs[index]/(np.sqrt(n)+0.12+0.11/np.sqrt(n))
    
    return q


def check_design(x, n_theta=360, alpha=0.95, c_lib='./unif_radar_lib.so', \
                 eps=np.finfo(float).eps):
    """
    Check the uniformity of the data using the radar-shaped statistic.
    
    Plots showing the greatest distance to the uniformity in the "worst" plane 
    of inputs (Xi, Xj) are created. The results of the statistical test are 
    printed.
    
    Parameters
    ----------
    x: OpenTURNS NumericalSample
        Design of experiments.
    
    n_theta: integer, optional
        Number of angular positions for the projection axis (default is 360).
    
    alpha: float, optional
        Level of the statistical uniformity test (default is 0.95).
    
    c_lib: string, optional
        Name of the C shared library used to speed up the iterations through 
        the pairs of input variables (default is the location in the current 
        working directory './unif_radar_lib.so').
    
    eps: float, optional
        Tolerance for the small axis orientation angle (default is the numpy 
        value 'np.finfo(float).eps')
    """
    
    print("")
    print("2D radial scanning statistic (RSS) using Kolmogorov-Smirnov statistic")
    print("Discretization step (in degree): %.1f" %(180./n_theta))
    print("Maximum of RSS values (global statistic) per pair of dimensions\n")
    
    design = np.array(scale_data(x))
    n, d = design.shape
    
    # Discretization of the angular position
    theta_degree = np.linspace(0., 180., n_theta+1)[0:-1]
    theta = theta_degree*np.pi/180. # deg -> rad    
    cos_theta, sin_theta = np.cos(theta), np.sin(theta)
    subdiv_halfcircle = np.vstack((cos_theta, sin_theta)).T
    
    # Load the C shared library and declare the C_rss2Dloop prototype
    # (cf. 'C_rss.c' below: the sizes are passed by pointer and the result is
    # written into a preallocated output array)
    my_library = ctypes.CDLL(c_lib)
    my_library.C_rss2Dloop.argtypes = \
        [ctypes.POINTER(ctypes.c_double)]*4 + \
        [ctypes.POINTER(ctypes.c_int)]*2 + \
        [ctypes.POINTER(ctypes.c_double)]
    my_library.C_rss2Dloop.restype = None
    
    global_stat = np.zeros((d, d))
    global_stat_max = 0
    
    # Iterate through pairs of input parameters
    
    for i in range(d-1):
        x1 = design[:,i]
        
        for j in range(i+1, d):
            
            x2 = design[:,j]
            
            x1_c = (ctypes.c_double * n)()
            for index, value in enumerate(x1):
                x1_c[index] = value                
            x2_c = (ctypes.c_double * n)()
            for index, value in enumerate(x2):
                x2_c[index] = value                
            ax_c = (ctypes.c_double * n_theta)()
            for index, value in enumerate(cos_theta):
                ax_c[index] = value                
            ay_c = (ctypes.c_double * n_theta)()
            for index, value in enumerate(sin_theta):
                ay_c[index] = value                
            n_c = ctypes.c_int(n)
            n_theta_c = ctypes.c_int(n_theta)
            out_c = (ctypes.c_double * (n*n_theta))()
            
            # The C function fills 'out_c' with the projected CDF values
            my_library.C_rss2Dloop(x1_c, x2_c, ax_c, ay_c,
                                   ctypes.byref(n_c), ctypes.byref(n_theta_c), out_c)
            
            projections2 = np.array(out_c[:])
            projections2 = projections2.reshape((n, n_theta), order="F")
            
            # Iterate through the angle positions
            
            anglewise_stat_ij = np.zeros(n_theta)
            for i_theta in range(n_theta):
                anglewise_stat_ij[i_theta] = ks_stat(projections2[:,i_theta])
    
            # Store the greatest KS statistic found for the pair (Xi, Xj)
            global_stat_ij = np.max(anglewise_stat_ij)
            global_stat[i,j] = global_stat_ij
            global_stat[j,i] = global_stat_ij
            
            # Update the greatest KS statistic found over all pairs of parameters
            if global_stat_ij > global_stat_max:
                global_stat_max = global_stat_ij.copy()
                pair_worst = [i,j]
                anglewise_stat = anglewise_stat_ij.copy()
            
            print("Plane (%d,%d)  - Greatest observed Kolmogorov distance: %.4f" \
                  % (i+1, j+1, global_stat_ij))

    # Plot results
    
    rss_curve_x = anglewise_stat*subdiv_halfcircle[:,0]
    rss_curve_y = anglewise_stat*subdiv_halfcircle[:,1]
    
    gof_test_stat = ks_quantile(n, 1.-alpha)
    
    design_worst = design[:,pair_worst]
    anglewise_stat_max = np.max(anglewise_stat)
    index_max = np.argmax(anglewise_stat)
    cos_theta_max = subdiv_halfcircle[index_max,0]
    sin_theta_max = subdiv_halfcircle[index_max,1]
    dir_max = subdiv_halfcircle[index_max,:]
    
    plt.figure()
    plt.subplot(111, aspect='equal')
    plt.plot(design_worst[:,0], design_worst[:,1], 'ob')
    plt.xlabel(r"$X_{"+str(pair_worst[0]+1)+"}$", size=16)
    plt.ylabel(r"$X_{"+str(pair_worst[1]+1)+"}$", size=16)
    plt.title("Worst plane", size=16)
    
    plt.figure()
    plt.subplot(111, aspect='equal')
    rx = np.hstack((rss_curve_x, -rss_curve_x, rss_curve_x[0]))
    ry = np.hstack((rss_curve_y, -rss_curve_y, rss_curve_y[0]))
    plt.plot(rx, ry, '-b', label="KS statistic")
    theta_aux = np.linspace(0., 2.*np.pi+0.1, 80)
    plt.plot(gof_test_stat*np.cos(theta_aux), gof_test_stat*np.sin(theta_aux), \
             '-k', label="KS rejection threshold")
    plt.legend(loc="best")
    plt.title(r"Plane (X%d , X%d) - Worst angle: %d$^{\circ}$" \
              % (pair_worst[0]+1, pair_worst[1]+1, theta_degree[index_max]), \
              size = 16)
    
    plt.figure()
    plt.subplot(111, aspect='equal')
    plt.plot(design_worst[:,0], design_worst[:,1], 'ob')
    projections = np.dot(design_worst, dir_max)
    plt.plot(projections*dir_max[0], projections*dir_max[1], 'or')
    if np.abs(cos_theta_max) < eps:
        plt.plot([0,0], [-1,1], "-r")
    else:
        plt.plot([-1,1], np.array([-1,1])*sin_theta_max/cos_theta_max, "-r")
    for i in range(n):
        plt.plot([design_worst[i,0], projections[i]*cos_theta_max], \
                 [design_worst[i,1], projections[i]*sin_theta_max], 'r:')
    plt.xlabel(r"$X_{"+str(pair_worst[0]+1)+"}$", size=16)
    plt.ylabel(r"$X_{"+str(pair_worst[1]+1)+"}$", size=16)
    plt.xlim([-1,1])
    plt.ylim([-1,1])
    plt.title(r"$\theta=%d^{\circ}$" % (theta_degree[index_max]), size=20)
    
    print("\n\nKolmogorov-Smirnov quantile: %.4f\n" % global_stat_max)
    print("Kolmogorov-Smirnov threshold: %.2f\n" % gof_test_stat)
    if global_stat_max < gof_test_stat:
        print("Uniformity hypothesis NOT REJECTED")
    else:
        print("Uniformity hypothesis REJECTED at level %.2f\n" % alpha)


if __name__ == "__main__":
    
    """Test with a low discrepancy sequence"""
    
    plt.ion() # interactive mode ON
    
    n = 60
    d = 20
    
    mySobolSeq = ot.SobolSequence(d)
    design = 2*np.array( mySobolSeq.generate(n) )-1.
    
    check_design(design)

File 'C_rss.c'

#include <math.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

#define precision 0.000000000001

#define SQUARE(x) ((x)*(x))
#define CUBE(x) ((x)*(x)*(x))
#define MAX(a,b) (a<b ? b:a)


double C_uniform_cdf(const double *p) {

	return(0.5 * (MAX(*p + 1., 0.) - MAX(*p - 1., 0.)));

}


double C_sumof2uniforms_cdf(const double *p, const double *ax, const double *ay, const int *k) {
	
	double s = 0.;
	
	if ((fabs(ax[*k]) > precision) && (fabs(ay[*k]) > precision)) {
		s += SQUARE(MAX(*p - ax[*k] - ay[*k], 0.));
		s += SQUARE(MAX(*p + ax[*k] + ay[*k], 0.));
		s -= SQUARE(MAX(*p - ax[*k] + ay[*k], 0.));
		s -= SQUARE(MAX(*p + ax[*k] - ay[*k], 0.));
		return(s / (8 * ax[*k] * ay[*k]));
	} 
	else return(C_uniform_cdf(p));

}


double C_sumof3uniforms_cdf(const double *p, const double *ax, const double *ay, const double *az, const int *k) {
	
	double s = 0.;
	
	if ((fabs(ax[*k]) > precision) && (fabs(ay[*k]) > precision) && (fabs(az[*k]) > precision)) {
		s += CUBE(MAX(*p + ax[*k] + ay[*k] + az[*k], 0.));
		s += CUBE(MAX(*p + ax[*k] - ay[*k] - az[*k], 0.));
		s += CUBE(MAX(*p - ax[*k] + ay[*k] - az[*k], 0.));
		s += CUBE(MAX(*p - ax[*k] - ay[*k] + az[*k], 0.));
		s -= CUBE(MAX(*p - ax[*k] - ay[*k] - az[*k], 0.));
		s -= CUBE(MAX(*p - ax[*k] + ay[*k] + az[*k], 0.));
		s -= CUBE(MAX(*p + ax[*k] - ay[*k] + az[*k], 0.));
		s -= CUBE(MAX(*p + ax[*k] + ay[*k] - az[*k], 0.));
		return(s / (48 * ax[*k] * ay[*k] * az[*k]));
	} 
	else if ((fabs(ax[*k]) <= precision)) return(C_sumof2uniforms_cdf(p, ay, az, k));
	else if ((fabs(ay[*k]) <= precision)) return(C_sumof2uniforms_cdf(p, ax, az, k));
	else return(C_sumof2uniforms_cdf(p, ax, ay, k));
}


void C_rss2Dloop(const double *x, const double *y, const double *ax, const double *ay, const int *n, const int *n_theta, double *ans) {
	
	double p;
	
	for (int i_theta=0; i_theta < *n_theta; i_theta++) {
			
		for (int i=0; i < *n; i++) {
			p = x[i]*ax[i_theta] + y[i]*ay[i_theta];
			ans[i + *n * i_theta] = C_sumof2uniforms_cdf(&p, ax, ay, &i_theta);
		}
		
	}

}


void C_rss3Dloop(const double *x, const double *y, const double *z, const double *ax, const double *ay, const double *az, const int *n, const int *n_theta, const int *n_phi, const int *i_phi, double *ans) {
	
	double p;
	int k;
	
	for (int i_theta=0; i_theta < *n_theta; i_theta++) {
			
		for (int i=0; i < *n; i++) {
			k = i_theta + *n_theta * *i_phi;
			p = x[i]*ax[k] + y[i]*ay[k] + z[i]*az[k];
			ans[i + *n * i_theta] = C_sumof3uniforms_cdf(&p, ax, ay, az, &k);
		}
		
	}
}

Simple OpenTURNS add-in for Excel

Hello,

A proof-of-concept is available at https://github.com/dbarbier/ot-xll

There is documentation to build, install and test this add-in. The source code is released under the same license as OpenTURNS and is commented so that adding new functions should be fairly straightforward. It has been tested with Excel 2010 and 2013, but should work with any version >= 2007.

Your feedback is very welcome; please use the GitHub tracker if you run into any issues, and let us know if you write your own XLL.

OpenTURNS for OSX

Hi,

Here is the (unofficial) OpenTURNS 1.6 build for OS X.

https://drive.google.com/file/d/0B1eixXAL6e2KZTlBWERHTWxRTmM/view?usp=sharing

The tarball contains OpenTURNS and its dependencies:

  • The muParser library,
  • The TBB library,
  • OpenBLAS,
  • HMat (hmat-oss)

Users should read the provided help file (README) for the installation instructions.

Currently, the compiled wrappers (poutre, ...) segfault. As this technology will be removed in the next release, there is no workaround.

Please report any other problem encountered.

Thank you for your feedback.

Enjoy,

Sofiane

Slides of the 8th User's day

Hi,

Here are the slides of the 8th OpenTURNS User's day held on 2015/06/12 at EDF R&D.

J.

Implementation of the Iterative Generalized Polynomial Chaos (i-gPC)

Hi,

Here is an OpenTURNS-based implementation of the so-called i-gPC method designed by Poette & Lucor (2012). It makes it possible to approximate strongly nonlinear responses of models depending on random parameters. The strategy was originally applied in the field of fluid dynamics, more precisely to stochastic partial differential equations modelling shocks, e.g. the Burgers equation.

The i-gPC procedure consists in first approximating the input (X)-output (Y) relationship by a classical chaos expansion (called Y0), and then constructing a new chaos (Y1) to approximate the mapping between the residuals (Y-Y0) and the response Y, using an orthonormal basis with respect to the measure of (Y-Y0). Then a new chaos is sought to mimic the relationship between (Y-Y1) and Y, and so on.

The implementation is successful in some cases, such as the 1D Heaviside function example provided with the OT script, when some "good" algorithm parameters are selected. Nonetheless, it may suffer from severe numerical instabilities in some cases. This problem arises when creating, at each i-gPC step, an ad hoc orthonormal basis from a kernel estimate of the density function of the current residuals.

This drawback should be overcome in a future version. Any suggestion/comment is welcome!

Main script "igPC.py"

# -*- coding: utf-8 -*-

""" 
Performs iterative generalized polynomial chaos approximation.

This is an implementation of the approach designed by [Poette, 
Lucor (2012)]. This approach was shown to be more accurate than 
ordinary chaos expansion without requiring additional computer 
experiments. It makes it possible to approximate strongly nonlinear 
model responses and to tackle the Gibbs phenomenon in the presence 
of discontinuities.

Reference
---------
G. Poette, D. Lucor, Non intrusive iterative stochastic spectral
representation with application to compressible gas dynamics,
Journal of Computational Physics 231 (2012), pp. 3587-3609
"""

import matplotlib.pyplot as plt
from numpy import loadtxt, c_, linspace, array
from openturns import *


class IterativeGPC(object):
    """ 
    Iterative generalized polynomial chaos study
    
    Parameters
    ----------
    in_filename: string
        Name of the file containing the input sample.
        
    out_filename: string
        Name of the file containing the output sample.
    
    myDistribution: *ComposedDistribution* openturns object.
        Joint distribution from which the input sample was 
        generated.
    
    pc_degree: integer
        Total degree of the polynomial chaos expansions.
    
    n_iter: integer, optional
        Number of i-gPC iterations (default is 4).
    
    n_kernel_pdf: integer, optional
        Size of the sample that is generated from the 
        several chaos approximations in order to estimate the 
        density using kernel smoothing (default 5,000).
    
    Attributes
    ----------
    data: dict
        Dictionary that contains the input and the output samples 
        (*NumericalSample* objects) as well as their (common) 
        size and their respective dimensions.
    
    alg_opts: dict
        Dictionary of the parameters to pass to the `perform_igpc` 
        method.
        
    metamodels: dict
        List of metamodels (*NumericalMathFunction* objects) 
        obtained at each iteration of igPC.
    """
    
    def __init__(self, in_filename, out_filename, myDistribution,
                 pc_degree, n_iter=4, n_kernel_pdf=5000):
        
        # Import the input and the output data
        xSample, ySample = ( NumericalSample( c_[ loadtxt(fic) ] )
                             for fic in(in_filename, out_filename) )
        (n_sample_in, dim_in), (n_sample_out, dim_out) = ((sample.getSize(), 
                     sample.getDimension()) for sample in (xSample, ySample))
        if n_sample_in != n_sample_out:
            raise ValueError("The sizes of the input and the output "
                             + "samples must be equal.")
        else:
            self.data = {"input": xSample, "output": ySample, 
                         "size": n_sample_in, "dim_in": dim_in, 
                         "dim_out": dim_out}
        
        # Get the input joint distribution
        if dim_in != myDistribution.getDimension():
            raise ValueError("The dimensions of the input sample and "
                             + "the input joint distribution must be "
                             + "equal.")
        else:
            self.distribution = myDistribution
            
        # Get the igPC options
        self.alg_opts = {"pc_degree": pc_degree, "n_iter": n_iter,
                         "n_kernel_pdf": n_kernel_pdf}
        
        

    def perform_igpc(self):
        """
        Performs iterative generalized polynomial chaos approximation.
        """
        
        # Extract attributes
        myDistribution = self.distribution
        dim = self.data["dim_in"]
        n_sample = self.data["size"]
        xSample, ySample = self.data["input"], self.data["output"]
        pc_degree, n_iter, n_kernel_pdf = (self.alg_opts[key] for key 
                           in ("pc_degree", "n_iter", "n_kernel_pdf"))
        
        # Specify a multivariate polynomial orthonormal basis
        univariatePolyList = []
        for i in range(dim):
            dist = myDistribution.getMarginal(i)
            univariatePolyList.append( StandardDistributionPolynomialFactory(dist) )
        polyColl = PolynomialFamilyCollection(univariatePolyList)
        enumerateFunction = EnumerateFunction(dim)
        multivariateBasis = OrthogonalProductPolynomialFactory(polyColl, enumerateFunction)
        
        # Define the LAR strategy
        basisSequenceFactory = LAR()
        fittingAlgorithm = CorrectedLeaveOneOut()
        approximationAlgorithm = LeastSquaresMetaModelSelectionFactory(basisSequenceFactory, fittingAlgorithm)
        
        # Define the truncation strategy
        P = enumerateFunction.getStrataCumulatedCardinal(pc_degree)
        truncatureBasisStrategy = FixedStrategy(OrthogonalBasis(multivariateBasis), P)
        evalStrategy = LeastSquaresStrategy(myDistribution, approximationAlgorithm)
        weights = [1]*n_sample
        
        # Construct the chaos approximation
        polynomialChaosAlgorithm = FunctionalChaosAlgorithm(xSample, weights, ySample, 
                                                            myDistribution, truncatureBasisStrategy, 
                                                            evalStrategy)
        polynomialChaosAlgorithm.run()
        
        # Get the chaos results
        result = polynomialChaosAlgorithm.getResult()
        metamodel = result.getMetaModel()
        chaosRV = FunctionalChaosRandomVector(result)
        zSample = metamodel(xSample)
        
        # Carry out the i-gPC procedure
        #
        metamodel_list = [metamodel]
        for k in range(n_iter):
            #
            # KLUDGE: Approximate the chaos output distribution using a kernel PDF
            # estimator.
            # This may be a crude approach possibly leading to numerical instabilities. 
            # This should be improved in a future version.
            zBigSample = chaosRV.getSample(n_kernel_pdf)
            kernel = KernelSmoothing()
            myBandwith = kernel.computeSilvermanBandwidth(zBigSample)
            dist_Z = kernel.build(zBigSample, myBandwith)
            myDistribution_Z = ComposedDistribution([dist_Z])
            #
            # Construct an ad-hoc orthonormal polynomial basis of the chaos measure
            polyColl_Z = PolynomialFamilyCollection([StandardDistributionPolynomialFactory(dist_Z)])
            enumerateFunction = EnumerateFunction(1)
            multivariateBasis_Z = OrthogonalProductPolynomialFactory(polyColl_Z, enumerateFunction)
            truncatureBasisStrategy_Z = FixedStrategy(OrthogonalBasis(multivariateBasis_Z), pc_degree+1)
            evalStrategy_Z = LeastSquaresStrategy(dist_Z, approximationAlgorithm)
            #
            # Compute a chaos approximation of the relationship between the previous chaos output and 
            # the actual output.
            polynomialChaosAlgorithm = FunctionalChaosAlgorithm(zSample, weights, ySample, 
                                                                myDistribution_Z, truncatureBasisStrategy_Z, 
                                                                evalStrategy_Z)
            polynomialChaosAlgorithm.run()
            #
            # Get results
            result = polynomialChaosAlgorithm.getResult()
            metamodel = result.getMetaModel()
            chaosRV = FunctionalChaosRandomVector(result)
            zSample = metamodel(zSample)
            metamodel_list.append(metamodel)
        
        self.metamodels = metamodel_list
    
    
    def __call__(self, xSample):
        pass
    
    
    def plot_results(self, i_input, i_output, model=None):
        """
        Plot the chaos approximations obtained at the several 
        iterations of IgPC.
        """
        
        xSample = self.data["input"].getMarginal(i_input)
        ySample = self.data["output"].getMarginal(i_output)
        x = linspace(xSample.getMin()[0], xSample.getMax()[0], 80)
        kernel = KernelSmoothing()
        
        fig1, fig2 = plt.figure(1), plt.figure(2)
        ax1, ax2 = (fig.add_subplot(111) for fig in (fig1, fig2))
        
        xBigSample = self.distribution.getSample(10000)
        
        if model != None:
            ax1.plot(x, model(c_[x]), '--b',
                    label="Actual response")
            yBigSample = model(xBigSample)
            kernelDist = kernel.build(yBigSample)
#            ax2.plot(x, kernelDist.computePDF(NumericalSample(c_[x])),
#                     '--b', label="Actual response")
            ax2.hist(array(yBigSample).ravel(), bins=100,
                     label="Actual response", normed=True)
        
        zSample = NumericalSample(c_[x])
        zBigSample = xBigSample
        for i, metamodel in enumerate(self.metamodels):
            zSample  = metamodel(zSample)
            zBigSample = metamodel(zBigSample)
            kernelDist = kernel.build(zBigSample)
            ax1.plot(x, array(zSample).ravel(), 
                     label=("i-gPC - Iter %d" %i))
            ax2.plot(x, kernelDist.computePDF(NumericalSample(c_[x])), 
                     label=("i-gPC - Iter %d" %i))
            
        [[ax.legend(loc=0), ax.set_xlabel("x")] for ax in (ax1, ax2)]
        ax1.set_ylabel("y=f(x)")
        ax2.set_ylabel("Probability density function")

        return fig1, fig2
        
        
    
    
if __name__ == "__main__":
    
    
    import numpy as np
    import pylab as plt
    from get_sample import get_sample
    
    plt.close("all")
    RandomGenerator_SetSeed(63)
    
    # Create a data set based on the Heaviside function
    
    class modelePYTHON(OpenTURNSPythonFunction):
        def __init__(self) :
            OpenTURNSPythonFunction.__init__(self,1,1)
        # the following method gives the implementation of modelePYTHON
        def _exec(self,x) :
            """ Univariate Heaviside function """
            if x[0] <= 0.2:
                return [1.]
            else:
                return [0.]

    model = NumericalMathFunction(modelePYTHON())

    dist_X = Uniform(-1,1)
    myDistribution = ComposedDistribution([dist_X])
    myRandomVector = RandomVector(myDistribution)
    
    n_sample = 100
    xSample = get_sample(myRandomVector, n_sample, "QMC") # generate a quasi-Monte Carlo sample
    ySample = model(xSample)    
    np.savetxt("input.txt", np.array(xSample))
    np.savetxt("output.txt", np.array(ySample))
    
    n_kernel_pdf = 5000
    n_iter = 6
    pc_degree = 8
    
    study = IterativeGPC("input.txt", "output.txt", myDistribution,
                    pc_degree, n_iter=n_iter, n_kernel_pdf=n_kernel_pdf)
                    
    study.perform_igpc()
    map(plt.show, study.plot_results(0, 0, model=model))

Auxiliary function "get_sample.py"

from numpy import array
from openturns import LowDiscrepancyExperiment, \
                      SobolSequence

def get_sample(vectX, N, method="MC"):
    """
    Generate a sample according to a given 
    distribution.
    """
    if method == "MC":
        X = vectX.getSample(N)
    elif method == "QMC":
        dim_in = vectX.getDimension()
        sequence = LowDiscrepancyExperiment(SobolSequence(dim_in), 
                   vectX.getDistribution(), N)
        X = array(sequence.generate())
    return X
  • Posted: 2014-07-15 17:10 (Updated: 2014-07-29 13:23)
  • Author: blatman
  • Categories: (none)
  • Comments (0)

Slides of the 7th User's day

Hi,

Here are all the presentations of the 7th OpenTURNS User's day. Only good things inside.

A++

Régis

Slides of the 6th User's day

Hi,

Here are the slides of the 6th OpenTURNS User's day. A little bit late, but worthwhile!

Regards,

Régis

Distributing Python NumericalMathFunction with IPython.parallel

  • Posted: 2014-03-07 15:35 (Updated: 2014-06-19 10:31)
  • Author: dubourg
  • Categories: (none)
  • Comments (0)

Bayesian updating for a simple concrete creep model

Hello,

Here is my first try at Bayesian updating in OpenTURNS. This is open for discussion!

The model is a concrete creep model (called beton_burger_fp in Code_Aster) for a uniaxial creep test over 60 years. Data are available for the first 10 years.

The 6 inputs are k_rs, k_rd, eta_rs, eta_rd, eta_id and kappa.

The outputs are:

  1. the a priori versus a posteriori input distributions
  2. the a priori and a posteriori mean response and the 95%-quantile of the response.

The main script is actu_bayes.py. fluage.data and StepB.py are used by actu_bayes.py.

fluage.data:

 0.              0.
2.46E-09       -1.71E-19
3.74E+03       2.20E-05
1.95E+04       1.15E-04
1.29E+05       3.54E-05
1.34E+05       6.47E-05
2.69E+05       1.40E-04
1.01E+06       1.99E-04
1.88E+06       2.35E-04
2.12E+06       2.43E-04
2.37E+06       2.54E-04
2.74E+06       2.60E-04
3.11E+06       2.78E-04
4.46E+06       2.99E-04
5.32E+06       3.07E-04
5.94E+06       3.18E-04
6.18E+06       3.37E-04
7.53E+06       3.44E-04
8.15E+06       3.50E-04
8.64E+06       3.56E-04
9.62E+06       3.65E-04
1.07E+07       3.83E-04
1.21E+07       4.03E-04
1.32E+07       4.26E-04
1.53E+07       4.23E-04
1.60E+07       4.38E-04
1.85E+07       4.40E-04
1.99E+07       4.50E-04
2.26E+07       4.78E-04
2.55E+07       4.85E-04
2.59E+07       5.07E-04
2.94E+07       5.07E-04
3.34E+07       5.27E-04
3.44E+07       5.39E-04
3.72E+07       5.57E-04
4.07E+07       5.71E-04
4.90E+07       6.08E-04
7.55E+07       6.92E-04
9.46E+07       7.60E-04
1.14E+08       8.25E-04
1.32E+08       8.58E-04
1.61E+08       9.87E-04
1.99E+08       1.12E-03
2.27E+08       1.21E-03
2.55E+08       1.31E-03
2.84E+08       1.41E-03
3.12E+08       1.51E-03

StepB.py:

# -*- coding: utf-8 -*-

from openturns import *

import numpy as np

dim_in  = 7        # Number of input variables
dim_out = nb_data+nb_disc  # Number of output variables (nb_data and nb_disc come from actu_bayes.py)

# Transfer function

class myfunction(OpenTURNSPythonFunction):
        def __init__(self):
                OpenTURNSPythonFunction.__init__(self, dim_in, dim_out)
        def _exec(self, X):     
                        
                v_KRS   = X[0];
                v_KRD   = X[1];
                v_ETARS = X[2];
                v_ETARD = X[3];
                v_ETAID = X[4];
                v_KAPPA = X[5];

                nu = 0.2; # elastic Poisson's ratio

                T_FIN = 1.892E9; # 60 years (in seconds)

                TEMPS = np.zeros(dim_out)
  
                for i in range(0,len(temps)):
                  TEMPS[i] = np.array(temps[i])

                T_FIN_data = TEMPS[nb_data-1]

                SIGMA_0 = 12E6;

                ETA_IS = (1+nu)/(1-2*nu)*v_ETAID
                B = v_KAPPA/(np.sqrt(1.+2.*nu**2));
                C = SIGMA_0*np.sqrt(1.+2.*nu**2)/(v_KAPPA*(1.+nu)*v_ETAID);
                C2 = SIGMA_0*np.sqrt(1.+2.*nu**2)/(v_KAPPA*(1.-2.*nu)*ETA_IS);
                D = -v_KRD/v_ETARD;
                D2 = -v_KRS/v_ETARS;

                EPSA = [];
                EPS1 = [];

                for i in range(1,nb_disc+1):
                        TEMPS[nb_data+i-1] = T_FIN_data+i*(T_FIN-T_FIN_data)/nb_disc
                for i in range(dim_out):
                        EPSA.append(-SIGMA_0/3.*(1./v_KRS*(1.-np.exp(D2*TEMPS[i]))+2./v_KRD*(1.-np.exp(D*TEMPS[i])))-B*np.log(1.+(3.*SIGMA_0*np.sqrt(1.+2.*nu**2)*TEMPS[i])/(v_KAPPA*(ETA_IS*(1.-2.*nu)+2.*v_ETAID*(1.+nu)))));
                        EPS1.append(-EPSA[i]);


                return EPS1

model = NumericalMathFunction(myfunction())

actu_bayes.py

# -*- coding: utf-8 -*-
from openturns import *
import numpy as np
import pylab as pl
import os  # needed below for os.mkdir / os.chdir
from scipy.stats import norm
import time
import scipy.optimize as so
import scipy.stats as st

###################
# Civaux parameters
###################

# Elastic properties
nu = 0.2; # elastic Poisson's ratio
sigma_0 = 12E6;

#########
# Times of the experimental data
#########

data       = open('fluage.data','r')

liste = []
liste = data.readlines()
nb_data = len(liste)
data.close()

temps = NumericalSample(nb_data,1)

i=0
for j in range(len(liste)):
  ligne1 = liste[j].split()
  temps[i,0]=float(ligne1[0])
  i = i+1

######
# Model definition
######

def buildModel(temps):

  # Express eta_0_is in terms of the other parameters
  eta_0_is = "eta_0_id * (" + str((nu + 1.0) / (1.0 - 2.0 * nu)) + ")"
  # Convert all the fixed numerical values into strings
  # Protect them with parenthesis to avoid troubles with negative values
  sigma_0_str = str(sigma_0)
  t_str = "(" + str(temps) + ")"
  factor = np.sqrt(1.0 + 2.0 * nu**2)
  term0 = "(" + str(factor) + ")"
  term1 = "(" + str(3.0 * sigma_0 * factor) + ")"
  term2 = "(" + str(1.0 - 2.0 * nu) + ")"
  term3 = "(" + str(1.0 + nu) + ")"
  # The expression, at last...
  return NumericalMathFunction(["k_rs", "k_rd", "eta_rs", "eta_rd", "eta_0_id", "kappa", "log_sigma"], [sigma_0_str + " / 3 * (1.0 / k_rs * (1.0 - exp(-k_rs / eta_rs * " + t_str + ")) + 2 / k_rd * (1.0 - exp(-k_rd / eta_rd * " + t_str + "))) + kappa / " + term0 + " * log(1.0 + " + term1 + " / (kappa * (" + eta_0_is + " * " + term2 + " + 2.0 * eta_0_id * " + term3 + ")) * " + t_str + ")", "exp(log_sigma)"])

# NumPy version of the model

T = np.array(temps).ravel()

def modeleDeformation(theta):
    k_rs,k_rd,eta_rs,eta_rd,eta_id,kappa, log_sigma = theta

    eta_is = (1+nu)/(1-2*nu)*eta_id
    B   = kappa/(np.sqrt(1+2*nu*nu))
    C   = sigma_0*np.sqrt(1+2*nu*nu)/(kappa*(1+nu)*eta_id)
    C2  = sigma_0*np.sqrt(1+2*nu*nu)/(kappa*(1-2*nu)*eta_is)
    D   = -k_rd/eta_rd
    D2  = -k_rs/eta_rs

    return np.hstack((sigma_0/3*(1/k_rs*(1-np.exp(D2*T))+2/k_rd*(1-np.exp(D*T))) \
            + B*np.log(1+(3*sigma_0*np.sqrt(1+2*nu**2)*T)/(kappa*(eta_is*(1-2*nu)+2*eta_id*(1+nu)))),\
            np.exp(log_sigma)))

###
# Strain measurements
###

yobs= [ 0., -1.71E-19, 2.20E-05, 1.15E-04, 3.54E-05, 6.47E-05, 1.40E-04, 1.99E-04, \
 2.35E-04, 2.43E-04, 2.54E-04, 2.60E-04, 2.78E-04, 2.99E-04, 3.07E-04, 3.18E-04, \
 3.37E-04, 3.44E-04, 3.50E-04, 3.56E-04, 3.65E-04, 3.83E-04, 4.03E-04, 4.26E-04, \
 4.23E-04, 4.38E-04, 4.40E-04, 4.50E-04, 4.78E-04, 4.85E-04, 5.07E-04, 5.07E-04, \
 5.27E-04, 5.39E-04, 5.57E-04, 5.71E-04, 6.08E-04, 6.92E-04, 7.60E-04, 8.25E-04, \
 8.58E-04, 9.87E-04, 1.12E-03, 1.21E-03, 1.31E-03, 1.41E-03, 1.51E-03]

obsSize = len(yobs)

yobs = np.atleast_2d(yobs).T
yobs = NumericalSample(yobs)

modelCollection=NumericalMathFunctionCollection()

for iaux in range(nb_data):
  modelCollection.add(buildModel(temps[iaux,0]))

model = NumericalMathFunction(modelCollection)

conditional = Normal()

# reference
Moyenne = [2.26E10, 3.46E10, 1.97E17, 1.E17, 2.51E18, 2.51E-3]

# give more or less weight to the model: 1/sigma^2
# very important variable
StdDev = NumericalPoint([elt for elt in NumericalPoint(np.hstack(([0.15*elt for elt in Moyenne],-np.log(.00025))))])
CorrMatrix = CorrelationMatrix(7)

for iaux in range(7):
  CorrMatrix[iaux,iaux] = StdDev[iaux]*StdDev[iaux]

Prior = Normal(NumericalPoint(np.hstack((Moyenne,.00025))),CorrMatrix)

Theta0 = Prior.getMean()

print "Moyenne prior " , NumericalPoint(np.hstack((Moyenne,.00025)))
print "Prior.getMean : " , Prior.getMean()


proposal = DistributionCollection()
for iaux in range(len(StdDev)):
    width = 0.15*StdDev[iaux]
    proposal.add(Uniform(-width,width))

RWMHsampler = RandomWalkMetropolisHastings(Prior,conditional,model,yobs,Theta0,proposal)

strategy = CalibrationStrategyCollection()
aStrategy = CalibrationStrategy(Interval(0.,1.))
for iaux in range(7):
  strategy.add(aStrategy)

RWMHsampler.setCalibrationStrategyPerComponent(strategy)

RWMHsampler.setVerbose(True)
RWMHsampler.setThinning(1)
RWMHsampler.setBurnIn(100)

sampleSize = 200

t1 = time.time()
sample = RWMHsampler.getSample(sampleSize)
t2 = time.time()
print 'acceptance rate = ',
print RWMHsampler.getAcceptanceRate()


myRandomVector = PosteriorRandomVector(RWMHsampler)
# INCREASE TO 100000 to get very flat distributions
n = 5000

posterior_sample = myRandomVector.getSample(n)
np_posterior_sample = np.array(posterior_sample)
#np_posterior_sample = np.array(sample)

post_mean = np_posterior_sample.mean(axis=0)

#Posterior plot
nsample = 1000

pl.clf()
pl.figure(999)
for i in xrange(nsample):
    bb = pl.plot(T/3600./24./365.,modeleDeformation(np_posterior_sample[np.random.randint(n)])[:-1], 'r:')

cc = pl.plot(T/3600./24./365.,modeleDeformation(post_mean)[:-1], 'k', lw=2)
aa = pl.scatter(T/3600./24./365.,yobs, s=20, c='g',zorder=10)

pl.legend((aa, bb, cc), ("obs", "Loi a posteriori", "Moyenne a posteriori"), loc="upper left")
pl.xlabel("Temps (annees)")
pl.ylabel("Deformation differee")
pl.savefig("AjustementMoyenneAPosteriori.pdf")
pl.xlim(0,3.)
pl.ylim(0,0.001)
pl.savefig("AjustementMoyenneAPosterioriZoom.pdf")


pl.figure()
for i in range(7):
    #pl.figure(50+i)
    pl.subplot(7,1,i+1)
    pl.plot(range(n), np_posterior_sample[:,i])

filename = "cv_posterior_var.pdf"
pl.savefig(filename)

pl.clf()

t3 = time.time()

kernel = KernelSmoothing()

# Prior input PDFs (for comparison)
for i in range(7):
  priorPDF = Prior.getMarginal(i).drawPDF()
  priorPDFdrawable = priorPDF.getDrawable(0)
  posteriordrawable = kernel.build(posterior_sample.getMarginal(i)).drawPDF().getDrawable(0) 
  posteriordrawable.setLegendName('posterior')
  posteriordrawable.setColor('blue')
  priorPDF.add(posteriordrawable)
  repe = "prior_vs_posterior_var_entree_"+str(i+1)+"_sampleSize"+str(sampleSize)+"_n_"+str(n)
  if not(repe in os.listdir('.')) :
    os.mkdir(repe)
  os.chdir(repe)
  filename = "prior_vs_posterior_var_entree_"+str(i+1)+"_sampleSize"+str(sampleSize)+"_n_"+str(n)
  priorPDF.draw(filename)
  os.chdir('../.')

np.savetxt('echantillon_'+str(n),np_posterior_sample)
print 'calcul de la loi a posteriori : ',t2-t1
print 'calcul d un echantillon a posteriori : ',t3-t2

print "LogVraisemblance(Theta0) = ", RWMHsampler.computeLogLikelihood(Theta0)
print "LogVraisemblance(10*Theta0) = ", RWMHsampler.computeLogLikelihood(10*Theta0)

pl.clf()
pl.figure(7)
####
# Plot the experimental data
####
data       = open('fluage.data','r')
temps_data = []
eps1_data  = []

liste = []
liste = data.readlines()
data.close()

for j in range(1,len(liste)):
  ligne1 = liste[j].split()
  temps_data.append(float(ligne1[0]))
  eps1_data.append(float(ligne1[1]))
for ind in range(len(temps_data)):
  temps_data[ind] = temps_data[ind]/(3600.*24.*365.)

pl.plot(temps_data, eps1_data,'bo' ,label="Donnees essais exp.")
pl.legend(bbox_to_anchor=(.05, 1), loc=2, borderaxespad=0.)
pl.xlabel("Temps (annees)")
pl.ylabel("Deformation differee")
pl.title("Comparaison actu bayes et essais")

###
# Plot the posterior and the 95% confidence interval
####
# time discretization beyond the instants for which experimental
# data are available
nb_disc = 600

execfile('StepB.py')

sizeX = n

SampleMC     = model(posterior_sample)
priorSample  = Prior.getSample(sizeX)
sample_model_prior = model(priorSample)

meanMC       = SampleMC.computeMean()  
covarianceMC = SampleMC.computeCovariance()
meanMCprior       = sample_model_prior.computeMean()  
covarianceMCprior = sample_model_prior.computeCovariance()

# compute the 95% confidence interval of the meanMC estimator
borneinf_meanMC=[]
bornesup_meanMC=[]

alpha = 0.95

lower_bnds=np.array(SampleMC.computeQuantilePerComponent((1-alpha)/2.) )
upper_bnds=np.array(SampleMC.computeQuantilePerComponent((1+alpha)/2.) )
lower_bnds_pr=np.array(sample_model_prior.computeQuantilePerComponent((1-alpha)/2.) )
upper_bnds_pr=np.array(sample_model_prior.computeQuantilePerComponent((1+alpha)/2.) )

for i in range(dim_out):
  borneinf_meanMC.append(meanMC[i]-1.96*np.sqrt(covarianceMC[i, i])/np.sqrt(sizeX))
  bornesup_meanMC.append(meanMC[i]+1.96*np.sqrt(covarianceMC[i, i])/np.sqrt(sizeX))

########
# Convert times to years
########

TEMPS   = [];

# Final time: 60 years
T_FIN = 1.892E9;
TEMPS = np.zeros(dim_out)
for i in range(0,len(temps)):
    TEMPS[i] = np.array(temps[i])
T_FIN_data = TEMPS[nb_data-1]
for i in range(1,nb_disc+1):
    TEMPS[nb_data+i-1] = T_FIN_data+i*(T_FIN-T_FIN_data)/nb_disc

for ind in range(len(TEMPS)) :
  TEMPS[ind] = TEMPS[ind]/(3600.*24.*365.)

pl.clf()
pl.figure(100)
pl.xlabel("Temps (annees)")
pl.ylabel("Deformation differee")
pl.title("Comparaison actu bayes et essais")
pl.plot(temps_data, eps1_data,'bo' ,label="Donnees essais exp.")
pl.plot(TEMPS, meanMC,'g', label="Moyenne MC posterior")
pl.plot(TEMPS, meanMCprior,'k', label="Moyenne MC prior")
pl.plot(TEMPS,lower_bnds,'b',label="quantile 2.5 posterior")
pl.plot(TEMPS,upper_bnds,'b',label="quantile 97.5 posterior")
pl.plot(TEMPS,lower_bnds_pr,'r',label="quantile 2.5 prior")
pl.plot(TEMPS,upper_bnds_pr,'r',label="quantile 97.5 prior")
filename = "estimation_cv_MC_taille_"+str(sizeX)
pl.legend(loc='best')
pl.savefig(filename)
pl.xlim(0,3.)
pl.ylim(0,0.001)
filename="estimation_cv_MC_taille_restreint_"+str(sizeX)
pl.savefig(filename)

for ind in range(len(TEMPS)) :
  TEMPS[ind] = TEMPS[ind]*(3600.*24.*365.)

print 'moyenne de l echantillon posteriori = ',posterior_sample.computeMean()
print 'ecart type de l echantillon posteriori = ',posterior_sample.computeStandardDeviationPerComponent()

for i in range(6):
  print 'coefficient de variation variable '+str(i+1)+' : ',posterior_sample.computeStandardDeviationPerComponent()[i]/posterior_sample.computeMean()[i]

How to add a post

To add a post, one must have a Trac account. If you do not already have one, ask for one on the users@… mailing list.

python-randomfields: a Python module for identifying non-stationary non-Gaussian random fields

Hi all,

On the occasion of the Material Ageing Institute Scientific Network (MAI-SN) partnership (initiated by EdF, TEPCO and EPRI), Phimeca Engineering, EdF R&D and Institut Navier (ENPC) implemented a prototype Python module for the simulation and the identification of non-stationary non-Gaussian random fields based on the Karhunen-Loève expansion representation. Provided suitable measurements are available, this could help model the randomness in the material properties of ageing structures.

It is our pleasure to announce that this module is available for use and contribution. To this end, a dedicated GitHub repository has been created:

https://github.com/dubourg/python-randomfields

Please feel free to fork it or simply clone/download the code.

The main page of the repository displays a README file, which is highly recommended reading. It also features a non-exhaustive todo list. Contributions are welcome!

The module is essentially based on NumPy and the ever-improving Python bindings to OpenTURNS (Thanks J!). It implements a single object named KarhunenLoeveExpansion, which is self-documented using Python docstrings. The object discretizes the random field at instantiation by solving the Fredholm integral equation using a Legendre-Galerkin scheme. Integrals are numerically approximated using Gauss-Legendre quadrature (from OpenTURNS). Other Galerkin schemes could possibly be used (e.g. wavelet-based Galerkin schemes), but the Legendre scheme is the only one implemented yet. The instantiated object is callable and takes the field index vector (e.g. space, time, potatoes, ...) and the seed as input. See the references in the README file if this already sounds like gibberish to you!

The repository contains 4 scripts in addition to the module, illustrating how the KLE object can be used for simulation and identification. Regarding identification, once the sample paths have been transformed into a sample of coefficient vectors, we are back to basic statistics! Hence, OpenTURNS can be used to identify the joint distribution of this coefficient vector. Karhunen-Loève theory states the coefficients are zero-mean, unit-variance and uncorrelated (by construction), but their joint distribution is generally unknown if the field is not assumed to be Gaussian. Identifying such large random vectors is still a technically (if not scientifically) open question though, so...
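As a tiny illustration of that last point, here is a sketch of the identification step using OpenTURNS alone; the 3-dimensional normal sample is only a placeholder for the real sample of KL coefficient vectors, and the independent-copula choice is our assumption, not the module's:

import openturns as ot

# placeholder for the sample of KL coefficient vectors produced by the module
coefficients = ot.Normal(3).getSample(200)

# fit each marginal by kernel smoothing and combine them (independence assumed here)
marginals = [ot.KernelSmoothing().build(coefficients.getMarginal(i))
             for i in range(coefficients.getDimension())]
fitted = ot.ComposedDistribution(marginals)
print(fitted.getMean())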

Enjoy and give us feedback!

Marine Marcilhac & Vincent Dubourg

PS: Thanks to Marc Berveiller and Géraud Blatman at EdF for the very first feedback, which helped us reach this first public version.

Easy fitting using OpenTURNS

The development of Python functionalities to easily fit univariate statistical distributions to 1D samples is proposed.

These functionalities are available through two subpackages:

  • FitContinuousDistribution1D: fitting tests for continuous distributions.
  • FitDiscreteDistribution1D: fitting tests for discrete distributions

In both cases, the sample is tested against a catalog of all the distribution factories implemented in the OpenTURNS library. The fitting result object gives access to several services:

  • Get the list of all distributions for which the test has been done, sorted according to the BIC or Kolmogorov criterion.
  • Get the list of all distributions for which the test has been accepted according to the Kolmogorov criterion; this list is sorted according to the BIC or Kolmogorov criterion.
  • Get the list of distributions for which the test could not be done, and the reason.
  • Pretty print (tested distributions, accepted distributions, not tested distributions).

Here is an example of use:

import openturns as ot
from easyfitting import FitContinuousDistribution1D
# Continuous case
x = ot.Normal(0, 1).getSample(20)
pvalue = 0.10
fitting = FitContinuousDistribution1D(x, pvalue)
# Print all distributions tested and ranked according to BIC criterion
fitting.printTestedDistribution('BIC')
# Get this list
all_distributions_tested = fitting.getTestedDistribution()
# Print all distributions accepted and ranked according to KS criterion
fitting.printAcceptedDistribution('KS')
# Get this list
all_distributions_accepted = fitting.getAcceptedDistribution('KS')
# Print distributions that could not be tested and the reason
fitting.printExceptedDistribution()

The sources can be downloaded here.

We welcome any comments.

Regards,

Denis
Sofiane

ps: For the discrete case, the current OpenTURNS version is limited. Indeed, the library has not been updated to follow recent changes in the rot package. Thus only the Poisson and Geometric distributions have been correctly tested.

First binary release of the OpenTURNS-MIXMOD module

Hi,

Using the new cross-compilation framework for the modules, I built the first binary version of the openturns-mixmod module for Windows. It is compatible with the official 1.0 release of OpenTURNS and should run on both 32-bit and 64-bit versions of Windows, from XP to Seven.

Please open tickets if you find any bug.

Regards,

Régis

Slides of the fifth OpenTURNS User's day

Hi,

Here are some of the slides of the fifth OpenTURNS User's day. Some slides were not included due to the authors' restrictions.

Regards,

Régis LEBRUN

New distributed python wrapper

Hello,

a new Python wrapper is available on the souchaud branch: svn co https://svn.openturns.org/branches/souchaud (revision 2691 is OK)

Characteristics of the new wrapper:

  • ease of use: uses the Python language and provides a module (coupling_tools) that helps manipulate templates
  • transparent distribution of the computations over each core of the local machine.
  • transparent distribution of the computations over each core of several hosts (needs an SSH server on each remote host; not yet implemented on Windows).
  • transparent distribution of the computations on a remote cluster (not yet implemented)

An example is available in the branch: wrappers/WrapperTemplates/wrapper_python_distributed.
If you want to distribute the computations over several hosts, the python-paramiko module must be installed on the machine that launches the OpenTURNS script.

For further info on the DistributedPythonFunction and CouplingTools interface :
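In the meantime, here is a rough sketch of what template manipulation with coupling_tools can look like. It is based on the replace/execute/get_value helpers as they exist in the main OpenTURNS tree, so the names and signatures in this branch may differ, and the token strings and file names are purely illustrative:

from openturns import coupling_tools

# substitute the current input values into a templated input file
coupling_tools.replace("input_template.txt", "input.txt", ["@E@", "@F@"], [3.0, 4.0])
# run the external code
coupling_tools.execute("external_code input.txt output.txt")
# read the result back from the output file
y = coupling_tools.get_value("output.txt", token="Y=")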

This wrapper should be tested in order to give us enough feedback. It is still under development and some parts of its code have to be improved/cleaned up...

The overhead of this wrapper is around 0.05s per point (constant from 1 to 8 cores), so external codes that last less than 0.1s will not be sped up as much as theoretically possible (e.g. for 1000 points to compute, each point lasting 0.1s, the total computing time using one core is 1000*(0.1+0.05) = 150s, versus 100s without overhead). Tested on an i7 2.3GHz*8 without HT + SSD.
The overhead of the current C wrapper (e.g. in wrappers/WrapperTemplates/wrapper_calling_shell_command) is around 0.01s per point (it depends on the number of cores used: 1 core -> overhead = 0.007s, 8 cores -> overhead = 0.02s).

If you have a huge number of points that each last less than 0.01s, the best option is to implement the _exec_sample function directly (using the C wrapper or an OpenTURNSPythonFunction). If needed, the new wrapper could, in the future, allow implementing _exec_sample in order to reduce the overhead.
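For reference, overriding _exec_sample on a plain OpenTURNSPythonFunction amounts to the following minimal sketch (the toy formula is only an illustration):

from openturns import OpenTURNSPythonFunction, NumericalMathFunction

class FastFunction(OpenTURNSPythonFunction):
    def __init__(self):
        OpenTURNSPythonFunction.__init__(self, 2, 1)
    def _exec(self, x):
        # single-point evaluation
        return [x[0] + x[1]]
    def _exec_sample(self, xs):
        # whole-sample evaluation in a single Python call
        return [[x[0] + x[1]] for x in xs]

model = NumericalMathFunction(FastFunction())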

If you have any ideas or remarks, do not hesitate to leave a comment, but first check that they are not already in the todo file (wrappers/WrapperTemplates/wrapper_python_distributed/TODO.txt).

Regards.

Mathieu Souchaud

Two and a half implementations of a Scilab wrapper without any compilation

Hi,

In a post on the users list, someone asked for an easy way to wrap a Scilab code into an OpenTURNS function. Here are two possibilities:

  • using the generic wrapper
  • using the Python wrapping of OpenTURNS function, in two flavours

There are other possibilities I didn't have time to illustrate.

Even with this simple function (the sum of two floating point values), the generic wrapper is faster than the python wrapper by 34% on a quad-core computer.
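For the record, the Python-wrapping flavour boils down to something like the sketch below. This is not the attached implementation: the file names and the Scilab script 'sum.sce' (assumed to read input.txt, write output.txt and call exit()) are illustrative only.

import subprocess
from openturns import OpenTURNSPythonFunction, NumericalMathFunction

class ScilabSum(OpenTURNSPythonFunction):
    def __init__(self):
        OpenTURNSPythonFunction.__init__(self, 2, 1)
    def _exec(self, x):
        # write the input point for the Scilab script
        with open("input.txt", "w") as f:
            f.write("%.17e %.17e\n" % (x[0], x[1]))
        # run Scilab in batch mode on the script
        subprocess.check_call(["scilab", "-nwni", "-nb", "-f", "sum.sce"])
        # read back the single output value written by the script
        with open("output.txt", "r") as f:
            return [float(f.read())]

model = NumericalMathFunction(ScilabSum())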

Best regards,

Régis LEBRUN

How to use the openturns-mixmod module to create mixture of experts meta-models

Hi,

Using the new classification capabilities of the openturns-mixmod module, it is possible to create mixture-of-experts meta-models. These capabilities will be presented during the next OpenTURNS Users day (12th of June 2012); you can find the presentation attached, as well as the script.

Regards,

Régis

OT function decorator

Hi,

Here is an easier way to pass Python functions to OpenTURNS.

First, define this decorator for Python functions once (it could be proposed in a future version):

def FunctionDecorator(n, p):
  class otfunc(OpenTURNSPythonFunction):
      
    def __init__(self, func):
      OpenTURNSPythonFunction.__init__(self, n, p) 
      self.__class__.__name__ = func.__name__
      self._exec = func 
        
  return otfunc

Then apply it to your function by adding this one line:

@FunctionDecorator(2, 1)
def SIMPLEFUNC(X):
    Y = [ X[0] + X[1] ]    
    return Y

Then use it through OpenTURNS as usual:

myFunc = NumericalMathFunction( SIMPLEFUNC )
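A quick sanity check, nothing more than evaluating the wrapped function:

print(myFunc([1.0, 2.0]))  # prints the NumericalPoint holding the sum, i.e. 3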

This is open for discussion ...

J.