Why Python?

Why Python?
Hans Petter Langtangen
Center for Biomedical Computing
Simula Research Laboratory
Dept. of Informatics, University of Oslo
May 2010
Why should you consider Python for scientific computing?
The answer is in this talk!
Contents of the talk:
What Python can be used for
What Python code looks like
What about computational efficiency?
Pure Python and Python combined with Fortran, C, or C++
Real-world high-performance applications
http://folk.uio.no/hpl/WhyPython.pdf
Why Python?
Python is a dynamic object-oriented programming language that
can be used for many kinds of software development. It offers
strong support for integration with other languages and tools,
comes with extensive standard libraries, and can be learned in a
few days. Many Python programmers report substantial
productivity gains and feel the language encourages the
development of higher quality, more maintainable code.
—python.org
Popularity of major programming languages
[Bar chart: ratings in percent (TIOBE Index, April 2010) for C, Java, C++, PHP, VB, C#, Python, Perl, JavaScript; scale 0-20%]
Python is a convenient programming environment
Python is used for a variety of applications
In science and engineering, we can use Python to...
Replace Matlab (with a more powerful programming model)
Create new user-friendly interfaces to old codes
Write new simulation codes from scratch
Glue existing codes together (automate manual work)
Build problem solving environments
Python is often used as a better Unix shell language to
glue programs and manage files and directories
One can learn this in a few hours!
Real-world problem solving involves a lot of components
Python can be used to glue all these components
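As a taste of the shell-style glue tasks mentioned above, here is a small sketch using only the standard library (the file names are made up for illustration, and the work happens in a scratch directory so the snippet is self-contained):

```python
import glob, os, shutil, tempfile

# Work in a scratch directory so the example is self-contained
tmp = tempfile.mkdtemp()
for name in ['run1.dat', 'run2.dat', 'notes.txt']:
    with open(os.path.join(tmp, name), 'w') as f:
        f.write('data\n')

# Typical shell-style tasks: wildcard matching and copying files
datafiles = sorted(glob.glob(os.path.join(tmp, '*.dat')))
backup = os.path.join(tmp, 'backup')
os.mkdir(backup)
for path in datafiles:
    shutil.copy(path, backup)
```

The same pattern (glob, os, shutil, subprocess) replaces most ad hoc shell scripting and works unchanged on Windows, Mac, and Linux.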
Python as a Matlab-like computing environment
numpy: array computing with (almost) the same syntax as Matlab
matplotlib: high-quality graphics (+ some numerics)
Example: plot y(t) = t² e^(−t²) for t ∈ [0,3], using 51 points:
from matplotlib.pylab import *   # linspace, exp, plot
t = linspace(0, 3, 51)
y = t**2*exp(-t**2)
plot(t, y)
scipy: rich set of numerical functionality (Netlib etc.)
sympy: symbolic math
scitools: unified interface to the graphics packages Matplotlib, Gnuplot, Matlab, Vtk, OpenDX, ..., using a Matlab-like syntax
See http://mathesaurus.sourceforge.net/ for Matlab–Python–R–IDL translation tables
Python has a simple, minimal, clean syntax
Python
print "Hello, World!"
Java
class HelloWorld {
    public static void main (String argv[])
    {
        System.out.println("Hello, World!");
    }
}
C/C++
#include <stdio.h>
int main(int argc, char** argv) {
    printf("Hello, World!\n");
    return 0;
}
Example: find the roots of a quadratic equation
Solve ax² + bx + c = 0
Python (as Matlab) has transparent complex/real arithmetic:
from numpy.lib.scimath import sqrt
def roots(a, b, c):
    q = sqrt(b**2 - 4*a*c)   # q is real or complex
    return (-b + q)/(2*a), (-b - q)/(2*a)
x1, x2 = roots(-1, 10, 3.1)
Java: use class Complex, no operator overloading; Complex and double require very different implementations
C: in principle as Java
C++:
template <class T>
void roots(T a, T b, T c,
           std::complex<double>& r1, std::complex<double>& r2) {
    std::complex<double> q = std::sqrt(std::complex<double>(b*b - 4*a*c)); ...
}
Problem: r1, r2 are always complex, even when the roots are real
Easy: 1) several functions in a file, 2) math functions as arguments to functions
Python: compute f′(x) ≈ (f(x+h) − f(x))/h
from math import sin

def differentiate(f, x, h=1.0E-9):
    return (f(x+h) - f(x))/h

def g(x):
    return x**2*sin(2*x+1)

dg = differentiate(g, x=0.1)
Matlab version:
% file differentiate.m:
function df = differentiate(f, x, h)
f = fcnchk(f)
df = (feval(f, x+h) - feval(f, x))/h   % old syntax
df = (f(x+h) - f(x))/h                 % new syntax
% file g.m:
function y = g(x)
y = x*x*sin(2*x+1)
% file main.m:
result = differentiate(@g, 0.1, 1E-9)
The differentiate function in other languages
Representation of an f(x) argument:
Fortran: EXTERNAL variable (function pointer)
C: function pointer
C++: class with operator() function
Java: class derived from a base class for functions, with some "eval" method
A main reason for Python's clean syntax:
A variable (or function argument) in Python can hold any object, incl. function, class definition, file, number, socket, ...
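The point about variables holding any object is easy to demonstrate; in this sketch the same function argument receives a plain function and a class:

```python
from math import sin

def apply_to(obj, x):
    # obj can be any callable object: a plain function, a class, ...
    return obj(x)

v = sin                            # a variable holding a function
objects = [sin, abs, float, list]  # functions and classes in one list

r1 = apply_to(abs, -3)             # call a built-in function
r2 = apply_to(float, '2.5')        # call a class (type conversion)
```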
The Java code for the differentiate function
interface Func { // base class for functions f(x)
    public double eval(double x);
}
class G implements Func {
    public double eval(double x)
    { return Math.pow(x, 2)*Math.sin(2*x + 1); }
}
class ComputeDerivative {
    // Java has no default arguments, so h must be passed explicitly
    public static double differentiate(Func f, double x, double h)
    {
        return (f.eval(x+h) - f.eval(x))/h;
    }
}
class Demo {
    public static void main(String argv[])
    {
        G g = new G();
        double result = ComputeDerivative.differentiate(g, 0.1, 1.0E-9);
        System.out.println(result);
    }
}
Arithmetic expressions are automatically vectorized
Consider an arithmetic expression like exp(-pi*x)*x
Python can evaluate this expression for scalar or array x
The array version is much more efficient than a loop over scalars
Recall: Matlab requires modified operators when x is an array:
exp(-pi.*x).*x
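The claim can be checked directly with numpy (assumed installed): the expression is written once and evaluated for both a scalar and an array:

```python
from numpy import exp, pi, linspace

def expr(x):
    # The same expression works for scalar x and array x --
    # no special operators like Matlab's .* are needed
    return exp(-pi*x)*x

scalar_value = expr(0.5)   # evaluate for one number
x = linspace(0, 1, 5)      # five equally spaced points
array_values = expr(x)     # evaluate elementwise for the whole array
```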
With a smart class we can have "automatic differentiation"
Goal: given f(x), automatically create g(x) ≈ f′(x)
def f(x):
    return x**5

g = Derivative(f)
# now g(x) behaves as an ordinary function, 5*x**4 (approx)
value = g(0.1)
Implementation in terms of a class:
class Derivative:
    def __init__(self, f, h=1E-9):   # constructor
        self.f = f
        self.h = float(h)
    def __call__(self, x):           # operator ()
        f, h = self.f, self.h        # make short forms
        return (f(x+h) - f(x-h))/(2*h)
With functional programming we can have "automatic differentiation"
Goal: given f(x), automatically create g(x) ≈ f′(x)
def f(x):
    return x**5

g = derivative(f)
# now g(x) behaves as an ordinary function, 5*x**4 (approx)
value = g(0.1)
Implementation in terms of a function returning a function:
def derivative(f, h=1E-9):
    def df(x):
        return (f(x+h) - f(x-h))/(2*h)   # df remembers h and f
    return df
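A quick sanity check of the closure idea (the functions are repeated here so the snippet stands alone; h = 1E-6 is used instead of 1E-9 to keep roundoff in the central difference small): for f(x) = x**5 the derivative at x = 0.1 is 5*0.1**4 = 5e-4.

```python
def derivative(f, h=1E-6):
    def df(x):
        return (f(x+h) - f(x-h))/(2*h)   # df remembers f and h (a closure)
    return df

def f(x):
    return x**5

g = derivative(f)
exact = 5*0.1**4              # analytical derivative 5x^4 at x = 0.1
error = abs(g(0.1) - exact)
```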
Is Python slow? Yes and no...
Loops over long arrays run slowly
Such loops should be vectorized using numpy, or migrated to compiled code (Fortran, C, C++)
Many other tasks run quite fast (e.g., text processing)
Example:
Read 100,000 (x, y) data points from an ASCII file and write out x and f(y) = y⁵ sin(y)
Pure Python: 4 s
Pure Perl: 3 s
Pure C (fscanf/fprintf): 1 s
Pure C++ (iostream): 3.6 s
Pure C++ (buffered streams): 2.5 s
Numerical Python: 2.2 s
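The benchmarked task can be sketched in pure Python as follows (with a small synthetic input file, since the original 100,000-point data set is not part of the talk):

```python
import math, os, tempfile

# Create a small synthetic (x, y) data file for illustration
infile = os.path.join(tempfile.mkdtemp(), 'xy.dat')
with open(infile, 'w') as f:
    for i in range(1000):
        f.write('%g %g\n' % (i*0.01, i*0.001))

# The task: read x and y, write out x and f(y) = y^5 * sin(y)
outfile = infile + '.out'
with open(infile) as fin, open(outfile, 'w') as fout:
    for line in fin:
        x, y = [float(word) for word in line.split()]
        fout.write('%g %g\n' % (x, y**5*math.sin(y)))

nlines = sum(1 for line in open(outfile))
```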
A case study on numerical Python programming
Solve ∂²u/∂t² = ∇·(k(x,y)∇u) with finite differences
Plain Python code:
def timeloop(t, t_stop, dt, dx, dy, u, um, up, k):
    while t <= t_stop:
        t += dt
        up = calculate_u(dt, dx, dy, u, um, up, k)
        um[:] = u
        u[:] = up

def calculate_u(dt, dx, dy, u, um, up, k):
    for i in xrange(1, u.shape[0]-1):
        for j in xrange(1, u.shape[1]-1):
            up[i][j] = 2*u[i][j] - um[i][j] + \
                (dt/dx)**2*(
                (0.5*(k[i+1][j] + k[i][j])*(u[i+1][j] - u[i][j]) -
                 0.5*(k[i][j] + k[i-1][j])*(u[i][j] - u[i-1][j]))) + \
                (dt/dy)**2*(
                (0.5*(k[i][j+1] + k[i][j])*(u[i][j+1] - u[i][j]) -
                 0.5*(k[i][j] + k[i][j-1])*(u[i][j] - u[i][j-1])))
    return up
Python + compiled loops = high-performance computing
Solve ∂²u/∂t² = ∇·(k(x,y)∇u) with finite differences
CPU times relative to pure Fortran:

Implementation                     arith.   harm.
Pure Fortran                       1.0      1.0
scipy.weave                        1.2      1.0
Instant                            1.2      1.0
f2py                               1.2      1.0
Cython                             1.3      1.0
Vectorized code (shifted slices)   13       2.9
Plain Python                       760      80
Python + F2PY (1)
calculate_u_code = """
      subroutine calculate_u(dt, dx, dy, u, um, up, k, n, m)
      integer n, m
      real*8 u(0:n,0:m), um(0:n,0:m)
      real*8 up(0:n,0:m), k(0:n,0:m)
      real*8 dt, dx, dy, h
Cf2py intent(in) u, um, k
Cf2py intent(out) up
      integer i, j
C     note: h = (dt/dx)**2 is used in both directions (assumes dx = dy)
      h = (dt/dx)*(dt/dx)
      do j = 1, m-1
         do i = 1, n-1
            up(i,j) = 2*u(i,j) - um(i,j) +
     &        h*((0.5*(k(i+1,j) + k(i,j))*(u(i+1,j) - u(i,j)) -
     &        0.5*(k(i,j) + k(i-1,j))*(u(i,j) - u(i-1,j)))) +
     &        h*((0.5*(k(i,j+1) + k(i,j))*(u(i,j+1) - u(i,j)) -
     &        0.5*(k(i,j) + k(i,j-1))*(u(i,j) - u(i,j-1))))
         end do
      end do
      return
      end
"""
Python + F2PY (2)
import numpy.f2py as f2py
f2py.compile(calculate_u_code, modulename='fortranloop',
             verbose=0, extra_args=compiler)  # compiler flags set elsewhere
from fortranloop import calculate_u

def timeloop(t, t_stop, dt, dx, dy, u, um, up, k):
    while t <= t_stop:
        t += dt
        up = calculate_u(dt, dx, dy, u, um, up, k)
        um[:] = u
        u[:] = up
Python + Weave (1)
Note: all forthcoming snippets are taken from a 2D version with arithmetic mean, to simplify and shorten the code
from scipy import weave

def calculate_u(dt, dx, dy, u, um, up, k):
    n, m = u.shape
    c_code = r'''
    int i, j;
    double dt_ = dt;
    double h = pow((dt_/dx), 2);
    for (i=1; i<n-1; i++)
      for (j=1; j<m-1; j++)
        up[i*n+j] = 2*u[i*n+j] - um[i*n+j] +
          h*((0.5*(k[(i+1)*n+j] +
          k[i*n+j])*(u[(i+1)*n+j] - u[i*n+j]) - 0.5*(k[i*n+j] + k[(i-1)*n+j])*(u[i*n+j] ...
    '''
    err = weave.inline(c_code, ['n', 'm', 'dt', 'dx', 'dy',
                                'u', 'um', 'up', 'k'])
    return up
Python + Weave (2)
def timeloop(t, t_stop, dt, dx, dy, u, um, up, k):
    while t <= t_stop:
        t += dt
        up = calculate_u(dt, dx, dy, u, um, up, k)
        um[:] = u
        u[:] = up
Python + Instant (1)
calculate_u_code = r'''
void calculate_u(double dt, double dx, double dy,
                 int nu, int* pu, double* u,
                 int num, int* pum, double* um,
                 int nup, int* pup, double* up,
                 int nk, int* pk, double* k) {
  int i=0, j=0, n = pu[0], m = pu[1];
  double h = pow((dt/dx), 2);
  for (i=1; i<n-1; i++)
    for (j=1; j<m-1; j++) {
      up[i*n+j] = 2*u[i*n+j] - um[i*n+j] +
        h*((0.5*(k[(i+1)*n+j] +
        k[i*n+j])*(u[(i+1)*n+j] - u[i*n+j]) -
        0.5*(k[i*n+j] + k[(i-1)*n+j])*
        (u[i*n+j] - u[(i-1)*n+j]))) + ...
    }
}
'''
Python + Instant (2)
import instant
from numpy import get_include
instant.build_module(code=calculate_u_code,
    system_headers=["numpy/arrayobject.h"],
    include_dirs=[get_include()],
    init_code="import_array();",
    modulename="calculate_u",
    arrays=[['nu',  'pu',  'u'],
            ['num', 'pum', 'um'],
            ['nup', 'pup', 'up'],
            ['nk',  'pk',  'k']])
from calculate_u import calculate_u

def timeloop(t, t_stop, dt, dx, dy, u, um, up, k):
    while t <= t_stop:
        t += dt
        calculate_u(dt, dx, dy, u, um, up, k)
        um[:] = u
        u[:] = up
The idea of Cython: first consider a plain Python code
Trapezoidal integration:

  ∫ₐᵇ f(x) dx ≈ h(½f(a) + ½f(b) + Σᵢ₌₁ⁿ⁻¹ f(a + ih)),   h = (b − a)/n

Python code:
def f(x):
    return 2*x*x + 3*x + 1

def trapez(a, b, n):
    h = (b-a)/float(n)
    sum = 0.5*(f(a) + f(b))
    for i in range(1, n):   # interior points i = 1, ..., n-1
        sum += f(a + i*h)
    return sum*h
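A sanity check (functions repeated so the snippet stands alone): for f(x) = 2x² + 3x + 1 the exact integral over [0, 1] is 2/3 + 3/2 + 1 = 19/6, and the trapezoidal result should converge to it:

```python
def f(x):
    return 2*x*x + 3*x + 1

def trapez(a, b, n):
    h = (b - a)/float(n)
    s = 0.5*(f(a) + f(b))
    for i in range(1, n):     # interior points i = 1, ..., n-1
        s += f(a + i*h)
    return s*h

exact = 2.0/3 + 3.0/2 + 1     # exact integral of f over [0, 1] = 19/6
error = abs(trapez(0, 1, 1000) - exact)
```

For a quadratic integrand the trapezoidal error is O(h²), so with n = 1000 the error is tiny.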
The idea of Cython: add type declarations++
Cython code, 30 times faster (≈ the speed of Fortran/C):
cdef f(double x):
    return 2*x*x + 3*x + 1

cpdef trapez(double a, double b, int n):
    cdef double h = (b-a)/float(n)
    cdef double sum = 0.5*(f(a) + f(b))
    cdef int i
    for i in range(1, n):   # interior points only
        sum += f(a + i*h)
    return sum*h

Incremental optimization:
Optimize, don't rewrite
Only the pieces you need
Python + Cython
import numpy as np
cimport numpy as np
cimport cython

DTYPE = np.float
ctypedef np.float_t DTYPE_t

@cython.boundscheck(False)
def calculate_u(float dt, float dx, float dy,
        np.ndarray[DTYPE_t, ndim=2, negative_indices=False, mode='c'] u,
        np.ndarray[DTYPE_t, ndim=2, negative_indices=False, mode='c'] um,
        np.ndarray[DTYPE_t, ndim=2, negative_indices=False, mode='c'] up,
        np.ndarray[DTYPE_t, ndim=2, negative_indices=False, mode='c'] k):
    cdef int n = u.shape[0]-1
    cdef int m = u.shape[1]-1
    cdef int i, j, start = 1
    for i in xrange(start, n):
        for j in xrange(start, m):
            up[i,j] = 2*u[i,j] - um[i,j] + (dt/dx)**2*((0.5*(k[i+1,j] +
                k[i,j])*(u[i+1,j] - u[i,j]) - 0.5*(k[i,j] + k[i-1,j])*
                (u[i,j] - u[i-1,j]))) + (dt/dy)**2*((0.5*(k[i,j+1] +
                k[i,j])*(u[i,j+1] - u[i,j]) - 0.5*(k[i,j] + k[i,j-1])*
                (u[i,j] - u[i,j-1])))
    return up
Python-F77 as a better C++ or F95/F2008?
Most new scientific codes are written in C++ (or Fortran 90)
C++ (F95) combines the efficiency of C (F77) with the power
of user-defined data types (classes/modules)
Python-F77 codes represent the same duality,but
Python classes are more powerful and flexible than C++
classes or F95 modules
F77 gives ultimate performance for the CPU-critical parts of
the code
Parameterization via templates is unnecessary
The overall code is shorter,cleaner,more flexible,easier to
read,easier to modify
mpi4py can parallelize Python codes
High-level interface to MPI:
Arbitrary Python objects can be sent via MPI
Efficient treatment of arrays
Can parallelize "black-box" legacy codes via domain decomposition
Older alternatives: pypar, PyMPI
A glimpse of parallel pypar code
Task: send/receive boundary data u(x = const, y, z)
# Extract boundary data:
outdata = u[nx-1,:,:]
# Send to neighboring domain/processor:
pypar.send(outdata, destination=neighbor_id, bypass=True)
# Receive data from the same neighbor:
pypar.receive(neighbor_id, buffer=indata, bypass=True)
# Insert received data in u array:
u[nx,:,:] = indata
Pypar performance is good
3D test example: ∂²u/∂t² = ∇·(a(x,y,z)∇u)
Speedup results:

       Mixed Python-C        Pure C
  P    Time      Speedup     Time      Speedup
  1    1050.81   N/A
  2    532.849   1.96        532.715   1.97
  4    271.093   3.86        268.518   3.91
  8    140.105   7.46        136.611   7.69
  16   73.599    14.20       70.147    14.98
  32   38.743    26.99       38.816    27.07
  64   21.556    48.49       21.162    49.66
  128  11.960    87.39       11.459    91.70
Parallelizing legacy codes (1)
We had an old F77 code for solving the Boussinesq wave equations for tsunami simulation:

  ∂η/∂t + ∇·q = 0

  q = (H + αη)∇φ + εH(1/6 ∂η/∂t − 1/3 ∇H·∇φ)∇H

  ∂φ/∂t + α/2 ∇φ·∇φ + η − ε/2 H∇·(H∇ ∂φ/∂t) + ε/6 H² ∇² ∂φ/∂t = 0

The code applies sophisticated numerics in tricky code that would be hard to parallelize
Parallelizing legacy codes (2)
Ideas:
Apply overlapping domain decomposition for parallelizing the mathematical problem - code that algorithm in Python
Let Python call the old F77 code as subdomain solver, fed with the right boundary conditions
No modifications of the old F77 code
All parallelization "outside", in Python
Speedup results:

  #cores     1     4     16    64    128   512
  wall time  234   63    22    5.8   3.8   2.2
  speedup          3.7   11    41    92    106

40 time steps, 2000 × 2000 cells
python 2.6, numpy 1.3, pypar 2.1.4
Some scientific software projects for large-scale simulation
General FE PDE solution: FEniCS, www.fenics.org
General FE PDE solution: PyADH, https://adh.usace.army.mil/pyadh
Discontinuous Galerkin FE PDE solver: hedge, mathema.tician.de/software/hedge
Quantum mechanics: GPAW, wiki.fysik.dtu.dk/gpaw
N-body dynamics: pNbody, obswww.unige.ch/~revaz/pNbody
FEniCS: simulation code in Python w/autogenerated C++ code
Specify equation/model as a variational problem in Python: a(u,v) = L(v)
Run-time generation of tailored C++ code
Link to general finite element and linear algebra libraries
Major result: generality + efficiency!
Support for "exotic" finite elements
http://www.fenics.org
Hedge: Python + PyCUDA + MPI for PDE solution
Discontinuous Galerkin methods for PDEs
Written in Python with small pieces in C++
Runs on CPU+GPU (via PyCUDA)
Runs on multiple CPUs+GPUs via MPI
PDEs: compressible Navier-Stokes, Maxwell's, Euler, ...
Hedge: 16 GPUs vs 64 Xeons
PyADH: Python-based generic PDE solver environment
Written in Python with slow pieces ported to C
Parallelization in Python
FEM, disc. Galerkin, mixed methods, variational multiscale, ...
PDEs: Richards', two-phase porous media, Navier-Stokes, RANS, free surface, elasticity, ...
Scales to (at least) 2048 processors
Machines: Cray XT3/4s, SGI Altix ICE, ...
Python has strong support for text processing (1)
Read a complex number from a text:
text = ' ( -3,1.4) '
Python code:
re, im = text.strip()[1:-1].split(',')
# re and im are strings, turn them into real numbers:
re, im = float(re), float(im)
Python has convenient tools for
String manipulation
Regular expressions
XML
Parsing
Python has strong support for text processing (2)
Some output from a compiler optimization test:
f95 -c -O0 versions/main_wIO.f F77WAVE.f
f95 -o app -static main_wIO.o F77WAVE.o -lf2c
CPU-time:255.97
f95 -c -O1 versions/main_wIO.f F77WAVE.f
f95 -o app -static main_wIO.o F77WAVE.o -lf2c
CPU-time:252.47
f95 -c -O2 -floop-optimize2 versions/main_wIO.f F77WAVE.f
f95 -o app -static main_wIO.o F77WAVE.o -lf2c
CPU-time:252.40
...
Graph CPU-time against optimization flags:
import re

output = open('output.dat', 'r')
CPU_vs_opt = {}                             # start with an empty dict (hash table)
for line in output:
    if re.search(r'^f95\s+-c\s+', line):    # regular expression
        words = line.split()
        optimizations = ' '.join(words[2:-2])
        CPU_vs_opt[optimizations] = 0       # init hash key
    if line.startswith('CPU-time:'):
        CPU_vs_opt[optimizations] = float(line.split(':')[1])
Turning text files into code
Code can be constructed and executed at run time
Consider an input file with syntax like
set heat conduction = 5.0
set dt = 0.1
set rootfinder = bisection
set bc = sin(x)*sin(y)*exp(-0.1*t) is function of (x,y,t)
set source = V*exp(-q*t) is function of (t) with V=0.1,q=1
We want to read such files and define (in this example)
heat conduction: float with value 5.0
rootfinder: string with value 'bisection'
bc(x,y,t): function
source(t, V=0.1, q=1): function
Can we?
A solution can be accomplished in a few lines of code!
With about 40 lines of Python code (using some advanced constructions) we can read such files and turn the information into new variables
Possible next step: automatic generation of a GUI for setting the variables
See the Python Scripting for Computational Science book for complete details (ch. 11 and 12)
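A minimal sketch of the idea, far simpler than the 40-line version the book describes (the helper name parse_line and its parsing rules are improvised for illustration, and the "with V=0.1,q=1" extension is left out):

```python
import re
from math import sin, exp    # names available to the eval'd expressions

def parse_line(line):
    # Turn one "set name = value" line into (name, Python object)
    name, rhs = line.replace('set ', '', 1).split('=', 1)
    name, rhs = name.strip(), rhs.strip()
    m = re.search(r'(.*)\s+is function of \(([^)]*)\)', rhs)
    if m:   # build a function at run time from the expression text
        expr, args = m.group(1).strip(), m.group(2)
        return name, eval('lambda %s: %s' % (args, expr))
    try:
        return name, eval(rhs)       # a number (or other expression)
    except NameError:
        return name, rhs             # a plain string like 'bisection'

_, dt = parse_line('set dt = 0.1')
_, rootfinder = parse_line('set rootfinder = bisection')
_, bc = parse_line('set bc = sin(x)*sin(y)*exp(-0.1*t) is function of (x,y,t)')
```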
Keyword arguments allow flexible function interfaces
User-friendly environments (Matlab, Maple, R, ...) allow flexible function interfaces - Python does so too
Novice user:
plot(f)   # f is some object with data
More control of the plot:
plot(f, legend='f', xlabel='time')
Even more fine-tuning:
plot(f, legend='f', xlabel='time', ylabel='response',
     title='Damped System', font='Times 18',
     hardcopy='plot.eps')
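Such interfaces are straightforward to build with keyword arguments; below is a hedged sketch (not the scitools implementation) of how a plot-like function can merge user options with defaults via **kwargs:

```python
def plot(data, **kwargs):
    # Default settings, overridden by whatever the caller supplies
    options = {'legend': None, 'xlabel': 'x', 'ylabel': 'y',
               'title': '', 'hardcopy': None}
    options.update(kwargs)
    # A real implementation would pass data and options on to a
    # graphics backend; here the merged options are just returned.
    return options

opts = plot([1, 2, 3], legend='f', xlabel='time')
```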
Testing a variable's type
def plotdata(x, y, xrange=None):
    # xrange can be the range of x (default)
    # xrange can be a list [min, max]
    # xrange can be a float, leading to the range [0, xrange]
    if xrange is None:
        xmin, xmax = min(x), max(x)
    elif isinstance(xrange, (list, tuple)):
        xmin = xrange[0]; xmax = xrange[1]
    elif isinstance(xrange, (float, int)):
        xmin = 0; xmax = xrange
    else:
        raise TypeError('xrange has wrong type %s' % type(xrange))
    ...
Python is well suited for coding graphical user interfaces
Support for many GUI toolkits: Tk, Qt, Gtk, WxWidgets, MFC, java.swing, ...
Extensive support for CGI scripts for Web interfaces
Comprehensive Web applications can be written with Plone, TurboGears, Django, ...
Conclusions on "Why Python?"
Python...
is an easy-to-learn, fun, clean, flexible, popular and very productive programming language
is a better Unix shell on Windows/Mac/Linux
may act as a Matlab replacement
can glue existing programs
brings new life to old Fortran, C or C++ codes
can be used to write new HPC codes