# FeenoX Software Design Specification

Abstract. This Software Design Specification (SDS) document applies to an imaginary Software Requirements Specification (SRS) document issued by a fictitious agency asking vendors to offer a free and open source cloud-based computational tool to solve engineering problems. The latter can be seen as a request for quotation and the former as an offer for the fictitious tender. Each section of this SDS addresses one section of the SRS. The original text from the SRS is quoted at the very beginning of each section, before the actual SDS content.

# 1 Introduction

A cloud-based computational tool (hereinafter referred to as the tool) is required in order to solve engineering problems following the current state-of-the-art methods and technologies impacting the high-performance computing world. This (imaginary but plausible) Software Requirements Specification document describes the mandatory features this tool ought to have and lists some features which it would be nice for the tool to have. It also defines requirements and guidelines about architecture, execution and interfaces in order to fulfill the needs of cognizant engineers as of 2021 (and the years to come).

On the one hand, the tool should allow users to solve industrial problems under stringent efficiency ([@sec:efficiency]) and quality ([@sec:qa]) requirements. It is therefore mandatory to be able to assess the source code for

• independent verification, and/or
• performance profiling, and/or
• quality control

by qualified third parties from all around the world, so it has to be open source according to the definition of the Open Source Initiative.

On the other hand, the initial version of the tool is expected to provide a basic functionality which might be extended ([@sec:objective] and [@sec:extensibility]) by academic researchers and/or professional programmers. It thus should also be free—in the sense of freedom, not in the sense of price—as defined by the Free Software Foundation. There is no requirement on the pricing scheme, which is up to the vendor to define in the offer along with the detailed licensing terms. These should allow users to solve their problems the way they need and, eventually, to modify and improve the tool to suit their needs. If they cannot program themselves, they should have the freedom to hire somebody to do it for them.

Besides noting that software being free (regarding freedom, not price) does not imply the same as being open source, the requirement is clear in that the tool has to be both free and open source, a combination which is usually called FOSS. This condition leaves all of the well-known non-free (i.e. wrongly-called “commercial”) finite-element solvers in the market (NASTRAN, Abaqus, ANSYS, Midas, etc.) out of the tender.

FeenoX is licensed under the terms of the GNU General Public License version 3 or, at the user's convenience, any later version. This means that users get the four essential freedoms:

1. The freedom to run the program as they wish, for any purpose.
2. The freedom to study how the program works, and change it so it does their computing as they wish.
3. The freedom to redistribute copies so they can help others.
4. The freedom to distribute copies of their modified versions to others.

So a free program has to be open source, but it also has to explicitly provide the four freedoms above, both through the written license and through the mechanisms available to get, modify, compile, run and document these modifications. That is why licensing FeenoX as GPLv3+ also implies that the source code and all the scripts and makefiles needed to compile and run it are available for anyone that requires them. Anyone wanting to modify the program, either to fix bugs, improve it or add new features, is free to do so. And if they do not know how to program, they have the freedom to hire a programmer to do it without needing to ask for permission from the original authors.

It should be noted that not only is FeenoX free and open source, but so are all of the libraries it depends on (and their dependencies). It can also be compiled using free and open source build tool chains running on free and open source operating systems. In addition, the FeenoX documentation is licensed under the terms of the GNU Free Documentation License v1.3 (or any later version).

## 1.1 Objective

The main objective of the tool is to be able to solve engineering problems which are usually cast as differential-algebraic equations (DAEs) or partial differential equations (PDEs), such as

• heat conduction
• mechanical elasticity
• structural modal analysis
• frequency studies
• electromagnetism
• chemical diffusion
• process control dynamics
• computational fluid dynamics

on one or more mainstream cloud servers, i.e. computers with hardware and operating systems (further discussed in [@sec:architecture]) that allow them to be available online and accessed remotely, either interactively or automatically by other computers as well. Other architectures such as high-end desktop personal computers or even low-end laptops might be supported, but they should not be the main target.

The initial version of the tool must be able to handle a subset of the above list of problem types. Afterward, the set of supported problem types, models, equations and features of the tool should grow to include other models as well, as required in [@sec:extensibility].

The choice of the initially supported features is based on the types of problems that FeenoX’ precursor codes (namely wasora, Fino and milonga, referred to as “previous versions” from now on) have been supporting for more than ten years now. It is also a first choice so the scope can be bounded. A subsequent road map and release plans can be designed as requested. FeenoX’ first version includes a subset of the required functionality, namely

• open and closed-loop dynamical systems
• Laplace/Poisson/Helmholtz equations
• heat conduction
• mechanical elasticity
• structural modal analysis
• multi-group neutron transport and diffusion

FeenoX is designed to be developed and executed under GNU/Linux, which is the architecture of more than 95% of the internet servers which we collectively call “the cloud.” It should be noted that GNU/Linux is a POSIX-compliant version of UNIX and that FeenoX follows the rules of UNIX philosophy for the actual computational implementation. Besides POSIX, as explained further below, FeenoX also uses MPI which is a well-known industry standard for massive parallel executions of processes, both in multi-core hosts and multi-hosts environments. Finally, if performance and/or scalability are not important issues, FeenoX can be run in a (properly cooled) local PC or laptop.

The requirement to run in the cloud and scale up as needed rules out the open source solver CalculiX. There are other requirements in the SRS that also leave CalculiX out of the tender.

## 1.2 Scope

The tool should allow users to define the problem to be solved programmatically. That is to say, the problem should be completely defined using one or more files either…

1. specifically formatted for the tool to read such as JSON or a particular input format (historically called input decks in punched-card days), and/or
2. written in a high-level interpreted language such as Python or Julia.

It should be noted that a graphical user interface is not required. The tool may include one, but it should be able to run without any interactive user intervention other than the preparation of a set of input files. Nevertheless, the tool might allow a GUI to be used. For example, for basic usage involving simple cases, a user interface engine should be able to create these problem-definition files in order to give less advanced users access to the tool through a desktop, mobile and/or web-based interface, so as to run the actual tool without needing to manually prepare the actual input files.

However, for general usage, users should be able to completely define the problem (or set of problems, i.e. a parametric study) they want to solve in one or more input files and to obtain one or more output files containing the desired results, either a set of scalar outputs (such as maximum stresses or mean temperatures) and/or a detailed time and/or spatial distribution. If needed, a discretization of the domain may be taken as a known input, i.e. the tool is not required to create the mesh as long as a suitable mesher can be employed using a similar workflow as the one specified in this SRS.

The tool should define and document ([@sec:documentation]) the way the input files for solving a particular problem are to be prepared ([@sec:input]) and how the results are to be written ([@sec:output]). Any GUI, pre-processor, post-processor or other related graphical tool used to provide a graphical interface for the user should integrate into the workflow described in the preceding paragraph: a pre-processor should create the input files needed for the tool and a post-processor should read the output files created by the tool.

Indeed, FeenoX is designed to work very much like a transfer function between one or more input files and zero or more output files:

                            +------------+
mesh (*.msh)  }             |            |             { terminal
data (*.dat)  } input ----> |   FeenoX   |----> output { data files
input (*.fee) }             |            |             { post (vtk/msh)
                            +------------+

Technically speaking, FeenoX can be seen as a UNIX filter designed to read an ASCII-based stream of characters (i.e. the input file, which in turn can include other input files or read data from mesh and/or data files) and to write ASCII-formatted data into the standard output and/or other files. The input file can be created either by a human or by another program. The output stream and/or files can be read either by a human and/or by other programs. A quotation from Eric Raymond’s The Art of Unix Programming helps to illustrate this idea:

Doug McIlroy, the inventor of Unix pipes and one of the founders of the Unix tradition, had this to say at the time:

1. Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.

2. Expect the output of every program to become the input to another, as yet unknown, program. Don’t clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don’t insist on interactive input.

[…]

He later summarized it this way (quoted in “A Quarter Century of Unix” in 1994):

• This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.

Keep in mind that even though the quotes above and many FEA programs that are still mainstream today both date from the early 1970s, fifty years later those programs still

• Do not make only one thing well.
• Do complicate old programs by adding new features.
• Do not expect their output to become the input to another program.
• Do clutter output with extraneous information.
• Do use stringently columnar and/or binary input (and output!) formats.
• Do insist on interactive input.

For example, let us consider the famous chaotic Lorenz’ dynamical system. Here is one way of getting an image of the butterfly-shaped attractor using FeenoX to compute it and gnuplot to draw it. Solve

\begin{cases} \dot{x} &= \sigma \cdot (y - x) \\ \dot{y} &= x \cdot (r - z) - y \\ \dot{z} &= x y - b z \\ \end{cases}

for 0 < t < 40 with initial conditions

\begin{cases} x(0) = -11 \\ y(0) = -16 \\ z(0) = 22.5 \\ \end{cases}

and \sigma=10, r=28 and b=8/3, which are the classical parameters that generate the butterfly as presented by Edward Lorenz back in his seminal 1963 paper Deterministic non-periodic flow.

The following ASCII input file resembles the parameters, initial conditions and differential equations of the problem as naturally as possible:

PHASE_SPACE x y z     # Lorenz attractor’s phase space is x-y-z
end_time = 40         # we go from t=0 to 40 non-dimensional units

sigma = 10            # the original parameters from the 1963 paper
r = 28
b = 8/3

x_0 = -11             # initial conditions
y_0 = -16
z_0 = 22.5

# the dynamical system's equations written as naturally as possible
x_dot = sigma*(y - x)
y_dot = x*(r - z) - y
z_dot = x*y - b*z

PRINT t x y z        # four-column plain-ASCII output

Indeed, when executing FeenoX with this input file, we get four ASCII columns (t, x, y and z) which we can then redirect to a file and plot with a standard tool such as Gnuplot. Note the importance of relying on plain ASCII text formats both for input and output, as recommended by the UNIX philosophy and the rule of composition: other programs can easily create inputs for FeenoX, and other programs can easily understand FeenoX’ outputs. This is essentially how UNIX filters and pipes work.
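As a sanity check of the equations above (of the mathematics, not of FeenoX itself, which delegates the actual ODE/DAE integration to SUNDIALS), the same system can be integrated with a few lines of Python. This is only an illustrative sketch using a hand-rolled fixed-step RK4 scheme:

```python
# Fixed-step RK4 integration of the Lorenz system with the same
# parameters and initial conditions as the .fee input file above.
sigma, r, b = 10.0, 28.0, 8.0 / 3.0

def f(s):
    x, y, z = s
    return (sigma * (y - x), x * (r - z) - y, x * y - b * z)

def rk4_step(s, h):
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * h * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * h * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6.0 * (k1i + 2 * k2i + 2 * k3i + k4i)
                 for si, k1i, k2i, k3i, k4i in zip(s, k1, k2, k3, k4))

state = (-11.0, -16.0, 22.5)   # same initial conditions as the .fee file
h, t_end = 0.001, 40.0         # same time span, 0 < t < 40
t = 0.0
while t < t_end - 1e-12:
    state = rk4_step(state, h)
    t += h
```

Since the system is chaotic, the trajectory cannot be compared point-wise against FeenoX's output for long times, but it does stay on the bounded butterfly-shaped attractor.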

As already stated, FeenoX is designed and implemented following the UNIX philosophy in general and Eric Raymond’s 17 Unix Rules ([@sec:unix]) in particular. One of the main ideas is the rule of separation that essentially implies to separate mechanism from policy, which in the computational engineering world translates into separating the frontend from the backend. The usage of FeenoX to compute and of gnuplot to plot is a clear example of separation. Even though most FEA programs eventually separate the interface from the solver up to some degree, there are cases in which they are still dependent such that changing the former needs updating the latter.

From the very beginning, FeenoX is designed as a pure backend which should nevertheless provide appropriate mechanisms for different frontends to be able to communicate with it and to provide a friendly interface for the final user. Yet, the separation is complete in the sense that the nature of the frontends can radically change (say from a desktop-based point-and-click program to a web-based immersive augmented-reality application) without needing to modify the backend. Not only does following this path give far more flexibility, but development efficiency and quality are also encouraged, since programmers working on the lower levels of an engineering tool usually do not have the skills needed to write good user-experience interfaces, and vice versa.

In the very same sense, FeenoX does not discretize continuous domains for PDE problems itself, but relies on separate tools for this end. Fortunately, there already exists one meshing tool which is FOSS (GPLv2) and shares most (if not all) of the design basis principles with FeenoX: the three-dimensional finite element mesh generator Gmsh. Strictly speaking, FeenoX does not need to be used along with Gmsh but with any other mesher able to write meshes in Gmsh’s format .msh. But since Gmsh also

• is free and open source,
• works also in a transfer-function-like fashion,
• runs natively on GNU/Linux,
• has a similar (but more comprehensive) API for Python/Julia,
• etc.

it is a perfect match for FeenoX. Even more, it provides suitable domain decomposition methods (through other FOSS third-party libraries such as Metis) for scaling up large problems.

Let us solve the linear elasticity benchmark problem NAFEMS LE10 “Thick plate pressure.” Assuming a proper mesh has already been created in Gmsh, note how well the FeenoX input file matches the problem statement:

# NAFEMS Benchmark LE-10: thick plate pressure
PROBLEM mechanical DIMENSIONS 3
READ_MESH nafems-le10.msh   # mesh in millimeters

BC upper    p=1      # 1 MPa

# BOUNDARY CONDITIONS:
BC DCD'C'   v=0      # Face DCD'C' zero y-displacement
BC ABA'B'   u=0      # Face ABA'B' zero x-displacement
BC BCB'C'   u=0 v=0  # Face BCB'C' x and y displ. fixed
BC midplane w=0      #  z displacements fixed along mid-plane

# MATERIAL PROPERTIES: isotropic single-material properties
E = 210e3   # Young modulus in MPa
nu = 0.3    # Poisson's ratio

SOLVE_PROBLEM   # solve!

# print the direct stress y at D (and nothing more)
PRINT "sigma_y @ D = " sigmay(2000,0,300) "MPa"

The problem asks for the normal stress in the y direction \sigma_y at point “D,” which is what FeenoX writes (and nothing else, rule of economy):

$ feenox nafems-le10.fee
sigma_y @ D =  -5.38016  MPa
$

Also note that since there is only one material there is no need to explicitly link material properties to physical volumes in the mesh (rule of simplicity). And since the properties are uniform and isotropic, a single global scalar for E and another one for \nu are enough.

For the sake of visual completeness, post-processing data with the scalar distribution of \sigma_y and the vector field of displacements [u,v,w] can be created by adding one line to the input file:

WRITE_MESH nafems-le10.vtk sigmay VECTOR u v w

This VTK file can then be post-processed to create interactive 3D views, still screenshots, browser and mobile-friendly webGL models, etc. In particular, using Paraview one can get a colorful bitmapped PNG (the displacements are far more interesting than the stresses in this problem).

See https://www.caeplex.com for a mobile-friendly web-based interface for solving finite elements in the cloud directly from the browser.

Even though the initial version of FeenoX does not provide an API for high-level interpreted languages such as Python or Julia, the code is written in such a way that this feature can be added without a major refactoring. This will allow users to fully define a problem in a procedural way, also increasing flexibility.

# 2 Architecture

The tool must be aimed at being executed unattended on remote cloud servers which are expected to have a mainstream (as of the 2020s) architecture regarding operating system (GNU/Linux variants and other UNIX-like OSes) and hardware stack, such as

• a few Intel-compatible CPUs per host
• a few levels of memory caches
• a few gigabytes of random-access memory
• several gigabytes of solid-state storage

It should successfully run on

• bare-metal
• virtual servers
• containerized images

using standard compilers, dependencies and libraries already available in the repositories of most current operating system distributions.

Preference should be given to open source compilers, dependencies and libraries. Small problems might be executed in a single host but large problems ought to be split through several server instances depending on the processing and memory requirements. The computational implementation should adhere to open and well-established parallelization standards.

The ability to run on local desktop personal computers and/or laptops is not required, but it is suggested as a means of giving users the opportunity to test and debug small coarse computational models before launching the large computation on an HPC cluster or on a set of scalable cloud instances. Support for non-GNU/Linux operating systems is not required but is also suggested.

Mobile platforms such as tablets and phones are not suitable for running engineering simulations due to their lack of proper electronic cooling mechanisms. They are instead suggested for controlling one (or more) instances of the tool running on the cloud, and even for pre- and post-processing results through mobile and/or web interfaces.

FeenoX can be seen as a third-system effect, being the third version written from scratch after a first implementation in 2009 and a second one, far more complex and with far more features, circa 2012–2014. The third attempt explicitly addresses the “do one thing well” idea from UNIX.

Furthermore, not only is FeenoX itself both free and open-source software but, following the rule of composition, it also is designed to connect and to work with other free and open source software such as

• Gmsh for pre and/or post-processing
• ParaView for post-processing
• Gnuplot for plotting
• Pyxplot for plotting
• Pandoc for creating tables and documents
• TeX for creating tables and documents

and many others, which are readily available in all major GNU/Linux distributions.

FeenoX also makes use of high-quality free and open source mathematical libraries which contain numerical methods designed by mathematicians and programmed by professional programmers. In particular, it depends on

• the GNU Scientific Library for basic mathematics,
• SUNDIALS for solving ODEs/DAEs,
• PETSc for solving PDEs, and
• SLEPc for solving eigen-problems.

Therefore, if one zooms into the block of the transfer function above, FeenoX can also be seen as a glue layer between the input file and mesh defining a PDE-cast problem on the one hand, and the mathematical libraries used to solve the discretized equations on the other. This way, FeenoX bounds its scope to do only one thing and to do it well: to build and solve finite-element formulations of thermo-mechanical problems. And it does so on high grounds, both

1. ethical: since it is free software, all users can

    1. run,
    2. share,
    3. modify, and/or
    4. re-share their modifications.

    If a user cannot read or write code to make FeenoX suit her needs, at least she has the freedom to hire someone to do it for her, and

2. technological: since it is open source, advanced users can detect and correct bugs and even improve the algorithms. Given enough eyeballs, all bugs are shallow.

FeenoX’ main development architecture is Debian GNU/Linux running over 64-bit Intel-compatible processors. All the dependencies are free and/or open source and already available in Debian’s official repositories, as explained in @sec:deployment.

The POSIX standard is followed whenever possible, thus allowing FeenoX to be compiled on other operating systems and architectures such as Windows (using Cygwin) and MacOS. The build procedure is the well-known and mature ./configure && make command.

FeenoX is written in plain C conforming to the ISO C99 specification (plus POSIX extensions), which is a standard, mature and widely supported language with compilers for a wide variety of architectures. For its basic mathematical capabilities, FeenoX uses the GNU Scientific Library. For solving ODEs/DAEs, FeenoX relies on Lawrence Livermore’s SUNDIALS library. For PDEs, FeenoX uses Argonne’s PETSc library and Universitat Politècnica de València’s SLEPc library. All of them are

• free and open source,
• written in C (neither Fortran nor C++),
• mature and stable,
• actively developed and updated,
• very well known in the industry and academia.

Moreover, PETSc and SLEPc are scalable through the MPI standard. This means that programs using both these libraries can run on either large high-performance supercomputers or low-end laptops. FeenoX has been run on

• Raspberry Pi
• Laptop (GNU/Linux & Windows 10)
• Macbook
• Desktop PC
• Bare-metal servers
• Vagrant/Virtualbox
• Docker/Kubernetes
• AWS/DigitalOcean/Contabo

Due to the way FeenoX is designed, with the policy separated from the mechanism, it is possible to control a running instance remotely from a separate client which can eventually run on a mobile device ([@fig:caeplex-ipad]).

The following example illustrates how well FeenoX works as one of many links in a chain that goes from tracing a bitmap with the problem’s geometry down to creating a nice figure with the results of a computation:

Say you are Homer Simpson and you want to solve a maze drawn on a restaurant’s placemat, one where both the start and end are known beforehand as shown in @fig:maze-homer. In order to avoid falling into the alligator’s mouth, you can exploit the ellipticity of the Laplacian operator to solve any maze (even a hand-drawn one) without needing any fancy AI or ML algorithm. Just FeenoX and a bunch of standard open source tools to convert a bitmapped picture of the maze into an unstructured mesh.

1. Create a maze

2. Perform some conversions

   • PNG \rightarrow PNM \rightarrow SVG \rightarrow DXF \rightarrow GEO

       $ wget http://www.mazegenerator.net/static/orthogonal_maze_with_20_by_20_cells.png
       $ convert orthogonal_maze_with_20_by_20_cells.png -negate maze.png
       $ potrace maze.pnm --alphamax 0 --opttolerance 0 -b svg -o maze.svg
       $ ./svg2dxf maze.svg maze.dxf
       $ ./dxf2geo maze.dxf 0.1

3. Open it with Gmsh

   • Add a surface
   • Set physical curves for “start” and “end”

4. Mesh it (@fig:maze12)

       gmsh -2 maze.geo

5. Solve \nabla^2 \phi = 0 with BCs

   \begin{cases} \phi=0 & \text{at “start”} \\ \phi=1 & \text{at “end”} \\ \nabla \phi \cdot \hat{\vec{n}} = 0 & \text{everywhere else} \\ \end{cases}

       PROBLEM laplace 2D    # pretty self-descriptive, isn't it?
       READ_MESH maze.msh

       # boundary conditions (default is homogeneous Neumann)
       BC start  phi=0
       BC end    phi=1

       SOLVE_PROBLEM

       # write the norm of gradient as a scalar field
       # and the gradient as a 2d vector into a .msh file
       WRITE_MESH maze-solved.msh \
           sqrt(dphidx(x,y)^2+dphidy(x,y)^2) \
           VECTOR dphidx dphidy 0

       $ feenox maze.fee
       $

6. Open maze-solved.msh, go to “start” and follow the gradient \nabla \phi!

Any arbitrary maze (even a hand-drawn one) can be solved with FeenoX.

## 2.1 Deployment

The tool should be easily deployed to production servers. Both

1. an automated method for compiling the sources from scratch, aiming at obtaining optimized binaries for a particular host architecture, should be provided using well-established procedures, and

2. one (or more) generic binary versions, aiming at common server architectures, should be provided.

Either option should be available to be downloaded from suitable online sources, either by real people and/or by automated deployment scripts.

As already stated, FeenoX can be compiled from its sources using the well-established configure && make procedure. The code’s source tree is hosted on GitHub, so cloning the repository is the preferred way to obtain FeenoX, but source tarballs are periodically released too, according to the requirements in @sec:traceability.

The configuration and compilation are based on GNU Autotools, which has more than thirty years of maturity and is the most portable way of compiling C code in a wide variety of UNIX variants. It has been tested with

• GNU C compiler
• LLVM Clang compiler
• Intel C compiler

FeenoX depends on the four open source libraries stated in @sec:architecture, the last three of them being optional. The only mandatory library is the GNU Scientific Library, which is part of the GNU/Linux operating system and as such is readily available in all distributions as libgsl-dev. The sources of the rest of the optional libraries are also widely available in most common GNU/Linux distributions. In effect, doing

    sudo apt-get install gcc make libgsl-dev libsundials-dev petsc-dev slepc-dev

is enough to provision all the dependencies needed to compile FeenoX from the source tarball with the full set of features.
If using the Git repository as a source, then Git itself and the GNU Autoconf and Automake packages are also needed:

    sudo apt-get install git autoconf automake

Even though compiling FeenoX from sources is the recommended way to obtain the tool, since the target binary can be compiled using particularly suited compilation options, flags and optimizations (especially those related to MPI, linear algebra kernels and direct and/or iterative sparse solvers), there are also tarballs with usable binaries for some of the most common architectures, including some non-GNU/Linux variants. These binary distributions contain statically-linked executables that do not need any other shared libraries to be present on the target host, but their flexibility and efficiency are generic and far from ideal. Yet the flexibility of having an execution-ready distribution package for users that do not know how to compile C source code outweighs the limited functionality and scalability of the binaries.

For example, first PETSc can be built with the -Ofast flag:

    $ cd $PETSC_DIR
    $ export PETSC_ARCH=linux-fast
    $ ./configure --with-debug=0 COPTFLAGS="-Ofast"
    $ make -j8
    $ cd $HOME

And then not only can FeenoX be configured to use that particular PETSc build but also to use a different compiler such as Clang instead of GNU GCC and to use the same -Ofast flag to compile FeenoX itself:

$ git clone https://github.com/seamplex/feenox
$ cd feenox
$ ./autogen.sh
$ export PETSC_ARCH=linux-fast
$ ./configure MPICH_CC=clang CFLAGS=-Ofast
$ make -j8
# make install

Alternatively, the generic binary versions of both Gmsh and FeenoX can be downloaded directly:

    $ wget http://gmsh.info/bin/Linux/gmsh-Linux64.tgz
    $ wget https://seamplex.com/feenox/dist/linux/feenox-linux-amd64.tar.gz

Appendix @sec:download has more details about how to download and compile FeenoX. The full documentation contains a compilation guide with further detailed explanations of each of the steps involved. Since all the commands needed to either download a binary executable or to compile from source with customized optimization flags can be automated, FeenoX can be built into a container such as Docker. This way, deployment and scalability can be customized and tweaked as needed.
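For instance, a minimal container image could be sketched along these lines. This is an illustrative Dockerfile, not an official one; it simply strings together the Debian provisioning and Git-based compilation commands given above:

```dockerfile
# Illustrative sketch only: build FeenoX from the Git sources inside Debian.
FROM debian:stable
RUN apt-get update && \
    apt-get install -y gcc make git automake autoconf libgsl-dev \
                       libsundials-dev petsc-dev slepc-dev
RUN git clone https://github.com/seamplex/feenox && \
    cd feenox && ./autogen.sh && ./configure && make && make install
ENTRYPOINT ["feenox"]
```

Such an image can then be deployed and scaled like any other containerized UNIX filter, with input files mounted or piped in and output files collected from the container.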

## 2.2 Execution

It is mandatory to be able to execute the tool remotely, either with a direct action from the user or from a high-level workflow which could be triggered by a human or by an automated script. The calling party should be able to monitor the status during run time and get the returned error level after finishing the execution.

The tool shall provide a means to perform parametric computations by varying one or more problem parameters in a certain prescribed way, such that it can be used as an inner solver for an outer-loop optimization tool. In this regard, it is desirable that the tool compute scalar values such that the figure of merit being optimized (maximum temperature, total weight, total heat flux, minimum natural frequency, maximum displacement, maximum von Mises stress, etc.) is already available without needing further post-processing.

As FeenoX is designed to run as a file filter (i.e. as a transfer function between input and output files) and it explicitly avoids having a graphical interface, the binary executable works as any other UNIX terminal command. When invoked without arguments, it prints its version (a thorough explanation of the versioning scheme is given in @sec:traceability), a one-line description and the usage options:

$ feenox
FeenoX v0.1.77-g9325958
a free no-fee no-X uniX-like finite-element(ish) computational engineering tool

usage: feenox [options] inputfile [replacement arguments]

  -h, --help         display usage and command-line help and exit
  -v, --version      display brief version information and exit
  -V, --versions     display detailed version information
  -s, --summarize    list all symbols in the input file and exit

Instructions will be read from standard input if “-” is passed as inputfile, i.e.

   $ echo "PRINT 2+2" | feenox -
   4

Report bugs at https://github.com/seamplex/feenox or to jeremy@seamplex.com
Feenox home page: https://www.seamplex.com/feenox/

The program can also be executed remotely

1. on a server through an SSH session
2. in a container as part of a provisioning script

FeenoX provides mechanisms to inform about its progress by writing certain information to devices or files, which in turn can be monitored remotely or even trigger server actions. Progress reporting can range from a simple ASCII bar (triggered with --progress) to more complex mechanisms like writing the status into a shared memory segment.

Regarding its execution, there are three ways of solving problems: direct execution, parametric runs and optimization loops.

### 2.2.1 Direct execution

When directly executing FeenoX, one gives a single argument to the executable with the path to the main input file. For example, the following input computes the first twenty numbers of the Fibonacci sequence using the closed-form formula

f(n) = \frac{\varphi^n - (1-\varphi)^n}{\sqrt{5}}

where \varphi=(1+\sqrt{5})/2 is the Golden ratio:

# the Fibonacci sequence using the closed-form formula as a function
phi = (1+sqrt(5))/2
f(n) = (phi^n - (1-phi)^n)/sqrt(5)
PRINT_FUNCTION f MIN 1 MAX 20 STEP 1

FeenoX can be directly executed to print the function f(n) for n=1,\dots,20, both to the standard output and to a file named one (because it is the first way of solving Fibonacci with FeenoX):

```
$ feenox fibo_formula.fee | tee one
1       1
2       1
3       2
4       3
5       5
6       8
7       13
8       21
9       34
10      55
11      89
12      144
13      233
14      377
15      610
16      987
17      1597
18      2584
19      4181
20      6765
$
```

Now, we could also have computed these twenty numbers using the direct definition of the sequence stored in a vector $\vec{f}$ of size 20. This time we redirect the output to a file named two:

```feenox
# the Fibonacci sequence as a vector
VECTOR f SIZE 20

f[i]<1:2> = 1
f[i]<3:vecsize(f)> = f[i-2] + f[i-1]

PRINT_VECTOR i f
```
```
$ feenox fibo_vector.fee > two
$
```

Finally, we compute the sequence as an iterative problem and check that the three outputs are the same:

```feenox
# the Fibonacci sequence as an iterative problem

static_steps = 20
#static_iterations = 1476  # limit of doubles

IF step_static=1|step_static=2
 f_n = 1
 f_nminus1 = 1
 f_nminus2 = 1
ELSE
 f_n = f_nminus1 + f_nminus2
 f_nminus2 = f_nminus1
 f_nminus1 = f_n
ENDIF

PRINT step_static f_n
```
```
$ feenox fibo_iterative.fee > three
$ diff one two
$ diff two three
$
```

These three calls were examples of direct execution of FeenoX: a single call with a single argument to solve a single fixed problem.
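As a sanity check independent of FeenoX, the three approaches above (closed-form formula, vector definition, iterative update) can be reproduced in a few lines of Python and compared against each other:

```python
# Cross-check of the three FeenoX approaches: closed-form (Binet) formula,
# vector definition, and iterative update with three scalars.
from math import sqrt

phi = (1 + sqrt(5)) / 2

# 1. closed-form formula, rounded to the nearest integer
formula = [round((phi**n - (1 - phi)**n) / sqrt(5)) for n in range(1, 21)]

# 2. vector definition: f[1] = f[2] = 1, f[i] = f[i-2] + f[i-1]
vector = [1, 1]
for i in range(2, 20):
    vector.append(vector[i - 2] + vector[i - 1])

# 3. iterative update, mirroring fibo_iterative.fee
iterative = []
f_nminus1 = f_nminus2 = 1
for step in range(1, 21):
    f_n = 1 if step <= 2 else f_nminus1 + f_nminus2
    f_nminus2, f_nminus1 = f_nminus1, f_n
    iterative.append(f_n)

assert formula == vector == iterative
print(formula[-1])  # 6765, matching FeenoX's last line "20 6765"
```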

## 2.3 Parametric

To use FeenoX in a parametric run, one has to successively call the executable passing the path to the main input file as the first argument, followed by an arbitrary number of parameters. These extra parameters are expanded as the string literals $1, $2, etc. appearing in the input file. For example, if hello.fee is

```feenox
PRINT "Hello $1!"
```

then

```
$ feenox hello.fee World
Hello World!
$ feenox hello.fee Universe
Hello Universe!
$
```
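The expansion mechanism is plain textual substitution. A minimal sketch of the idea in Python (an illustrative re-implementation for up to nine arguments, not FeenoX's actual parser; the `expand` helper is hypothetical):

```python
# Sketch of FeenoX-style replacement-argument expansion: $1, $2, ... are
# replaced by the extra command-line arguments, in order.
def expand(template: str, args: list[str]) -> str:
    for i, arg in enumerate(args, start=1):
        template = template.replace(f"${i}", arg)
    return template

print(expand('PRINT "Hello $1!"', ["World"]))     # PRINT "Hello World!"
print(expand('PRINT "Hello $1!"', ["Universe"]))  # PRINT "Hello Universe!"
```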

To have an actual parametric run, an external loop has to successively call FeenoX with the parametric arguments. For example, say the file cantilever.fee below fixes the face called “left” of a mesh read from cantilever-$1-$2.msh and applies a traction in the negative $z$ direction. The output is a single line containing the number of nodes of the mesh and the displacement in the vertical direction $w(500,0,0)$ at the center of the cantilever’s free face:

```feenox
PROBLEM elastic 3D
READ_MESH cantilever-$1-$2.msh   # in meters

E = 2.1e11         # Young modulus in Pascals
nu = 0.3           # Poisson's ratio

BC left   fixed
BC right  tz=-1e5  # traction in Pascals, negative z

SOLVE_PROBLEM

# z-displacement (components are u,v,w) at the tip vs. number of nodes
PRINT nodes w(500,0,0) "\# $1$2"
```

*Figure: cantilevered beam meshed with structured tetrahedra and hexahedra.*

Now the following Bash script first calls Gmsh to create the meshes cantilever-${element}-${c}.msh where

• ${element} is one of tet4, tet10, hex8, hex20, hex27, and
• ${c} ranges over 1, 2, …, 10.

It then calls FeenoX with the input above, passing ${element} and ${c} as extra arguments, which are then expanded as $1 and $2 respectively.

```bash
#!/bin/bash

rm -f *.dat
for element in tet4 tet10 hex8 hex20 hex27; do
  for c in $(seq 1 10); do
    # create mesh if not already cached
    mesh=cantilever-${element}-${c}
    if [ ! -e ${mesh}.msh ]; then
      scale=$(echo "PRINT 1/${c}" | feenox -)
      gmsh -3 -v 0 cantilever-${element}.geo -clscale ${scale} -o ${mesh}.msh
    fi
    # call FeenoX
    feenox cantilever.fee ${element} ${c} | tee -a cantilever-${element}.dat
  done
done
```
Note that the approach used here is to use the Gmsh Python API to build the mesh and then fork the FeenoX executable to solve the problem (no pun intended). There are plans to provide a Python API for FeenoX so that the problem can be set up, solved and the results read back directly from the script, instead of needing to do a fork+exec, read back the standard output as a string and then convert it to a Python float.

@Fig:fork shows the results of the combination of the optimization loop over $\ell_1$ and a parametric run over $n$. The difference for $n=6$ and $n=7$ is in the order of one hundredth of a millimeter.

## 2.4 Efficiency

The computational resources (i.e. costs measured in CPU/GPU time, random-access memory, long-term storage, etc.) needed to solve a problem should be comparable to other similar state-of-the-art finite-element tools.

TO DO

## 2.5 Scalability

The tool ought to be able to start solving small problems first, to check that the inputs and outputs behave as expected, and then allow the problem size to be increased in order to achieve the desired accuracy of the results. As mentioned in [@sec:architecture], large problems should be split among different computers to be able to solve them using a finite amount of per-host computational power (RAM and CPU).

• OpenMP in PETSc
• Gmsh partitions
• run something big to see how it fails
• show RAM vs. nodes for MUMPS & GAMG

## 2.6 Flexibility

The tool should be able to handle engineering problems involving different materials with potentially spatial and time-dependent properties, such as temperature-dependent thermal expansion coefficients and/or non-constant densities. Boundary conditions must be allowed to depend on both space and time as well, like non-uniform pressure loads and/or transient heat fluxes.

FeenoX comes from nuclear + experience (what to do and what not to do).

Materials: a material library (perhaps included in a frontend GUI?) can write FeenoX’s material definitions.
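A classic stress test for the “everything is an expression” design noted below is the sophomore’s dream identity $\int_0^1 x^{-x}\,dx = \sum_{n=1}^{\infty} n^{-n}$. As a reference value for such a demo, a quick numerical cross-check in plain Python (independent of FeenoX):

```python
# Numerical cross-check of the sophomore's dream identity
#   int_0^1 x^(-x) dx = sum_{n>=1} n^(-n)
# using only the standard library.
def integral(steps: int = 200_000) -> float:
    # midpoint rule on (0, 1); x^(-x) -> 1 as x -> 0, so no singularity
    h = 1.0 / steps
    return sum(((i + 0.5) * h) ** (-(i + 0.5) * h) for i in range(steps)) * h

series = sum(n ** (-n) for n in range(1, 20))  # terms decay extremely fast
assert abs(integral() - series) < 1e-6
print(round(series, 6))  # 1.291286
```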
• everything is an expression, show sophomore’s identity
• 1D & 2D interpolated data for functions
• thermal transient valve with k(T) and BCs(x,t)

## 2.7 Extensibility

It should be possible to add other PDE-casted problem types (such as the Schrödinger equation) to the tool in a reasonable amount of time by one or more skilled programmers. The tool should also allow new models (such as non-linear stress-strain constitutive relationships) to be added as well.

• user-provided routines
• skels for PDEs and annotated models
• laplace skel

## 2.8 Interoperability

A means of exchanging data with other computational tools complying with requirements similar to the ones outlined in this document should be provided. This includes pre- and post-processors but also other computational programs, so that coupled calculations can eventually be performed by efficiently exchanging information between calculation codes.

• UNIX
• POSIX
• shmem
• MPI
• Gmsh
• moustache
• print -> awk -> LaTeX tables
• NUREG

# 3 Interfaces

The tool should allow remote execution without any user intervention after it is launched. To achieve this goal, the problem should be completely defined in one or more input files and the output should be complete and useful after the tool finishes its execution, as already required. The tool should be able to report the status of the execution (i.e. progress, errors, etc.) and to make this information available to the user or process that launched the execution, possibly from a remote location.

## 3.1 Problem input

The problem should be completely defined by one or more input files. These input files might be

• particularly formatted files to be read by the tool in an ad-hoc way, and/or
• source files for interpreted languages which can call the tool through an API or equivalent method, and/or
• any other method that can fulfill the requirements described so far.
Preferably, these input files should be plain ASCII files in order to be tracked by distributed version control systems such as Git. If the tool provides an API for an interpreted language such as Python, the Python source used to solve a particular problem should be Git-friendly. It is recommended not to track revisions of mesh data files but of the source input files, i.e. to track the mesher’s input and not the mesher’s output. Therefore, it is recommended not to mix the problem definition with the problem mesh data.

It is not mandatory to include a GUI in the main distribution, but the input/output scheme should be such that graphical pre- and post-processing tools can create the input files and read the output files so as to allow third parties to develop interfaces. It is recommended to design the workflow so as to make it possible for the interfaces to be accessible from mobile devices and web browsers. It is acceptable if only basic usage can be achieved through graphical interfaces; complex problems involving non-trivial material properties and boundary conditions might still require the input files to be edited directly.

Notwithstanding the suggestion above, it is expected that …

• give examples
• compare with https://cofea.readthedocs.io/en/latest/benchmarks/004-eliptic-membrane/tested-codes.html
• macro-friendly inputs, rule of generation
• simple problems should need simple inputs
• English-like input: nouns are definitions, verbs are instructions
• similar problems should need similar inputs
• thermal slab, steady state and transient
• 1D neutron
• VCS tracking, example with hello world
• API in C?

## 3.2 Results output

The output ought to contain useful results and should not be cluttered up with non-mandatory information such as ASCII art, notices, explanations or copyright notices.
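Since the required output is plain text fully controlled by the user, post-processing needs no special tooling. A sketch in Python that parses hypothetical two-column output (as produced by PRINT_FUNCTION or PRINT_VECTOR) into numbers:

```python
# Parse two-column "n value" lines into Python floats. The sample data is
# hypothetical (first five Fibonacci numbers, as in the example above).
sample = """1 1
2 1
3 2
4 3
5 5
"""

pairs = [tuple(float(tok) for tok in line.split())
         for line in sample.splitlines()]
total = sum(value for _, value in pairs)
print(int(total))  # 12
```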
Since the time of cognizant engineers is far more expensive than CPU time, output should be easily interpreted by either a human or, even better, by other programs or interfaces—especially those based on mobile and/or web platforms. Open-source formats and standards should be preferred over proprietary and ad-hoc formatting to encourage the possibility of using different workflows and/or interfaces.

• JSON/YAML, state-of-the-art open post-processing formats
• mobile & web-friendly
• common and preferably open-source formats
• 100% user-defined output with PRINT, rule of silence
• rule of economy, i.e. no RELAP
• YAML/JSON-friendly outputs
• VTK (VTU), Gmsh, FRD?
• 90% is serial (VTK), no need to complicate due to 10%

# 4 Quality assurance

Since the results obtained with the tool might be used in verifying existing equipment or in designing new mechanical parts in sensitive industries, a certain level of software quality assurance is needed. Not only are best practices for developing generic software required, such as

• employment of a version control system,
• automated testing suites,
• user-reported bug tracking support,
• etc.,

but also, since the tool falls in the category of engineering computational software, verification and validation procedures are mandatory as well, as discussed below.

Design should be such that governance of engineering data, including problem definition, results and documentation, can be efficiently performed using state-of-the-art methodologies such as distributed version control systems.

## 4.1 Reproducibility and traceability

The full source code and the documentation of the tool ought to be maintained under a version control system. Whether access to the repository is public or not is up to the vendor, as long as the copying conditions are compatible with the definitions of both free and open source software from the FSF and the OSI, respectively, as required in [@sec:introduction].
In order to be able to track results obtained with different versions of the tool, there should be a clear release procedure. There should be periodical releases of stable versions that are required

• not to raise any warnings when compiled using modern versions of common compilers (e.g. GNU, Clang, Intel, etc.),
• not to raise any errors when assessed with dynamic memory analysis tools (e.g. Valgrind) for a wide variety of test cases, and
• to pass all the automated test suites as specified in [@sec:testing].

These stable releases should follow a common versioning scheme, and either the tarballs with the sources and/or the version control system commits should be digitally signed by a cognizant responsible. Other unstable versions with partial and/or limited features might be released either in the form of tarballs or made available in a code repository. The requirement is that unstable tarballs and main (a.k.a. trunk) branches on the repositories have to be compilable. Any feature that does not work as expected or that does not even compile has to be committed into develop branches before being merged into trunk.

If the tool has an executable binary, it should be able to report which version of the code the executable corresponds to. If there is a library callable through an API, there should be a call which returns the version of the code the library corresponds to.

It is recommended not to mix mesh data like nodes and element definitions with problem data like material properties and boundary conditions, so as to ease governance and tracking of computational models and the results associated with them. All the information needed to solve a particular problem (i.e. meshes, boundary conditions, spatially-distributed material properties, etc.) should be generated from a very simple set of files which ought to be susceptible of being tracked with current state-of-the-art version control systems.
In order to comply with this suggestion, ASCII formats should be favored when possible.

• simple <-> simple
• similar <-> similar

## 4.2 Automated testing

A means to automatically test that the code works as expected is mandatory. A set of problems with known solutions should be solved with the tool after each modification of the code to make sure these changes still give the right answers for the right questions and that no regressions are introduced. Unit software testing practices like continuous integration and test coverage are recommended but not mandatory.

The tests contained in the test suite should be

• varied,
• diverse, and
• independent.

Due to efficiency issues, there can be different sets of tests (e.g. unit and integration tests, quick and thorough tests, etc.). Development versions stored in non-main branches can have temporarily-failing tests, but stable versions have to pass all the test suites.

• make check
• regressions, example of the change of a sign

## 4.3 Bug reporting and tracking

A system to allow developers and users to report bugs and errors and to suggest improvements should be provided. If applicable, bug reports should be tracked, addressed and documented. User-provided suggestions might go into the backlog or TO-DO list if appropriate.

Here, “bugs and errors” mean failure to

• compile on supported architectures,
• run (unexpected run-time errors, segmentation faults, etc.), or
• return a correct result.

• GitHub
• mailing lists

## 4.4 Verification

Verification, defined as

> The process of determining that a model implementation accurately represents the developer’s conceptual description of the model and the solution to the model,

i.e. checking that the tool is solving the equations right, should be performed before applying the code to solve any industrial problem. Depending on the nature and regulation of the industry, the verification guidelines and requirements may vary.
Since it is expected that code verification tasks could be performed by arbitrary individuals or organizations not necessarily affiliated with the tool vendor, the source code should be available to independent third parties. In this regard, changes in the source code should be controllable, traceable and well documented.

Even though the verification requirements may vary among problem types, industries and particular applications, a common method to verify the code is to compare solutions obtained with the tool against known exact solutions or benchmarks. It is thus mandatory to be able to compare the results with analytical solutions, either internally in the tool or through external libraries or tools. This approach is called the Method of Exact Solutions (MES) and it is the most widespread scheme for verifying computational software, although it does not provide a comprehensive way to verify the whole spectrum of features. In any case, the tool’s output should be susceptible of being post-processed and analyzed in such a way as to be able to determine the order of convergence of the numerical solution as compared to the exact one.

Another possibility is to follow the Method of Manufactured Solutions (MMS), which does address all the shortcomings of MES. It is highly encouraged that the tool allow the application of MMS for software verification. Indeed, this method needs a full explanation of the equations solved by the tool, up to the point that [@sandia-mms] says that

> Difficulties in determination of the governing equations arises frequently with commercial software, where some information is regarded as proprietary. If the governing equations cannot be determined, we would question the validity of using the code.

To enforce the availability of the governing equations, the tool has to be open source as required in @sec:introduction and well documented as required in @sec:documentation.
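The order of convergence mentioned above can be estimated from any two refinement levels: assuming the error behaves as $e \simeq C h^p$, then $p = \log(e_1/e_2)/\log(h_1/h_2)$. A sketch in Python with synthetic error data (illustrative values, not FeenoX results):

```python
# Observed order of convergence between successive mesh refinements,
# from e ~ C h^p  =>  p = log(e1/e2) / log(h1/h2).
from math import log

def observed_order(h1: float, e1: float, h2: float, e2: float) -> float:
    return log(e1 / e2) / log(h1 / h2)

# synthetic second-order data: e = 0.5 * h^2
h = [0.1, 0.05, 0.025]
e = [0.5 * hi**2 for hi in h]

orders = [observed_order(h[i], e[i], h[i + 1], e[i + 1]) for i in range(2)]
print([round(p, 3) for p in orders])  # [2.0, 2.0]
```

In a verification report this observed order would be compared against the theoretical one of the discretization scheme.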
A report following either the MES and/or MMS procedures has to be prepared for each type of equation that the tool can solve. The report should show how the numerical results converge to the exact or manufactured results with respect to the mesh size or the number of degrees of freedom. This rate should then be compared to the theoretically expected order.

Whenever a verification task is performed and documented, at least one of the cases should be added to the test suite. Even though the verification report must contain a parametric mesh study, a single-mesh case is enough to be added to the test suite. The objective of the tests defined in [@sec:testing] is to be able to detect regressions which might have been inadvertently introduced in the code, not to perform any actual verification. Therefore a single-mesh case is enough for the test suites.

• open source (really open, not like CCX -> show example)
• GPLv3+ free
• Git + GitLab, GitHub, Bitbucket

## 4.5 Validation

As with verification, for each industrial application of the tool there should be a documented procedure to perform a set of validation tests, defined as

> The process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model,

i.e. checking that the right equations are being solved by the tool. This procedure should be based on existing industry standards regarding verification and validation, such as ASME, AIAA, IAEA, etc. There should be a procedure for each type of physical problem (thermal, mechanical, thermomechanical, nuclear, etc.) and for each problem type whenever a new

• geometry,
• mesh type,
• material model,
• boundary condition, or
• data interpolation scheme,

or any other particular application-dependent feature is needed. A report following the validation procedure defined above should be prepared and signed by a responsible engineer on a case-by-case basis for each particular field of application of the tool.
Validation can be performed against

• known analytical results, and/or
• other already-validated tools following the same standards, and/or
• experimental results.

• already done for Fino
• hip implant, 120+ pages, ASME, cases of increasing complexity

## 4.6 Documentation

Documentation should be complete and cover both the user and the developer points of view. It should include a user manual adequate for both reference and tutorial purposes. Other forms of simplified documentation such as quick reference cards or video tutorials are not mandatory but highly recommended. Since the tool should be extensible ([@sec:extensibility]), there should be a separate development manual covering the programming design and implementation, explaining how to extend the code and how to add new features. Also, as non-trivial mathematics which should be verified ([@sec:verification]) are expected, a thorough explanation of what equations are taken into account and how they are solved is required.

It should be possible to make the full documentation available online in a way that it can be both printed in hard copy and accessed easily from a mobile device. Users modifying the tool to suit their own needs should be able to modify the associated documentation as well, so a clear notice about the licensing terms of the documentation itself (which might be different from the licensing terms of the source code) is mandatory. Tracking changes in the documentation should be similar to tracking changes in the code base. Each individual document ought to explicitly state to which version of the tool it applies. Plain ASCII formats should be preferred. It is forbidden to submit documentation in a non-free format.
The documentation shall also include procedures for

• reporting errors and bugs,
• releasing stable versions,
• performing verification and validation studies, and
• contributing to the code base, including
  • a code of conduct,
  • coding styles, and
  • variable and function naming conventions.

it’s not compact, but almost!

> Compactness is the property that a design can fit inside a human being’s head. A good practical test for compactness is this: Does an experienced user normally need a manual? If not, then the design (or at least the subset of it that covers normal use) is compact.

• unix man page
• markdown + pandoc = HTML, PDF, Texinfo

# 5 Appendix: Downloading and compiling FeenoX

## 5.1 Binary executables

Browse to https://www.seamplex.com/feenox/dist/ and check what the latest version for your architecture is. Then do

```
wget https://www.seamplex.com/feenox/dist/linux/feenox-v0.1.59-gbf85679-linux-amd64.tar.gz
tar xvzf feenox-v0.1.59-gbf85679-linux-amd64.tar.gz
```

You’ll have the binary under bin and examples, documentation, the manpage, etc. under share. Copy bin/feenox to somewhere in your PATH and that will be it. If you are root, do

```
sudo cp feenox-v0.1.59-gbf85679-linux-amd64/bin/feenox /usr/local/bin
```

If you are not root, the usual way is to create a directory $HOME/bin and add it to your local path. If you have not done it already, do

```
mkdir -p $HOME/bin
echo 'export PATH=$PATH:$HOME/bin' >> .bashrc
```

Then finally copy bin/feenox to $HOME/bin:

```
cp feenox-v0.1.59-gbf85679-linux-amd64/bin/feenox $HOME/bin
```

Check if it works by calling feenox from any directory (you might need to open a new terminal so .bashrc is re-read):

```
$ feenox
FeenoX v0.1.67-g8899dfd-dirty
a free no-fee no-X uniX-like finite-element(ish) computational engineering tool

usage: feenox [options] inputfile [replacement arguments]

  -h, --help         display usage and command-line help and exit
  -v, --version      display brief version information and exit
  -V, --versions     display detailed version information
  -s, --sumarize     list all symbols in the input file and exit

Instructions will be read from standard input if “-” is passed as
inputfile, i.e.

  $ echo "PRINT 2+2" | feenox -
  4

Report bugs at https://github.com/seamplex/feenox or to jeremy@seamplex.com
Feenox home page: https://www.seamplex.com/feenox/
$
```
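The version string reported above looks git-describe-like: a tag, a short commit hash prefixed with g, and an optional dirty flag. Assuming that scheme holds (an assumption, not FeenoX's documented policy), traceability records can parse it programmatically; a sketch in Python with a hypothetical `parse_version` helper:

```python
# Split a git-describe-like version string into (tag, commit hash, dirty).
# The interpretation of the fields is an assumption based on the strings
# shown in this document, e.g. "v0.1.67-g8899dfd-dirty".
import re

def parse_version(s: str):
    m = re.fullmatch(r"v(\d+(?:\.\d+)*)-g([0-9a-f]+)(-dirty)?", s)
    if m is None:
        raise ValueError(f"unrecognized version string: {s}")
    return m.group(1), m.group(2), bool(m.group(3))

print(parse_version("v0.1.67-g8899dfd-dirty"))
# ('0.1.67', '8899dfd', True)
```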

## 5.2 Source tarballs

To compile the source tarball, proceed as follows. This procedure needs neither Git nor Autoconf, but a new tarball has to be downloaded each time there is a new FeenoX version.

1. Install mandatory dependencies

   ```
   sudo apt-get install gcc make libgsl-dev
   ```

   If you cannot install libgsl-dev, you can have the configure script download and compile it for you. See point 4 below.

2. Install optional dependencies (of course these are optional but recommended)

   ```
   sudo apt-get install libsundials-dev petsc-dev slepc-dev
   ```

3. Download and uncompress the source tarball

   ```
   wget https://www.seamplex.com/feenox/dist/src/feenox-v0.1.66-g1c4b17b.tar.gz
   tar xvzf feenox-v0.1.66-g1c4b17b.tar.gz
   ```

4. Configure, compile & make

   ```
   cd feenox-v0.1.66-g1c4b17b
   ./configure
   make -j4
   ```

   If you cannot (or do not want to) use libgsl-dev from a package repository, call configure with --enable-download-gsl:

   ```
   ./configure --enable-download-gsl
   ```

   If you do not have Internet access, get the tarball manually, copy it to the same directory as configure and run again.

5. Run the test suite (optional)

   ```
   make check
   ```

6. Install the binary system-wide (optional)

   ```
   sudo make install
   ```

## 5.3 Git repository

To compile the Git repository, proceed as follows. This procedure requires Git and Autoconf, but new versions can be pulled and recompiled easily.

1. Install mandatory dependencies

   ```
   sudo apt-get install gcc make git automake autoconf libgsl-dev
   ```

   If you cannot install libgsl-dev but still have Git and the build toolchain, you can have the configure script download and compile it for you. See point 4 below.

2. Install optional dependencies (of course these are optional but recommended)

   ```
   sudo apt-get install libsundials-dev petsc-dev slepc-dev
   ```

3. Clone the GitHub repository

   ```
   git clone https://github.com/seamplex/feenox
   ```

4. Bootstrap, configure, compile & make

   ```
   cd feenox
   ./autogen.sh
   ./configure
   make -j4
   ```

   If you cannot (or do not want to) use libgsl-dev from a package repository, call configure with --enable-download-gsl:

   ```
   ./configure --enable-download-gsl
   ```

   If you do not have Internet access, get the tarball manually, copy it to the same directory as configure and run again.

5. Run the test suite (optional)

   ```
   make check
   ```

6. Install the binary system-wide (optional)

   ```
   sudo make install
   ```

To stay up to date, pull and then autogen, configure and make (and optionally install):

```
git pull
./autogen.sh; ./configure; make -j4
sudo make install
```

# 6 Appendix: Rules of UNIX philosophy

## 6.1 Rule of Modularity

Developers should build a program out of simple parts connected by well defined interfaces, so problems are local, and parts of the program can be replaced in future versions to support new features. This rule aims to save time on debugging code that is complex, long, and unreadable.

• FeenoX uses third-party high-quality libraries
• GNU Scientific Library
• SUNDIALS
• PETSc
• SLEPc

## 6.2 Rule of Clarity

Developers should write programs as if the most important communication is to the developer who will read and maintain the program, rather than the computer. This rule aims to make code as readable and comprehensible as possible for whoever works on the code in the future.

• Example: two squares in thermal contact.
• LE10 & LE11: a one-to-one correspondence between the problem text and the FeenoX input.

## 6.3 Rule of Composition

Developers should write programs that can communicate easily with other programs. This rule aims to allow developers to break down projects into small, simple programs rather than overly complex monolithic programs.

• FeenoX uses meshes created by a separate mesher (i.e. Gmsh).
• FeenoX writes data that has to be plotted or post-processed by other tools (Gnuplot, Gmsh, Paraview, etc.).
• ASCII output is 100% controlled by the user so it can be tailored to suit any other programs’ input needs such as AWK filters to create LaTeX tables.
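The “print -> awk -> LaTeX tables” idea above works just as well from any scripting language; a sketch in Python turning hypothetical two-column output into LaTeX tabular rows:

```python
# Convert two-column numeric output (hypothetical sample) into LaTeX
# table rows, the same job an AWK one-liner would do.
sample = "1 1\n2 1\n3 2\n"

rows = [" & ".join(line.split()) + r" \\" for line in sample.splitlines()]
print("\n".join(rows))
```

This prints one `a & b \\` row per input line, ready to paste into a tabular environment.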

## 6.4 Rule of Separation

Developers should separate the mechanisms of the programs from the policies of the programs; one method is to divide a program into a front-end interface and a back-end engine with which that interface communicates. This rule aims to prevent bug introduction by allowing policies to be changed with minimum likelihood of destabilizing operational mechanisms.

• FeenoX does not include a GUI, but it is GUI-friendly.

## 6.5 Rule of Simplicity

Developers should design for simplicity by looking for ways to break up program systems into small, straightforward cooperating pieces. This rule aims to discourage developers’ affection for writing “intricate and beautiful complexities” that are in reality bug-prone programs.

• Simple problems need simple input.
• Similar problems need similar inputs.
• English-like self-evident input files matching as close as possible the problem text.
• If there is a single material there is no need to link volumes to properties.

## 6.6 Rule of Parsimony

Developers should avoid writing big programs. This rule aims to prevent overinvestment of development time in failed or suboptimal approaches caused by the program owners’ reluctance to throw away visibly large pieces of work. Smaller programs are not only easier to write, optimize, and maintain; they are easier to delete when deprecated.

• Parametric and/or optimization runs have to be driven from an outer script (Bash, Python, etc.)

## 6.7 Rule of Transparency

Developers should design for visibility and discoverability by writing in a way that their thought process can lucidly be seen by future developers working on the project and using input and output formats that make it easy to identify valid input and correct output. This rule aims to reduce debugging time and extend the lifespan of programs.

• Written in C99

## 6.8 Rule of Robustness

Developers should design robust programs by designing for transparency and discoverability, because code that is easy to understand is easier to stress test for unexpected conditions that may not be foreseeable in complex programs. This rule aims to help developers build robust, reliable products.

## 6.9 Rule of Representation

Developers should choose to make data more complicated rather than the procedural logic of the program when faced with the choice, because it is easier for humans to understand complex data compared with complex logic. This rule aims to make programs more readable for any developer working on the project, which allows the program to be maintained.

## 6.10 Rule of Least Surprise

Developers should design programs that build on top of the potential users’ expected knowledge; for example, ‘+’ in a calculator program should always mean ‘addition’. This rule aims to encourage developers to build intuitive products that are easy to use.

• If one needs a problem where the conductivity depends on $x$ as $k(x)=1+x$, then the input is

```feenox
k(x) = 1+x
```

• If a problem needs a temperature distribution given by an algebraic expression $T(x,y,z)=\sqrt{x^2+y^2}+z$, then the input is

```feenox
T(x,y,z) = sqrt(x^2+y^2) + z
```

## 6.11 Rule of Silence

Developers should design programs so that they do not print unnecessary output. This rule aims to allow other programs and developers to pick out the information they need from a program’s output without having to parse verbosity.

• No PRINT, no output.

## 6.12 Rule of Repair

Developers should design programs that fail in a manner that is easy to localize and diagnose or in other words “fail noisily”. This rule aims to prevent incorrect output from a program from becoming an input and corrupting the output of other code undetected.

Input errors are detected before the computation starts, and run-time errors (e.g. a division by zero) can be user-controlled: they can be fatal or ignored.

## 6.13 Rule of Economy

Developers should value developer time over machine time, because machine cycles today are relatively inexpensive compared to prices in the 1970s. This rule aims to reduce development costs of projects.

• Output is 100% user-defined, so the desired results are obtained directly instead of needing further digging into tons of undesired data. The approach of “compute and write everything you can in one single run” made sense in the 1970s, when CPU time was more expensive than human time, but not anymore.
• Example: LE10 & LE11.

## 6.14 Rule of Generation

Developers should avoid writing code by hand and instead write abstract high-level programs that generate code. This rule aims to reduce human errors and save time.

• Inputs are M4-friendly.
• Parametric runs can be done from scripts through command line arguments expansion.
• Documentation is created out of simple Markdown sources and assembled as needed.

## 6.15 Rule of Optimization

Developers should prototype software before polishing it. This rule aims to prevent developers from spending too much time for marginal gains.

• Premature optimization is the root of all evil
• We are still building. We will optimize later.
• Code optimization: TODO
• Parallelization: TODO
• Comparison with other tools: TODO

## 6.16 Rule of Diversity

Developers should design their programs to be flexible and open. This rule aims to make programs flexible, allowing them to be used in ways other than those their developers intended.

• Either Gmsh or Paraview can be used to post-process results.
• Other formats can be added.

## 6.17 Rule of Extensibility

Developers should design for the future by making their protocols extensible, allowing for easy plugins without modification to the program’s architecture by other developers, noting the version of the program, and more. This rule aims to extend the lifespan and enhance the utility of the code the developer writes.

• FeenoX is GPLv3+. The ‘+’ is for the future.
• Each PDE has a separate source directory. Any of them can be used as a template for new PDEs, especially laplace for elliptic operators.

1. There are some examples of pieces of computational software which are described as “open source” in which even the first of the four freedoms is denied. The most iconic case is that of Android, whose sources are readily available online but there is no straightforward way of updating one’s mobile phone firmware with a customized version, not to mention vendor and hardware lock-ins and the possibility of bricking devices if something unexpected happens. In the nuclear industry, it is the case of a Monte Carlo particle-transport program that requests users to sign an agreement about the objective of its usage before allowing its execution. The software itself might be open source because the source code is provided after signing the agreement, but it is not free (as in freedom) at all.↩︎