
CFD Blog Feeds

Another Fine Mesh

► Recap of Six Recent CFD Success Stories with a Meshing Assist
    9 Sep, 2020
No one generates a mesh just to generate a mesh. The proof of a mesh’s suitability is successful use in a CFD simulation. That success can be predicated on many factors including the availability of a broad range of mesh … Continue reading
► Use of Grand Challenge Problems to Assess Progress Toward the CFD Vision 2030
    8 Sep, 2020
Join the AIAA’s CFD 2030 Integration Committee at SciTech 2021 this coming January for four invited talks and an extended Q&A session on formulation of grand challenge problems that would provide a basis for assessing progress toward the CFD Vision … Continue reading
► This Week in CFD
    4 Sep, 2020
This week’s CFD news brings some excellent reading as we head into a 3-day weekend, at least here in the U.S. It begins with a research article on undergraduate education that’s certain to spark thinking, if not debate. And our friends … Continue reading
► This Week in CFD
  28 Aug, 2020
This week’s CFD news includes articles that pose questions about open source software. Does it have a people problem? And are people prejudiced against it? Proving that good things never get old, there’s a multi-part video series on fluid mechanics … Continue reading
► It’s all in the numbering – mesh renumbering may improve simulation speed
  27 Aug, 2020
We all know that the mesh plays a vital role in CFD simulations. Yet, not many realize that renumbering (ordering) of the cells in the Finite Volume Method (FVM) can affect the performance of the linear solver and thus the … Continue reading
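The speedup comes from matrix bandwidth and cache locality: renumbering cells so that neighbors receive nearby indices tightens the sparsity pattern of the FVM coefficient matrix. A minimal sketch of the idea, using SciPy's reverse Cuthill-McKee reordering on a toy adjacency graph (an illustrative stand-in, not an actual FVM mesh):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Toy cell-connectivity graph with a deliberately scattered numbering
rows = [0, 0, 1, 2, 2, 3, 4, 4, 5, 6]
cols = [4, 6, 5, 3, 6, 7, 5, 7, 6, 7]
n = 8
A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
A = (A + A.T).tocsr()  # FVM matrices are structurally symmetric

def bandwidth(M):
    """Maximum distance of a nonzero entry from the diagonal."""
    coo = M.tocoo()
    return int(np.abs(coo.row - coo.col).max())

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_renumbered = A[perm][:, perm]

# Bandwidth typically drops substantially after renumbering
print(bandwidth(A), "->", bandwidth(A_renumbered))
```

A tighter bandwidth means the linear solver touches memory that is closer together, which is exactly the effect the post describes.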
► Reducing Boiler Emissions Through Shape Optimization
  25 Aug, 2020
In this work, a flexible framework for discrete adjoint-based reactive flow optimization in SU2 is presented. The implementation is based on a low-Mach number solver and a flamelet progress variable model for strongly cooled laminar premixed flames. Besides the combustion … Continue reading

F*** Yeah Fluid Dynamics

► Dendritic
  17 Sep, 2020

“What happens when two scientists, a composer, a cellist, and a planetarium animator make art?” The answer is “Dendritic,” a musical composition built directly on the tree-like branching patterns found when a less viscous fluid is injected into a more viscous one sandwiched between two plates.

Normally this viscous fingering instability results in dense, branching fingers, but when there’s directional dependence in the fluid, the pattern transitions instead to one that’s dendritic. In this case, that directionality comes from liquid crystals, whose rod-like shape makes it easier for liquid to flow in the direction aligned with the rods.

For more on the science, math, and music behind the piece, check out this description from the scientists and composer. (Video, image, and submission credit: I. Bischofberger et al.)

► Bright Volcanic Clouds
  16 Sep, 2020

Every day human activity pumps aerosol particles into the atmosphere, potentially altering our weather patterns. But tracking the effects of those emissions is difficult with so many variables changing at once. It’s easier to see how such particles affect weather patterns somewhere like the Sandwich Islands, where we can observe the effects of a single, known source like a volcano.

That’s what we see in this false-color satellite image. Mount Michael has a permanent lava lake in its central crater, and so often releases sulfur dioxide and other gases. As those gases rise and mix with the passing atmosphere, they can create bright, persistent cloud trails like the one seen here. The brightening comes from the additional small cloud droplets that form around the extra particles emitted from the volcano.

As a bonus, this image includes some extra fluid dynamical goodness. Check out the wave clouds and von Karman vortices in the wake of the neighboring islands! (Image credit: J. Stevens; via NASA Earth Observatory)

► Bacterial Turbulence
  15 Sep, 2020

Conventional fluid dynamical wisdom posits that any flow at the microscale should be laminar. Tiny swimmers like microorganisms live in a world dominated by viscosity; therefore, there can be no turbulence. But experiments with bacterial colonies have shown that’s not entirely true. With enough micro-swimmers moving around, even these viscous, small-scale flows become turbulent.

That’s what is shown in Image 2, where tracer particles show the complex motion of fluid around a bacterial swarm. By tracking both the bacteria motion and the fluid motion, researchers were able to describe the flow using statistical methods similar to those used for conventional turbulence. The characteristics of this bacterial turbulence are not identical to larger-scale turbulence, but they are certainly more turbulent than laminar. (Image credits: bacterium – A. Weiner, bacterial turbulence – J. Dunkel et al.; research credit: J. Dunkel et al.; submitted by Jeff M.)

► How Canal Locks Work
  14 Sep, 2020

For thousands of years, boats have been a critical component of trade, efficiently enabling transport of goods over large distances. But water’s self-leveling creates challenges when moving up and downstream through rivers and canals. To get around this, engineers use locks, which act as a sort of gravity-driven elevator to lift and lower boats to the appropriate water level. In this video from Practical Engineering, we learn about the basic physics behind locks as well as some of the methods engineers use to limit water loss through the lock. (Image and video credit: Practical Engineering)

► Fluorescent Dancing Droplets
  11 Sep, 2020

These fluorescent droplets of glowstick liquid jiggle and dance in a solution of sodium hydroxide. Some droplets jitter. Some rotate. And some undergo one coalescence after another. It’s always fun to see how fluid dynamics and chemistry combine! (Image and video credit: Beauty of Science)

► Why Slicing Tomatoes Works
  10 Sep, 2020

Picture it: a nice, ripe tomato. Your not-so-recently sharpened kitchen knife. You press the blade down into the soft flesh and… it explodes. Soft solids – like a tomato – don’t react well to cutting, but they slice just fine. Examining why that’s the case is at the heart of this model.

Tomatoes are essentially a gel encased in a thin skin. Gels are a kind of hybrid material — not quite liquid and not quite solid. They consist of a network of particles or polymers bonded together and immersed in a liquid. To cut that network apart, the downward force of the blade has to strain the gel past its limits, which squeezes out the surrounding liquid.

The researchers found that this liquid layer is key to how force from the knife’s motion gets transmitted. In particular, they found that the horizontal motion of a slice is necessary to initiate a cut, and that the gel parts most easily when the downward knife velocity is no more than 24% of the horizontal cutting speed. Press down any faster and the strain propagation fluctuates, creating that unfortunate tomato explosion. (Image credit: G. Fring; research credit: S. Mora and Y. Pomeau; via Ars Technica; submitted by Kam-Yung Soh)

Symscape

► CFD Simulates Distant Past
  25 Jun, 2019

There is an interesting new trend in using Computational Fluid Dynamics (CFD). Until recently CFD simulation was focused on existing and future things, think flying cars. Now we see CFD being applied to simulate fluid flow in the distant past, think fossils.

CFD shows Ediacaran dinner party featured plenty to eat and adequate sanitation

read more

► Background on the Caedium v6.0 Release
  31 May, 2019

Let's first address the elephant in the room - it's been a while since the last Caedium release. The multi-substance infrastructure for the Conjugate Heat Transfer (CHT) capability was a much larger effort than I anticipated and consumed a lot of resources. This led to the relative quiet you may have noticed on our website. However, with the new foundation laid and solid, we can look forward to a bright future.

Conjugate Heat Transfer Through a Water-Air Radiator
Simulation shows separate air and water streamline paths colored by temperature

read more

► Long-Necked Dinosaurs Succumb To CFD
  14 Jul, 2017

It turns out that Computational Fluid Dynamics (CFD) has a key role to play in determining the behavior of long extinct creatures. In a previous post, we described a CFD study of Parvancorina, and now Pernille Troelsen at Liverpool John Moores University is using CFD for insights into how long-necked plesiosaurs might have swum and hunted.

CFD Water Flow Simulation over an Idealized Plesiosaur: Streamline Vectors (illustration only, not part of the study)

read more

► CFD Provides Insight Into Mystery Fossils
  23 Jun, 2017

Fossilized imprints of Parvancorina from over 500 million years ago have puzzled paleontologists for decades. What makes it difficult to infer their behavior is that Parvancorina have none of the familiar features we might expect of animals, e.g., limbs, mouth. In an attempt to shed some light on how Parvancorina might have interacted with their environment researchers have enlisted the help of Computational Fluid Dynamics (CFD).

CFD Water Flow Simulation over a Parvancorina: Forward Direction (illustration only, not part of the study)

read more

► Wind Turbine Design According to Insects
  14 Jun, 2017

One of nature's smallest aerodynamic specialists - insects - has provided a clue to more efficient and robust wind turbine design.

Dragonfly: Yellow-winged Darter (License: CC BY-SA 2.5, André Karwath)

read more

► Runners Discover Drafting
    1 Jun, 2017

The recent attempt to break the 2 hour marathon came very close at 2:00:24, with various aids that would be deemed illegal under current IAAF rules. The bold and obvious aerodynamic aid appeared to be a Tesla fitted with an oversized digital clock leading the runners by a few meters.

2 Hour Marathon Attempt

read more

CFD Online

► RANS Grid Sensitivity Divergence on LES Grid
  31 Aug, 2020
Reference on not changing y+ while doing a grid sensitivity study:

Originally Posted by sbaffini:
Indeed, if y+ =4 is relative to the finest grid, it is confirmed to be a wall function problem. I can't double check now, but I'm pretty sure that the k-omega sst model in CFX uses an all y+ wall function, which means that a wall function is always active. While, in theory, such wall functions should be insensitive to the specific y+ value, they are not perfect and your case is very far from the typical wall function scenario (equilibrium boundary layer), so what you obtain is actually expected.

The only viable solution here, and I suggest you investigate it also for your other models, is to redistribute the cells in your grid to be always within y+ = 1-2, but no more. In any case, the important thing is that you can't have y+ changing between the grids when doing a grid refinement.

EDIT: I know, it sucks...
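For readers wondering how to land in a given y+ range in the first place, a common rough estimate uses a flat-plate skin-friction correlation. A minimal sketch (the correlation and fluid properties below are illustrative assumptions, not from the thread):

```python
import math

def first_cell_height(y_plus, U, L, rho=1.225, mu=1.81e-5):
    """Estimate the wall-normal first-cell height for a target y+,
    using an empirical flat-plate skin-friction correlation.
    Rough sizing only; defaults are approximate sea-level air."""
    Re = rho * U * L / mu
    cf = 0.026 / Re**(1 / 7)          # flat-plate skin-friction estimate
    tau_w = 0.5 * cf * rho * U**2     # wall shear stress
    u_tau = math.sqrt(tau_w / rho)    # friction velocity
    return y_plus * mu / (rho * u_tau)

# e.g. air at 10 m/s over a 1 m plate, targeting y+ = 1
h = first_cell_height(1.0, 10.0, 1.0)
print(f"first cell height ~ {h:.2e} m")
```

The result only seeds the initial mesh; the achieved y+ still has to be checked from the converged solution.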
► Y+ value for Large Eddy Simulation
  31 Aug, 2020
Explanation of Y+ as it relates to viscous sublayer and advection scheme:

Originally Posted by cfdnewbie:
yes, at least in the viscous sublayer. The size of your grid cell (or the number of points per unit length) determines the smallest scale you can catch on a given grid. From information theory, the Nyquist theorem tells us that we need at least 2 points per wavelength to represent a frequency (we need to be able to detect the sign change). However, 2 points per wavelength is just for Fourier-type approximations. For other schemes, like first-order FV, you need a lot more, maybe 6 to 10, to accurately capture a wavelength. Let's assume that you have the same grid in all of the flow (i.e. high resolution everywhere, no grid stretching or such). Then the smallest scale you can capture is determined by your grid and scheme; the better/finer, the smaller the scale.

Of course, most grids will coarsen away from the wall, so the smallest scale will "grow bigger" away from the wall as well.

Ha, that's the crux of LES :) of course, the bigger y+, the fewer the small scales you will catch, but does that change the result of the bigger scales?

The answer is not straightforward, but I'll try to make it short:

Let's talk about NS-equations (or any non-linear conservation eqns). The scales represented in the equations are coupled by the non-linearity of the equations, i.e. what happens on one scale will (eventually) reach all other scales (also known as the butterfly effect). So the NS eqns represent the full "nature" with all its scales and interactions. We now truncate our "nature" by resolving only the larger scales, since our grid is too coarse.... what will happen? Will the large scales be influenced by the lack of small scales?

Hell, yeah, they will. We are lacking the balancing interaction of the small scales, since we don't have these scales. We are also lacking the physical effects that take place at small scales (dissipation).... so we have production of turbulence at large scales, the energy is handed down through the medium scales but is NOT dissipated at the small scales, since they are simply not present in our computation. Will that influence the large scales? Definitely!

That's why LES people add some type of viscosity (effect of small scales) to their computations, otherwise, their simulations would very likely just blow up!

Hope this helps!

► Rans
  31 Aug, 2020
Originally Posted by vinerm:
That's a wrong notion that RANS or EVM models are introduced to get faster results or are expected to be used with a coarse mesh. There is no such assumption behind the development of these models. The only assumption in EVM is that the turbulence is isotropic, and non-EVM RANS models, such as RSM, don't even have that assumption.

And when it comes to wall treatment, it is not directly linked to the turbulence model; even LES requires wall treatment. y^+ is a non-dimensional (Reynolds) number and for almost all industrial fluids, theoretically as well as experimentally, it is found that u^+ = y^+ up to y^+ of 5. And since it is linear within this limit, it does not matter if you have 10 points or just 1 point; the line would be the same. So, y^+ smaller than 1 is overkill and does not help with anything.

The boundary conditions for both k and \varepsilon at the wall are 0.
► What I've done in the past years and may need someone else to pick it back up
  18 Aug, 2020
This blog post is meant to pass on the baton for the work I've done in the past to anyone who wants to pick it back up, partially or completely. It's work I was still doing (or trying to do) until Hanging my volunteer gloves and moving to a new phase of my life.

This blog post could potentially be edited as time goes on, as I remember other things I've done in the past that should be picked up by someone else:
  1. Generating version template pages and logos for said versions at - this is explained here: and here
  2. Writing and testing installation instructions at - The objective was to ensure that a less knowledgeable user would still be able to compile+install OpenFOAM from source code with a much higher success rate than by following the succinct instructions available at the official websites.
  3. Updating the release version links at the top right-most corner of
  4. Uh... several other things listed at, mostly listed here:
  5. Contributing to bug reports and fixes at
  6. Moderator work here at the forum, including:
    1. Hunting down spam, which nowadays is mostly automated, but not fully automated.
    2. Moving threads to the correct sub-forums.
    3. Re-arranging forums to make it easier for people to ask and answer questions, as well as finding existing answers.
    4. Warning forum members when they've not followed the rules...
    5. I wanted to have pruned all of the threads on the main OpenFOAM forum and place them in their correct sub-forums, but never got around to it. There is a thread on the moderator forum that explains how to streamline the process.
    6. I wanted to have finished moving posts into independent threads out of this still large thread:
    7. Also out of this one:
  7. Had a list of posts/threads I wanted to look into... which is now written on this wiki page on my central repository for these kinds of notes: What I wanted to still have done for the OpenFOAM community, but never managed to find the time for it
  8. And had a list of bugs I wanted to solve: Bugs on OpenFOAM's bug tracker I wanted to tackle, but never managed to find the time for it
  9. I have over 50 repositories at - most of them related to OpenFOAM and which will be left as-is for the years to come. If you want to continue working on them and even take over maintenance, open an issue on the respective repository.
► Hanging my volunteer gloves and moving to a new phase of my life
  18 Aug, 2020
TL;DR: As of 2020, I can only help during office hours, at work, if it is paid and/or affects our projects, namely what we use in OpenFOAM itself and blueCFD-Core.

Full post:
So nearly 2 years after my blog post Why I contribute to the OpenFOAM forum(s), wiki(s) and the public community, I'm writing this blog post you are reading now.

My last 3 thread posts at the CFD Online forums this year were on May 7th, February 27th and January 20th. Before that, it was 10 posts over my winter vacation in the last week of 2019. Before that, it averaged out to around 1 post/month. I have 10,956 posts here at the forum, which still averages out to 2.62 posts/day.

I'm currently on vacation, mid-August 2020, and am writing this because I'm unable to help the way I used to in the past.

So what happened?
In short: borderline burnout + ~30 kg overweight.

In other words, I was still able to work, but I was having difficulty maintaining a stable life, which hadn't been healthy for years, and I was overly stressed even when there was not much reason to be stressed...

What am I doing now, since early 2020?
  1. Changed my diet, namely changed my eating regimen to something I should have done over 20 years ago.
  2. Increased my physical activity to a much healthier dosage.
  3. Am moving on with my life to a new phase where I actually have to behave as a grown-up, especially given that I'm already 40 years old as I write this.

What does this mean for what I can do to help in the community?
Given my past efforts over a period of 10 years, I'm writing this blog post as an official stance on how much I will be able to help in the future:
  1. The majority (~99.9%) of my public contributions will be done within working hours at my job; in other words, during office hours, at work, if it is paid and/or affects our projects, namely what we use in OpenFOAM itself and blueCFD-Core.
  2. The remaining 0.1% outside of my job will mostly be the bug tracker at, given that I can't be at both and :(
  3. Everything else where I've helped in the past will happen once in a blue moon, be it at the forum or
  4. I don't know how many or which community/official OpenFOAM workshops I will attend in the future. I already had to give up on the Iberian User workshop of 2018, due to health reasons, i.e. what has finally led me to this decision this year of 2020.
This has been gradually occurring since at least 2015, but it has effectively come to this stopping point.

What do I ask of you, as you read this blog post?

Associated to this blog post, I'm writing another blog post which I may need to update in the near future: What I've done in the past years and may need someone else to pick it back up
edit: Aiming to wrap up writing said blog post by the end of the 19th of August 2020.

Signing off for now:
Some years ago, in a forum post where someone asked a vague question, I went on a rant along the lines of: "as people grow older, the more they know and the more responsibilities they have, therefore the less free time they have to come and help here... so the less information you provide, the less likely you are to get the answer you need".

In a way, my time has come and I need to move on with my life. But I was stressing out too much to notice it sooner. Fortunately, I should still be in time to keep going forward and hopefully be able to help the community more in the future.

The same has happened to various authors of code that is currently or was formerly in OpenFOAM: they helped people publicly over several years and ended up having to pull away from the community, because it's not easy to achieve a balance between life and working as a volunteer.

Fun fact:
Even if I don't post in the next 20 years it would still give me a rate of 1 post/month... :cool::rolleyes:
► 10 crucial parameters to check before committing to a CFD software for academia
    4 Aug, 2020
I have put together a comprehensive list of 10 crucial parameters that you, as a researcher or a teacher, should check with the CFD software provider, before committing to their software.

curiosityFluids

► Creating curves in blockMesh (An Example)
  29 Apr, 2019

In this post, I’ll give a simple example of how to create curves in blockMesh. For this example, we’ll look at the following basic setup:

As you can see, we’ll be simulating the flow over a bump defined by the curve:

y = H\sin\left(\pi x\right)

First, let’s look at the basic blockMeshDict for this blocking layout WITHOUT any curves defined:

/*--------------------------------*- C++ -*----------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:
    \\  /    A nd           | Version:  6
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      blockMeshDict;
}

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

convertToMeters 1;

vertices
(
    (-1 0 0)    // 0
    (0 0 0)     // 1
    (1 0 0)     // 2
    (2 0 0)     // 3
    (-1 2 0)    // 4
    (0 2 0)     // 5
    (1 2 0)     // 6
    (2 2 0)     // 7

    (-1 0 1)    // 8
    (0 0 1)     // 9
    (1 0 1)     // 10
    (2 0 1)     // 11
    (-1 2 1)    // 12
    (0 2 1)     // 13
    (1 2 1)     // 14
    (2 2 1)     // 15
);

blocks
(
    hex (0 1 5 4 8 9 13 12) (20 100 1) simpleGrading (0.1 10 1)
    hex (1 2 6 5 9 10 14 13) (80 100 1) simpleGrading (1 10 1)
    hex (2 3 7 6 10 11 15 14) (20 100 1) simpleGrading (10 10 1)
);

boundary
(
    // Patch names below are reconstructed; they were lost from the
    // original listing.
    inlet
    {
        type patch;
        faces
        (
            (0 8 12 4)
        );
    }
    outlet
    {
        type patch;
        faces
        (
            (3 7 15 11)
        );
    }
    lowerWall
    {
        type wall;
        faces
        (
            (0 1 9 8)
            (1 2 10 9)
            (2 3 11 10)
        );
    }
    top
    {
        type patch;
        faces
        (
            (4 12 13 5)
            (5 13 14 6)
            (6 14 15 7)
        );
    }
    frontAndBack
    {
        type empty;
        faces
        (
            (8 9 13 12)
            (9 10 14 13)
            (10 11 15 14)
            (1 0 4 5)
            (2 1 5 6)
            (3 2 6 7)
        );
    }
);

// ************************************************************************* //

This blockMeshDict produces the following grid:

It is best practice in my opinion to first make your blockMesh without any edges. This lets you see if there are any major errors resulting from the block topology itself. From the results above, we can see we’re ready to move on!

So now we need to define the curve. In blockMesh, curves are added using the edges sub-dictionary, which is simply a list of edge definitions with interpolation points:

edges
(
    polyLine 1 2
    (
        (0      0               0)
        (0.1    0.0309016994    0)
        (0.2    0.0587785252    0)
        (0.3    0.0809016994    0)
        (0.4    0.0951056516    0)
        (0.5    0.1             0)
        (0.6    0.0951056516    0)
        (0.7    0.0809016994    0)
        (0.8    0.0587785252    0)
        (0.9    0.0309016994    0)
        (1      0               0)
    )

    polyLine 9 10
    (
        (0      0               1)
        (0.1    0.0309016994    1)
        (0.2    0.0587785252    1)
        (0.3    0.0809016994    1)
        (0.4    0.0951056516    1)
        (0.5    0.1             1)
        (0.6    0.0951056516    1)
        (0.7    0.0809016994    1)
        (0.8    0.0587785252    1)
        (0.9    0.0309016994    1)
        (1      0               1)
    )
);

The sub-dictionary above is just a list of points on the curve y=H\sin(\pi x). The interpolation method is polyLine (straight lines between interpolation points). An alternative interpolation method could be spline.
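The interpolation points don't need to be typed by hand. A short Python snippet can generate both polyLine lists for y = H\sin(\pi x) (here H = 0.1 and 11 points, matching the values in the listing; the edge pairs (1, 2) and (9, 10) are the ones used above):

```python
import math

H = 0.1   # bump height; matches the y-values in the listing above
N = 11    # number of interpolation points between x = 0 and x = 1

# One polyLine per front/back edge: (1, 2) at z = 0 and (9, 10) at z = 1
for (start, end, z) in [(1, 2, 0), (9, 10, 1)]:
    print(f"polyLine {start} {end}")
    print("(")
    for i in range(N):
        x = i / (N - 1)
        y = H * math.sin(math.pi * x)
        print(f"    ({x:g} {y:.10g} {z})")
    print(")")
```

Changing H, N, or the curve expression then regenerates the whole edges block in one go.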

The following mesh is produced:

Hopefully this simple example will help some people looking to incorporate curved edges into their blockMeshing!


This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Creating synthetic Schlieren and Shadowgraph images in Paraview
  28 Apr, 2019

Experimentally visualizing high-speed flow was a serious challenge for decades. Before the advent of modern laser diagnostics and velocimetry, the only real techniques for visualizing high speed flow fields were the optical techniques of Schlieren and Shadowgraph.

Today, Schlieren and Shadowgraph remain an extremely popular means to visualize high-speed flows. In particular, Schlieren and Shadowgraph allow us to visualize complex flow phenomena such as shockwaves, expansion waves, slip lines, and shear layers very effectively.

In CFD there are many reasons to recreate these types of images. First, they look awesome. Second, if you are doing a study comparing to experiments, occasionally the only full-field data you have could be experimental images in the form of Schlieren and Shadowgraph.

Without going into detail about Schlieren and Shadowgraph themselves, primarily you just need to understand that Schlieren and Shadowgraph represent visualizations of the first and second derivatives of the flow field refractive index (which is directly related to density).

In Schlieren, a knife-edge is used to selectively cut off light that has been refracted. As a result you get a visualization of the first derivative of the refractive index in the direction normal to the knife edge. So for example, if an experiment used a horizontal knife edge, you would see the vertical derivative of the refractive index, and hence the density.

For Shadowgraph, no knife edge is used, and the images are a visualization of the second derivative of the refractive index. Unlike Schlieren images, shadowgraph has no direction; it shows you the Laplacian of the refractive index field (or density field).
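The same quantities can be sketched outside ParaView. On a uniform grid, NumPy's gradient operator gives a synthetic Schlieren (one component of the density gradient, per knife-edge direction) and a synthetic Shadowgraph (divergence of the gradient, i.e. the Laplacian); the tanh "shock" below is a made-up stand-in for real CFD data:

```python
import numpy as np

# Made-up density field: a smooth shock-like jump in x on a uniform grid
xc = np.linspace(0.0, 1.0, 200)
yc = np.linspace(0.0, 1.0, 200)
x, y = np.meshgrid(xc, yc, indexing="ij")
rho = 1.0 + 0.5 * np.tanh((x - 0.5) / 0.02)

# Synthetic Schlieren: one component of grad(rho), per knife-edge direction
drho_dx = np.gradient(rho, xc, axis=0)   # vertical knife edge
drho_dy = np.gradient(rho, yc, axis=1)   # horizontal knife edge

# Synthetic Shadowgraph: Laplacian of rho = div(grad(rho))
shadowgraph = np.gradient(drho_dx, xc, axis=0) + np.gradient(drho_dy, yc, axis=1)
```

Plotting one gradient component in a grayscale colormap reproduces the directional Schlieren look; the summed second derivatives give the shadowgraph.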

In this post, I’ll use a simple case I did previously ( as an example and produce some synthetic Schlieren and Shadowgraph images using the data.

So how do we create these images in paraview?

Well, as you might expect from the introduction, we simply do this by visualizing gradients of the density field.

In ParaView the necessary tool for this is:

Gradient of Unstructured DataSet:

Finding “Gradient of Unstructured DataSet” using the Filters-> Search

Once you’ve selected this, we then need to set the properties so that we are going to operate on the density field:

Change the “Scalar Array” Drop down to the density field (rho), and change the name to Synthetic Schlieren

To do this, simply set the “Scalar Array” to the density field (rho), and change the result array name to SyntheticSchlieren. Now you should see something like this:

This is NOT a synthetic Schlieren Image – but it sure looks nice

There are a few problems with the above image: (1) Schlieren images are directional, and this is a magnitude; (2) Schlieren and Shadowgraph images are black and white. So if you really want your Schlieren images to look like the real thing, you should change to black and white (although Cold and Hot, Black-Body Radiation, and Rainbow Desaturated all look pretty amazing).

To fix these, you should only visualize one component of the Synthetic Schlieren array at a time, and you should visualize using the X-ray color preset:

The results look pretty realistic:

Horizontal Knife Edge

Vertical Knife Edge

Now how about Shadowgraph?

The process of computing the shadowgraph field is very similar. However, recall that shadowgraph visualizes the Laplacian of the density field. BUT THERE IS NO LAPLACIAN CALCULATOR IN PARAVIEW!?! Haha no big deal. Just remember the basic vector calculus identity:

\nabla^2 \left( \cdot \right) = \nabla \cdot \nabla \left( \cdot \right)

Therefore, in order for us to get the Shadowgraph image, we just need to take the Divergence of the Synthetic Schlieren vector field!

To do this, we just have to use the Gradient of Unstructured DataSet tool again:

This time, deselect “Compute Gradient”, select “Compute Divergence”, and change the divergence array name to Shadowgraph.

Visualized in black and white, we get a very realistic looking synthetic Shadowgraph image:

Shadowgraph Image

So what do the values mean?

Now this is an important question, but a simple one to answer. And the answer is… not much. Physically, we know exactly what these quantities are: Schlieren is the gradient of the density field in one direction, and Shadowgraph is the Laplacian of the density field. But what you need to remember is that both Schlieren and Shadowgraph are qualitative images. The position of the knife edge, the brightness of the light, etc. all affect how a real experimental Schlieren or Shadowgraph image will look.

This means, very often, in order to get the synthetic Schlieren to closely match an experiment, you will likely have to change the scale of your synthetic images. In the end though, you can end up with extremely realistic and accurate synthetic Schlieren images.

Hopefully this post will be helpful to some of you out there. Cheers!

► Solving for your own Sutherland Coefficients using Python
  24 Apr, 2019

Sutherland’s equation is a useful model for the temperature dependence of the viscosity of gases. I give a few details about it in this post:

The law given by:

\mu=\mu_o\frac{T_o + C}{T+C}\left(\frac{T}{T_o}\right)^{3/2}

It is also often simplified (as it is in OpenFOAM) to:

\mu=\frac{C_1 T^{3/2}}{T+C}=\frac{A_s T^{3/2}}{T+T_s}
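Matching the two forms term by term gives A_s = C_1 = \mu_o (T_o + C)/T_o^{3/2} and T_s = C. A quick numerical check of that equivalence, using the standard Sutherland reference values for air (μ_o = 1.716e-5 Pa.s, T_o = 273.15 K, C = 110.4 K; assumed here purely for illustration):

```python
# Standard Sutherland reference values for air (assumed for illustration)
mu_o, T_o, C = 1.716e-5, 273.15, 110.4

# Matching the two forms term by term: C1 = mu_o*(T_o + C)/T_o**1.5, Ts = C
C1 = mu_o * (T_o + C) / T_o**1.5

def mu_full(T):
    """Full Sutherland form with reference viscosity mu_o at T_o."""
    return mu_o * ((T_o + C) / (T + C)) * (T / T_o)**1.5

def mu_simple(T):
    """Simplified (OpenFOAM-style) form with coefficients C1 (= As) and C (= Ts)."""
    return C1 * T**1.5 / (T + C)

# The two forms agree at any temperature
print(abs(mu_full(300.0) - mu_simple(300.0)) < 1e-12)
```

For air this yields C1 ≈ 1.458e-6, the familiar As value, which is why the two forms are interchangeable once the coefficients are converted.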

In order to use these equations, obviously, you need to know the coefficients. Here, I’m going to show you how you can create your own Sutherland coefficients using least-squares fitting in Python 3.

So why would you do this? Basically, there are two main reasons. First, if you are not using air, the Sutherland coefficients can be hard to find. Second, even if you do find them, they can be hard to reference, and you may not know how accurate they are. So creating your own Sutherland coefficients makes a ton of sense from an academic point of view: in your thesis or paper, you can say that you created them yourself, and you can give an exact number for the error in the temperature range you are investigating.

So let’s say we are looking for a viscosity model of Nitrogen N2 – and we can’t find the coefficients anywhere – or for the second reason above, you’ve decided its best to create your own.

By far the simplest way to achieve this is using Python and the Scipy.optimize package.

Step 1: Get Data

The first step is to find some well known, and easily cited, source for viscosity data. I usually use the NIST webbook (, but occasionally the temperatures there aren’t high enough. So you could also pull the data out of a publication somewhere. Here I’ll use the following data from NIST:

Temperature (K) Viscosity (Pa.s)
400 0.000022217
600 0.000029602
800 0.000035932
1000 0.000041597
1200 0.000046812
1400 0.000051704
1600 0.000056357
1800 0.000060829
2000 0.000065162

This data is the dynamic viscosity of nitrogen (N2) pulled from the NIST database at 0.101 MPa. (Note that in this range, viscosity should be temperature dependent only.)

Step 2: Use python to fit the data

If you are unfamiliar with Python, this may seem a little foreign to you, but Python is extremely simple.

First, we need to load the necessary packages (here, we’ll load numpy, scipy.optimize, and matplotlib):

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

Now we define the sutherland function:

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

Next, we input the data from the table above as NumPy arrays:

T = np.array([400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000])
mu = np.array([0.000022217, 0.000029602, 0.000035932, 0.000041597,
               0.000046812, 0.000051704, 0.000056357, 0.000060829,
               0.000065162])
Then we fit the data using the curve_fit function from scipy.optimize. This function uses a least-squares minimization to solve for the unknown coefficients. It returns two outputs: popt, an array containing our desired coefficients As and Ts, and pcov, the estimated covariance of the fit.

popt, pcov = curve_fit(sutherland, T, mu)

Now we can just output our coefficients to the screen and plot the results if we so wish:

print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')

plt.plot(T, mu, 'o')
plt.plot(T, sutherland(T, popt[0], popt[1]))
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()
Overall the entire code looks like this:

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

T = np.array([400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000])
mu = np.array([0.000022217, 0.000029602, 0.000035932, 0.000041597,
               0.000046812, 0.000051704, 0.000056357, 0.000060829,
               0.000065162])

popt, pcov = curve_fit(sutherland, T, mu)
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')

plt.plot(T, mu, 'o')
plt.plot(T, sutherland(T, popt[0], popt[1]))
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()

And the results for nitrogen gas in this range are As=1.55902E-6, and Ts=168.766 K. Now we have our own coefficients that we can quantify the error on and use in our academic research! Wahoo!
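Since one of the selling points here is being able to quantify the error, here is a minimal sketch of that check, reusing the NIST table and the fitted coefficients quoted above:

```python
import numpy as np

# NIST N2 viscosity data from the table above (K, Pa.s)
T = np.array([400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000])
mu = np.array([0.000022217, 0.000029602, 0.000035932, 0.000041597,
               0.000046812, 0.000051704, 0.000056357, 0.000060829,
               0.000065162])

# Fitted Sutherland coefficients quoted above
As, Ts = 1.55902e-6, 168.766

mu_fit = As*T**1.5/(T + Ts)
percent_error = 100*np.abs(mu_fit - mu)/mu
print('max error over 400-2000 K: %.2f%%' % percent_error.max())
```

A number like this is exactly what you can report in a thesis or paper alongside the coefficients themselves.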


In this post, we looked at how to take a database of viscosity-temperature data and use the Python package SciPy to solve for the unknown Sutherland viscosity coefficients. The NIST WebBook was used to grab some data, which was then loaded into Python and curve-fit using the scipy.optimize curve_fit function.

This task could also easily be accomplished using the MATLAB curve-fitting toolbox, or perhaps in Excel. However, I have not had good success using the Excel solver to find unknown coefficients.

► Tips for tackling the OpenFOAM learning curve
  23 Apr, 2019

The most common complaint I hear, and the most common problem I observe, with OpenFOAM is its supposed “steep learning curve”. I would argue, however, that for those who want to practice CFD effectively, the learning curve is just as steep for any other software.

There is a distinction that should be made between “user friendliness” and the learning curve required to do good CFD.

While I concede that other commercial programs have better basic user friendliness (a nice graphical interface, drop-down menus, point-and-click options, etc.), it is just as likely (if not more likely) that you will get bad results in those programs as with OpenFOAM. In fact, to some extent, the high user friendliness of commercial software can encourage a level of ignorance that can be dangerous. Additionally, once you are comfortable operating in the OpenFOAM world, the possibilities become endless, and things like code modification and bash and Python scripting can make OpenFOAM workflows EXTREMELY efficient and powerful.

Anyway, here are a few tips to more easily tackle the OpenFOAM learning curve:

(1) Understand CFD

This may seem obvious… but it’s not to some. Troubleshooting bad simulation results, or unstable simulations that crash, is impossible if you don’t have at least a basic understanding of what is happening under the hood. My favorite books on CFD are:

(a) The Finite Volume Method in Computational Fluid Dynamics: An Advanced Introduction with OpenFOAM® and Matlab by F. Moukalled, L. Mangani, and M. Darwish

(b) An Introduction to Computational Fluid Dynamics: The Finite Volume Method by H. K. Versteeg and W. Malalasekera

(c) Computational Fluid Dynamics: The Basics with Applications by John D. Anderson

(2) Understand fluid dynamics

Again, this may seem obvious and not very insightful. But if you are going to assess the quality of your results, and understand and appreciate the limitations of the various assumptions you are making – you need to understand fluid dynamics. In particular, you should familiarize yourself with the fundamentals of turbulence, and turbulence modeling.

(3) Avoid building cases from scratch

Whenever I start a new case, I find the tutorial case that most closely matches what I am trying to accomplish. This greatly speeds things up. It will take you a super long time to set up any case from scratch – and you’ll probably make a bunch of mistakes, forget key variable entries etc. The OpenFOAM developers have done a lot of work setting up the tutorial cases for you, so use them!

As you continue to work in OpenFOAM on different projects, you should be compiling a library of your own templates based on previous work.

(4) Using Ubuntu makes things much easier

This is strictly my opinion, but I have found it to be true. Yes, it’s true that Ubuntu has its own learning curve, but I have found that OpenFOAM works seamlessly in Ubuntu or any Ubuntu-like Linux environment. OpenFOAM now has Windows flavors using Docker and the like, but I can’t really speak to how well they work – mostly because I’ve never bothered. Once you unlock the power of Linux, the only reason to use Windows is for Microsoft Office (unless you’re a gamer – and even then, more and more games are now on Linux). Not only that, but the VAST majority of forums and troubleshooting associated with OpenFOAM that you’ll find on the internet are from Ubuntu users.

I much prefer to use Ubuntu with a virtual Windows environment inside it. My current office setup is my primary desktop running Ubuntu, plus a Windows VirtualBox VM, plus a laptop running Windows that I use for traditional Windows-type stuff. Dual booting is another option, but seamlessly moving between the environments is easier with a virtual machine.

(5) If you’re struggling, simplify

Unless you know exactly what you are doing, you probably shouldn’t dive into the most complicated version of whatever you are trying to solve/study. It is best to start simple, and layer the complexity on top. This way, when something goes wrong, it is much easier to figure out where the problem is coming from.

(6) Familiarize yourself with the cfd-online forum

If you are having trouble, the cfd-online forum is super helpful. Most likely, someone else has had the same problem you have. If not, the people there are extremely helpful, and overall the forum is an extremely positive environment for working out the kinks in your simulations.

(7) The results from checkMesh matter

If you run checkMesh and your mesh fails – fix your mesh. This is important. Especially if you are not planning on familiarizing yourself with the available numerical schemes in OpenFOAM, you should at least have a beautiful mesh. In particular, if your mesh is highly non-orthogonal, you will have serious problems. If you insist on using a bad mesh, you will probably need to manipulate the numerical schemes. A great source for how schemes should be manipulated based on mesh non-orthogonality is:

(8) CFL Number Matters

If you are running a transient case, the Courant–Friedrichs–Lewy (CFL) number matters… a lot. Not just for accuracy (if you are trying to capture a transient event) but for stability. If your time step is too large, you are going to have problems. There is a solid mathematical basis for this stability criterion for advection-diffusion problems. Additionally, the Navier-Stokes equations are very non-linear, and the complexity of the problem, the quality of your grid, etc. can make the simulation even less stable. When a transient simulation of mine crashes, if I know my mesh is OK, I decrease the time step by a factor of 2. More often than not, this solves the problem.
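As a back-of-the-envelope version of this, the advective CFL condition limits the time step to dt <= CFL*dx/|u|. A minimal sketch (the velocity, cell size and CFL target below are hypothetical example values):

```python
# Advective CFL time-step limit: dt <= CFL * dx / |u|.
# The velocity, cell size and CFL target are hypothetical examples.
def max_timestep(u, dx, cfl=1.0):
    """Largest time step satisfying the advective CFL condition."""
    return cfl*dx/abs(u)

# e.g. a 10 m/s flow through 1 mm cells at a target CFL of 0.5
dt = max_timestep(u=10.0, dx=1e-3, cfl=0.5)
print(dt)  # 5e-05 s
```

Halving the time step when a run crashes, as suggested above, is just moving further below this limit.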

For larger time steps, you can add outer loops to solvers based on the PIMPLE algorithm, but you may end up losing important transient information. An excellent explanation of how to do this is given in the book by T. Holzmann:

For the record, this point falls under point (1), Understand CFD.

(9) Work through the OpenFOAM Wiki “3 Week” Series

If you are starting OpenFOAM for the first time, it is worth it to work through an organized program of learning. One such example (and there are others) is the “3 Weeks Series” on the OpenFOAM wiki:

If you are a graduate student, and have no job to do other than learn OpenFOAM, it will not take 3 weeks. This touches on all the necessary points you need to get started.

(10) OpenFOAM is not a second-tier software – it is top tier

I know some people who have started out with the attitude, from the get-go, that they should be using different software. They think that somehow open source means it is not good. This is a pretty silly attitude. Many top researchers around the world are now using OpenFOAM or some other open-source package, and the number of OpenFOAM citations has grown consistently every year.

In my opinion, the only place where mainstream commercial CFD packages will persist is in industry labs where cost is no concern, and changing software is more trouble than it’s worth. OpenFOAM has been widely benchmarked and validated, from fundamental flows to hypersonics (see any of my 17 publications using it). If your results aren’t good, you are probably doing something wrong. If you have the attitude that you would rather be using something else, and are bitter that your supervisor wants you to use OpenFOAM, then when something goes wrong you will immediately think there is something wrong with the program… which is silly – and you may quit.

(11) Meshing… Ugh Meshing

For the record, meshing is an art in any software. But meshing is the only area where I will concede any limitation in OpenFOAM. HOWEVER, as I have outlined in a previous post, most things can be accomplished in OpenFOAM, and there are enough third-party meshing programs out there that you should have no problem.


Basically, if you are starting out in CFD or OpenFOAM, you need to put in the time. If you are expecting to be able to just sit down and produce magnificent results, you will be disappointed. You might quit. And frankly, that’s a pretty stupid attitude. However, if you accept that CFD and fluid dynamics in general are massive fields under constant development, and are willing to get up to speed, there are few limits to what you can accomplish.

Please take the time! If you want to do CFD, learning OpenFOAM is worth it. Seriously worth it.

This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software, and owner of the OPENFOAM® and OpenCFD® trade marks.

► Automatic Airfoil C-Grid Generation for OpenFOAM – Rev 1
  22 Apr, 2019
Airfoil Mesh Generated with

Here I will present something I’ve been experimenting with: a simplified workflow for meshing airfoils in OpenFOAM. If you’re like me (who knows if you are), I simulate a lot of airfoils – partly because of my involvement in various UAV projects, partly through consulting projects, and also for testing and benchmarking OpenFOAM.

Because there is so much data out there on airfoils, they are a good way to test your setups and benchmark solver accuracy. But going from an airfoil .dat coordinate file to a mesh can be a bit of a pain, especially if you are starting from scratch.

The main ways that I have meshed airfoils to date have been:

(a) Mesh it as a C- or O-grid in blockMesh (I have a few templates kicking around for this)
(b) Generate a “ribbon” geometry and mesh it with cfMesh
(c) Or, back in the day when I was a PhD student, I could use Pointwise – oh how I miss it.

But getting the mesh to look good was always sort of tedious. So I attempted to come up with a Python script that takes the airfoil data file and a few minimal inputs, and outputs a blockMeshDict file that you just have to run.

The goals were as follows:
(a) Create a C-grid domain
(b) Be able to specify the boundary layer growth rate
(c) Be able to set the first layer wall thickness
(d) Be mostly automatic (few user inputs)
(e) Have good mesh quality – pass all checkMesh tests
(f) Consistent quality – meaning when I make the mesh finer, the quality stays the same or gets better
(g) Be able to do both closed and open trailing edges
(h) Be able to handle most airfoils (up to high cambers)
(i) Automatically handle hinge and flap deflections

In Rev 1 of this script, I believe I have accomplished (a) through (f). Presently, it can only handle airfoils with closed trailing edges. Hinge and flap deflections are not possible, and highly cambered airfoils do not give very satisfactory results.

There are existing tools and scripts for automatically meshing airfoils, but I found personally that I wasn’t happy with the results. I also thought this would be a good opportunity to illustrate one of the ways python can be used to interface with OpenFOAM. So please view this as both a potentially useful script, but also something you can dissect to learn how to use python with OpenFOAM. This first version of the script leaves a lot open for improvement, so some may take it and be able to tailor it to their needs!

Hopefully, this is useful to some of you out there!


You can download the script here:

Here you will also find a template based on the airfoil2D OpenFOAM tutorial.


(1) Copy the script to the root directory of your simulation case.
(2) Copy your airfoil coordinates in Selig .dat format into the same folder location.
(3) Modify the inputs to your desired values. Specifically, make sure that the string variable airfoilFile is referring to the right .dat file.
(4) In the terminal, run the script with python3.
(5) If there are no errors, run blockMesh.

You need to run this with Python 3, and you need to have NumPy installed.


The inputs for the script are very simple:

ChordLength: This is simply the airfoil chord length if not equal to 1. The airfoil dat file should have a chordlength of 1. This variable allows you to scale the domain to a different size.

airfoilFile: This is a string with the name of the airfoil .dat file. It should be in the same folder as the python script, and both should be in the root folder of your simulation directory. The script writes a blockMeshDict to the system folder.

DomainHeight: This is the height of the domain in multiples of chords.

WakeLength: Length of the wake domain in multiples of chords

firstLayerHeight: This is the height of the first layer. To estimate the requirement for this size, you can use the curiosityFluids y+ calculator

growthRate: Boundary layer growth rate

MaxCellSize: This is the max cell size along the centerline from the leading edge of the airfoil. Some cells will be larger than this depending on the gradings used.

The following inputs are used to improve the quality of the mesh. I have had pretty good results messing around with these to get checkMesh compliant grids.

BLHeight: This is the height of the boundary layer block off of the surfaces of the airfoil

LeadingEdgeGrading: Grading from the 1/4 chord position to the leading edge

TrailingEdgeGrading: Grading from the 1/4 chord position to the trailing edge

inletGradingFactor: This is a grading factor that modifies the grading along the inlet as a multiple of the leading edge grading, and can help improve mesh uniformity.

trailingBlockAngle: This is an angle in degrees that expresses the angles of the trailing edge blocks. This can reduce the aspect ratio of the boundary cells at the top and bottom of the domain, but can make other mesh parameters worse.
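Putting the inputs together, the settings section of the script might look something like this (the variable names are those listed above; every numeric value and the file name are hypothetical examples of mine, not recommendations from the script's author):

```python
# Hypothetical example values for the script inputs described above
ChordLength = 1.0              # scale factor if the chord is not 1
airfoilFile = 'myAirfoil.dat'  # Selig-format coordinates (hypothetical name)
DomainHeight = 20.0            # domain height in chords
WakeLength = 20.0              # wake block length in chords
firstLayerHeight = 1e-5        # first wall-cell height
growthRate = 1.15              # boundary layer expansion ratio
MaxCellSize = 0.01             # max centerline cell size

# Mesh-quality tuning inputs
BLHeight = 0.1                 # boundary layer block height
LeadingEdgeGrading = 0.1       # quarter chord to leading edge
TrailingEdgeGrading = 10.0     # quarter chord to trailing edge
inletGradingFactor = 1.0       # multiple of the leading edge grading
trailingBlockAngle = 5.0       # trailing block angle in degrees
```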


12% Joukowski Airfoil


With the above inputs, the grid looks like this:

Mesh Quality:

These are some pretty good mesh statistics. We can also view them in paraView:

Clark-y Airfoil

The Clark-Y has some camber, so I thought it would be a logical next test after the previous symmetric airfoil. The inputs I used are basically the same as for the previous airfoil:

With these inputs, the result looks like this:

Mesh Quality:

Visualizing the mesh quality:

MH60 – Flying Wing Airfoil

Here is an example of a flying wing airfoil (tested since the trailing edge is tilted upwards).


Again, these inputs are basically the same as for the others. I have found that with these settings, I get pretty consistently good results. When you change the MaxCellSize, firstLayerHeight, and gradings, some modification may be required. However, if you just halve the MaxCellSize and halve the firstLayerHeight, you “should” get a similar grid quality, just much finer.

Grid Quality:

Visualizing the grid quality


Hopefully some of you find this tool useful! I plan to release a Rev 2 soon that will have the ability to handle highly cambered airfoils, and open trailing edges, as well as control surface hinges etc.

The long term goal will be an automatic mesher with an H-grid in the spanwise direction so that the readers of my blog can easily create semi-span wing models extremely quickly!

Comments and bug reporting encouraged!

DISCLAIMER: This script is intended as an educational and productivity tool and starting point. You may use and modify how you wish. But I make no guarantee of its accuracy, reliability, or suitability for any use. This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via, and owner of the OPENFOAM®  and OpenCFD®  trademarks.

► Normal Shock Calculator
  20 Feb, 2019

Here is a useful little tool for calculating the properties across a normal shock.
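If you are curious what sits behind a tool like this, the textbook normal shock relations for a calorically perfect gas fit in a few lines of Python (a sketch of the standard relations, not the calculator's own code):

```python
def normal_shock(M1, gamma=1.4):
    """Post-shock Mach number and static ratios across a normal shock."""
    g = gamma
    M2 = ((1 + 0.5*(g - 1)*M1**2)/(g*M1**2 - 0.5*(g - 1)))**0.5
    p_ratio = 1 + 2*g/(g + 1)*(M1**2 - 1)            # p2/p1
    rho_ratio = (g + 1)*M1**2/((g - 1)*M1**2 + 2)    # rho2/rho1
    T_ratio = p_ratio/rho_ratio                      # T2/T1
    return M2, p_ratio, rho_ratio, T_ratio

# Example: a Mach 2 normal shock in air
M2, p21, rho21, T21 = normal_shock(2.0)
print(M2, p21, rho21, T21)  # ~0.577, 4.5, ~2.667, ~1.687
```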

If you found this useful and have the need for more, visit STF Solutions. One of STF Solutions' specialties is providing our clients with custom software developed for their needs, ranging from custom CFD codes to simpler targeted codes, scripts, macros and GUIs for a wide range of specific engineering purposes such as pipe sizing, pressure loss calculations, heat transfer calculations, 1D flow transients, optimization and more. Visit STF Solutions for more information!

Disclaimer: This calculator is for educational purposes and is free to use. STF Solutions and curiosityFluids make no guarantee of the accuracy of the results, or their suitability or outcome for any given purpose.

Hanley Innovations top

► Accurate Aircraft Performance Predictions using Stallion 3D
  26 Feb, 2020

Stallion 3D uses your CAD design to simulate the performance of your aircraft.  This enables you to verify your design and compute quantities such as cruise speed, power required and range at a given cruise altitude. Stallion 3D is used to optimize the design before moving forward with building and testing prototypes.

The table below shows the results of Stallion 3D around the cruise angles of attack of the Cessna 402c aircraft.  The CAD design can be obtained from the OpenVSP hangar.

The results were obtained by simulating 5 angles of attack in Stallion 3D on an ordinary laptop computer running MS Windows 10. Given the aircraft geometry and flight conditions, Stallion 3D computed the CL, CD, L/D and other aerodynamic quantities. With these accurate aerodynamic results, preliminary performance data such as cruise speed, power, range and endurance can be obtained.

Lift Coefficient versus Angle of Attack computed with Stallion 3D

Lift to Drag Ratio versus True Airspeed at 10,000 feet

Power Required versus True Airspeed at 10,000 feet

The Stallion 3D results show good agreement with the published data for the Cessna 402. For example, the cruise speed of the aircraft at 10,000 feet is around 140 knots. This coincides with the speed at the maximum L/D (best range) shown in the graph and table above.

 More information about Stallion 3D can be found at the following link.

About Hanley Innovations
Hanley Innovations is a pioneer in developing user-friendly and accurate software that is accessible to engineers, designers and students. For more information, please visit our website.

► 5 Tips For Excellent Aerodynamic Analysis and Design
    8 Feb, 2020
Stallion 3D analysis of Uber Elevate eCRM-100 model

Being the best aerodynamics engineer requires meticulous planning and execution. Here are 5 steps you can follow to start your journey to becoming one of the best aerodynamicists.

1.  Airfoil analysis (VisualFoil) - the wing will not be better than the airfoil. Start with the best airfoil for the design.

2.  Wing analysis (3Dfoil) - know the benefits/limits of taper, geometric & aerodynamic twist, dihedral angles, sweep, induced drag and aspect ratio.

3. Stability analysis (3Dfoil) - longitudinal & lateral static & dynamic stability analysis.  If the airplane is not stable, it might not fly (well).

4. High Lift (MultiElement Airfoils) - airfoil arrangements can do wonders for takeoff, climb, cruise and landing.

5. Analyze the whole arrangement (Stallion 3D) - this is the best information you will get until you flight test the design.

About Hanley Innovations
Hanley Innovations is a pioneer in developing user-friendly and accurate software that is accessible to engineers, designers and students. For more information, please visit our website.

► Accurate Aerodynamics with Stallion 3D
  17 Aug, 2019

Stallion 3D is an extremely versatile tool for 3D aerodynamics simulations.  The software solves the 3D compressible Navier-Stokes equations using novel algorithms for grid generation, flow solutions and turbulence modeling. 

The proprietary grid generation and immersed boundary methods find objects arbitrarily placed in the flow field and then automatically place an accurate grid around them without user intervention. 

Stallion 3D's algorithms are fine-tuned to analyze inviscid flow with minimal losses. The above figure shows the surface pressure of the BD-5 aircraft (obtained from the OpenVSP hangar) using the compressible Euler algorithm.

Stallion 3D solves the Reynolds Averaged Navier-Stokes (RANS) equations using a proprietary implementation of the k-epsilon turbulence model in conjunction with an accurate wall function approach.

Stallion 3D can be used to solve problems in aerodynamics about complex geometries in subsonic, transonic and supersonic flows. The software computes and displays the lift, drag and moments for complex geometries in the STL file format. Actuator discs (up to 100) can be added to simulate prop wash for propeller and VTOL/eVTOL aircraft analysis.

Stallion 3D is a versatile and easy-to-use software package for aerodynamic analysis. It can be used for computing the performance and stability (both static and dynamic) of aerial vehicles including drones, eVTOL aircraft, light airplanes and dragons (above graphics via Thingiverse).

More information about Stallion 3D can be found at:

► Hanley Innovations Upgrades Stallion 3D to Version 5.0
  18 Jul, 2017
The CAD for the King Air was obtained from Thingiverse

Stallion 3D is a 3D aerodynamics analysis software package developed by Dr. Patrick Hanley of Hanley Innovations in Ocala, FL. Starting with only an STL file, Stallion 3D is an all-in-one digital tool that rapidly validates conceptual and preliminary aerodynamic designs of aircraft, UAVs, hydrofoils and road vehicles.

  Version 5.0 has the following features:
  • Built-in automatic grid generation
  • Built-in 3D compressible Euler Solver for fast aerodynamics analysis.
  • Built-in 3D laminar Navier-Stokes solver
  • Built-in 3D Reynolds Averaged Navier-Stokes (RANS) solver
  • Multi-core flow solver processing on your Windows laptop or desktop using OpenMP
  • Inputs STL files for processing
  • Built-in wing/hydrofoil geometry creation tool
  • Enables stability derivative computation using quasi-steady rigid body rotation
  • Up to 100 actuator disc (RANS solver only) for simulating jets and prop wash
  • Reports the lift, drag and moment coefficients
  • Reports the lift, drag and moment magnitudes
  • Plots surface pressure, velocity, Mach number and temperatures
  • Produces 2-D plots of Cp and other quantities along constant coordinate lines on the structure
The introductory price of Stallion 3D 5.0 is $3,495 for the yearly subscription or $8,000.  The software is also available in Lab and Class Packages.

For more information, please visit our website or call us at (352) 261-3376.
► Airfoil Digitizer
  18 Jun, 2017

Airfoil Digitizer is a software package for extracting airfoil data files from images. The software accepts images in the jpg, gif, bmp, png and tiff formats. Airfoil data can be exported as AutoCAD DXF files (line entities), UIUC airfoil database format and Hanley Innovations VisualFoil Format.

The following tutorial shows how to use Airfoil Digitizer to obtain hard-to-find airfoil ordinates from pictures.

More information about the software can be found at the following url:

Thanks for reading.

► Your In-House CFD Capability
  15 Feb, 2017

Have you ever wished for the power to solve your 3D aerodynamics analysis problems within your company, just at the push of a button? Stallion 3D gives you this very power using your MS Windows laptop or desktop computer. The software provides accurate CL, CD & CM numbers directly from CAD geometries without the need for user grid generation and costly cloud computing.

Stallion 3D v4 is the only MS Windows software that enables you to solve turbulent compressible flows on your PC. It utilizes the power that is hidden in your personal computer (64-bit & multi-core technologies). The software simultaneously solves seven unsteady non-linear partial differential equations on your PC. Five of these equations (the Reynolds-averaged Navier-Stokes, RANS) ensure conservation of mass, momentum and energy for a compressible fluid. Two additional equations capture the dynamics of the turbulent flow field.

Unlike other CFD software that require you to purchase a grid generation software (and spend days generating a grid), grid generation is automatic and is included within Stallion 3D.  Results are often obtained within a few hours after opening the software.

Do you need to analyze upwind and downwind sails? Do you need data for wings and ship stabilizers at 10, 40, 80, 120 degrees angle of attack and beyond? Do you need accurate lift, drag & temperature predictions in subsonic, transonic and supersonic flows? Stallion 3D can handle all flow speeds for any geometry, all on your ordinary PC.

Tutorials, videos and more information about Stallion 3D version 4.0 can be found at:

If you have any questions about this article, please call me at (352) 261-3376.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. degree in fluid dynamics from the Massachusetts Institute of Technology (MIT), Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.

CFD and others... top

► Facts, Myths and Alternative Facts at an Important Juncture
  21 Jun, 2020
We live in an extraordinary time in modern human history. A global pandemic did the unthinkable to billions of people: a nearly total lock-down for months.  Like many universities in the world, KU closed its doors to students since early March of 2020, and all courses were offered online.

Millions watched in horror when George Floyd was murdered, and when a 75 year old man was shoved to the ground and started bleeding from the back of his skull...

Meanwhile, Trump and his allies routinely ignore facts, fabricate alternative facts, and advocate often-debunked conspiracy theories to push his agenda. The political system designed by the founding fathers is assaulted from all directions. The rule of law and the free press are attacked on a daily basis. One often wonders how we managed to get to this point, and if the political system can survive the constant sabotage...It appears the struggle between facts, myths and alternative facts hangs in the balance.

In any scientific discipline, conclusions are drawn, and decisions are made based on verifiable facts. Of course, we are humans, and honest mistakes can be made. There are others, who push alternative facts or misinformation with ulterior motives. Unfortunately, mistaken conclusions and wrong beliefs are sometimes followed widely and become accepted myths. Fortunately, we can always use verifiable scientific facts to debunk them.

There have been many myths in CFD, and quite a few have been rebutted. Some have continued to persist. I'd like to refute several in this blog. I understand some of the topics can be very controversial, but I welcome fact-based debate.

Myth No. 1 - My LES/DNS solution has no numerical dissipation because a central-difference scheme is used.

A central finite difference scheme is indeed free of numerical dissipation in space. However, the time integration scheme inevitably introduces both numerical dissipation and dispersion. Since DNS/LES is unsteady in nature, the solution is not free of numerical dissipation.  
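This is easy to check for yourself. For the model problem du/dt = i*omega*u (a single resolved wave), classical RK4 has amplification factor G(z) = 1 + z + z^2/2 + z^3/6 + z^4/24 with z = i*omega*dt. An exact integrator gives |G| = 1; RK4 gives |G| slightly below 1, i.e. the time scheme damps the wave even when the spatial scheme is dissipation-free:

```python
# Amplification factor of classical RK4 for du/dt = 1j*omega*u.
# An exact integrator would give |G| = 1 (no damping).
def rk4_amplification(omega_dt):
    z = 1j*omega_dt
    return 1 + z + z**2/2 + z**3/6 + z**4/24

print(abs(rk4_amplification(0.5)))  # slightly below 1: numerical dissipation
```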

Myth No. 2 - You should use non-dissipative schemes in LES/DNS because upwind schemes have too much numerical dissipation.

It sounds reasonable, but it is far from true. We all agree that fully upwind schemes (the stencil shown in Figure 1) are bad. Upwind-biased schemes, on the other hand, are not necessarily bad at all. In fact, in a numerical test with the Burgers equation [1], the upwind-biased scheme performed better than the central difference scheme because of its smaller dispersion error. In addition, the numerical dissipation in the upwind-biased scheme makes the simulation more robust, since under-resolved high-frequency waves are naturally damped.

Figure 1. Various discretization stencils for the red point
The Riemann solver used in the DG/FR/CPR scheme also introduces a small amount of dissipation. However, because of its small dispersion error, it outperforms the central difference and upwind-biased schemes. This study shows that the dissipation and dispersion characteristics are equally important. Higher-order schemes clearly perform better than a low-order non-dissipative central difference scheme.
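One way to see both effects at once is through the modified wavenumber of a stencil: its real part governs dispersion, its imaginary part governs dissipation. A small sketch comparing the 2nd-order central difference (purely real, so dissipation-free but dispersive) with a standard 3rd-order upwind-biased stencil (a small imaginary part that damps poorly resolved waves):

```python
import numpy as np

# Modified wavenumber k*dx for d/dx stencils, as a function of theta = k*dx.
def central2(theta):
    # (u[j+1] - u[j-1]) / (2*dx): purely real -> no numerical dissipation
    return np.sin(theta)

def upwind_biased3(theta):
    # (u[j-2] - 6*u[j-1] + 3*u[j] + 2*u[j+1]) / (6*dx), positive wave speed
    e = np.exp(1j*theta)
    return (e**-2 - 6/e + 3 + 2*e)/(6*1j)

theta = 2.0  # a poorly resolved wave, roughly 3 points per wavelength
print(central2(theta))        # real: large dispersion error vs exact value 2
print(upwind_biased3(theta))  # complex: the imaginary part damps this wave
```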

Myth No. 3 - The Smagorinsky model is a physics-based sub-grid-scale (SGS) model.

There have been numerous studies based on experimental or DNS data which show that the SGS stress produced by the Smagorinsky model does not correlate with the true SGS stress. The role of the model is instead to add numerical dissipation to stabilize the simulation. The model coefficient is usually determined by matching a certain turbulent energy spectrum. This suggests that the model is purely numerical in nature, calibrated for certain numerical schemes using a particular turbulent energy spectrum. The calibration is not universal, because many simulations have produced worse results with the model.

► What Happens When You Run a LES on a RANS Mesh?
  27 Dec, 2019

Surely, you will get garbage because there is no way your LES will have any chance of resolving the turbulent boundary layer. As a result, your skin friction will be way off. Therefore, your drag and lift will be a total disaster.

To actually demonstrate this point of view, we recently embarked upon a numerical experiment to run an implicit large eddy simulation (ILES) of the NASA CRM high-lift configuration from the 3rd AIAA High-Lift Prediction Workshop. The flow conditions are: Mach = 0.2, Reynolds number = 3.26 million based on the mean aerodynamic chord, and the angle of attack = 16 degrees.

A quadratic (Q2) mesh was generated by Dr. Steve Karman of Pointwise, and is shown in Figure 1.

 Figure 1. Quadratic mesh for the NASA CRM high-lift configuration (generated by Pointwise)

The mesh has roughly 2.2 million mixed elements, and is highly clustered near the wall with an average equivalent y+ value smaller than one. A p-refinement study was conducted to assess the mesh sensitivity using our high-order LES tool based on the FR/CPR method, hpMusic. Simulations were performed with solution polynomial degrees of p = 1, 2 and 3, corresponding to 2nd, 3rd and 4th orders in accuracy respectively. No wall-model was used. Needless to say, the higher order simulations captured finer turbulence scales, as shown in Figure 2, which displays the iso-surfaces of the Q-criteria colored by the Mach number.    

Figure 2. Iso-surfaces of the Q-criterion colored by the Mach number, for p = 1, 2, and 3

Clearly the flow is mostly laminar on the pressure side, and transitional/turbulent on the suction side of the main wing and the flap. Although the p = 1 simulation captured the fewest scales, it still correctly identified the laminar and turbulent regions.

The drag and lift coefficients from the present p-refinement study are compared with experimental data from NASA in Table I. Although the 2nd order (p = 1) results are quite different from those of the higher orders, the 3rd and 4th order results are very close, demonstrating very good p-convergence in both the lift and drag coefficients. The lift agrees better with the experimental data than the drag, bearing in mind that the experiment includes wind tunnel wall effects and other small instruments that are not present in the computational model.

Table I. Comparison of lift and drag coefficients (p = 1, 2, and 3) with experimental data

This exercise seems to contradict the common-sense logic stated at the beginning of this post. So what happened? The answer is that in this high-lift configuration, the dominant force is due to pressure rather than friction. In fact, 98.65% of the drag and 99.98% of the lift are due to the pressure force. For such flow problems, running an LES on a RANS mesh (with sufficient accuracy) may produce reasonable predictions of drag and lift. More studies are needed to draw any definite conclusions. We would like to hear from you if you have done something similar.

This study will be presented at the forthcoming AIAA SciTech conference, to be held January 6-10, 2020 in Orlando, Florida.

► Not All Numerical Methods are Born Equal for LES
  15 Dec, 2018
Large eddy simulations (LES) are notoriously expensive for high Reynolds number problems because of the disparate length and time scales in the turbulent flow. Recent high-order CFD workshops have demonstrated the accuracy/efficiency advantage of high-order methods for LES.

The ideal numerical method for implicit LES (with no sub-grid scale models) should have very low dissipation AND dispersion errors over the resolvable range of wave numbers, while remaining dissipative at the non-resolvable high wave numbers. In this way, the simulation resolves a wide turbulent spectrum while damping out the non-resolvable small eddies to prevent energy pile-up, which can cause the simulation to diverge.

We want to emphasize the equal importance of both numerical dissipation and dispersion, which can be generated from both the space and time discretizations. It is well-known that standard central finite difference (FD) schemes and energy-preserving schemes have no numerical dissipation in space. However, numerical dissipation can still be introduced by time integration, e.g., explicit Runge-Kutta schemes.     
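The spatial part of this behavior can be quantified with a standard modified-wavenumber analysis. The sketch below is our own illustration, not the post's analysis: it compares the 6th-order central stencil with a simple first-order upwind stencil (the post's upwind-biased scheme is 6th order, but first-order upwind exhibits the same qualitative dissipative signature with coefficients we can state with certainty).

```python
import numpy as np

def modified_wavenumber(coeffs, xi):
    """Complex modified wavenumber k*dx for a first-derivative stencil.

    coeffs : {offset m: coefficient c_m} with f'_i ~= (1/dx) * sum c_m f_{i+m}
    xi     : array of scaled wavenumbers k*dx in (0, pi)
    Re(k*dx) governs dispersion; a nonzero Im(k*dx) means dissipation.
    """
    return sum(c * np.exp(1j * m * xi) for m, c in coeffs.items()) / 1j

xi = np.linspace(0.01, np.pi, 200)

# standard 6th-order central stencil
central6 = {-3: -1/60, -2: 3/20, -1: -3/4, 1: 3/4, 2: -3/20, 3: 1/60}
# first-order upwind, for illustration only (simpler than the post's
# 6th-order upwind-biased scheme)
upwind1 = {-1: -1.0, 0: 1.0}

k_c = modified_wavenumber(central6, xi)
k_u = modified_wavenumber(upwind1, xi)
# central: Im(k*dx) = 0 everywhere -> no numerical dissipation in space
# upwind:  Im(k*dx) <= 0          -> dissipative at all wavenumbers
```

The purely real modified wavenumber of the central scheme is exactly why any dissipation in such a simulation must come from the time integrator or from a model.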

We recently analysed and compared several 6th-order spatial schemes for LES: the standard central FD, the upwind-biased FD, the filtered compact difference (FCD), and the discontinuous Galerkin (DG) schemes, with the same time integration approach (a Runge-Kutta scheme) and the same time step. The FCD schemes have an 8th-order filter with two different filtering coefficients, 0.49 (weak) and 0.40 (strong). We first show the results for the linear wave equation with 36 degrees of freedom (DOFs) in Figure 1. The initial condition is a Gaussian profile, and a periodic boundary condition was used. The profile traversed the domain 200 times to highlight the differences.

Figure 1. Comparison of the Gaussian profiles for the DG, FD, and CD schemes

Note that the DG scheme gave the best performance, followed closely by the two FCD schemes, then the upwind-biased FD scheme, and finally the central FD scheme. The large dispersion error of the central FD scheme caused it to miss the peak and to generate large errors elsewhere.
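A simplified version of this experiment is easy to reproduce. The sketch below is our own minimal setup, not the authors' code: it advects a Gaussian profile on a 36-point periodic grid with the 6th-order central scheme and classic RK4. Run it for many more traversals, as in the post, to watch the dispersion error grow.

```python
import numpy as np

def rhs(u, dx, a=1.0):
    """-a du/dx with the 6th-order central difference on a periodic grid."""
    def sh(k):                      # periodic shift: returns u_{i+k}
        return np.roll(u, -k)
    dudx = (45*(sh(1) - sh(-1)) - 9*(sh(2) - sh(-2)) + (sh(3) - sh(-3))) / (60*dx)
    return -a * dudx

def advect(u0, dx, t_end, cfl=0.4, a=1.0):
    """Classic RK4 time integration of u_t + a u_x = 0."""
    u, t = u0.copy(), 0.0
    dt = cfl * dx / abs(a)
    while t < t_end - 1e-12:
        h = min(dt, t_end - t)
        k1 = rhs(u, dx, a)
        k2 = rhs(u + 0.5*h*k1, dx, a)
        k3 = rhs(u + 0.5*h*k2, dx, a)
        k4 = rhs(u + h*k3, dx, a)
        u += h/6*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return u

N = 36                               # 36 DOFs, as in the post
x = np.arange(N) / N                 # periodic unit domain
u0 = np.exp(-200*(x - 0.5)**2)       # Gaussian profile (width is our choice)
u = advect(u0, 1/N, t_end=1.0)       # one traversal; the post used 200
```

Even after a single traversal the under-resolved high wavenumbers of the Gaussian lag behind, which is the dispersion error Figure 1 magnifies over 200 traversals.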

Finally, simulation results for the viscous Burgers' equation are shown in Figure 2, which compares the energy spectra computed with the various schemes against that of a direct numerical simulation (DNS).

Figure 2. Comparison of the energy spectrum

Note again that the worst performance is delivered by the central FD scheme, with a significant high-wave-number energy pile-up. Although the FCD scheme with the weak filter resolved the widest spectrum, the pile-up at high wave numbers may cause robustness issues. Therefore, the best performers are the DG scheme and the FCD scheme with the strong filter. It is obvious that the upwind-biased FD scheme outperformed the central FD scheme, since it resolved the same range of wave numbers without the energy pile-up.
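For reference, the energy spectrum used in comparisons like Figure 2 is computed from the Fourier transform of the velocity field. A minimal 1D sketch (our illustration; it assumes an even number of samples on a periodic domain):

```python
import numpy as np

def energy_spectrum(u):
    """1D kinetic-energy spectrum E(k) = 0.5 |u_hat(k)|^2 of a periodic field.

    Assumes len(u) is even, so the last rfft bin is the Nyquist mode.
    """
    n = len(u)
    uhat = np.fft.rfft(u) / n      # normalized Fourier coefficients
    E = 0.5 * np.abs(uhat) ** 2
    E[1:-1] *= 2.0                 # fold in the negative wavenumbers
    return E

# single-mode sanity check: u = sin(3x) puts all its energy at k = 3,
# and the total must equal the mean kinetic energy 0.5*<u^2> = 0.25
x = np.linspace(0, 2*np.pi, 128, endpoint=False)
E = energy_spectrum(np.sin(3 * x))
```

An energy pile-up like the central FD scheme's shows up as E(k) flattening or rising near the grid cutoff instead of decaying.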

► Are High-Order CFD Solvers Ready for Industrial LES?
    1 Jan, 2018
The potential of high-order methods (order > 2nd) is higher accuracy at lower cost than low order methods (1st or 2nd order). This potential has been conclusively demonstrated for benchmark scale-resolving simulations (such as large eddy simulation, or LES) by multiple international workshops on high-order CFD methods.

For industrial LES, in addition to accuracy and efficiency, there are several other important factors to consider:

  • Ability to handle complex geometries, and ease of mesh generation
  • Robustness for a wide variety of flow problems
  • Scalability on supercomputers
For general-purpose industry applications, methods capable of handling unstructured meshes are preferred because of the ease of mesh generation and of load balancing on parallel architectures. DG and related methods, such as the spectral difference (SD) and FR/CPR methods, have received much attention because of their geometric flexibility and scalability. They have matured to become quite robust for a wide range of applications.

Our own research effort has led to the development of a high-order solver based on the FR/CPR method called hpMusic. We recently performed a benchmark LES comparison between hpMusic and a leading commercial solver, on the same family of hybrid meshes, at a transonic condition with a Reynolds number above 1 million. The 3rd order hpMusic simulation has 9.6M degrees of freedom (DOFs) and costs about one third the CPU time of the commercial solver's 2nd order simulation, which has 28.7M DOFs. Furthermore, the 3rd order simulation is much more accurate, as shown in Figure 1. It is estimated that hpMusic would be an order of magnitude faster at achieving similar accuracy. This study will be presented at AIAA's SciTech 2018 conference next week.

(a) hpMusic 3rd Order, 9.6M DOFs
(b) Commercial Solver, 2nd Order, 28.7M DOFs
Figure 1. Comparison of Q-criterion and Schlieren  

I certainly believe high-order solvers are ready for industrial LES. In fact, the commercial version of our high-order solver, hoMusic (pronounced hi-o-music), has been announced by hoCFD LLC (disclaimer: I am the company founder). Give it a try on your problems, and you may be surprised. Academic and trial use is completely free. Just visit to download the solver. A GUI has been developed to simplify problem setup. Your thoughts and comments are highly welcome.

Happy 2018!     

► Sub-grid Scale (SGS) Stress Models in Large Eddy Simulation
  17 Nov, 2017
The simulation of turbulent flow has been a considerable challenge for many decades. There are three main approaches to computing turbulence: 1) the Reynolds-averaged Navier-Stokes (RANS) approach, in which all turbulence scales are modeled; 2) the direct numerical simulation (DNS) approach, in which all scales are resolved; and 3) the large eddy simulation (LES) approach, in which the large scales are resolved while the small scales are modeled. I really like the following picture comparing DNS, LES and RANS.

DNS (left), LES (middle) and RANS (right) predictions of a turbulent jet. - A. Maries, University of Pittsburgh

Although the RANS approach has achieved widespread success in engineering design, some applications call for LES, e.g., flows at high angles of attack. The spatial filtering of a non-linear PDE results in a SGS term, which needs to be modeled based on the resolved field. The earliest SGS model was the Smagorinsky model, which relates the SGS stress to the rate-of-strain tensor. The purpose of the SGS model is to dissipate energy at a rate that is physically correct. Later, an improved version called the dynamic Smagorinsky model was developed by Germano et al., and it demonstrated much better results.

In CFD, physics and numerics are often intertwined very tightly, and one may draw erroneous conclusions if not careful. Personally, I believe the debate regarding SGS models can offer some valuable lessons regarding physics vs numerics.

It is well known that a central finite difference scheme contains no numerical dissipation. However, time integration can introduce dissipation. For example, a 2nd order central difference scheme is linearly stable with the SSP RK3 scheme (subject to a CFL condition), and the combination does contain numerical dissipation. When this scheme is used to perform an LES, the simulation will blow up without a SGS model because of a lack of dissipation for eddies at high wave numbers. It is easy to conclude that the successful LES owes its success to a properly modeled SGS stress. A recent study with the Burgers' equation strongly disputes this conclusion: it was shown that the SGS stress from the Smagorinsky model does not correlate well with the physical SGS stress. The role of the SGS model, in the above scenario, was therefore to stabilize the simulation by adding numerical dissipation.

For numerical methods which have natural dissipation at high wave numbers, such as the DG, SD or FR/CPR methods, or methods with spatial filtering, the SGS model can damage the solution quality because this extra dissipation is not needed for stability. For such methods, there is overwhelming evidence in the literature to support the use of implicit LES (ILES), in which the SGS stress simply vanishes. In effect, the numerical dissipation in these methods serves as the SGS model. Personally, I would prefer to call such simulations coarse DNS, i.e., DNS on coarse meshes which do not resolve all scales.

I understand this topic may be controversial. Please do leave a comment if you agree or disagree. I want to emphasize that I support physics-based SGS models.
► 2016: What a Year!
    3 Jan, 2017
2016 was undoubtedly the most extraordinary year for long-odds events. Take sports, for example:
  • Leicester City won the Premier League in England, defying odds of 5,000 to 1
  • The Cubs won the World Series after a 108-year wait
In politics, I do not believe many people truly expected Britain to exit the EU, or Trump to become the next US president.

On a personal level, I also experienced an equally extraordinary event: the attempted coup in Turkey.

The 9th International Conference on CFD (ICCFD9) took place on July 11-15, 2016 in the historic city of Istanbul. A terror attack on the Istanbul International airport occurred less than two weeks before ICCFD9 was to start. We were informed that ICCFD9 would still take place although many attendees cancelled their trips. We figured that two terror attacks at the same place within a month were quite unlikely, and decided to go to Istanbul to attend and support the conference. 

Given the extraordinary circumstances, the conference organizers did a fine job in pulling the conference through. More than half of the attendees withdrew their papers. Backup papers were used to form two parallel sessions though three sessions were planned originally. We really enjoyed Istanbul with the beautiful natural attractions and friendly people. 

Then on Friday evening, 12 hours before we were supposed to depart Istanbul, a military coup broke out. The government TV station was taken over by the rebels. However, the Turkish President managed to FaceTime a private TV station, essentially turning the event around. Soon after, many people took to the bridges and squares and overpowered the rebels with their bare fists.

A Tank outside my taxi

A beautiful night in Zurich

The trip back to the US was complicated by the fact that the FAA had banned all direct flights from Turkey. I was lucky enough to find a new flight, with a stop in Zurich...

In 2016, I lost a very good friend and CFD pioneer, Professor Jaw-Yen Yang. He suffered a horrific injury playing tennis in early 2015. Many of his friends and colleagues gathered in Taipei on December 3-5, 2016 to remember him.

This is a CFD blog after all, and so it is important to show at least one CFD picture. In a validation simulation [1] with our high-order solver, hpMusic, we achieved remarkable agreement with experimental heat transfer for a high-pressure turbine configuration. Here is a flow picture.

Computational Schlieren and iso-surfaces of Q-criterion

To close, I wish all of you a very happy 2017!

  1. Laskowski GM, Kopriva J, Michelassi V, Shankaran S, Paliath U, Bhaskaran R, Wang Q, Talnikar C, Wang ZJ, Jia F. Future directions of high fidelity CFD for aerothermal turbomachinery research, analysis and design, AIAA-2016-3322.

Convergent Science Blog top

► Leveling Up Scaling with CONVERGE 3.0
  14 Aug, 2020

In a competitive market, predictive computational fluid dynamics (CFD) can give you an edge when it comes to product design and development. Not only can you predict problem areas in your product before manufacturing, but you can also optimize your design computationally and devote fewer resources to testing physical models. To get accurate predictions in CFD, you need to have high-resolution grid-convergent meshes, detailed physical models, high-order numerics, and robust chemistry—all of which are computationally expensive. Using simulation to expedite product design works only if you can run your simulations in a reasonable amount of time.

The introduction of high-performance computing (HPC) drastically furthered our ability to obtain accurate results in shorter periods of time. By running simulations in parallel on multiple cores, we can now solve cases with millions of cells and complicated physics that otherwise would have taken a prohibitively long time to complete. 

However, simply running cases on more cores doesn’t necessarily lead to a significant speedup. The speedup from HPC is only as good as your code’s parallelization algorithm. Hence, to get a faster turnaround on product development, we need to improve our parallelization algorithm.

Let’s Start With the Basics

Breaking a problem into parts and solving these parts simultaneously on multiple interlinked processors is known as parallelization. An ideally parallelized problem will scale inversely with the number of cores—twice the number of cores, half the runtime.

A common task in HPC is measuring the scalability, also referred to as scaling efficiency, of an application. Scalability is the study of how the simulation runtime is affected by changing the number of cores or processors. The scaling trend can be visualized by plotting the speedup against the number of cores.
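In code, speedup and parallel efficiency follow directly from the measured runtimes of a strong-scaling study. A small sketch with illustrative numbers only (not from any particular CONVERGE study):

```python
def strong_scaling(times, cores):
    """Speedup and parallel efficiency relative to the smallest core count.

    times, cores : equal-length lists from a strong-scaling study
                   (same case run unchanged on each core count)
    """
    t_ref, n_ref = times[0], cores[0]
    speedup = [t_ref / t for t in times]
    # efficiency = achieved speedup / ideal speedup at that core count
    efficiency = [s / (n / n_ref) for s, n in zip(speedup, cores)]
    return speedup, efficiency

# hypothetical runtimes in hours for 64, 128, and 256 cores
s, e = strong_scaling([100.0, 52.0, 28.0], [64, 128, 256])
```

Plotting `s` against `cores` alongside the ideal line (slope 1 through the reference point) gives exactly the kind of scaling plot discussed here.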

How Does CONVERGE Parallelize?

Parallelization in CONVERGE 2.4 and Earlier

In CONVERGE versions 2.4 and earlier, parallelization is performed by partitioning the solution domain into parallel blocks, which are coarser than the base grid. CONVERGE distributes the blocks to the interlinked processors and then performs a load balance. Load balancing redistributes these parallel blocks such that each processor is assigned roughly the same number of cells.

This parallel-block technique works well unless a simulation contains high levels of embedding (regions in which the base grid is refined to a finer mesh) in the calculation domain. These cases lead to poor parallelization because the cells of a single parallel block cannot be split between multiple processors.

Figure 1 shows an example of parallel block load balancing for a test case in CONVERGE 2.4. The colors of the contour represent the cells owned by each processor. As you can see, the highly embedded region at the center is covered by only a few blocks, leading to a disproportionately high number of cells in those blocks. As a result, the cell distribution across processors is skewed. This phenomenon imposes a practical limit on the number of levels of embedding you can have in earlier versions of CONVERGE while still maintaining a reasonable load balance.

Figure 1: Parallel-block load balancing in CONVERGE 2.4.

Parallelization in CONVERGE 3.0

In CONVERGE 3.0, instead of generating parallel blocks, parallelization is accomplished via cell-based load balancing, i.e., on a cell-by-cell basis. Because each cell can belong to any processor, there is much more flexibility in how the cells are distributed, and we no longer need to worry about our embedding levels.

Figure 2 shows the cell distribution among processors using cell-based load balancing in CONVERGE 3.0 for the same test case shown in Figure 1. You can see that without the restrictions of the parallel blocks, the cells in the highly embedded region are divided between many processors, ensuring an (approximately) equal distribution of cells.

Figure 2: Cell-based load balancing in CONVERGE 3.0.

The cell-based load balancing technique demonstrates significant improvements in scaling, even for large numbers of cores. And unlike previous versions, the load balancing itself in CONVERGE 3.0 is performed in parallel, accelerating the simulation start-up.
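The effect of unsplittable blocks can be illustrated with a toy load balancer. This is our own sketch, not CONVERGE's algorithm, and the block cell counts are hypothetical: one block stands in for a highly embedded region.

```python
def block_balance(block_cells, n_procs):
    """Greedily assign whole blocks to processors (blocks cannot be split)."""
    loads = [0] * n_procs
    for c in sorted(block_cells, reverse=True):
        loads[loads.index(min(loads))] += c   # give block to the lightest processor
    return loads

# hypothetical case: a single embedded block holds most of the cells
blocks = [50_000] + [1_000] * 15
n = 8
block_loads = block_balance(blocks, n)

# cell-based balancing can split the big block, giving a near-perfect split
cell_loads = [sum(blocks) // n] * n

# ratio of the busiest processor's load to the ideal load (1.0 = perfect)
imbalance = max(block_loads) / (sum(blocks) / n)
```

Even with an optimal block assignment, the processor that owns the embedded block ends up with several times the ideal load, which is the skew visible in Figure 1 and absent in Figure 2.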

Case Studies

In order to see how well the cell-based parallelization works, we have performed strong scaling studies for a number of cases. The term strong scaling means that we ran the exact same simulation (i.e., we kept the number of cells, setup parameters, etc. constant) on different core counts.

SI8 PFI Engine Case

Figure 3 shows scaling results for a typical SI8 port fuel injection (PFI) engine case in CONVERGE 3.0. The case was run for one full engine cycle, and the core count varied from 56 to 448. The plot compares the speedup obtained running the case in CONVERGE 3.0 with the ideal speedup. With enough CPU resources, in this case 448 cores, you can simulate one engine cycle with detailed chemistry in under two hours—which is three times faster than CONVERGE 2.4!

Cores   Time (h)   Speedup   Efficiency   Cells per core   Engine cycles per day
 56      11.51      1          100%          12,500                 2.1
112       5.75      2          100%           6,200                 4.2
224       3.08      3.74        93%           3,100                 7.8
448       1.91      6.67        75%           1,600                12.5
Figure 3: CONVERGE 3.0 scaling results for an SI8 PFI engine simulation run on an in-house cluster. On 448 cores, CONVERGE 3.0 scales with 75% efficiency, and you can simulate more than 12 engine cycles in a single day. Please note that the parallelization profiles will differ from one case to another.

Sandia Flame D Case

If the speedup of the SI8 PFI engine simulation impressed you, then just wait until you see the scaling study for the Sandia Flame D case! Figure 4 shows the results of a strong scaling study performed for the Sandia Flame D case, in which we simulated a methane flame jet using 170 million cells. The case was run on the Blue Waters supercomputer at the National Center for Supercomputing Applications (NCSA), and the core counts vary from 500 to 8,000. CONVERGE 3.0 demonstrates impressive near-linear scaling even on thousands of cores.

Figure 4: CONVERGE 3.0 scaling results for a combusting turbulent partially premixed flame (Sandia Flame D) case run on the Blue Waters supercomputer at the National Center for Supercomputing Applications[1]. On 8,000 cores, CONVERGE 3.0 scales with 95% efficiency.


Although earlier versions of CONVERGE show good runtime improvements with increasing core counts, speedup is limited for cases with significant local embeddings. CONVERGE 3.0 has been specifically developed to run efficiently on modern hardware configurations that have a high number of cores per node.

With CONVERGE 3.0, we have observed an increase in speedup in simulations with as few as approximately 1,500 cells per core. With its improved scaling efficiency, this new version empowers you to obtain simulation results quickly, even for massive cases, so you can reduce the time it takes to bring your product to market. 

Contact us to learn how you can accelerate your simulations with CONVERGE 3.0.

[1] The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. The NCSA Industry Program is the largest Industrial HPC outreach in the world, and it has been advancing one third of the Fortune 50® for more than 30 years by bringing industry, researchers, and students together to solve grand computational problems at rapid speed and scale. The CONVERGE simulations were run on NCSA’s Blue Waters supercomputer, which is one of the fastest supercomputers on a university campus. Blue Waters is supported by the National Science Foundation through awards ACI-0725070 and ACI-1238993.

► The Collaboration Effect: A Decade of Innovation
    5 Aug, 2020

From the Argonne National Laboratory + Convergent Science Blog Series

The world is waiting for us to develop the tools needed to design new engine architectures, new concepts, with a finer control over the combustion process. If we can continue to make the progress we’ve achieved over the last ten years, I think society and the environment will continue to reap large rewards.

—Dr. Don Hillebrand, Division Director of the Energy Systems Division, Argonne National Laboratory

The year 2020 marks the ten-year anniversary of a fruitful collaboration between Convergent Science and the U.S. Department of Energy’s Argonne National Laboratory. Over the years, the collaboration has facilitated exciting advances in engine technology, high-performance computing and machine learning, computational methods, physical models, gas turbine and detonation engine simulations, and more. Many engineers at both Argonne and Convergent Science have contributed to these projects, but the collaboration started with one individual.

The Story Origin

Dr. Sibendu Som

Dr. Sibendu Som was introduced to CONVERGE before it was even called CONVERGE. He was a graduate student at the University of Illinois at Chicago (UIC), and in the summer of 2006 Sibendu participated in an industry internship. He worked with engineers on a computational fluid dynamics (CFD) team who were using an internal version of a code in development by a small company named Convergent Science. When Sibendu’s internship ended, he went back to UIC and continued to work with the same CFD code—at the time called MOSES.

For his thesis, Sibendu focused on improving spray models, for which he was obtaining experimental data from Argonne. Spray modeling happens to be a specialty of Dr. Kelly Senecal, Co-Owner of Convergent Science, so Kelly assisted Sibendu in his endeavors.

“Kelly helped me quite a bit,” Sibendu says, “so I actually invited him to be a part of my thesis defense committee.”

Doug Longman and Kelly Senecal

After completing his Ph.D.—and thoroughly impressing Kelly and the rest of his committee—Sibendu became a postdoc at Argonne National Laboratory in the research group of Mr. Doug Longman, Manager of Engine Research. At the time, there was only a little CFD work being done at Argonne in the combustion and spray area, so there was an opportunity to bring in a new code. Having used CONVERGE during his thesis, Sibendu was a proponent of using the software at Argonne.

Partnering with a renowned national laboratory was a big opportunity for Convergent Science. In 2010, Convergent Science had only recently switched from being a CFD consulting company to a CFD software company, and working with Argonne lent credibility to their code. Argonne also provided access to computational resources on a scale that a small company simply could not afford on their own.

“It was also a relationship thing,” Kelly says. “The partnership just started off on the right foot, and we were really happy to work with the Argonne research team.”

A Mutually Beneficial Partnership

Government and private industry have a long history of collaboration in the United States—and for good reason. These relationships are not only beneficial for both parties, but also for taxpayers. The mission of national laboratories is not to compete with industry, but to help support and enhance the missions of private companies for the benefit of the country.

“The national lab system in the United States is a national treasure,” says Dr. Don Hillebrand. “Our job is to look at big science, big physics, big chemistry, big engineering, and solve challenging problems that confront us. We make sure that knowledge or tools or technology solutions get transferred to industrial groups, who develop jobs and products and make the country competitive.”

National laboratories provide access to resources, including advanced technology and funding, that private companies are often unable to obtain on their own. For Convergent Science in particular, access to Argonne’s computational resources made it possible to test CONVERGE on large numbers of cores and to work on improving the scalability for clients who want to run highly parallel simulations. Getting access to these types of resources on the ground floor provides a huge advantage to industry partners.

Theta Supercomputer at Argonne National Laboratory

Another important function of national labs is to investigate long-term or risky areas of research. Private companies survive on the profits they make, and investing in research that does not pay off in the end can be damaging to their business. In the same vein, companies tend to focus on products that they can bring to market relatively quickly to make sure they have a consistent revenue stream. However, long-term and riskier research is critical for developing innovative technologies that have the potential to transform our lives.

“The government drives a lot of research in cutting-edge technology,” says Dr. Dan Lee, Co-Owner of Convergent Science. “They also have advanced facilities and teams of expert engineers doing fundamental research for projects that are potentially going to shape the future.”

Of course, to have an impact on society, the technology developed in national laboratories must end up in the hands of consumers. Thus the end-goal of research and development at government institutions is to transfer that technology to industry.

Ann Schlenker, Director of the Center for Transportation Research at Argonne, spent more than 30 years in industry before transitioning to Argonne. That experience gave her a deep understanding of the synergistic relationship between government and private industry.

“You need to be extremely astute at listening to the voice of the customer. And that means understanding what the challenges are, where the hurdles and difficulties are stressing the system and how best to optimize processes. Because if you can do that, you can develop timely solutions,” Ann says.

Partnering with industry helps ensure that the research at the national labs is relevant, timely, and impactful. This is one way in which these relationships benefit the taxpayer—the results of government research directly address the needs of consumers and help make the country competitive on the world stage.

Delivering Results

The collaboration between Argonne and Convergent Science has resulted in significant advances for the modeling community and the transportation industry. While the details of this research will be discussed in depth in upcoming blog posts, the projects from the past decade generally fall into two categories: advancing simulation for propulsion technologies and improving the scalability of CONVERGE on high-performance computing architectures.

Many projects have focused on modeling processes relevant to the internal combustion engine, such as studying fuel injection and sprays using experimental data from Argonne’s Advanced Photon Source, implementing state-of-the-art nozzle flow models in CONVERGE, simulating ignition, and investigating cycle-to-cycle variation.

Other key areas of focus have been modeling challenging phenomena in gas turbine combustors and breaking ground on simulating rotating detonation engines. Enhancing the scalability of CONVERGE has made it possible to run larger, more complex cases and to obtain more accurate, more relevant results from these simulations.

The overarching goal for these projects continues to be to create better models and establish techniques that will be instrumental in developing the transportation technologies of the future. Perhaps Ann sums it up best:

The day of learning is not over for combustion processes. It’s germane to our gross domestic product for U.S. economic vitality. Our transportation and combustion researchers and industry engineers work side-by-side to achieve the societal goals of better fuel economy and lower emissions. And these strong collaborations and this visionary work allow us to move fully forward with model-based system engineering, with high-fidelity, predictive capabilities that we trust.

The collaboration between Convergent Science and Argonne National Laboratory will certainly help propel us into the future. Learn more about the research performed during this collaboration in upcoming blog posts!

► Models On Top of Models: Thickened Flames in CONVERGE
    2 Jul, 2020

Any CONVERGE user knows that our solver includes a lot of physical models. A lot of physical models! How many combinations exist? How many different ways can you set up a simulation? That’s harder to answer than you might think. There might be N turbulence models and M combustion models, but the total set of combinations isn’t N*M.

Why not? In some cases, our developers simply haven't implemented a given combination yet! The ECFM and ECFM3Z combustion models, for example, could not be combined with a large eddy simulation (LES) turbulence model until CONVERGE version 3.0.11. We're adding more features all the time. One interesting example is the thickened flame model (TFM).

The name is descriptive, of course: TFM is designed to thicken the flame. If you’re not a combustion researcher, this notion may not be intuitive. A real flame is thin (in an internal combustion engine environment, tens or hundreds of microns). Why would we want to design a model that intentionally deviates from this reality? As is often the case with physical modeling, the answer lies in what we’re trying to study.

CONVERGE is often used to study the engineering operability of a premixed internal combustion or gas turbine engine. This requires accurate simulation of macroscopic combustion dynamics (flame properties), including the laminar flamespeed. A large eddy simulation (LES) might use cells on the order of 0.1 mm.

The problem may now be clear. The flame is much too thin to resolve on the grid we want to use. In fact, a detailed chemical kinetics solver like SAGE requires five or more cells across the flame in order to reproduce the correct laminar flamespeed. An under-resolved flame results in an underprediction of laminar flamespeed. Of course, we could simply decrease the cell size by an order of magnitude, but that makes for an impractical engineering calculation.

The thickened flame model is designed to solve this problem. The basic idea of Colin et al. [1] was to simulate a flame that is thicker than the physical one, but which reproduces the same laminar flamespeed. From simple scaling analysis, this can be achieved by increasing the thermal and species diffusivity while reducing the reaction rate by a factor of F. Because the flame thickening effect decreases the wrinkling of the flame front, and thus its surface area, an efficiency factor E is introduced so that the correct turbulent flamespeed is recovered.
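The scaling argument can be made concrete with the classical laminar-flame relations s_L ~ sqrt(D*omega) and delta ~ sqrt(D/omega), where D is the diffusivity and omega the reaction rate. The sketch below is our illustration of the Colin et al. scaling, with hypothetical property values:

```python
from math import sqrt

def flame_scales(D, omega):
    """Laminar flamespeed and thickness from simple flame theory:
    s_L ~ sqrt(D * omega),  delta ~ sqrt(D / omega)."""
    return sqrt(D * omega), sqrt(D / omega)

def thickened(D, omega, F, E=1.0):
    """TFM scaling: diffusivity multiplied by E*F, reaction rate by E/F."""
    return E * F * D, (E / F) * omega

# hypothetical diffusivity (m^2/s) and reaction-rate scale (1/s)
D, omega = 1e-5, 4.0e5
s0, d0 = flame_scales(D, omega)

# thicken the flame by a factor F = 10 (unity efficiency factor here)
sF, dF = flame_scales(*thickened(D, omega, F=10.0))
# sF == s0 (flamespeed preserved), dF == 10 * d0 (flame thickened by F)
```

With the efficiency factor E included, both s_L and the turbulent wrinkling correction scale by E, which is how the correct turbulent flamespeed is recovered.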

The combination of these scaling factors allows CONVERGE to recover the correct flamespeed without actually resolving the flame itself. CONVERGE also calculates a flame sensor function so that these scaling factors are applied only at the flame front. By using TFM with SAGE detailed chemistry, a premixed combustion engineering simulation with LES becomes practical.

Hasti et al. [2] evaluated one such case using CONVERGE with LES, SAGE, and TFM. This work examined the Volvo bluff-body augmentor test rig, shown below, which has been studied extensively. At the conditions of interest, the flame thickness is estimated to be about 1 mm, so SAGE without TFM would require a grid no coarser than 0.2 mm to accurately simulate combustion.
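That grid estimate follows directly from the five-cells-across-the-flame guideline mentioned earlier; as a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the grid requirement for resolving the flame
flame_thickness_mm = 1.0   # estimated flame thickness at the rig conditions
cells_across_flame = 5     # minimum cells SAGE needs across the flame
max_dx_mm = flame_thickness_mm / cells_across_flame
print(max_dx_mm)  # 0.2
```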

Figure 1: Volvo bluff-body augmentor test rig [3].

With TFM, Hasti et al. show that CONVERGE is able to generate a grid-converged result at a minimum grid spacing of 0.3125 mm. We might expect such a calculation to take only about 40% as many core hours as a simulation with a minimum grid spacing of 0.25 mm.
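The core-hour estimate can be sanity-checked with the usual scaling argument: cell count grows as (1/dx)^3 and a CFL-limited time step shrinks with dx, so cost scales roughly as (1/dx)^4. A quick sketch, ignoring AMR and solver overheads:

```python
def relative_cost(dx, dx_ref):
    # Cell count scales as (1/dx)^3 and a CFL-limited time step as dx,
    # so total core-hours scale roughly as (1/dx)^4.
    return (dx_ref / dx) ** 4

print(relative_cost(dx=0.3125, dx_ref=0.25))  # ~0.41, i.e. about 40% of the cost
```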

Figure 2: Representative instantaneous temperature field of the bluff-body combustor.
Base grid sizes of 2 mm (above) and 3 mm (below) correspond to minimum cell sizes of 0.25 mm and 0.375 mm, respectively.
Figure 3: Representative instantaneous velocity magnitude field of the bluff-body combustor.
Base grid sizes of 2 mm (above) and 3 mm (below) correspond to minimum cell sizes of 0.25 mm and 0.375 mm, respectively.
Figure 4: Representative instantaneous vorticity magnitude field of the bluff-body combustor.
Base grid sizes of 2 mm (above) and 3 mm (below) correspond to minimum cell sizes of 0.25 mm and 0.375 mm, respectively.
Figure 5: Transverse mean temperature profiles at x/D = 3.75, 8.75, and 13.75.
Base grid sizes of 2 mm, 2.5 mm, and 3 mm correspond to minimum cell sizes of 0.25 mm, 0.3125 mm, and 0.375 mm, respectively.

Understanding the topic of study, the underlying physics, and the way those physics are affected by our choice of physical models is critical to performing accurate simulations. If you want to combine the power of the SAGE detailed chemical kinetics solver with the transient behavior of an LES turbulence model to understand the behavior of a practical engine, and to do so without bankrupting your IT department, TFM is the enabling technology.

Want to learn more about thickened flame modeling in CONVERGE? Check out these TFM case studies from recent CONVERGE User Conferences (1, 2, 3) and keep an eye out for future Premixed Combustion Modeling advanced training sessions.

[1] Colin, O., Ducros, F., Veynante, D., and Poinsot, T., “A thickened flame model for large eddy simulations of turbulent premixed combustion,” Physics of Fluids, 12(7), 1843-1863, 2000. DOI: 10.1063/1.870436
[2] Hasti, V.R., Liu, S., Kumar, G., and Gore, J.P., “Comparison of Premixed Flamelet Generated Manifold Model and Thickened Flame Model for Bluff Body Stabilized Turbulent Premixed Flame,” 2018 AIAA Aerospace Sciences Meeting, AIAA 2018-0150, Kissimmee, Florida, January 8-12, 2018. DOI: 10.2514/6.2018-0150
[3] Sjunnesson, A., Henrikson, P., and Lofstrom, C., “CARS measurements and visualizations of reacting flows in a bluff body stabilized flame,” 28th Joint Propulsion Conference and Exhibit, AIAA 92-3650, Nashville, Tennessee, July 6-8, 1992. DOI: 10.2514/6.1992-3650

► The Search for Soot-free Diesel: Modeling Ducted Fuel Injection With CONVERGE
  26 Mar, 2020

At the upcoming CONVERGE User Conference, which will be held online from March 31–April 1, Andrea Piano will present results from experimental and numerical studies of the effects of ducted fuel injection on fuel spray characteristics. Dr. Piano is a Research Assistant in the e3 group, coordinated by Prof. Federico Millo at Politecnico di Torino, and these are the first results to be reported from their ongoing collaboration with Prof. Lucio Postrioti at Università degli Studi di Perugia, Andrea Bianco at Powertech Engineering, and Francesco Pesce and Alberto Vassallo at General Motors Global Propulsion Systems. This work is a great example of how CONVERGE can be used in tandem with experimental methods to advance research at the cutting edge of engine technology. Keep reading for a preview of the results that Dr. Piano will discuss in greater detail in his online presentation.

The idea behind ducted fuel injection (DFI), originally conceived by Charles Mueller at Sandia National Laboratories, is to suppress soot formation in diesel engines by allowing the fuel to mix more thoroughly with air before it ignites [1]. Soot forms when a fuel doesn’t burn completely, which happens when the fuel-to-air ratio is too high. In DFI, a small tube, or duct, is placed near the nozzle of the fuel injector and directed along the axis of the fuel stream toward the autoignition zone. The fuel spray that travels through this duct is better mixed than it would be in a ductless configuration. Experiments at Sandia have shown that DFI can reduce soot formation by as much as 95%, demonstrating the enormous potential of this technology for curtailing harmful emissions from diesel engines.
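The "fuel-to-air ratio too high" condition is usually quantified by the equivalence ratio, the actual fuel-to-air mass ratio divided by the stoichiometric one; sooting tendency rises sharply in locally rich mixtures (equivalence ratio well above 1). A minimal illustration, assuming a stoichiometric fuel-to-air mass ratio of roughly 1/14.5, a typical value for diesel-type fuels:

```python
def equivalence_ratio(fuel_mass, air_mass, fa_stoich=1 / 14.5):
    # phi = (F/A) / (F/A)_stoich; fa_stoich ~1/14.5 is an illustrative
    # value for diesel-type fuels, not a property of any specific fuel.
    return (fuel_mass / air_mass) / fa_stoich

# A locally rich pocket: too little entrained air for the injected fuel
print(equivalence_ratio(fuel_mass=1.0, air_mass=7.25))  # 2.0
```

By enhancing mixing before autoignition, the duct lowers these local equivalence ratios, which is the mechanism behind the observed soot reduction.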

Introduction to ducted fuel injection from Sandia National Laboratories.

While the Sandia researchers have focused on heavy-duty diesel applications, Dr. Piano and his collaborators are targeting smaller engines, such as those found in passenger cars and light-duty trucks. To understand how the fuel spray evolves in the presence of a duct, they first performed imaging and phase Doppler anemometry analyses of non-reacting sprays in a constant-volume test vessel. Figure 1 shows a sample of the experimental results. The video on the left corresponds to a free spray configuration with no duct, while the video on the right corresponds to a ducted configuration. Observe how the dark liquid breaks up and evaporates more quickly in the ducted configuration—this is the enhanced mixing that occurs in DFI.

Figure 1: Videos from experiments on non-reacting sprays in a free spray configuration (left) and a ducted configuration (right). Images were obtained from a constant-volume vessel at a rail pressure of 1200 bar, vessel temperature of 500°C, and vessel pressure of 20 bar.

Their next step was to develop a CFD model of the fuel spray that could be calibrated against the experimental results. Dr. Piano and his colleagues reproduced the geometry of the experimental setup in a CONVERGE environment, using physical models available in CONVERGE to simulate the processes of spray breakup, evaporation, and boiling, as well as the interactions between the spray and the duct. With fixed embedding and Adaptive Mesh Refinement, they were able to increase the grid resolution in the vicinity of the spray and the duct without a significant increase in computational cost. They simulated the spray penetration for both the free spray and the ducted configuration over a range of operating conditions and validated those results against the experimental data.

With a calibrated spray model in hand, the researchers were then able to run predictive simulations of DFI for reacting fuel sprays. They combined their spray model with the SAGE detailed chemical kinetics solver for combustion modeling, along with the Particulate Mimic model of soot formation. They ran simulations at different rail pressures and vessel temperatures to see how DFI would affect the amount of soot mass produced under engine-like operating conditions. Figures 2 and 3 show examples of the simulation results for a rail pressure of 1200 bar and a vessel temperature of 1000 K. Consistent with the findings of Mueller et al. [1], these results show a dramatic reduction in the mass of soot produced during combustion in the ducted configuration as compared to the free spray configuration.

Figure 2: The plots on the right side show the heat release rate and soot mass produced in simulations of reacting sprays (red lines correspond to the free spray configuration and blue lines correspond to the ducted configuration). The dashed vertical lines indicate the simulation time at which the two contour plots were generated, with the free spray configuration on the left and the ducted configuration in the center. Contours are colored by soot mass, with regions of high soot mass shown in red.
Figure 3: The plots on the right side show the heat release rate and soot mass produced in simulations of reacting sprays (red lines correspond to the free spray configuration and blue lines correspond to the ducted configuration). The dashed vertical lines indicate the simulation time at which the two contour plots were generated, with the free spray configuration on the left and the ducted configuration in the center. Contours are colored by soot mass, with regions of high soot mass shown in red.

While these early results are promising, Dr. Piano and his collaborators are just getting started. They will continue using CONVERGE to investigate phenomena such as the duct thermal behavior and to explore the effects of different geometries and operating conditions, with the long-term goal of incorporating DFI into the design of a real engine. If you are interested in learning more about this work, be sure to sign up for the CONVERGE User Conference today!


[1] Mueller, C.J., Nilsen, C.W., Ruth, D.J., Gehmlich, R.K., Pickett, L.M., and Skeen, S.A., “Ducted fuel injection: A new approach for lowering soot emissions from direct-injection engines,” Applied Energy, 204, 206-220, 2017. DOI: 10.1016/j.apenergy.2017.07.001

► An Evening With the Experts: Scaling CFD With High-Performance Computing
  25 Feb, 2020
Listen to the full audio of the panel discussion.

As computing technology continues to advance rapidly, running simulations on hundreds and even thousands of cores is becoming standard practice in the CFD industry. Likewise, CFD software is continually evolving to keep pace with the advances in hardware. For example, CONVERGE 3.0, the latest major release of our software, is specifically designed to scale well in parallel on modern high-performance computing (HPC) systems. It’s clear that HPC is the future of CFD, so how does this shift affect those of us running simulations and how can we make the most of the increased availability of computational resources? At the 2019 CONVERGE User Conference–North America, we assembled a panel of engineers from industry and government to share their expertise.

In the panel discussion, which you can listen to above, you’ll learn about the computing resources available on the cloud and at the U.S. national laboratories and how to take advantage of them. The panelists discuss the types of novel, one-of-a-kind studies that HPC enables and how to handle post-processing data from massive cases run across many cores. Additionally, you’ll get a look at where post-processing is headed in the future to manage the ever-increasing amounts of data generated from large-scale simulations. Listen to the full panel discussion above!


Alan Klug, Vice President of Customer Development, Tecplot

Sibendu Som, Manager of the Computational Multi-Physics Section, Argonne National Laboratory

Joris Poort, CEO and Founder, Rescale

Kelly Senecal, Co-Founder and Owner, Convergent Science


Tiffany Cook, Partner & Public Relations Manager, Convergent Science

► 2019: A (Load) Balanced End to a Successful Decade
  19 Dec, 2019

2019 proved to be an exciting and eventful year for Convergent Science. We released the highly anticipated major rewrite of our software, CONVERGE 3.0. Our United States, European, and Indian offices all saw significant increases in employee count. We have also continued to forge ahead in new application areas, strengthening our presence in the pump, compressor, biomedical, aerospace, and aftertreatment markets, and breaking into the oil and gas industry. Of course, we remain dedicated to simulating internal combustion engines and developing new tools and resources for the automotive community. In particular, we are expanding our repertoire to encompass batteries and electric motors in addition to conventional engines. Our team at Convergent Science continues to be enthusiastic about advancing simulation capabilities and providing unmatched customer support to empower our users to tackle hard CFD problems.


As I mentioned above, this year we released a major new version of our software, CONVERGE 3.0. We have frequently discussed 3.0 in the past few months, including in my recent blog post, so I’ll keep this brief. We set out to make our code more flexible, enable massive parallel scaling, and expand CONVERGE’s capabilities. The results have been remarkable. CONVERGE 3.0 scales with near-ideal efficiencies on thousands of cores, and the addition of inlaid meshes, new physical models, and enhanced chemistry capabilities have opened the door to new applications. Our team invested a lot of effort into making 3.0 a reality, and we’re very proud of what we’ve accomplished. Of course, now that CONVERGE 3.0 has been released, we can all start eagerly anticipating our next major release, CONVERGE 3.1.

Computational Chemistry Consortium

2019 was a big year for the Computational Chemistry Consortium (C3). In July, the first annual face-to-face meeting took place at the Convergent Science World Headquarters in Madison, Wisconsin. Members of industry and researchers from the National University of Ireland Galway, Lawrence Livermore National Laboratory, RWTH Aachen University, and Politecnico di Milano came together to discuss the work done during the first year of the consortium and establish future research paths. The consortium is working on the C3 mechanism, a gasoline and diesel surrogate mechanism that includes NOx and PAH chemistry to model emissions. The first version of the mechanism was released this fall for use by C3 members, and the mechanism will be refined over the coming years. Our goal is to create the most accurate and consistent reaction mechanism for automotive fuels. Stay tuned for future updates!

Third Annual European User Conference

Barcelona played host to this year’s European CONVERGE User Conference. CONVERGE users from across Europe gathered to share their recent work in CFD on topics including turbulent jet ignition, machine learning for design optimization, urea thermolysis, ammonia combustion in SI engines, and gas turbines. The conference also featured some exciting networking events—we spent an evening at the beautiful and historic Poble Espanyol and organized a kart race that pitted attendees against each other in a friendly competition. 

Inaugural CONVERGE User Conference–India

This year we hosted our first-ever CONVERGE User Conference–India in Bangalore and Pune. The conference consisted of two events, each covering different application areas. The event in Bangalore focused on applications such as gas turbines, fluid-structure interaction, and rotating machinery. In Pune, the emphasis was on IC engines and aftertreatment modeling. We saw presentations from both companies and universities, including General Electric, Cummins, Caterpillar, and the Indian Institutes of Technology Bombay, Kanpur, and Madras. We had a great turnout for the conference, with more than 200 attendees across the two events.

CONVERGE in the Big Easy

The sixth annual CONVERGE User Conference–North America took place in New Orleans, Louisiana. Attendees came from industry, academic institutions, and national laboratories in the U.S. and around the globe. The technical presentations covered a wide variety of topics, including flame spray pyrolysis, rotating detonation engines, machine learning, pre-chamber ignition, blood pumps, and aerodynamic characterization of unmanned aerial systems. This year, we hosted a panel of CFD and HPC experts to discuss scaling CFD across thousands of processors; how to take advantage of clusters, supercomputers, and the cloud to run large-scale simulations; and how to post-process large datasets. For networking events, we took a dinner cruise down the Mississippi River and encouraged our guests to explore the vibrant city of New Orleans.

KAUST Workshop

In 2019, we hosted the First CONVERGE Training Workshop and User Meeting at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia. Attendees came from KAUST and other Saudi Arabian universities and companies for two days of keynote presentations, hands-on CONVERGE tutorials, and networking opportunities. The workshop focused on leveraging CONVERGE for a variety of engineering applications, and running CONVERGE on local workstations, clusters, and Shaheen II, a world-class supercomputer located at KAUST. 

Best Use of HPC in Automotive

We and our colleagues at Argonne National Laboratory and Aramco Research Center – Detroit received the 2019 HPCwire Editors’ Choice Award in the category of Best Use of HPC in Automotive. We were incredibly honored to receive this award for our work using HPC and AI to quickly optimize the design of a clean, highly efficient gasoline compression ignition engine. Using CONVERGE, we tested thousands of engine design variations in parallel to improve fuel efficiency and reduce emissions. We ran the simulations in days, rather than months, on an IBM Blue Gene/Q supercomputer located at Argonne National Laboratory and employed machine learning to further reduce design time. After running the simulations, the best-performing engine design was built in the real world. The engine demonstrated a reduction in CO2 of up to 5%. Our work shows that pairing HPC and AI to rapidly optimize engine design has the potential to significantly advance clean technology for heavy-duty transportation.

Sibendu Som (Argonne National Laboratory), Kelly Senecal (Convergent Science), and Yuanjiang Pei (Aramco Research Center – Detroit) receiving the 2019 HPCwire Editors’ Choice Award

Convergent Science Around the Globe

2019 was a great year for CONVERGE and Convergent Science around the world. In the United States, we gained nearly 20 employees. We added a new Convergent Science office in Houston, Texas, to serve the oil and gas industry. In addition, we have continued to increase our market share in other areas, including automotive, gas turbine, and pumps and compressors.

In Europe, we had a record year for new license sales, up 70% from 2018. A number of new employees joined our European team, including new engineers, sales personnel, and office administrators. We attended and exhibited at tradeshows on a breadth of topics all over Europe, and we expanded our industry and university clientele. 

Our Indian office celebrated its second anniversary in 2019. The employee count nearly doubled in size from 2018, with the addition of several new software developers and marketing and support engineers. The first Indian CONVERGE User Conference was a huge success–we had to increase the maximum number of registrants to accommodate everyone who wanted to attend. We have also grown our client base in the transportation sector, bringing new customers in the automotive industry on board.

In Asia, our partners at IDAJ continue to do a fantastic job supporting CONVERGE. CONVERGE sales significantly increased in 2019 compared to 2018. And at this year’s IDAJ CAE Solution Conference, speakers from major corporations presented CONVERGE results, including Toyota, Daihatsu, Mazda, and DENSO.

Looking Ahead

While we like to recognize the successes of the past year, we’re always looking toward the future. Computing technology is constantly evolving, and we are eager to keep advancing CONVERGE to make the most of the increased availability of computational resources. With the expanded functionality that CONVERGE 3.0 offers, we’re also looking forward to delving into untapped application areas and breaking into new markets. In the upcoming year, we are excited to form new collaborations and strengthen existing partnerships to promote innovation and keep CONVERGE on the cutting edge of CFD software.

Numerical Simulations using FLOW-3D top

► FLOW-3D CAST Workshops
  18 Aug, 2020
FLOW-3D CAST Metal Casting Workshops
FLOW-3D CAST is a state-of-the-art metal casting simulation modeling platform that combines extraordinarily accurate modeling with versatility, ease of use, and high-performance cloud computing capabilities. Our FLOW-3D CAST workshops use hands-on exercises to show you how to set up and run successful simulations for detailed analysis of your casting design. Workshop materials provide an introduction to the FLOW-3D CAST modeling platform and detail all the steps of a successful casting model setup, from geometry import through post-processing.

Thursday, September 10, 2020 (US & Canada only)

  • 2:00pm – 5:00pm ET

Thursday, September 24, 2020 (US & Canada only)

  • 2:00pm – 5:00pm ET
Don’t see a date that works with your schedule? Want to discuss an online ‘in-house’ workshop for your team? Contact our workshop instructor.

What will you learn?

  • How to import geometry and set up models, including meshing and initial and boundary conditions
  • How to apply complex physics such as air entrainment, as well as FLOW-3D CAST’s pioneering filling and solidification models, to analyze defects and adjust your casting design
  • Best practices for casting simulation and design analysis in FLOW-3D CAST

What happens after the workshop?

  • After the workshop, your FLOW-3D CAST license will be extended for 30 days. During this time, one of our CFD engineers will work closely with you to help you apply FLOW-3D CAST to a casting problem of your choosing. You will also have access to our web-based training videos covering introductory through advanced modeling topics. 

Who should attend?

  • Process and casting engineers working in foundry or die casting industries
  • Industry researchers working on new alloy developments, lightweighting, and other challenges in modern metal casting
  • University students interested in CFD for casting applications

Workshop details

  • Workshops are online, hosted through Zoom
  • Registration is limited to 6 attendees
  • Cost: $99
  • 30-day FLOW-3D CAST license

Workshop registration is currently only available to prospective or lapsed users in the United States and Canada.

What hardware do you need?

  • A Windows machine running Windows 7 or later
  • An external mouse (not a touchpad device)
  • Dual monitor setup recommended
  • Dedicated graphics card; nVidia Quadro card required for remote desktop
For more info on recommended hardware, see our Supported Platforms page.

Registration: Prospective users outside of the United States and Canada should contact their distributor to inquire about workshops. Existing users should contact us to discuss their licensing options.

Cancellation: Flow Science reserves the right to cancel a workshop at any time, due to reasons such as insufficient registrations or instructor unavailability. In such cases, a full refund will be given, or attendees may opt to transfer their registration to another workshop. Flow Science is not responsible for any costs incurred.

Registrants who are unable to attend a workshop may cancel up to one week in advance to receive a full refund. Attendees must cancel their registration by 5:00 pm MST one week prior to the date of the workshop; after that date, no refunds will be given. If available, an attendee can also request to have their registration transferred to another workshop.

Licensing: Workshop licenses are for evaluation purposes only, and not to be used for any commercial purpose other than evaluation of the capabilities of the software.

Register for an Online FLOW-3D CAST Workshop

Certificates are available on request and will be in PDF format. Flow Science does not confirm that our workshops are eligible for PDHs or CEUs.
Please note: Once you click 'Register', you will be directed to our PayPal portal. If you do not have a PayPal account, choose the 'Pay with credit card' option. Your registration is not complete until you have paid.
If you need assistance with the registration process, please contact Workshop Support.

About the Instructor

Ajit D’Brass, CFD Engineer, Metal Casting Applications

Ajit D’Brass studied manufacturing engineering with a concentration on metal casting at Texas State University. His current work focuses on how to expedite the design phase of a casting through functional, efficient, user-friendly process simulations. Ajit helps customers use FLOW-3D CAST to create streamlined, sustainable workflows.

► Achieving Optimal Continuous Castings
    5 Aug, 2020

Using the continuous casting process, casters can manufacture ingots, high-pressure tubes, and irregularly-shaped bars of high quality and strength, but the process must be controlled through a delicate balance of pour temperature, mold cooling, and draw rate. FLOW-3D CAST v5.1’s Continuous Casting Workspace includes all the tools needed to simulate and optimize a process design to produce high-quality continuous castings in a cost-efficient manner.

Two primary types of continuous casting processes can be modeled: strand casting and direct chill continuous casting. In strand casting, molten metal is poured from a tundish through a mold which has the shape of the part to be cast. The mold, typically made of graphite, gives the casting its shape and provides some cooling to begin solidifying the melt. Additional cooling is applied to the molten strand by cooling channels placed in the mold.

The image below shows a continuous casting of an aluminum/silicon/magnesium slab. Through careful specification of the flow rate of molten metal through the mold and the cooling applied to the mold, the position of the melt front can be controlled so that the slab is fully solidified when it leaves the mold. Additionally, the grain structure in the slab can be optimized by properly controlling the temperature and solidification profiles. By using simulation to study these parameters, trial and error can be greatly reduced or even eliminated.

Here you can see the evolution of the melt front in the mold.

In direct chill continuous casting, additional cooling is applied directly to the casting. The draw rate on the casting is controlled by allowing the end of the casting to solidify on a starter cap before it is drawn out of the mold.

In this example, a bronze billet is cast using a direct chill continuous casting process. As the billet is drawn from the mold, a cooling spray is applied to the billet. The cooling must be sufficient to maintain a solidified shell on the billet as it leaves the mold. The starter cap is withdrawn at a rate that ensures the cooling rate and feed rate are balanced.

The video below shows a simulation of the direct chill process.

With the tools provided in the Continuous Casting Workspace, process engineers can simulate their designs to ensure maximum casting quality and process efficiency for their continuous castings.

John Ditter

John Ditter

Principal CFD Engineer at Flow Science

► IT Administrator
    2 Aug, 2020

Flow Science, Inc., in Santa Fe, New Mexico, has a job opportunity for a motivated, creative and collaborative IT Administrator.

Our software, FLOW-3D, is used by engineers, designers, and scientists at top manufacturers and institutions throughout the world to simulate and optimize product designs. Many of the products used in our daily lives, from many of the components in an automobile to the paper towels we use to dry our hands, have actually been designed or improved through the use of FLOW-3D.

Principal responsibilities and key requirements

As IT Administrator of this cutting-edge software company, you will work with the company’s internal network and be responsible for system security, infrastructure, performance, and troubleshooting. You will manage user accounts and assist staff with maintenance and upkeep of their hardware and software environments.

This challenging and dynamic role requires the following skills to be successful:

  • Associate’s Degree or higher in an information technology related field
  • Experience in network and/or IT-related work
  • Windows server and Linux administration experience
  • Server and workstation hardware knowledge
  • Excellent oral and written communication skills

Preferred skills and experience

Exceptional candidates will usually have the following skills and experience:

  • CompTIA A+ certification
  • Knowledge of Linux shell scripting
  • Experience with scripting languages such as Python and JavaScript


Flow Science offers an exceptional benefits package to full-time employees including medical, dental, vision insurances, life and disability insurances, 401(k) with generous employer matching, and an incentive compensation plan that offers a year-end bonus opportunity up to 30% of base salary.


Still interested? Submit your resume and a cover letter. Paper copies may be submitted via mail (Attention: Human Resources, 683 Harkle Road, Santa Fe, NM 87505) or fax (505-982-5551). Not quite what you’re looking for? Check out our other openings on our Careers Page >

► Sand Core Making – Is It Time to Vent?
    1 Jul, 2020
Sand cores are a crucial element in the casting process because they are used to create complex interior cavities. For example, sand cores are used to create passages for water cooling, oil lubrication, and air flow in a typical V8 engine casting. Ever wonder how a sand core is made? How can a material that works so well for making sandcastles on the beach be formed into complex shapes able to withstand the brutal conditions of hot metal flowing and solidifying around them? In this blog, I will walk you through how sand cores are made and describe the modeling tools in FLOW-3D CAST v5.1 that help engineers design their manufacturing processes.

The Sand Core Making Process Workspace

Choosing the correct physics models to capture the complex flow dynamics of sand core making can be daunting. The Sand Core Making Workspace addresses this challenge by providing automated settings for numerical techniques and activating the appropriate physics models. Sub-workspaces for cold box, hot box, and inorganic processes guide the user through the setup process with ease.

Sand Shooting

The starting point with all sand cores is the shooting process. In the shooting process, a mixture of air, sand, and binder is “shot” under high pressure into a core box with air vents placed strategically around the cavity to allow air to be displaced by sand.
Simulation of a water jacket sand core. The sand/binder mixture is shot into the core box through the 8 inlets at the top. Air vents of varying size are placed around the sand core to allow air to escape.

The primary goal of sand core shooting is to create a sand core with uniform density. Two design factors play important roles in achieving this goal: the location of the sand inlets and the location and size of the air vents. Simulating the flow of the sand mixture using FLOW-3D CAST allows us to study different inlet and air vent configurations.

This video shows the filling pattern of H32 sand with a 2% binder additive being shot to produce a water jacket sand core. Notice that some of the regions are underfilled.

To address underfilling, air vents can be easily and accurately placed at the problem area using our interactive geometry placement tool. Here, a 6 mm air vent (see red arrow) is placed at a location where incomplete filling was observed.

This video shows a comparison of the filling in the region where the air vent has been added compared with the original result. The filling is now more complete in the region where the air vent was added. More vents can be added to address other underfilled regions.

Core Hardening

Once the air vents have been configured and the shooting produces a uniform sand distribution, the sand core needs to be hardened. Three different hardening methods can be simulated in FLOW-3D CAST: cold box, hot box, and inorganic.

Drying Sand Cores in an Inorganic Process

The sand/binder mixtures used to produce inorganic cores are water based. To harden them, energy from the hot core box along with a hot air purge evaporates the water and carries it out of the core through the air vents. In this video, an intake manifold sand core shot with a sand/binder mixture containing 2% water by weight is dried by a hot (180°C) air purge. The blue region represents the water remaining in the sand core. The air vents are shown in gray. After 150 seconds of drying, the moisture continues to be pushed to the area where the most venting occurs.

Hardening Cores in a Hot Box Process

Sand cores shot in a hot box process are hardened using energy from the core box to cure the binder. This video shows the temperature distribution in the sand core as it is heated by the hot core box.

Simulating the hardening step allows us to determine the temperature distribution in the shot sand core and identify the time required to ensure that all regions of the core are sufficiently heated to harden it.

Gassing Sand Cores in a Cold Box Process

The binder used to produce sand cores shot in a cold box process contains a phenolic urethane resin. To harden these cores and give them the strength required to withstand flowing hot metal in the casting process, hot air carrying a catalyst (amine gas in this case) is used to purge the core. The hot air/amine gas mixture is introduced through the inlets and leaves the core box through the air vents that were used in the shooting step.

This video shows the evolution of amine gas through the porous shot sand core, which is a water jacket for an internal combustion engine.

With FLOW-3D CAST v5.1, sand core manufacturers have the tools they need to model their sand core making processes to optimize the quality of their cores. Learn more about the Sand Core Making Workspace.

John Ditter

Principal CFD Engineer at Flow Science

► Exploring the Centrifugal Casting Workspace
  30 Jun, 2020

A common challenge in most casting processes is minimizing, or in some cases eliminating, filling-related defects such as entrained air and inclusions. For example, in high pressure die casting, entrained air can be moved out of the casting by proper placement of overflows, or at least moved to areas of the casting where strength and aesthetics are not compromised. However, some castings such as high pressure pipes, bushings, and high-end jewelry like platinum rings require exceptionally low porosity, high strength, and near-perfect finish. In this blog, we will explore the three centrifugal casting processes – horizontal, vertical, and centrifuge – available in FLOW-3D CAST v5.1’s Centrifugal Casting Workspace, and the unique features that allow casting engineers to create high quality castings.

Centrifugal casting processes use rapidly spinning molds to force molten metal outward from the rotation axis, while relatively light defects drift out of the casting, or at least toward its center, where they can be machined away. Two unique features in the Centrifugal Casting Workspace provide the ability to accurately and efficiently simulate a given design – cylindrical meshes and a spinning mold model. Let’s start by looking at a typical horizontal centrifugal casting to see how these features are beneficial.

Horizontal Centrifugal Casting

Here’s an example of a horizontal mold used to cast a pipe. The mold is spun on rollers at 1000 rpm. 

Molten metal is poured into the open end of the mold and falls under gravity until it is picked up by the spinning mold. The melt spreads out quickly into a thin sheet as it fills the mold. The end-on view in the video below shows how a rather coarse 150,000-cell cylindrical mesh with fine radial resolution near the wall captures the flow accurately and efficiently. Since the heat transfer in the melt and mold is mostly radial, the fine radial resolution provided by the cylindrical mesh also contributes to the accuracy of the simulation.
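The payoff of radial grading can be sketched in a few lines of Python. This is a hypothetical helper for illustration, not FLOW-3D CAST's meshing code: geometric stretching packs small cells near the mold wall, where the thin metal sheet and the radial heat transfer need the most resolution.

```python
import numpy as np

def radial_nodes(r_inner, r_outer, n, ratio):
    """Radial node positions with geometric clustering toward the outer wall.
    Each cell moving outward is 'ratio' times the width of the previous one
    (ratio < 1 shrinks cells toward the wall). Hypothetical helper."""
    widths = ratio ** np.arange(n)                 # relative widths, largest first
    widths *= (r_outer - r_inner) / widths.sum()   # scale to span the annulus
    return r_inner + np.concatenate(([0.0], np.cumsum(widths)))

# 20 radial cells across a 10 cm radius, clustered near the spinning mold wall:
r = radial_nodes(0.0, 0.1, 20, 0.8)
```

With the same cell count, the graded spacing resolves the near-wall sheet far better than uniform spacing would, which is one reason the coarse cylindrical mesh above still captures the flow.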

In this filling simulation of a horizontal pipe casting, an end-on view of the filling at the left shows the cylindrical mesh used to resolve the flow. This simulation was run on 10 cores of a medium-level CPU (AMD 1950X) in 11 minutes! Even with this relatively coarse mesh resolution, a great deal of process knowledge can be obtained. Once a rough idea of the proper values for process parameters such as pour rate, melt superheat, and initial mold temperature has been identified, higher mesh resolutions can be used to zero in on more exact values.

Vertical Centrifugal Casting

The next centrifugal casting process we’ll investigate is vertical centrifugal casting. The vertical centrifugal casting process is ideal for large, symmetrical castings with a length similar to or smaller than their diameter. Again, the spinning mold model in a cylindrical mesh provides an accurate representation of the filling characteristics. Various fill configurations, such as moving metal inputs, can be easily studied. For example, metal can be introduced into the spinning mold through a sprue that moves vertically and/or horizontally to distribute the melt. In this video, the metal input brings molten metal into the spinning mold at the top to fill the flange initially and then moves downward as the filling continues.

In this simulation, a moving metal input is used to fill a vertical spinning mold rotating at 50 rpm. This simulation ran in 17 minutes on 10 cores of an AMD 1950X, which is quite remarkable considering the complexity of the flow. This is due to the efficiencies of the cylindrical meshing method and the spinning mold model. With such efficiencies, detailed parametric studies can be carried out to identify an optimal process design.

Once the filling is complete and the metal has become stable in the spinning mold, the simulation can be restarted in a solidification subprocess. In the solidification subprocess, the flow field is set to zero in a rotating mesh and only solidification is computed. Computing only the solidification allows for extraordinarily fast simulation times. This video shows solidification in a cross-section of the vertical casting. A 600 second simulation time is computed in less than 1 minute.

Centrifuge Casting

The final centrifugal casting process we’ll investigate is a centrifuge casting using an example of a 6-handle lever set.

A caster might wonder what effect various process parameters such as mold spin rate and spin-up profile may have on the casting quality. For example, should the melt be poured into an already spinning mold, or should the mold be spun up gradually so that entrained air isn’t generated? We can answer this question by comparing two spin-up profiles. On the left is a mold spun up from stationary to 10 rpm over 2 seconds while metal is poured into the cup. From 2 seconds to 3 seconds, the mold spin rate is ramped up to 50 rpm. On the right, the mold spins continuously at 50 rpm.

A comparison of entrained air in the melt with a ramped-up spin rate (left) vs. pouring into a mold spinning at a constant rate (right). The simulation indicates that it is best to spin up the mold gradually to allow the runners to fill before the maximum spin rate is applied. A rotating mesh is used to achieve high filling accuracy as well as fast simulation runtimes. Both simulations ran in about 4 hours on 12 cores of an AMD 2990WX.

This image shows the last frame of the simulation, illustrating that the air entrainment is reduced by slowly spinning up the mold.

A casting process design engineer can use the Centrifugal Casting Workspace to study a wide variety of process parameters in almost any centrifugal casting setup to achieve optimal casting quality in a reasonable amount of time.

John Ditter

Principal CFD Engineer at Flow Science

► Simulating the Investment Casting Process
  24 Jun, 2020

The investment casting process can produce high quality, complex castings with great accuracy and controlled grain structure. However, many challenges face process designers hoping to achieve these results. Fortunately, FLOW-3D CAST v5.1 includes an Investment Casting Workspace which provides the necessary tools to study the wide range of process parameters in a virtual space and determine an optimal design before casting a single part.

In this blog, we’ll walk through the Investment Casting Workspace and show how easy it is to simulate a directionally-cooled investment casting using a Bridgman process. The casting we’ll be investigating is this multi-cavity casting on the right.

Multi-cavity investment casting

Shell Building Tool

An investment casting process begins with a wax representation of the part to be cast. The wax part is then dipped repeatedly into a ceramic slurry mixture to build up a shell around it. This is done until a shell of sufficient thickness is achieved. FLOW-3D CAST’s shell building tool allows users to create water-tight shells of any thickness in a matter of minutes.

Using the shell building interface in the GUI, the first step is to select the geometry around which the shell should be created. Next, select Fit Mesh to create a computational mesh around the geometry to be shelled. The edge of the mesh where the pouring sprue is located is moved slightly into the part so that the generated shell is open there. The only other required inputs are the shell thickness and the cell size, which should be roughly half the shell thickness.

A preview mode allows various shell thicknesses to be generated and examined quickly. For example, a 5 mm shell built from the wax casting part was created in under 2 minutes.

Calculating View Factors

A critical aspect of investment casting is the calculation of view factors between all surfaces in the simulation. Every surface that “sees” another surface requires a calculation of how the two surfaces see each other. The orientation of each surface relative to the others and the emissivity of each must be evaluated. For complex shapes, the surface is subdivided, or clustered, and the view factor between each pair of clusters is computed.
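For two small patches, this orientation-and-distance calculation reduces to the standard cosine/inverse-square kernel of radiative exchange. Below is a minimal sketch of that kernel; it is a hypothetical helper for illustration (not the FLOW-3D CAST implementation) and is valid only when the separation is large compared to the patch sizes.

```python
import numpy as np

def patch_view_factor(p1, n1, p2, n2, area2):
    """Approximate view factor from a small patch at p1 (unit normal n1)
    to a small patch at p2 (unit normal n2, area area2):
        dF_12 = cos(theta1) * cos(theta2) * dA2 / (pi * r^2)
    Hypothetical helper; valid for patches small relative to r."""
    r_vec = np.asarray(p2, float) - np.asarray(p1, float)
    r = np.linalg.norm(r_vec)
    cos1 = np.dot(n1, r_vec) / r      # angle at the emitting patch
    cos2 = np.dot(n2, -r_vec) / r     # angle at the receiving patch
    if cos1 <= 0.0 or cos2 <= 0.0:
        return 0.0                    # the patches do not "see" each other
    return cos1 * cos2 * area2 / (np.pi * r**2)

# Two parallel 1 cm^2 patches facing each other, 1 m apart:
F = patch_view_factor([0, 0, 0], [0, 0, 1], [0, 0, 1.0], [0, 0, -1], 1e-4)
```

Summing this kernel over every pair of surface clusters (with occlusion checks) is what makes view factor computation expensive, and why clustering controls matter.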

Understanding surfaces investment casting

Surface Clustering

In a Bridgman process, where the solidifying casting is being moved slowly through a selectively heated and cooled oven, the view factors are updated continuously throughout the simulation. This simulation result shows the surface clustering computed for the shell mold and the internal surfaces of the oven.

Cluster Generation

A number of user-adjustable controls for cluster generation are available to minimize memory use and simulation runtime. For example, the cluster size could be set relatively large so that iterative simulations can be run quickly. As the design options are reduced, more refined details can be added to zero-in on the final design.

Here we see the solidifying casting has moved downward from the heated portion of the oven through a cooling ring so that the casting solidifies from the bottom to the top. This directional solidification allows a columnar grain structure to be formed.

This simulation shows the temperature distribution in the solidifying casting on the left and the solid fraction on the right. The feeders at the top of each part provide liquid metal to the casting as it solidifies and shrinks.

Many process parameters can affect the outcome of an investment casting. With FLOW-3D CAST v5.1 in your design toolbox, the effect of these parameters, including the temperature profiles of the heated and cooled sections of the oven, the initial shell temperature, and the rate of motion of the solidifying casting through the oven, can be studied in-depth before casting a single part.

John Ditter

Principal CFD Engineer at Flow Science

Mentor Blog top

► Event: Integrated Electrical Solutions Forum (IESF) Conferences
  24 Jul, 2020

Come see Mentor Graphics automotive tools in action at the Integrated Electrical Solutions Forum. This FREE event also includes industry presentations, case studies, a product expo, networking events, and technical tracks of industry and technical sessions.

► Technology Overview: Simcenter FLOEFD 2020.1 Electrical Element Overview
  20 Jul, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, Simcenter FLOEFD allows users to add a component into a direct current (DC) electro-thermal calculation by the given component’s electrical resistance. The corresponding Joule heat is calculated and applied to the body as a heat source. Watch this short video to learn how.

► Technology Overview: Simcenter FLOEFD 2020.1 Package Creator Overview
  20 Jul, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, Simcenter FLOEFD helps users create thermal models of electronics packages easily and quickly. Watch this short video to learn how.

► Training course: Simcenter FLOEFD Thermal Management of Electronic Systems
  29 Jun, 2020

The course provides a detailed description of Simcenter FLOEFD™ capabilities in the specific usage of electronics cooling and thermal management. The hands-on lab exercises further reinforce the discussion topics under the guidance of our industry-expert instructors. The combination of these different formats is designed to highlight the range of functionality available in Simcenter FLOEFD so that you can perform the thermal management of your electronic equipment.

► Technology Overview: Simcenter FLOEFD 2020.1 BCI-ROM and Thermal Netlist Overview
  17 Jun, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, Simcenter FLOEFD allows users to create a compact Reduced Order Model (ROM) that solves at a faster rate, while still maintaining a high level of accuracy. Watch this short video to learn how.

► Technology Overview: Simcenter FLOEFD 2020.1 Battery Model Extraction Overview
  17 Jun, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, the software features a new battery model extraction capability that can be used to extract the Equivalent Circuit Model (ECM) input parameters from experimental data. This enables you to get to the required input parameters faster and easier. Watch this short video to learn how.

Tecplot Blog top

► Tecplot Europe Solutions for CFD Visualization
  17 Sep, 2020

The Tecplot Europe engineering team specializes in custom solutions in the field of numerical simulation, especially CFD. For the past 25 years, they have been building long-term partnerships by understanding customers’ unique problems and delivering easy-to-implement solutions.  

Tecplot Europe (also known as Genias Graphics) provides Tecplot sales and support to the European and Western Asian markets.

On any given day, team members may be working on optimizing the output of a customer’s in-house solver, automating customer workflows with Python scripts, or extending the capability of Tecplot 360 with addons.

Implementing complex technical requirements in a tailor-made way for each and every customer is our most important task.
–Lothar Lippert, Tecplot Europe Manager.

Tecplot Output for Your In-House Solver – Optimizing Performance

Many customers are working on their own in-house codes and solvers. These solvers use different grid sizes, may run unsteady simulations, and use various mesh types – some of which may change over time. The Tecplot 360 suite of tools can handle all of that with ease.

The key to optimal performance is a good output file. Tecplot Europe engineers are experts at optimizing a solver’s output to achieve the best possible performance and optimal user experience. Small changes to an output file often result in 10 times better performance, and it may take just a few minutes to change.

Automation with Python and Macros – Optimizing Your Time

Repetitive tasks and generating reports can eat up a lot of time. Automating these routine tasks saves tremendous time (and is a relief from mundane work). Virtually every day, we help customers automate their workflows with Python and macro scripts.

Often customers do not know what is possible, and what can be accomplished with macros and Python scripts. We can share and explain sample code and provide ready-to-use scripts and macros, which are also easy for customers to maintain and extend.

Tecplot Europe has several scripts, available on demand. For example, a user wanted to automate the creation of a plot of forces and moments vs. span on a wing, together with a visualization of Cp plots along the 3D view of the geometry. The Python script is available in Tecplot’s GitHub repository.

Tecplot Europe Add-On Development

Tecplot 360 is known as the most complete post-processing desktop solution for CFD visualization. However, some of our customers have very challenging requirements, which can be solved only by developing additional functionality. Tecplot 360 is extendable with add-ons. The additional capability is wide ranging, from developing data loaders for special file formats and connecting to Web Map Services, to connecting to larger database applications like a dedicated tool to compare wind tunnel data with CFD results.

Optimization with High-Quality Tools – A Case Study

In a recent case, Tecplot Europe engineers helped DLR use Tecplot 360 to optimize the tail strake position of a generic transport aircraft. Optimization with high-quality tools found the best position. Visualization with Tecplot 360 was crucial in helping understand the effects of each tail strake position.

Read the Case Study »

Tecplot Europe can Help You Optimize Your Workflows

Contact Tecplot Europe:
Phone: +49 (0)9402 9480–0

The post Tecplot Europe Solutions for CFD Visualization appeared first on Tecplot.

► Tail Strake Position Optimization for Generic Transport Aircraft
  17 Sep, 2020

German Aerospace Center
Without a visualization component in the design optimization loop, precisely configuring an aircraft’s tail strake position is simply impossible. Tail strakes are “fins” mounted horizontally on the rear fuselage that add stability and controllability. Engineers at the German Aerospace Center (DLR) found that the visualization component was crucial in understanding flow effects of a generic transport aircraft they were designing.

One requirement was the ability to load cargo at the aft end of the aircraft. This feature included a long, upswept ramp you can see in Figure 1. The fuselage upsweep created strong vortices, as shown in Figure 2. The vortices led to flow detachment, which reduced pressure recovery and increased drag. The DLR engineers needed to find a way to keep the flow attached and the drag reduced.

Figure 1. Long upswept ramp feature on generic transport aircraft. Figure 2. Vortices along the ramp.


Finding an Optimal Tail Strake Position

The Problem

Figure 3. The problem was finding the optimal tail strake position.

The Goal

Our goal was to weaken the tail vortices caused by the ramp. To do this, tail strakes needed to be precisely configured to produce effective counter-rotating vortices.

The Challenge

The problem was to find the optimal tail strake position and orientation (Figure 3).

The Method

A design optimization – from CAD to mesher to solver and optimizer – included a visualization tool, Tecplot 360, which helped in understanding and trusting the results.

Optimization Loop with Tecplot 360

Figure 4. Design optimization loop: Parametric CAD (CATIA V5), Parametric Mesher (CENTAUR), Solver (TAU + 2 adaptations), Optimizer (SUBPLEX), and Visualization with Tecplot 360.

Figure 5. Three parameters were investigated.

Three parameters were investigated as shown in Figure 5:

  • Strake position along u-isocurve of tail surface (generally the fore-aft position)
  • Strake position along v-isocurve (radial location on the fuselage)
  • Rotation angle φ against tangent in u-direction (angle of the strake against the fuselage)

The Results

The resulting XY plots, produced by Tecplot 360, are shown in Figure 6.

  • The optimizer did a good job, showing ideal asymptotic convergence of objective and design variables.
  • The optimization resulted in a 4% improvement in lift-to-drag ratio due to strake location – a very significant improvement.
  • A 28% difference between worst (No. 4) and best (No. 44) strake position showed the importance of precisely locating the strake for maximum benefit. See Figure 7.

Figure 6. The results are shown in Tecplot 360 XY plots.

Best and Worst Tail Strake Position

In the worst tail configuration, the strake amplified the tail vortex, leading to a larger area of separation and high drag. In the optimal configuration, the strake truncated the tail vortex and minimized separation. While optimizers are generally good at driving to a desired result, understanding and trusting the underlying physics requires good visualization. Best and worst scenarios are shown in Figure 8.

Optimization with high-quality tools found the best position. Visualization with Tecplot 360 was crucial in helping understand the effects.


Get Help Optimizing Your Workflows

Contact Tecplot Europe:
Phone: +49 (0)9402 9480–0

Learn more about Tecplot Europe Support

Figure 7. Comparison of tail configurations.

Figure 8. Best and worst tail configuration.

The post Tail Strake Position Optimization for Generic Transport Aircraft appeared first on Tecplot.

► Compusense Acquired by Vela Software
    8 Sep, 2020

Vela Software is pleased to announce the acquisition of Compusense which will report into Vela’s subsidiary Tecplot, augmenting an expanding range of statistical tools with the industry-leading platform for Consumer and Sensory Science testing.

September 1, 2020 – Compusense develops Compusense Cloud and Compusense20, powerful SaaS tools used by major food and beverage companies, CPG multinationals, and luxury brands to plan, execute, and analyze consumer and sensory tests, leading to insights that help these companies launch and refine successful products. With over 30 years of research and innovation, Compusense sets the standard for sensory research software.

Founders Karen Phipps and Chris Findlay will lead the transition to an all-internal management team comprised of employees with a combined 55 years of experience at the company.

“Karen and I are delighted that we can place the company that we have grown and nurtured for 34 years into the strong hands of Vela,” Findlay said. “Their expertise in growing software companies gives Compusense a solid foundation upon which it can build into the future. We are very pleased that our management team will be able to retain Compusense’s culture and continue to support our amazing clients as we always have.”

Tom Chan, President of Tecplot, thanked Karen and Chris, adding, “they have worked tirelessly to build a great company, and more importantly a great team, who are passionate about helping customers and advancing sensory testing. With the profound disruptions caused by COVID, brands need valued partners more than ever to help them be successful in the marketplace and Compusense is key to ensuring that products hit the right notes with consumers. We look forward to working closely to bring their important technology to more clients across the globe.”

Compusense is based in Guelph, ON, and serves customers world-wide.

If you have any questions about this acquisition, or the capabilities of Compusense’s products, please contact Compusense at

About Tecplot, Inc.

An operating company of Vela Software International, Inc., itself an operating group of Toronto-based Constellation Software, Inc. (CSI), Tecplot is the leading independent developer of visualization and analysis software for engineers and scientists. CSI is a public company listed on the Toronto Stock Exchange (TSX:CSU). CSI acquires, manages, and builds software businesses that provide mission-critical solutions in specific vertical markets.


The post Compusense Acquired by Vela Software appeared first on Tecplot.

► Tecplot 360 Basics – Equations
    2 Sep, 2020
This training session covers data alteration through equations in Tecplot 360, including:
  • Referencing Variables
  • Math syntax & Functions
  • IF conditions
  • Operating on subsets of zones
  • Use of I & J special values
  • Referencing zones in equations

Q&A from the Equations Training


Can I compute a time average over an interval?

This capability is not directly available in the Tecplot 360 user interface, but we do have a robust Python API for Tecplot 360. Scripts are available in the public Tecplot Handyscripts GitHub repository, under Python scripts. The time-averaging script there relies on a behind-the-scenes helper script, which has a phased-average function available in it. It was written by a Tecplot customer, and we thank them for writing it! Because it is on GitHub, it’s supported by our user community.
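For readers who want the idea rather than the script: a time average over an interval is just a weighted (trapezoidal) integral of the nodal field over the chosen snapshots. A minimal numpy sketch with made-up data, not the Handyscripts implementation:

```python
import numpy as np

def time_average(times, snapshots, t0, t1):
    """Trapezoidal time average of nodal snapshots over [t0, t1].
    times: 1D array of solution times; snapshots: array (ntimes, nnodes).
    Hypothetical helper for illustration."""
    times = np.asarray(times, float)
    snaps = np.asarray(snapshots, float)
    mask = (times >= t0) & (times <= t1)
    t, s = times[mask], snaps[mask]
    dt = np.diff(t)
    # trapezoid rule per node, then divide by the interval length
    integral = 0.5 * ((s[1:] + s[:-1]) * dt[:, None]).sum(axis=0)
    return integral / (t[-1] - t[0])

# Three snapshots of a 2-node field: a linear ramp and a constant
avg = time_average([0.0, 1.0, 2.0],
                   [[0.0, 10.0], [2.0, 10.0], [4.0, 10.0]],
                   0.0, 2.0)
```

In PyTecplot you would loop over zones at each solution time and accumulate exactly this kind of weighted sum over the variable arrays.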

Can contours be classified and generated based on categories?

The Tecplot 360 contour legend typically shows only numeric values. But sometimes you may want to show values that are string based. For example, if you have different material properties like sand or soil in a geoscience case, or in the case of a CONVERGE dataset where you have particles that may be in the fluid or have bounced or rebounded. You can show strings using our custom label set. We have a blog, Creating a Materials Legend, that uses a bit of Tecplot Kung Fu to add a string-based custom label set. You can use that blog as a guide.

Can I do a while loop in Specify Equations?

In the Tecplot 360 user interface, the answer is no. But with our Python API, PyTecplot, the answer is yes! In the video example, Jared showed finding the difference between the two zones, with a blended wing body shape. If that were a time-dependent simulation, you could use our looping capability to compute that difference over time. The Tecplot 360 macro language has “for” and “while” loop capabilities. And Python, of course, has many logical and flow control operations. You can use these scripts in conjunction with equations.

Is there a discount for academic licenses for students and faculty? And how do I get one?

We have several academic license options for those at degree-granting universities or institutions. You can email Jared McGarry at, and he can help you decide which type of license you need – single user, department, college, campus, or site licenses (with effectively unlimited seats). You can also visit us at Tecplot Academic Suite.

Can we get a PDF of the presentation to help remember the equations?

We don’t have a PDF of this presentation. But you can watch the recording (above). Also, click on the help button in the Specify Equations dialog. You’ll see a reference to the functions available.

Can an equation reference data in a different frame?

No – equations can only operate on data in the active frame. If you have two datasets that you want to compare, you’ll need to load both into the same frame.

What is the best way to calculate the difference between different zones that have different meshes?

In the blended wing body example from the video, the two zones have the same mesh, so you can simply subtract one zone from the other. If you have different meshes, you will need to interpolate the results onto a common mesh with the same number of points. See the video tutorial Comparing Grids: Interpolation of Differing Meshes.
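In 1D, that interpolation step looks like the sketch below (hypothetical data; Tecplot 360's interpolation tools perform the 2D/3D equivalent):

```python
import numpy as np

# Two solutions of the same quantity on different 1D meshes (made-up data):
x_a = np.linspace(0.0, 1.0, 11)      # coarse mesh
u_a = x_a**2
x_b = np.linspace(0.0, 1.0, 26)      # fine mesh
u_b = x_b**2 + 0.05                  # same field plus a constant offset

# Interpolate solution B onto mesh A, then difference point-by-point:
u_b_on_a = np.interp(x_a, x_b, u_b)
diff = u_b_on_a - u_a                # now defined on a common mesh
```

Once both fields live on the same nodes, the zone-subtraction equation from the video applies directly.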

Do you have license roaming, and how does it work?

License roaming is enabled for network licenses. You must be connected to your license server to roam your license. Go to Help>License Roaming.

Is there a quick way in alter equation to find a deviation from a node value?

Sure, let’s look at an example: I want to find the difference of U from its value at X=1, Y=1, Z=1. First use the Probe tool to find the value of U at that XYZ location. From the Probe results you can Copy the U value. Then in the Data Alter dialog you would use this equation:

{U_delta} = {U} - 1.234

Where 1.234 is the value you copied from the Probe dialog. This will create a new variable called “U_delta”.

Can we use alter equation to find max-min of a variable and its location?

The best way to find min-max is with a Python script available on our GitHub site: Tecplot Handyscripts on Github, and look for

You supply the zone and the variable through Python. Then the script will, in Tecplot 360, point to the location of the maximum value on your plot and display it in a text box.

There are two options for polyline point extraction. Which one should I use?

There are several options for extracting data along a line in Tecplot 360. Which one to use depends on your use case.

  1. Use the menu option Data>Extract>Precise Line. This will allow you to enter two X, Y, Z locations. You can then extract data across a perfectly straight line between those two points.
  2. Use the menu option Data>Extract>Polyline Over Time.
  3. Select a polyline on your plot, right-click, and you can select Extract Points from the context menu.

Is there any plan in the future to duplicate a page like we do for a frame?

We have no immediate plans for this capability. We could create a Tecplot macro or a Python script that would mimic the behavior by looping over each individual frame on a page and copying and pasting it to a new page. If this is something you do frequently, contact, and we can create a custom solution for you.

If I extract the line across the wall, do I get wall quantities?

In the internal combustion case from the video [timestamp: 38:46], we have volume data representing the fluid, and boundary data representing the wall. When you extract across the line, Tecplot 360 will extract points from the first zone that it encounters. In this case, it will encounter the wall. If you want to make sure of that, open the Zone Style dialog and ensure the wall zone is the only active zone.

The post Tecplot 360 Basics – Equations appeared first on Tecplot.

► Isosurface Algorithms – Visualizing Higher Order Elements
  11 Aug, 2020

Visualization of Higher-Order Elements – Part 3: Isosurface Algorithms

This blog was written by Dr. Scott Imlay, Chief Technical Officer, Tecplot, Inc.

In this blog I’ll be discussing our research into isosurface algorithms for higher-order finite-element solutions. The first blog on this topic was A Primer on Visualizing Higher-Order Elements and the second was on the Complex Nature of Higher-Order Finite-Element Data.

Big cells beget little cells
That model their complexity
And little cells have smaller cells
That we choose selectively

In the second blog, I described how the isosurface passing through a linear tetrahedron is a simple plane described entirely by its intersections with the edges. Since the solution varies linearly along the edges, you can calculate these intersections very quickly. You can also quickly exclude edges based on the range of the nodal values at either end of the edge: if the isosurface value is greater than the maximum node value, or less than the minimum node value, no further computation is needed. In this way, the vast majority of the edges can be excluded from further computation by a couple of simple floating-point compares. This, among other optimizations, makes this technique very fast.
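The edge test described above can be sketched in a few lines (a simplified illustration, not Tecplot's implementation):

```python
def edge_intersection(iso, f0, f1):
    """For a linear variation from f0 to f1 along an edge, return the
    parametric location t in [0, 1] where the isosurface value 'iso'
    is crossed, or None if the edge can be excluded outright."""
    lo, hi = min(f0, f1), max(f0, f1)
    # Two floating-point compares exclude most edges immediately.
    if iso < lo or iso > hi:
        return None
    if f0 == f1:  # constant along the edge; no crossing to locate
        return None
    return (iso - f0) / (f1 - f0)

# An edge with nodal values 1.0 and 3.0: the iso = 2.0 surface crosses
# at the midpoint, while iso = 5.0 is rejected by the range check.
```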

Isosurfaces in a Quadratic, Higher-Order Element

In comparison, an isosurface in a quadratic, or even higher-order, element can be quite complex. The isosurface is not, in general, planar and it doesn’t even have to intersect the edges or surfaces of the element (see Figure 2). You can have isosurfaces that are entirely contained within an element like little islands. How do we extract these isosurfaces?

Isosurface in linear and quadratic tetrahedron

Figure 1. Isosurface in linear tetrahedron (left). Figure 2. Isosurface in quadratic tetrahedron (right)

Nearly all visualization techniques for higher-order isosurfaces involve subdividing the higher-order element into a large number of linear sub-elements. The variation of the solution across these sub-elements approximates the non-linear solution, and the approximation error decreases as the number of sub-elements increases. Once you have the sub-elements, you can use existing isosurface algorithms for linear elements to extract an approximate non-linear isosurface. Sounds easy, right?

Visualization Techniques for Higher-Order Isosurfaces

It is fairly easy to implement an algorithm where all higher-order elements are subdivided into a large number of linear elements. A quadratic tetrahedron, for example, may be divided into eight sub-tetrahedra using the existing ten nodes. This subdivision is shown in Figure 3. Each of those sub-tetrahedra may be further subdivided into eight sub-sub-tetrahedra by creating new nodes at the edge centers, interpolating to those nodes using the full quadratic element basis function, and subdividing as was done for the original element. This process can be repeated until the non-linear isosurface is sufficiently resolved.

Unfortunately, the number of sub-cells grows exponentially: after the first sub-division it is eight sub-cells, after the second level of sub-divisions it is 64, after the third level of sub-divisions it is 512, and so on. If you start with 500 thousand higher-order cells you will have 256 million linear sub-cells after three levels of sub-division. It is not cheap to create those 256 million linear sub-cells!
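The arithmetic behind those counts is just powers of eight; a quick check:

```python
cells = 500_000                      # higher-order cells in the mesh
for level in (1, 2, 3):
    per_cell = 8 ** level            # 8, 64, 512 sub-cells per cell
    print(level, per_cell, cells * per_cell)
# At level 3: 512 sub-cells each, 256,000,000 linear sub-cells in all.
```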

Tetrahedron sub-division

Figure 3. Tetrahedron sub-division.

Most of my research has been on optimizations to make this faster. Specifically, are there simple tests that will allow us to eliminate cells early in the process?

For example, for linear elements, we compute the min/max range of the isosurface variable for all nodes in the element, and we exclude cells where the isosurface value is not in that range. We do this because the cell extrema (min’s and max’s) in linear cells are guaranteed to be at the nodes.

Unfortunately, for most basis functions the extrema in a higher-order cell are not generally at the nodes but may be anywhere within the cell. If we can find a way to quickly exclude higher-order cells, we can significantly reduce the computational cost and memory usage of the subdivision process.

Optimizing the Isosurface Algorithm

It turns out you can eliminate many cells based on the min/max values of the isosurface variable at the nodes. A heuristic formula that seems to work is to keep any cell where the isosurface value satisfies this formula:

    min − (max − min) ≤ isosurface value ≤ max + (max − min)

where min and max are the minimum and maximum of the isosurface variable over the cell’s nodes.
I wish I had a mathematical proof that this formula always works, but it has worked in all the cases I’ve tested so far. This formula basically has a buffer equal to the range of the isosurface variable in the non-linear cell. The same formula, with smaller buffers, is applied to the sub-cells at each level of recursion. That is, the formula is also applied when sub-dividing the sub-cells, and again when subdividing the sub-sub-cells, but the size of the buffer on the isosurface variable range is smaller each time.
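In code, the heuristic amounts to a buffered range check. This sketch is my reading of the test; the buffer_scale of 1.0 at the first level, shrinking at each recursion, is an assumption of the illustration:

```python
def keep_cell(iso, nodal_values, buffer_scale=1.0):
    """Keep (subdivide further) a cell when the isosurface value lies
    within the nodal min/max range widened by a buffer proportional
    to that range; buffer_scale shrinks at each recursion level."""
    lo, hi = min(nodal_values), max(nodal_values)
    buf = buffer_scale * (hi - lo)
    return lo - buf <= iso <= hi + buf

# Nodal values span [1.0, 2.0], so the buffered range is [0.0, 3.0]:
# iso = 0.5 keeps the cell, iso = 4.5 discards it.
```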

Selectively subdividing elements based on the formula above dramatically reduces the cost of extracting higher-order isosurfaces. Figure 4 shows four levels of subdivision for an isosurface of constant radius from a point. By the fourth level of subdivision, all but 8,617 of a possible 663,552 sub-cells have been excluded. Over 98.7% of the sub-cells have been discarded, and further computations should be nearly a factor of 100 faster!

Selective subdivision for quadratic tetrahedral isosurface extraction

Figure 4. Selective subdivision for quadratic tetrahedral isosurface extraction.

Figure 5 shows the extracted isosurface at various levels of subdivision. Four levels of subdivision are sufficient to create a very smooth isosurface.

Quadratic tetrahedral isosurface with increasing levels of subdivision

Figure 5. Quadratic tetrahedral isosurface with increasing levels of subdivision.

In my next blog, I will discuss the results of our research into higher-order finite-element curved surface visualization algorithms. See all blogs on higher order elements.

Subscribe to Tecplot

Get all the latest news from Tecplot, Inc.

Subscribe to Tecplot 360

The post Isosurface Algorithms – Visualizing Higher Order Elements appeared first on Tecplot.

► Q&A Getting Started Tecplot 360 – FVCOM
    4 Aug, 2020

When you think you have covered everything – you find that you have not! You asked some important questions during the Getting Started with Tecplot 360 training session using the FVCOM dataset for coastal and ocean modeling. Enjoy skimming the Q&A here to find something of interest.

Getting Started with Tecplot 360 - FVCOM Dataset

Tecplot 360 plot showing evenly spaced vectors in Boston Bay.
Watch the video »

The goal of these Getting Started with Tecplot 360 training sessions is to increase your efficiency when visualizing and analyzing CFD results. Each session uses a different dataset, so they cover slightly different capabilities. But all sessions cover the basics: user interface, loading data, creating slices, iso-surfaces, and streamlines, and exporting images, animations, and videos. If you have a recommendation for an upcoming training session, we would love to hear about it.

You can watch this training session video, register for upcoming sessions, and watch recorded training sessions.

These questions were answered by Scott Fowler, Tecplot 360 Product Manager, and Jared McGarry, Tecplot Account Manager.

Does the georeferenced image come in the netCDF file?

No, the georeferenced image was not in the netCDF file. We used the tool QGIS to create that image. If you are not familiar with QGIS, it is a free open source tool. If you need help creating images using that tool, our support staff does have some experience with it. We are not geoscience experts here at Tecplot, but can help with some of these file formats.

What netCDF-based file formats do you support?

The FVCOM loader is on by default in the Tecplot 360 installation. ROMS and WRF loaders are included in the installation, but they are not on by default. If you want to use the ROMS or WRF formats, please contact us and we can help you enable them. The ROMS and WRF loaders are still in beta; when they are ready for prime time, we will turn them on by default.

Can I import Shapefiles?

Yes, but Tecplot 360 does not have a direct loader for it. We have a Python script available on Tecplot’s GitHub page which can convert Shapefiles to Tecplot binary (.plt) format. You can then load the PLT file directly. This video shows you how to Convert Shapefiles to PLT Using PyTecplot.

Can Tecplot 360 create Schlieren Images?

Schlieren images are frequently used for high-speed aircraft. Tecplot 360 cannot create Schlieren images, but it can create shadowgraphs. If that is a good alternative for you, two video tutorials on the Tecplot site may help.

Can I use the polyline to do a transect and make a vertical plot along that line?

This is not built into Tecplot 360 directly, but we have written a Python script for this purpose. The Python script requires FVCOM data and the use of the siglev variable. Here is a video tutorial to walk you through Computing a Vertical Transect in Tecplot 360.

How do I change the X-axis to distance instead of points?

The best way to calculate a new distance variable along the X-axis is to use our Python API, PyTecplot. At each point you can get the X and Y values and compute the distance using the Pythagorean theorem. Our Support Team can certainly help you with the script.

If you need distance, the vertical transect script mentioned in the previous question will compute distance along the line for you.
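The distance computation such a script performs amounts to accumulating Pythagorean step lengths. A plain-Python sketch with illustrative coordinates:

```python
import math

def cumulative_distance(xs, ys):
    """Cumulative distance along a polyline defined by point lists."""
    d = [0.0]
    for i in range(1, len(xs)):
        d.append(d[-1] + math.hypot(xs[i] - xs[i - 1], ys[i] - ys[i - 1]))
    return d

# Two segments of length 5 and 6: (0,0) -> (3,4) -> (3,10)
print(cumulative_distance([0, 3, 3], [0, 4, 10]))  # [0.0, 5.0, 11.0]
```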

How can you show time on the plot as you animate through time?

One way to show real-time values in an animation is to create a text box, click to place it, and then use dynamic text. The dynamic text, &(SOLUTIONTIME), refers to solution time. As you step through time, the solution time will update and show the current value on the frame itself.

How does Tecplot 360 deal with large movie exports?

Here are a few things you can do to speed up the export of large movie files:

  • Reduce the anti-aliasing value or turn it off altogether. Antialiasing can slow down exports, especially for larger image sizes.
  • Export a sequence of images instead of a movie file. You can then stitch the images together. This works especially well if you have many time steps. One advantage is this allows you to adjust items like framerate without having to reload all the data.
  • Knowledge Base article: Use FFmpeg to create videos from PNGs
  • Knowledge Base article: Creating a GIF from a Sequence of PNGs

As you animate, or progress, through time, Tecplot 360 will continue to load data, and so you will see an increase in the amount of RAM used, but never fear! Tecplot 360 has an intelligent strategy for offloading data, which frees up RAM. The oldest data is offloaded first, using a cell-aging algorithm. For batch mode operations or movie file exports you may want to adjust the Load-on-Demand settings to Minimize Memory Use. This can be found on the Misc tab in the Options>Performance dialog.

One more thing to note: if you have cell-centered data in your dataset, many Tecplot 360 algorithms require that the data be interpolated to the nodes. The interpolated nodal values are computed and then stored in a temporary location so they do not have to be recomputed later. We have had a few people run out of disk space because they were doing many computations on large simulations. To prevent running out of disk space, make sure that your temporary directory has plenty of room. Tecplot 360 has options to specify the temporary directory location.

When extracting a polyline, will the polyline data change with the time step selected or will a new polyline be required for different solution times?

Extraction happens only at the current time step, so a new extraction will have to be performed at each timestep.

Here are three different ways to extract a polyline:

  1. Draw the polyline and extract the points.
  2. Select the polyline and use Data>Extract>Polyline Over Time. Use the 2D Cartesian plot type, not the XY Line plot type.
  3. Use Data>Extract>Precise Line to specify the exact start and end points for the line along a surface, or even through a volume.

Can Tecplot 360 load 2D variables from FVCOM, or the depth-averaged velocity components?

The answer is currently no, but we have experimented with a loader to do that. If you need this capability, please contact us, and we can work with you to update the loader.

Can I use an imported Google earth image as a georeferenced image?

The answer is maybe. Certainly, Tecplot 360 can import any image into the plot. If you have an image, you need an associated world file, in this case a PGW file (a PNG world file) which defines the region of that image. If you do not have a world file, you can import the image and then pan and zoom to get it to line up with your data.

Can the polyline be used to show transient data where the X axis is the polyline curvilinear axis and the Y axis is the variable that changes in time?

A polyline will not work for this, but there is a way to show the behavior of a variable over time at a specific point. Use the Probe tool to create a time series plot: Tecplot 360 extracts the data at the probed point over all solution times and shows it as a line plot. The two plots are automatically linked, so when you step through time you will see the variable's behavior (salinity, for example) at that point, with a bar indicating the current time step.

Is it possible to animate the XY line plot along with the other frames in your demo?

2D and 3D plots can be animated through time, but XY line plots do not have this capability. To work around this, you can represent a line plot using the 2D Cartesian plot type. You will not have the multiple Y-axes capability, but you can simulate a line plot using the 2D plot type.

Can you show solution time in days such as year, month, day, hour?

Many FVCOM models express solution time as a number of days since a specific date and time. Tecplot 360 does not inherently display a date and time. However, you can use a PyTecplot script to add Auxiliary Data to the zones that includes the actual date:
import datetime
import tecplot as tp

# FVCOM solution times are days since this epoch (the Modified
# Julian Date epoch, November 17, 1858).
initial_date = datetime.datetime(1858, 11, 17)
with tp.session.suspend():
    for z in tp.active_frame().dataset.zones():
        # Convert days-since-epoch to an absolute date and time.
        t = initial_date + datetime.timedelta(z.solution_time)
        z.aux_data["Date"] = t.strftime("%m/%d/%Y %H:%M:%S")
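The date arithmetic itself can be checked independently of Tecplot. November 17, 1858 is the Modified Julian Date epoch, so a solution time of 0 days maps to that date:

```python
import datetime

initial_date = datetime.datetime(1858, 11, 17)

def solution_time_to_date(days):
    """Convert a solution time in days-since-epoch to a date string."""
    t = initial_date + datetime.timedelta(days)
    return t.strftime("%m/%d/%Y %H:%M:%S")

print(solution_time_to_date(0))    # 11/17/1858 00:00:00
print(solution_time_to_date(1.5))  # 11/18/1858 12:00:00
```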

You can then add text to the plot by clicking the Add Text icon on the toolbar or by using Menu>Insert >Text. Then type in the text field: 


Getting Started with Tecplot 360

If you still have questions or recommendations for upcoming training, Contact Us.

Watch the Training Video   or  See All Trainings

The post Q&A Getting Started Tecplot 360 – FVCOM appeared first on Tecplot.

Schnitger Corporation, CAE Market top

► Quickies: Bentley sets IPO price, Altair acquires for HPC
  17 Sep, 2020

Quickies: Bentley sets IPO price, Altair acquires for HPC

I go away for a couple of days, and what happens? Yup. Newsy things! Here are two items of interest:

Bentley issued another update to its IPO filing, this time with prices for the shares. You can see it here. (I have not read the whole thing, nor have I diffed it to find out what else may have changed. Soon. This is what I wrote about the last amended S-1.) In the latest update, we learn that Bentley is helping current shareholders sell around 10.75 million shares at a $17 to $19/share price range. What does that mean?

  • Bentley’s market cap would be between $4.4 billion and $4.7 billion, a 6x-ish multiple of 2019 revenue
  • This sale is for class B shares currently held by existing stockholders. Class B shares hold 1 vote each; class A shares have 29 votes/share. Bentley family members are the primary owners of the class A stock, which will hold 57% of the voting power — so while the class B shares can be owned by anyone, the Bentley family will still, in effect, control the company
  • Bentley shares are expected to begin trading on the NASDAQ market on Wednesday, September 23 under the symbol BSY. (How soon, and how exciting!)
  • I believe that this means that Bentley will be required to report earnings for the fiscal third quarter, sometime in October/November. Will confirm this once the dust settles.

Meanwhile, Altair announced its second acquisition in a week, also to do with HPC. The first was Univa (my note, here); this one is Ellexus, an input/output (I/O) analysis tool, which Altair says “helps customers find and address issues quickly, improving speed, accuracy and cloud readiness”. Ellexus Mistral and Breeze “complement Altair’s scheduling technology by providing per-job storage agnostic file and network I/O real-time monitoring to identify I/O latencies and bottlenecks for faster job execution times and better resource utilization”. Neither the price paid nor the revenue contribution was disclosed.

The post Quickies: Bentley sets IPO price, Altair acquires for HPC appeared first on Schnitger Corporation.

► Altair adds Univa to its HPC offering
  14 Sep, 2020

Altair adds Univa to its HPC offering

Altair just announced that it has acquired Univa, a maker of workload management, scheduling, and optimization solutions for high-performance computing (HPC), on-premise and in the cloud. Altair says Univa’s Grid Engine is a distributed resource management system that optimizes workloads and resources in data centers, improving return-on-investment and delivering better results, faster. Its other main product, Univa Navops Launch, helps migrate enterprise HPC workloads to the cloud by providing real-time insights into workloads and spending, with complete visibility to HPC cloud resources.

Altair CEO Jim Scapa said that “Altair has invested significantly in HPC and cloud technologies for several years. The addition of Univa’s technology and its very experienced team further cements our leadership position in this fast-moving space.”

Altair says it will continue to invest in Univa’s technology to support existing customers while integrating with Altair’s HPC and data analytics solutions.

Details were not released, though we may learn more at Altair’s earnings announcement in a few weeks.

The post Altair adds Univa to its HPC offering appeared first on Schnitger Corporation.

► ESI Q2 revenue slowed, company sets up for growth
  11 Sep, 2020

ESI Q2 revenue slowed, company sets up for growth

ESI Group reported Q2 results yesterday, to round out our PLMish earnings for the quarter ended June 30, 2020. First the details, then some comments:

  • Total revenue in Q2 was €26 million, down 13% as reported and down 14% in constant currencies (cc)
  • License revenue was €20 million, down 9% (down 10% cc)
  • Services revenue was where much of the year/year decline took place, reported as €5.5 million, down 25% (down 25% cc). Like everyone else, ESI saw companies temporarily shut offices and postpone some services engagements. CFO Olfa Zorgati said that all aspects of services were affected: engineering studies, field services, etc.
  • There were some bright spots, however: “repeat business … was particularly strong among the group’s key customers. The Top 20 customers booking increased by 3.9% and represented 56% of total bookings”
  • Revenue by geo stuck to the typical pattern for the first six months of 2020, with revenue from EMEA around 52% of total revenue; Asia, 34%; and the Americas held constant at 14%.
  • The end-industry mix actually tilted towards automotive just a bit, even given the difficult macro-economic context with ESI saying it “remained relatively stable despite a difficult sector context. The other priority industries suffered more from the current crisis, with a significant slowdown in orders in the Aerospace industry”.

CEO Cristel de Rouvray started the earnings call by saying that Q2 was the toughest she had seen — and to point out that, amid huge drops in vehicle sales and declines in air traffic, ESI revenue was only down 9% from 2019. She sees this as a temporary pause in sales but not in discussions and engagements with customers. The problem, of course, is that “we can only go as fast as the customers”. She added, “As we continue to manage this global pandemic, we are balancing two business imperatives: proactive cost management to optimize near-term financial health, and [the] continuation of our transformation plan. The latter gains momentum, reflected in a growing number of customer engagements … and mounting interest in ESI’s offer …”

ESI doesn’t offer guidance, but Ms. de Rouvray said that the company is accelerating its move to a more industry-focused product set and a key-account sales process. Ms. Zorgati believes that the first half of 2020 was the toughest in terms of COVID-19’s revenue impact, with perpetual licenses especially hard-hit (and mainly in China), and that ESI is well-positioned to support customers as activity picks up.

Cutting costs to keep pace with lower revenue, while not jeopardizing the ability to meet demand once it returns, is a tough balancing act. ESI is thinking long-term and waiting for that uptick. In the meantime, ESI is still out there pitching, closing government-led research contracts, working in consortia, and revamping its products. We’ll tune in in late October, when it reports Q3 results.

The post ESI Q2 revenue slowed, company sets up for growth appeared first on Schnitger Corporation.

► Bentley’s S-1 shines a light on how private companies grow, “with engineers in charge”
  10 Sep, 2020

Bentley’s S-1 shines a light on how private companies grow, “with engineers in charge”

As you probably know, Bentley filed to go public a few weeks ago. I put up a quick blog about the filing that evening and promised to read the whole thing and write about the highlights in a subsequent post. But I hit a snag. A Registration Statement, the S-1 a company submits to the SEC, is … boring. And long. There are nuggets, but there are SO. MANY. DETAILS. All I can say is, it took a while to work through.

Here’s what I found most interesting — your interest may not line up with mine, so please do your own review of the S-1 at the SEC website, here, or skip straight to the Amended statement here.

Bentley is filing under a provision of the Jumpstart Our Business Startups Act (the “JOBS Act” – those legislators are funny, no?) of April 2012 that sought to reduce the regulatory burden on some newly-public companies, for which Bentley says it qualifies. (This isn’t nefarious; Altair did the same thing in 2017.) This means that Bentley doesn’t have to report as much data as other companies do — in particular, it has to report only two years of financials in the S-1, and it doesn’t have to share some details like executive compensation.

Bentley did throw in this handy chart, to show how it has consistently grown over the years:

Bentley S-1, page 69

The S-1 is chockablock with data, including a few years of Bentley’s financials. This is a screencap of the table of page 15:

As you can see, Bentley reported total revenue of $629 million, $692 million, and $735 million in 2017, 2018, and 2019. Main takeaway: it’s substantial and growing, with revenue up 7% and 17% for 2018 and 2019. Through June 30, 2020, revenue was up 9%, with software revenue up 12% — not bad given everything.

Let’s put that in some context. Bentley in 2019 was roughly half the size of Autodesk’s AEC business but nearly $200 million larger than Nemetschek. Why does that matter? Because some buyers still follow the Jack Welch maxim and only work with the #1 or #2 player in a space — Bentley is clearly that #2. But that may be the wrong comparison since the companies are now heading in different directions. Autodesk, in AEC, focuses on the design and make parts of a project, while Bentley looks more at design and operate — and, increasingly, design in the context of operations and maintenance. It’s looking to answer questions like, how would I design this better if I knew I had this maintenance plan/budget? Over the asset’s lifecycle? (More on this in another post.)

Indeed, Bentley cites reports that show it holds the #1 position in several industry and application area slices, as determined by The ARC Advisory Group: “In August 2019, for Engineering Design Tools for Plants, Infrastructure, and BIM (building information modeling), ARC ranked us #2 overall, as well as #1 in each of Electric Transmission & Distribution and Communications and Water/Wastewater Distribution … [and] Collaborative BIM. In December 2019, for Asset Reliability Software & Services, ARC ranked us #1 overall for software, as well as #1 in each of Transportation, Oil and Gas, and Electric Power Transmission and Distribution”.

Bentley’s always been proud of its R&D — saying things like, “We’ve invested over a billion dollars in acquisitions and R&D in the last 10 years.” We can’t verify that exactly since we’ve only got three years of data, but it’s likely true — Bentley spent $184 million on R&D (25% of revenue, on par or slightly ahead of other PLMish companies) and another $34 million on acquisitions in 2019.

On the topic of acquisitions: Bentley has bought LOTS of small companies over the years, more than technology tuck-ins but nothing as splashy as arch-rival Autodesk. In 2019, it completed four acquisitions for the $34 million I mentioned above; through June 30, it has acquired four more companies for nearly $70 million. The filing says that “[Bentley’s] average historical annual revenue growth rate from acquisitions over the last six years has been approximately 1.1%” — it’s clearly not acquiring revenue, but rather technology and, perhaps, specific customer accounts.

If you’re keeping track of the various companies’ race to recurring revenue, Bentley says that in 2019, “subscriptions represented 83% of our revenues, and together with [recurring] professional services revenues bring the proportion of our recurring revenues to 86% of total revenues.” That’s on par with other companies that haven’t gone all-subs-all-the-time.

I also found this fascinating: “In 2019, 96 accounts, each contributed over $1 million to our revenues, representing 32% of our revenues. 53% of our 2019 revenues came from 424 accounts, each contributing over $250,000 to our revenues. During 2019, we served 34,127 accounts. No single account provided more than 2.5% of our 2019 revenues. Additionally, we believe that we have a loyal account base, with 80% of our 2018 and 2019 total revenues coming from accounts of more than ten years’ standing, and 87% of our 2018 and 2019 total revenues coming from accounts of more than five years’ standing.” We often wonder if any one account pulls the strings at a vendor, and in Bentley’s case, at least, that’s a no. But the ability to keep 80% of its accounts for 10 years or more — I find that impressive. I am often asked how sticky these tools are — here you see: very sticky.

Let’s talk Siemens. I’ve met with Bentley and Siemens separately and together, and they are 100% in on their technical partnership. The business relationship, perhaps not so smooth. Here, according to the S-1 is the backstory and then some present/future stuff (I edited this for readability):

“In September 2016, we and [some of] the Bentley brothers entered into a Common Stock Purchase Agreement with Siemens, pursuant to which Siemens was authorized, and agreed, to acquire up to $100 million of our Class B common stock from our existing stockholders. Subsequent amendments increased this amount to $250 million, which, once reached, increases by $20 million on each subsequent anniversary of the date of the Common Stock Purchase Agreement so long as the Strategic Collaboration Agreement remains in effect on each such anniversary. The next increase is set to occur on September 23, 2020. … As of June 30, 2020, Siemens beneficially owned 34,764,592 shares of our Class B common stock” and had paid a total of about $250 million for these shares.

A bit of math shows us that this is 14% of the total Class B shares. Why, you say, does this matter? Because each Class B share carries one vote at a shareholder meeting. Class A shares, mostly owned by the Bentley family, have 29 votes each, and there are 11.6 million Class A shares in total. 35 million-ish votes for what Siemens wants versus 336 million-ish votes for what the Bentley family wants (assuming they agree). Siemens does not drive this bus, though it clearly has input.

But the real thing about Bentley+Siemens is the strategic element mentioned above:

“In conjunction with the Common Stock Purchase Agreement, we entered into a Strategic Collaboration Agreement with Siemens … The initial term of the agreement lasts until December 31, 2026 and automatically renews for successive one year terms unless either party elects to terminate the agreement … In addition, Siemens has the right to terminate the agreement and any related collaboration projects if the Bentleys no longer own a majority of our voting power or if we otherwise undergo a change of control”.

Note that last sentence: Siemens can walk away if Bentley Systems changes ownership. And there’s more (again edited down with bold added by me for emphasis):

“we … entered into the Common Stock Purchase Agreement with Siemens in September 2016, pursuant to which we … granted Siemens a right of first refusal with respect to certain deemed liquidation events, offers, sales or certain issuances of our capital stock, … Pursuant to the terms of the Common Stock Purchase Agreement, Siemens’ right of first refusal expires upon the effectiveness of a registration statement in connection with an underwritten initial public offering. Siemens contends that this right of first refusal applies to sales of common stock in an initial public offering by the Company or the Bentley family members party to the Common Stock Purchase Agreement. While we disagree with Siemens’ contention, our initial public offering of Class B common stock will be exclusively by existing holders whose transfers of capital stock are not subject to Siemens’ right of first refusal, and we have not included any shares to be issued by the Company or any shares held by the Bentley family members party to the Common Stock Purchase Agreement in the offering pursuant to this prospectus.

Following the effectiveness of the registration statement … Siemens’ right of first refusal will terminate. Following the completion of this offering, we intend to evaluate opportunities to then undertake a primary offering of our Class B common stock by the Company, subject to [a bunch of stuff] … We have not engaged in any formal discussions regarding any such offering and we have not undertaken any steps to pursue such an offering. The Company lock-up contained in the underwriting agreement to be entered into by us with the underwriters in this offering will permit us and selling stockholders to sell shares of Class B common stock in an aggregate amount equal to up to 20% of our total Class B common stock outstanding at such time beginning on December 1, 2020, and such lock-up agreement expires 180 days following the date of this prospectus.”

I may be the only person who finds this interesting. I draw no conclusions but I would think armies of lawyers would have ironed this out in 2016 … And I’ll tune in on December 1, 2020, to see what happens then. After all, the rumors persist that Siemens may want to acquire all of Bentley, to add it to the Digital Industries part of the AG.

Leaving aside whatever that is with Siemens, it’s important to note that this IPO is about creating liquidity for existing shareholders and not raising money for the company. Who those “selling shareholders” are isn’t clear to me, but the group very explicitly does not include the four Bentley brothers who have, for years, been the face of the company.

This is explained in an unexpectedly funny bit of the S-1, on page 107, where they write,

“Barry, Keith, and Ray are respectively chemical, electrical, and mechanical engineers who have spent their entire careers in software. Even Greg, prior to joining the rest of us, was a successful developer of software for what he characterizes as “financial engineering.” Having engineer types in charge seems to have worked for us, perhaps because of the correspondence to our end market of infrastructure engineering.

And this important bit follows:

The four of us are not selling shares in this offering, nor do we contemplate any “exit” other than (as we are all aged in our early 60s) in due course following the example of Barry, who retired at the beginning of this year but remains active on our Board. We plan to continue our modest regular dividend, which will serve to encourage this orderly progression.

OK. So what did I learn? That Bentley is a significant and thriving software vendor, confidently stepping out into new areas like asset operations and maintenance. That many of the people who made it so plan to stay on. That it’s profitable and generating lots of cash. None of that is surprising, but we didn’t have the details before, and now we do. And that ethos of “by engineers for engineers” is 100% true to the company’s character, and has been for decades.

What happens next? The offer needs to be priced, meaning the underwriters and Bentley figure out what the market will bear and line up buyers. There’s no date for that yet — but I’ll write about it once it is set.

The post Bentley’s S-1 shines a light on how private companies grow, “with engineers in charge” appeared first on Schnitger Corporation.

► More details on AVEVA + OSIsoft — including about that rights offer
  31 Aug, 2020

More details on AVEVA + OSIsoft — including about that rights offer

We’ve learned a bit more about the proposed combination of AVEVA and OSI since the announcement early last Tuesday. Much more will come out in six weeks or so when the official paperwork is filed but here’s what I learned while listening to the investor and industry analyst calls, as a follow-up to my post from last week:

  • AVEVA’s CEO Craig Hayman, Deputy CEO/CFO James Kidd and OSIsoft founder Dr. Pat Kennedy all seem genuinely excited about the potential to work together and to create separate and combined offerings that leapfrog both companies into new markets and customers
  • Dr. Kennedy will remain involved in a BIG way — he will become AVEVA’s single largest individual shareholder and take the title of Chairman Emeritus. I had missed that in the original announcement.
  • The product set will be additive rather than subtractive — yes, there is a bit of product overlap but nothing is expected to change for customers of PI System or AVEVA Historian. As Mr. Hayman said, it’s extremely unlikely that customers will willingly rip and replace; over time, AVEVA Historian customers may choose PI instead. This is the product map AVEVA shared with investors:
  • A big impetus for the transaction is to further diversify AVEVA away from oil and gas, the former company’s main market. OSIsoft has customers in power (not just generation, but also in transmission and distribution), oil and gas, chemicals, mining, metals and minerals, pulp and paper, and pharmaceutical manufacturing. In nearly all of these industries, PI is used throughout — as in mining, where Mr. Hayman said it is used “from pit to port”. In all, Mr. Kidd estimates, AVEVA’s oil and gas exposure would go from around 40% of revenue to 25% as adding in OSIsoft “broadens out our end-market exposure”. He later added, “we see potential in power generation/transmission/distribution, especially with Schneider Electric, as the build-out of power from high voltage, medium voltage to low voltage and distribution. We also see opportunity in buildings, data centers, everywhere where electricity flows, we see opportunity for PI.”
  • The other thing, too, of course, will be the opportunity to cross-sell to one another’s customer base and to start selling more offerings into their joint base, as you can see below. They come to their industrial customers from different angles and see lots of opportunity to turn those differences into revenue.
  • The 200 plus “whitelabeled” PI System-based products aren’t expected to be affected by the combination
  • The companies are remarkably similar in many metrics — see the image below from the investor slide deck — and seem like a good cultural fit, too:

All of that leads AVEVA to confidence: 1. that it can get the deal done and 2. that the deal is a positive development for AVEVA, its employees and customers — and for OSISoft’s as well. As Mr. Hayman said, “the combination significantly increases the depth and breadth of the Company’s portfolio brings together various sources of design assets and operational data type in the middle here is the information land, the basis of the process and production, which will be further interest through applications and data from the portfolio”.

Mr. Kidd pointed out that OSI today largely sells perpetual licenses and maintenance agreements, with just a small proportion of revenue coming from subscriptions. He said that “given AVEVA’s track record in the last couple of years [of transitioning customers to subscriptions], this is an area that we believe we can accelerate and help to create new subscription offerings, particularly using AVEVA Flex.” So expect to see (again) the bump in revenue for perpetuals changing to a slower but more consistent growth curve as OSI undergoes the same transition we’ve seen over and over again in this space.

And Mr. Kidd made one thing very clear, for all you OSI employees: “Like the Schneider-AVEVA merger, this deal is much more about future growth than cost-saving. But that said, we do expect there to be some level of cost synergies, mainly through consolidation of offices, combining IT systems, and integrating the back office.” So don’t expect the success of this deal to be judged on cost-cutting.

AVEVA is paying $5 billion for OSIsoft: $4.4 billion in cash and $0.6 billion in shares issued to Dr. Kennedy. That $4.4 billion in cash will come from a $3.5 billion rights issue plus $0.9 billion from cash on the balance sheet and new debt facilities. I wasn’t sure what a rights issue is — thanks to all who helped me learn — but now understand that it’s a mechanism common in the UK where new shares are offered to current shareholders first, so they can determine if and how their holdings might be diluted. Existing shareholders can subscribe to the issue in proportion to their current ownership stake — so when Schneider Electric says it supports the issue, we can presume they’ll buy 60%ish of the new shares. Of course, shareholders don’t have to ante up if they don’t want to; they can sell these rights if they choose to.
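
The pro-rata mechanics can be sketched in a few lines. The $3.5 billion raise and Schneider Electric’s roughly 60% stake are from the announcement; the issue price and share counts below are made-up assumptions for illustration, since the actual terms of the issue hadn’t been published:

```python
# A hedged sketch of rights-issue mechanics. The $3.5B raise and the ~60%
# stake are from the announcement; issue_price and old_shares are ASSUMED
# placeholder numbers, not AVEVA's actual terms.

total_raise = 3.5e9                      # cash to be raised via the rights issue
issue_price = 35.0                       # ASSUMED price per new share
new_shares = total_raise / issue_price   # 100 million new shares

old_shares = 161e6                       # ASSUMED shares outstanding today
stake = 0.60                             # Schneider Electric's approximate stake

# Each holder may subscribe pro rata to its current stake:
entitlement = stake * new_shares         # shares Schneider may buy
cost = entitlement * issue_price         # ~60% of the $3.5B

# Taking up the full entitlement leaves the holder's stake unchanged...
stake_if_full = (stake * old_shares + entitlement) / (old_shares + new_shares)

# ...while subscribing for nothing (or selling the rights) dilutes it:
stake_if_none = (stake * old_shares) / (old_shares + new_shares)
```

Under these assumed numbers, Schneider’s pro-rata cost is $2.1 billion, and its stake stays at 60% only if it subscribes in full; sit it out and the stake slips to about 37%. That’s why support from the 60% holder matters so much to getting the issue done.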

We had, months ago, learned that Schneider Electric was interested in acquiring OSIsoft. Mr. Kidd explained it this way: “When you’re trying to navigate strategic value, you have to think about what each company does. Schneider Electric engages in projects around industrial solutions or the power or building solutions; it’s around the build-out of those facilities. If you think about where AVEVA is, with a small 10% exception around the Greenfield CapEx in oil and gas, mostly it’s around the operational side running those facilities and providing the tools to operate those facilities. And once you think about that, then you can understand how certain acquisitions make perfect sense for AVEVA and certain acquisitions make sense for other companies, including Schneider. OSIsoft is an operational system. It’s an OpEx model. Its usage is aligned with the consumption model of the customers. And it works with many different industrial firms including Rockwell Automation, Emerson, ABB [all of which compete with Schneider Electric] in many end markets. And so AVEVA is a perfect fit for OSIsoft.” Now we know.

Last thing: AVEVA announced that it was discussing this with OSIsoft nearly a month ago. That gave customers plenty of time to weigh in and ask questions. Mr. Hayman said that “customers were unbelievably positive: [we got] ad hoc emails from customers telling us that they were so very excited, that it was a great strategic choice, how it was a great cultural fit, and that PI System is a great product. I remember being on one Zoom call with over a dozen people from all walks of life in this customer who has one thing in common, which is our relationship with them. And when someone asks about myself and PI, everyone on the Zoom call stopped, turned, looked, and gave us great thumbs up and all smiles and that’s a great product. That’s a great choice. Oh, really, that’s a great thing”.

Next up, regulatory filings in many geos and more comprehensive info for shareholders. The deal is still on target to close around the end of 2020.

The post More details on AVEVA + OSIsoft — including about that rights offer appeared first on Schnitger Corporation.

► Autodesk’s Q2 revenue up 15%, comes out swinging on AEC
  26 Aug, 2020

Autodesk’s Q2 revenue up 15%, comes out swinging on AEC

Autodesk has lost none of its swagger, yesterday reporting that total revenue was up 15%, with results across the metrics that investors look at ahead of consensus estimates. Even so, the company’s guidance for its fiscal third quarter disappointed, leading Autodesk’s share price to be down 3% after hours. First, the details, then quotes and comments:

  • Total revenue was $913 million, up 15% as reported and up 16% on a constant currency basis (cc)
  • Design revenue was $821 million, up 15% (up 16% cc). Autodesk defines this bucket as the maintenance and product subscriptions related to the design products — so including AutoCAD, AutoCAD LT, Industry Collections, Revit, Inventor, Maya and 3ds Max. For reasons that I don’t quite understand, this category also includes the CAM solutions that incorporate both design and make functionality; and all EBAs
  • Make revenue was $71 million, up 37% (up 38% cc). Make includes cloud products such as Assemble, BIM 360, BuildingConnected, PlanGrid, Fusion 360, and Shotgun — in the case of AEC, clearly used to execute (“make”) AEC assets. It’s more confusing in the case of Fusion 360, which is lumped into this category even though it includes significant design capabilities
  • We also got the more traditional breakdown: Revenue from the AEC products was $397 million, up 19%
  • Manufacturing product revenue was $186 million, up 6%
  • Media & Entertainment revenue was $53 million, up 5%
  • Revenue from AutoCAD and AutoCAD LT was $272 million, up 18%
  • Finally, in the catchall category Other, revenue was $5 million, down 8%
  • Subscription plan revenue was $841 million, up 27% (up 28% cc)
  • Maintenance plan revenue was $51 million, down 51% (down 49% cc)
  • By geo, revenue from the Americas was $372 million, up 14% (up 14% cc)
  • From EMEA, $355 million, up 12% (up 16% cc)
  • From APAC, $187 million, up 21% (up 21% cc)

CEO Andrew Anagnost started the call by talking about COVID, and what Autodesk saw as the world slowly reopened during fiscal Q2: “We closely monitored the usage patterns of our products across the globe. In China, Korea, and Japan, we are seeing usage above pre-COVID levels. In some areas of Europe, we continue to see a recovery as well. In the Americas, we experienced a slight uptick in usage for most key products in July. We see a positive correlation between usage trends and new business performance, which gives us confidence that the green shoots we see in usage will translate to improved new business performance in subsequent quarters.” CFO Scott Herren added that “business is recovering in the markets that were impacted by the pandemic earlier on. Some of our major markets like the US and UK have stabilized, but are yet to show meaningful improvement … Second quarter new business activity was more impacted [by COVID-related issues] than Q1, with new business declining in the mid-teens percent. We think the second quarter will be the most impacted by the pandemic.”

Autodesk said it continues to see success in bringing non-compliant (aka pirated) and legacy (i.e. lapsed/very old version) users into the fold. The company says it signed 3 license compliance deals worth over $1M in APAC.

Autodesk reports revenue, with all of the accounting treatments of subscription revenue, as well as billings — the sum of revenue and the net change in deferred revenue from the beginning to the end of the period. In other words, revenue recognized in the quarter plus invoices sent out whose revenue hasn’t yet been recognized. And that’s where investors were disappointed: in FQ2, billings were down 12% from a year ago to $787 million, and the company forecast billings for the year to be down by as much as 3%. Since invoices sent out this quarter turn into revenue in some future quarter, declining billings are a cause for concern. Why did Autodesk say its billings would go down? Because it saw a dip in the contribution from multiyear contracts when compared to prior quarters, a trend that Mr. Herren said was beginning to reverse itself toward the end of fiscal Q2. My take on two possible reasons: first, customers have less confidence in Autodesk’s ability to deliver value through its subscriptions, meaning a growing “show me” attitude even in the face of discounts for longer periods. Second, less confidence in customers’ need for the software in the far-off future — they don’t need subscriptions for workers they aren’t sure they’ll still have. We’ll have to tune into this metric in FQ3 to see what develops.
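
The billings arithmetic is simple enough to sketch. Using the reported FQ2 figures ($913 million revenue, $787 million billings), we can back out the implied swing in deferred revenue; the beginning and ending deferred balances below are placeholders — only their difference matters for the illustration:

```python
# Billings, as defined above: revenue recognized in the period plus the net
# change in deferred revenue over the period. The revenue and billings
# figures are from the post; the deferred balances are placeholders.

def billings(revenue, deferred_end, deferred_begin):
    """Billings = revenue + net change in deferred revenue."""
    return revenue + (deferred_end - deferred_begin)

revenue_fq2 = 913e6    # reported FQ2 revenue
billings_fq2 = 787e6   # reported FQ2 billings

# Billings below revenue implies deferred revenue *shrank* during the quarter:
implied_change = billings_fq2 - revenue_fq2   # about -$126M
```

That negative swing is the worry in one number: previously invoiced subscriptions converted into revenue faster than new invoices were written, so the backlog feeding future quarters got smaller.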

Mr. Anagnost also commented on where billings are coming from — online versus indirect sales versus direct sales. He said that “we saw strong double-digit billings growth through the online channel during the [fiscal second] quarter. Our online sales are helping attract new customers to the Autodesk family, as nearly three out of four new customers in the quarter came in through e-commerce”. In general, he said, “We’re still trying to get that direct online business up to 25% of our total business.” That’s been the stated goal for quite a while — and since Autodesk is taking more of the high-end / large account business direct, this squeezes the reseller channel.

Autodesk didn’t release channel performance data for FQ2, but Mr. Herren said that Autodesk saw a strong quarter among smaller accounts, which matches what other PLMish companies told us — smaller decision teams, faster cycles, are possible at smaller companies. The mid-market was “tepid” as it waited for Autodesk’s multi-user to named-user deals to kick in at the start of FQ3, and as they paused to assess their prospects for the rest of the year. The named accounts (biggest prospects) “didn’t have a big Q2 [but] that seems to be heavier in the second half of the year. We’ve got a very full pipeline of large transactions, large accounts, EBA [Enterprise Business Agreements– token-based access to a pool of products over a defined period] renewals that are coming up.” [UPDATE: My bad. Autodesk did release that 30% of revenue was direct and 70% came from indirect sources, on par with prior quarters.] 

Mr. Herren told investors that large deals are still hard to close and that the (re)opening activities around the world create a complex mix of business climates. He sees “varying degrees of demand in the Americas, which includes our largest end market. At the upper end of our guidance range, we are modeling meaningful recovery in the region in the third quarter, with continued improvement in the fourth quarter. At the low end of the range, we anticipate a slower recovery in the third quarter and improvement in Q4.” That translates to a forecast of FQ3 revenue between $930 million and $945 million. For fiscal 2021 (ending January 31, 2021) Autodesk sees revenue of $3,715 million to $3,765 million, up 13.5% to 15%. That’s a slight increase in the midpoint of the guidance, and a decrease in the range — as usually happens as we get closer to the year-end.

And now, the elephant in the room: Mr. Anagnost listed a lot of AEC wins in his opening remarks, and spoke further about the unhappy architect customers who are causing such a kerfuffle by answering an investor question this way:

“[These customers] have legitimate concerns about the functionality in Revit and we take those incredibly seriously. And the fact is, is that from an architectural standpoint, Revit hasn’t gotten a lot of incremental investment. A lot of [our] AEC investments have gone to construction, to revenue enhancements targeting the engineering component and workflows– structural workflows, in particular. So, there are some real, legitimate concerns there.

The other concern they have is the move from multiuser to named users. These are large multi-user clients and they’ve seen multi-user prices drift up. They really want a pay-per-use model. We want them to have a pay-per-use model, which they would prefer to cloud licensing. We’re all on the same page.

But that said, these customers come from a highly privileged, roughly 20% of our subscription base, that moved from maintenance to subscription and have pretty deep price protections relative to the rest of the base. And if you look at their expenditures over a five-year period, frankly, even moving out another five years, as they add seats, they are actually paying less to Autodesk than they would have under the old perpetual model. And that was a deliberate part of the transition, even as multiuser prices go up in everything. If you add up what they would have paid us for adding users over time, they actually end up paying less over a five-year period and, frankly, as they add users over a 10-year period.

We’re not concerned about that. We said very early on that we were going to take care of these maintenance customers … We did that. Lots of debates with all of you [investors] about the maintenance subscription program and 10-year price lock. It wasn’t exactly something that all of you were behind. But we think it was right. And yes, it has resulted in this.

We’re never going to be on the same page with this audience [meaning, investors] about that particular part of the equation. But remember, this is a shrinking bit of our subscription base, the protected 20% now. There’ll be less than that later. But, over time, they pay less than they used to in the old perpetual model.

That started out so well, got lost a bit in the middle, and perhaps recovered towards the end. But it didn’t address the fundamental question: when will Autodesk provide more value to these disaffected customers? Admittedly, Mr. Anagnost is in a tough spot in speaking to investors who want one thing (more revenue and profit) about customers who want lower prices (meaning, less revenue to Autodesk). But it’s completely of his own making: framing the transition to subs as a way to raise revenue per customer was never going to have any other outcome than this unless Autodesk over-delivered on product-related promises.

We did get a glimpse into Autodesk’s thought process on R&D. Later in the call, answering a question about roadmaps, Mr. Anagnost added,

“[The question] is, where we put new dollars. So, for instance, at the beginning of this year, this whole concern around architecture and architects, is something we saw coming because this has been a five-plus year kind of tension. We actually increased investment in AutoCAD Architecture at the beginning of this year. We used incremental R&D dollars to increase investment in that space.

Moving forward we will deliberately choose where we add incremental investment, and we’ve been very forthright with the construction space in terms of incremental investment. We’re not going to shift money away from that. But as we add incremental investment into next year and year after that, we’ll probably add more incremental investment into other places over time.

“We’re in the enviable position to be able to [add incremental investment], we’re spending more in R&D than we ever had in our history. And we have still room to invest more, we’re just going to choose deliberately to add incremental investment in certain spaces, like we did at the beginning of this year for architecture.”

Notice it’s not Revit. And if someone saw it coming, why let it get to this point?

Topic switch to manufacturing, where revenue grew 6% in FQ2. Mr. Anagnost told investors that “we’re growing faster than our biggest competitor in the space. We had good strong growth coming into the year so we’re comparing 6% to a good year last year — we’re actually happy with the performance we’re seeing right now. And it’s only going to continue to get better”.

Last thing: AutoCAD and AutoCAD LT. We don’t hear much about it, but it’s likely the most used CAD product on the planet. Mr. Anagnost said that “We used to talk about that as the canary in the coal mine for market dislocation at the low end but subscription changes everything. The subscription price point for LT is very attractive and most of the customers that are buying it are small to medium businesses. It does what they need.” Autodesk used to talk about AutoCAD and LT as entry points into the Autodesk product family; it clearly has value on its own, too.

Well. Verticals and geos. Subs and maintenance. Low- to high-end. A very wide-ranging call with investors, a solid FQ2 and decent outlook.

Note: the quotes are from my notes, checked against the recording of the earnings call, which you can get to here. Listen for yourself!

The post Autodesk’s Q2 revenue up 15%, comes out swinging on AEC appeared first on Schnitger Corporation.

