
CFD Blog Feeds

Another Fine Mesh

► Attention Students: Now Hiring Engineering Interns for Summer 2021
  25 Nov, 2020
Pointwise seeks two engineering interns for the summer of 2021, one for our Engineering Services team and one for our Product Development team. Engineering Services In Engineering Services, you will primarily be responsible for applied meshing. You will test new … Continue reading
► We’re Hiring an Applications Engineer for our Engineering Services Team
  24 Nov, 2020
Pointwise seeks a dynamic, inquisitive, detail-oriented person with strong communication skills and a passion for helping people solve challenging technical problems to join our Engineering Services team in Fort Worth, Texas. As an Applications Engineer on our Engineering Services team … Continue reading
► Recording Now Available: Mesh and Run a High-Fidelity Aircraft Simulation in Minutes
  23 Nov, 2020
Just a quick post to let you know that the recording of our recent webinar with FlexCompute is now available for viewing at your leisure. At Pointwise we have been developing a new suite of features called Flashpoint that takes … Continue reading
► This Week in CFD
  20 Nov, 2020
This week’s CFD news has something for fans of origin stories; read about how CFD Support came to be. There’s also a lot of good reading about HPC, some positive business news, and signs that we might start having in-person … Continue reading
► This Week in CFD
  13 Nov, 2020
In this week’s CFD news we learn that in-person events are starting to show up on the calendar. Lots of applied CFD this week, from boilers to bookshelves. And news of a new book by Edward Tufte for those of … Continue reading
► Native AzoreCFD Interface Now Available for Pointwise
  13 Nov, 2020
Azore Software announced the availability of a plugin that provides a native interface between Pointwise and AzoreCFD. You may recall from August the launch of AzoreCFD, a new entrant to the commercial CFD market from veteran engineering and CFD consulting … Continue reading

F*** Yeah Fluid Dynamics

► Slow Motion Speech
  25 Nov, 2020

Sneezing, coughing, and speaking all produce a spray of droplets capable of spreading COVID-19 and other respiratory illnesses. This Slow Mo Guys video is the latest demonstration in a long line of evidence for why wearing masks in public is such an important part of ending our current public health crisis. Also, I think we can all agree: that sneeze footage is gross. (Image and video credit: The Slow Mo Guys)

► Floating in Levitating Liquids
  24 Nov, 2020

When it comes to stability, nature can be amazingly counter-intuitive, as in this case of flotation on the underside of a levitating liquid. First things first: how is this liquid layer levitating? To answer that, consider a simpler system: a pendulum. There are two equilibrium positions for a pendulum: hanging straight down or pointing straight up. We don’t typically observe the latter position because it’s unstable; the slightest disturbance from that perfectly vertical situation will make it fall. But it’s possible to stabilize an inverted pendulum simply by shaking it up and down. The vibration creates a dynamic stability.

The same physics, it turns out, holds for a layer of viscous fluid. With the right vibration, the denser fluid can levitate stably over a layer of air. Inside this vibrating layer, the rules of buoyancy are a little different because the vibration modifies the effects of gravity. As a result, bubbles deep in the liquid layer sink (Image 1). The researchers used this behavior to create their levitating layer (Image 2). The shaking also serves to stabilize objects floating on the underside of the liquid layer, allowing the boat in Image 3 to float upside down! (Image and research credit: B. Apffel et al.; via NYTimes; submitted by multiple sources)
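The inverted-pendulum picture above can be checked numerically. The sketch below is entirely my own illustration (the standard equation of motion for a pendulum with a vertically vibrated pivot, with made-up parameter values), not anything from the paper: with the drive on, a small tilt away from the upside-down position stays small instead of growing.

```python
# Dynamic stabilization of an inverted pendulum by pivot vibration.
# All parameter values are illustrative choices, not from the research.
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81    # gravity, m/s^2
L = 0.1     # pendulum length, m
a = 0.005   # pivot vibration amplitude, m
w = 500.0   # pivot vibration angular frequency, rad/s

# Classic stability criterion for the inverted position: a*w > sqrt(2*g*L)
assert a * w > np.sqrt(2 * g * L)

def rhs(t, y):
    # phi is measured from the *upward* vertical; the vibrating pivot
    # modulates the effective gravity term
    phi, phidot = y
    return [phidot, (g - a * w**2 * np.cos(w * t)) / L * np.sin(phi)]

sol = solve_ivp(rhs, (0.0, 2.0), [0.1, 0.0], max_step=1e-4, rtol=1e-8)
print("max |phi| =", np.max(np.abs(sol.y[0])))  # stays small -> dynamically stable
```

With the vibration switched off (a = 0), the same initial tilt grows without bound, which is the unstable case we intuitively expect.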

► Hudson Bay Watercolors
  23 Nov, 2020

Rivers sweep fresh water and sediment into the Hudson Bay in this satellite image. Dark brown plumes mark the mouths of several coastal rivers as they add to the cyclonic sediment flow around the bay and out the Hudson Strait. Paler swirls, like strokes of watercolors, mark turbulent mixing between the sediment-filled shallows and the deep blue waters of the bay. (Image credit: J. Stevens/USGS; via NASA Earth Observatory)

► “The Unseen Sea”
  20 Nov, 2020

San Francisco’s picturesque fogs form “The Unseen Sea” in Simon Christen’s timelapse. Viewed at the right speed, the motion of clouds becomes remarkably ocean-like, with standing waves and surges against the hillside like waves crashing on a beach. Clouds in air don’t have the same surface tension effects as water waves in air, but, for the most part, the physics of their motion is the same, which is why they look so alike. (Image and video credit: S. Christen)

► Synchronizing Microfluidic Drops
  19 Nov, 2020

In nature, synchronization occurs when oscillators interact. A group of metronomes shifting to tick in unison is a classic example. Here, the system is a microfluidic T-junction and the oscillators are the liquid interfaces along the narrower inlet channels. Systems like this one have long been used to create alternating droplets (Image 1), corresponding to out-of-phase synchronization. But a new paper shows that the same system can perform in-phase synchronization (Image 2), too, generating droplets at the same time.

For any synchronization to occur, the main channel must be narrow enough for the two side channels to influence one another. Once that’s the case, the out-of-phase synchronization happens at a relatively high flow rate, and lowering the flow rate causes the system to transition to in-phase synchronization. (Image and research credit: E. Um et al.; submitted by Joonwoo J.)
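The paper's microfluidic model isn't reproduced here, but the in-phase versus out-of-phase locking described above can be illustrated with a generic pair of coupled phase oscillators (a Kuramoto-style stand-in, entirely my own sketch): the sign of the coupling decides which synchronized state the pair settles into.

```python
# Two identical phase oscillators; each is nudged by the other's phase.
import numpy as np
from scipy.integrate import solve_ivp

def coupled(t, th, K):
    th1, th2 = th
    return [1.0 + K * np.sin(th2 - th1),
            1.0 + K * np.sin(th1 - th2)]

final_dphi = {}
for K in (0.5, -0.5):  # K > 0: attractive coupling; K < 0: repulsive
    sol = solve_ivp(coupled, (0.0, 50.0), [0.0, 2.0], args=(K,),
                    rtol=1e-9, atol=1e-12)
    final_dphi[K] = (sol.y[1][-1] - sol.y[0][-1]) % (2 * np.pi)

# Attractive coupling locks in phase (difference -> 0);
# repulsive coupling locks in anti-phase (difference -> pi).
print(final_dphi)
```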

► Dead Water
  18 Nov, 2020

In the days before motorized propulsion, sailors would sometimes find themselves slowed nearly to a stop by what they called 'dead water'. As discovered in laboratory experiments over a century ago by Vagn Walfrid Ekman, the dead water phenomenon occurs where a layer of fresh water sits over saltier water. The ship's motion generates internal waves in the salty layer, which in turn cause substantial additional drag on the boat. In a related phenomenon, named for Ekman, the internal waves generated by a boat's initial acceleration cause its speed to fluctuate.

While these phenomena have little effect on today’s shipping, they can be relevant for swimmers in areas like harbors and fjords where fresh water meets the sea. And their effects were undoubtedly substantial for much of history. There is even speculation that dead water might have caused the defeat of Mark Antony and Cleopatra’s superior navy at the hands of Octavian’s smaller ships in the Battle of Actium. (Image credit: M. Blum; research credit: J. Fourdrinoy et al.; via Hakai Magazine; submitted by Kam-Yung Soh)

Symscape

► CFD Simulates Distant Past
  25 Jun, 2019

There is an interesting new trend in using Computational Fluid Dynamics (CFD). Until recently CFD simulation was focused on existing and future things, think flying cars. Now we see CFD being applied to simulate fluid flow in the distant past, think fossils.

CFD shows Ediacaran dinner party featured plenty to eat and adequate sanitation

read more

► Background on the Caedium v6.0 Release
  31 May, 2019

Let's first address the elephant in the room - it's been a while since the last Caedium release. The multi-substance infrastructure for the Conjugate Heat Transfer (CHT) capability was a much larger effort than I anticipated and consumed a lot of resources. This led to the relative quiet you may have noticed on our website. However, with the new foundation laid and solid, we can look forward to a bright future.

Conjugate Heat Transfer Through a Water-Air Radiator
Simulation shows separate air and water streamline paths colored by temperature

read more

► Long-Necked Dinosaurs Succumb To CFD
  14 Jul, 2017

It turns out that Computational Fluid Dynamics (CFD) has a key role to play in determining the behavior of long extinct creatures. In a previous post, we described a CFD study of Parvancorina, and now Pernille Troelsen at Liverpool John Moores University is using CFD for insights into how long-necked plesiosaurs might have swum and hunted.

CFD Water Flow Simulation over an Idealized Plesiosaur: Streamline Vectors (illustration only, not part of the study)

read more

► CFD Provides Insight Into Mystery Fossils
  23 Jun, 2017

Fossilized imprints of Parvancorina from over 500 million years ago have puzzled paleontologists for decades. What makes it difficult to infer their behavior is that Parvancorina have none of the familiar features we might expect of animals, e.g., limbs, mouth. In an attempt to shed some light on how Parvancorina might have interacted with their environment researchers have enlisted the help of Computational Fluid Dynamics (CFD).

CFD Water Flow Simulation over a Parvancorina: Forward direction (illustration only, not part of the study)

read more

► Wind Turbine Design According to Insects
  14 Jun, 2017

Some of nature's smallest aerodynamic specialists - insects - have provided a clue to more efficient and robust wind turbine design.

Dragonfly: Yellow-winged Darter (License: CC BY-SA 2.5, André Karwath)

read more

► Runners Discover Drafting
    1 Jun, 2017

The recent attempt to break the 2 hour marathon came very close at 2:00:24, with various aids that would be deemed illegal under current IAAF rules. The bold and obvious aerodynamic aid appeared to be a Tesla fitted with an oversized digital clock leading the runners by a few meters.

2 Hour Marathon Attempt

read more

CFD Online

► RANS Grid Sensitivity Divergence on LES Grid
  31 Aug, 2020
Reference on not changing y+ while doing a grid sensitivity study:

Originally Posted by sbaffini View Post
Indeed, if y+ =4 is relative to the finest grid, it is confirmed to be a wall function problem. I can't double check now, but I'm pretty sure that the k-omega sst model in CFX uses an all y+ wall function, which means that a wall function is always active. While, in theory, such wall functions should be insensitive to the specific y+ value, they are not perfect and your case is very far from the typical wall function scenario (equilibrium boundary layer), so what you obtain is actually expected.

The only viable solution here, and I suggest you investigate it for your other models as well, is to redistribute cells in your grid to always be within y+ = 1-2, but no more. In any case, the important thing is that you can't have y+ changing between grids when doing a grid refinement.

EDIT: I know, it sucks...
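Keeping y+ fixed across grids means re-estimating the first-cell height every time. A common back-of-the-envelope estimate goes: compute the skin-friction coefficient from a flat-plate correlation, turn it into a friction velocity, and invert the y+ definition. The sketch below is my own, using a textbook flat-plate correlation (Cf ≈ 0.026 Re^(-1/7)), not anything from the thread:

```python
import numpy as np

def first_cell_height(rho, U, L, mu, y_plus):
    Re = rho * U * L / mu                # plate-length Reynolds number
    Cf = 0.026 / Re ** (1.0 / 7.0)       # flat-plate skin-friction estimate
    tau_w = 0.5 * Cf * rho * U ** 2      # wall shear stress
    u_tau = np.sqrt(tau_w / rho)         # friction velocity
    return y_plus * mu / (rho * u_tau)   # first-cell height, m

# Air at sea level over a 1 m plate at 30 m/s, targeting y+ = 1
h = first_cell_height(rho=1.225, U=30.0, L=1.0, mu=1.8e-5, y_plus=1.0)
print(f"first cell height ~ {h * 1e6:.0f} microns")
```

The correlation is only an a priori estimate; the actual y+ should always be checked from the converged solution.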
► Y+ value for Large Eddy Simulation
  31 Aug, 2020
Explanation of y+ as it relates to the viscous sublayer and the advection scheme:

Originally Posted by cfdnewbie View Post
yes, at least in the viscous sublayer. The size of your grid cell (or the number of points per unit length) determines the smallest scale you can catch on a given grid. From information theory, the Nyquist theorem tells us that we need at least 2 points per wavelength to represent a frequency (we need to be able to detect the sign change). However, 2 points per wavelength is just for Fourier-type approximations. For other schemes like O1 FV you need a lot more, maybe 6 to 10 to accurately capture a wavelength. Let's assume that you have the same grid in all of the flow (i.e. high resolution everywhere, no grid stretching or such). Then the smallest scale you can capture is determined by your grid and scheme; the better/finer, the smaller the scale.

Of course, most grids will coarsen away from the wall, so the smallest scale will "grow bigger" away from the wall as well.

Ha, that's the crux of LES :) Of course, the bigger the y+, the fewer of the small scales you will catch, but does that change the result of the bigger scales?

The answer is not straightforward, but I'll try to make it short:

Let's talk about NS-equations (or any non-linear conservation eqns). The scales represented in the equations are coupled by the non-linearity of the equations, i.e. what happens on one scale will (eventually) reach all other scales (also known as the butterfly effect). So the NS eqns represent the full "nature" with all its scales and interactions. We now truncate our "nature" by resolving only the larger scales, since our grid is too coarse.... what will happen? Will the large scales be influenced by the lack of small scales?

Hell, yeah, they will. We are lacking the balancing interaction of the small scales, since we don't have these scales. We are also lacking the physical effects that take place at small scales (dissipation).... so we have production of turbulence at large scales, the energy is handed down through the medium scales but is NOT dissipated at the small scales, since they are simply not present in our computation. Will that influence the large scales? Definitely!

That's why LES people add some type of viscosity (effect of small scales) to their computations, otherwise, their simulations would very likely just blow up!

Hope this helps!
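The points-per-wavelength argument above is easy to check numerically: sample one wavelength of a sine and reconstruct it with straight lines (a stand-in for a low-order scheme), then look at the worst-case error. The script below is my own illustration:

```python
import numpy as np

def max_linear_interp_error(n_points):
    xs = np.linspace(0.0, 1.0, n_points)   # n_points over one wavelength
    xf = np.linspace(0.0, 1.0, 2001)       # fine reference grid
    recon = np.interp(xf, xs, np.sin(2 * np.pi * xs))
    return np.max(np.abs(recon - np.sin(2 * np.pi * xf)))

errs = {n: max_linear_interp_error(n) for n in (3, 6, 10, 20)}
print(errs)
# 3 points (the bare "Nyquist" sampling) misses the wave entirely with a
# piecewise-linear reconstruction; roughly 6-10 points per wavelength are
# needed before a low-order reconstruction gets close, as stated above.
```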

► RANS
  31 Aug, 2020
Originally Posted by vinerm View Post
That's a wrong notion that RANS or EVM models are introduced to get faster results or are expected to be used with a coarse mesh. There is no such assumption behind the development of these models. The only assumption in EVM is that the turbulence is isotropic, and non-EVM RANS models, such as RSM, don't even have that assumption.

And when it comes to wall treatment, it is not directly linked with the turbulence model; even LES requires wall treatment. y^+ is a non-dimensional (Reynolds) number, and for almost all industrial fluids, theoretically as well as experimentally, it is found that u^+ = y^+ up to y^+ of 5. And if it is linear within this limit, it does not matter if you have 10 points or just 1 point; the line would be the same. So, y^+ smaller than 1 is overkill and does not help with anything.

The boundary conditions for both k and \varepsilon at the wall are 0.
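The u^+ = y^+ claim above can be put next to the log law to see where the two descriptions cross. The script below is a sketch using typical log-law constants (\kappa = 0.41, B = 5.0), not anything from the quoted post; the crossover lands near the classic y+ ≈ 11:

```python
import numpy as np
from scipy.optimize import brentq

kappa, B = 0.41, 5.0

def linear_law(yp):
    return yp                       # u+ = y+  (valid up to y+ ~ 5)

def log_law(yp):
    return np.log(yp) / kappa + B   # u+ = ln(y+)/kappa + B

# where the two laws intersect -- the classic y+ ~ 11 estimate
y_cross = brentq(lambda yp: linear_law(yp) - log_law(yp), 5.0, 30.0)
print(f"laws cross at y+ = {y_cross:.1f}")
```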
► What I've done in the past years and may need someone else to pick it back up
  18 Aug, 2020
This blog post aims to pass the baton for the work I've done in the past to anyone who wants to pick it back up, partially or completely. It covers work I was still doing (or trying to do) until writing Hanging my volunteer gloves and moving to a new phase of my life.

This blog post could potentially be edited as time goes on, as I remember things I've done in the past which should be picked up by someone else:
  1. Generating version template pages and logos for said versions at - this is explained here: and here
  2. Writing and testing installation instructions at - The objective was to ensure that the less knowledgeable user would still be able to compile+install OpenFOAM from source code with a much higher success rate, than following the succinct instructions available at the official websites.
  3. Updating the release version links at the top right-most corner of
  4. Uh... several other things listed at, mostly listed here:
  5. Contributing to bug reports and fixes at
  6. Moderator work here at the forum, including:
    1. Hunting down spam, which nowadays is mostly automated, but not fully automated.
    2. Moving threads to the correct sub-forums.
    3. Re-arranging forums to make it easier for people to ask and answer questions, as well as finding existing answers.
    4. Warning forum members when they've not followed the rules...
    5. I wanted to have pruned all of the threads on the main OpenFOAM forum and place them in their correct sub-forums, but never got around to it. There is a thread on the moderator forum that explains how to streamline the process.
    6. I wanted to have finished moving posts into independent threads out of this still large thread:
    7. Also out of this one:
  7. Had a list of posts/threads I wanted to look into... which is now written on this wiki page on my central repository for these kinds of notes: What I wanted to still have done for the OpenFOAM community, but never managed to find the time for it
  8. And had a list of bugs I wanted to solve: Bugs on OpenFOAM's bug tracker I wanted to tackle, but never managed to find the time for it
  9. I have over 50 repositories at - most of them related to OpenFOAM, and they will be left as-is for the years to come. If you want to continue working on them and even take over maintenance, open an issue on the respective repository.
► Hanging my volunteer gloves and moving to a new phase of my life
  18 Aug, 2020
TL;DR: As of 2020, I can only help during office hours, at work, if paid and/or if it affects our projects, namely what we use in OpenFOAM itself and blueCFD-Core.

Full post:
So nearly 2 years after my blog post Why I contribute to the OpenFOAM forum(s), wiki(s) and the public community, I'm writing this blog post you are reading now.

My last 3 thread posts at the forums in CFD-Online this year were on May 7th, February 27th and January 20th. And before that, it was 10 posts over my winter vacation in the last week of 2019. Before that, it averaged out to around 1 post/month. I have 10,956 posts here at the forum, which still averages out to 2.62 posts/day.

I'm currently on vacation, mid-August 2020, and am writing this since I'm unable to help the way I used to in the past.

So what happened?
In a short description: borderline burnout + ~30 kg overweight.

In other words, I was still able to work, but I was having difficulty maintaining a stable life, which hadn't been healthy for years now, and I was overly stressed even when there was not much reason to be stressed...

What am I doing now, since early 2020?
  1. Changed my diet, namely changed my eating regimen to something I should have done over 20 years ago.
  2. Increased my physical activity to a much healthier dosage.
  3. Am moving on with my life to a new phase where I actually have to behave as a grown-up, especially given I'm already 40 years old as I write this.

What does this mean for what I can do to help in the community?
Given my past efforts over a period of 10 years, I'm writing this blog post as an official stance on how much I will be able to help in the future:
  1. The majority (~99.9%) of the public contributions will be done within working hours at my job; in other words, during office hours, at work, if paid and/or if it affects our projects, namely what we use in OpenFOAM itself and blueCFD-Core.
  2. The remaining 0.1% outside of my job will mostly be the bug tracker at, given that I can't be at both and :(
  3. Everything else where I've helped in the past will be once in a blue moon, be it at the forum or
  4. I don't know how many or which community/official OpenFOAM workshops I will attend in the future. I already had to give up on the Iberian User workshop of 2018 due to health reasons, i.e., what finally led me to this decision in 2020.
This has been gradually occurring since at least 2015, but it has effectively come to this stopping point.

What do I ask of you, as you read this blog post?

Associated to this blog post, I'm writing another blog post which I may need to update in the near future: What I've done in the past years and may need someone else to pick it back up
edit: Aiming to wrap up writing said blog post by the end of the 19th of August 2020.

Signing off for now:
Some years ago, after someone asked a vague question in a forum post, I went on a rant along the lines of: "as people grow older, the more they know and the more responsibilities they have, therefore the less free time they have to come and help here... so the less information you provide, the less likely you will get the answer you need".

In a way, my time has come and I need to move on with my life. But I was stressing out too much to notice it sooner. Fortunately, I should still be in time to keep moving forward and hopefully be able to help the community more in the future.

This has happened to various authors of code that is currently, or was previously, in OpenFOAM: they helped people publicly over several years and ended up having to pull away from the community, because it's not easy to achieve a balance between life and working as a volunteer.

Fun fact:
Even if I don't post in the next 20 years, it would still average out to about 1 post/day... :cool::rolleyes:
► 10 crucial parameters to check before committing to a CFD software for academia
    4 Aug, 2020
I have put together a comprehensive list of 10 crucial parameters that you, as a researcher or a teacher, should check with the CFD software provider, before committing to their software.

curiosityFluids

► Creating curves in blockMesh (An Example)
  29 Apr, 2019

In this post, I’ll give a simple example of how to create curves in blockMesh. For this example, we’ll look at the following basic setup:

As you can see, we’ll be simulating the flow over a bump defined by the curve:

y = H\sin(\pi x)

First, let’s look at the basic blockMeshDict for this blocking layout WITHOUT any curves defined:

/*--------------------------------*- C++ -*----------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:
    \\  /    A nd           | Version:  6
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      blockMeshDict;
}

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

convertToMeters 1;

vertices
(
    (-1 0 0)    // 0
    (0 0 0)     // 1
    (1 0 0)     // 2
    (2 0 0)     // 3
    (-1 2 0)    // 4
    (0 2 0)     // 5
    (1 2 0)     // 6
    (2 2 0)     // 7

    (-1 0 1)    // 8
    (0 0 1)     // 9
    (1 0 1)     // 10
    (2 0 1)     // 11
    (-1 2 1)    // 12
    (0 2 1)     // 13
    (1 2 1)     // 14
    (2 2 1)     // 15
);

blocks
(
    hex (0 1 5 4 8 9 13 12) (20 100 1) simpleGrading (0.1 10 1)
    hex (1 2 6 5 9 10 14 13) (80 100 1) simpleGrading (1 10 1)
    hex (2 3 7 6 10 11 15 14) (20 100 1) simpleGrading (10 10 1)
);

// Patch names below are illustrative; the names were lost from the
// original listing, but the face lists and types are as posted.
boundary
(
    inlet
    {
        type patch;
        faces
        (
            (0 8 12 4)
        );
    }
    outlet
    {
        type patch;
        faces
        (
            (3 7 15 11)
        );
    }
    bottom
    {
        type wall;
        faces
        (
            (0 1 9 8)
            (1 2 10 9)
            (2 3 11 10)
        );
    }
    top
    {
        type patch;
        faces
        (
            (4 12 13 5)
            (5 13 14 6)
            (6 14 15 7)
        );
    }
    frontAndBack
    {
        type empty;
        faces
        (
            (8 9 13 12)
            (9 10 14 13)
            (10 11 15 14)
            (1 0 4 5)
            (2 1 5 6)
            (3 2 6 7)
        );
    }
);

// ************************************************************************* //

This blockMeshDict produces the following grid:

It is best practice in my opinion to first make your blockMesh without any edges. This lets you see if there are any major errors resulting from the block topology itself. From the results above, we can see we’re ready to move on!

So now we need to define the curve. In blockMesh, curves are added using the edges sub-dictionary. This is a simple sub dictionary that is just a list of interpolation points:

edges
(
    polyLine 1 2
    (
        (0      0       0)
        (0.1    0.0309016994    0)
        (0.2    0.0587785252    0)
        (0.3    0.0809016994    0)
        (0.4    0.0951056516    0)
        (0.5    0.1     0)
        (0.6    0.0951056516    0)
        (0.7    0.0809016994    0)
        (0.8    0.0587785252    0)
        (0.9    0.0309016994    0)
        (1      0       0)
    )

    polyLine 9 10
    (
        (0      0       1)
        (0.1    0.0309016994    1)
        (0.2    0.0587785252    1)
        (0.3    0.0809016994    1)
        (0.4    0.0951056516    1)
        (0.5    0.1     1)
        (0.6    0.0951056516    1)
        (0.7    0.0809016994    1)
        (0.8    0.0587785252    1)
        (0.9    0.0309016994    1)
        (1      0       1)
    )
);

The sub-dictionary above is just a list of points on the curve y=H\sin(\pi x). The interpolation method is polyLine (straight lines between interpolation points). An alternative interpolation method could be spline.

The following mesh is produced:

Hopefully this simple example will help some people looking to incorporate curved edges into their blockMeshing!
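As a side note, the interpolation points in the edges list follow y = H\sin(\pi x) with H = 0.1, so they can be regenerated with a short script rather than typed by hand (my own sketch, not part of the original post):

```python
import numpy as np

H = 0.1  # bump height used in the example
points = [(x, H * np.sin(np.pi * x), 0.0) for x in np.linspace(0.0, 1.0, 11)]
for x, y, z in points:
    # prints rows in the same format as the blockMeshDict edges list
    print(f"({x:.1f}\t{y:.10f}    {z:g})")
```

Swapping the last coordinate to 1.0 reproduces the second polyLine (between vertices 9 and 10).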


This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Creating synthetic Schlieren and Shadowgraph images in Paraview
  28 Apr, 2019

Experimentally visualizing high-speed flow was a serious challenge for decades. Before the advent of modern laser diagnostics and velocimetry, the only real techniques for visualizing high speed flow fields were the optical techniques of Schlieren and Shadowgraph.

Today, Schlieren and Shadowgraph remain an extremely popular means to visualize high-speed flows. In particular, Schlieren and Shadowgraph allow us to visualize complex flow phenomena such as shockwaves, expansion waves, slip lines, and shear layers very effectively.

In CFD there are many reasons to recreate these types of images. First, they look awesome. Second, if you are doing a study comparing to experiments, occasionally the only full-field data you have could be experimental images in the form of Schlieren and Shadowgraph.

Without going into detail about Schlieren and Shadowgraph themselves, primarily you just need to understand that Schlieren and Shadowgraph represent visualizations of the first and second derivatives of the flow field refractive index (which is directly related to density).

In Schlieren, a knife-edge is used to selectively cut off light that has been refracted. As a result you get a visualization of the first derivative of the refractive index in the direction normal to the knife edge. So for example, if an experiment used a horizontal knife edge, you would see the vertical derivative of the refractive index, and hence the density.

For Shadowgraph, no knife edge is used, and the images are a visualization of the second derivative of the refractive index. Unlike the Schlieren images, shadowgraph has no direction and shows you the Laplacian of the refractive index field (or density field).

In this post, I’ll use a simple case I did previously ( as an example and produce some synthetic Schlieren and Shadowgraph images using the data.

So how do we create these images in paraview?

Well as you might expect, from the introduction, we simply do this by visualizing the gradients of the density field.

In ParaView the necessary tool for this is:

Gradient of Unstructured DataSet:

Finding “Gradient of Unstructured DataSet” using the Filters-> Search

Once you’ve selected this, we then need to set the properties so that we are going to operate on the density field:

Change the “Scalar Array” Drop down to the density field (rho), and change the name to Synthetic Schlieren

To do this, simply set the “Scalar Array” to the density field (rho), and change the result array name to SyntheticSchlieren. Now you should see something like this:

This is NOT a synthetic Schlieren Image – but it sure looks nice

There are a few problems with the above image: (1) Schlieren images are directional, and this is a magnitude; (2) Schlieren and Shadowgraph images are black and white. So if you really want your Schlieren images to look like the real thing, you should change to black and white. ALTHOUGH, Cold and Hot, Black-Body Radiation, and Rainbow Desaturated all look pretty amazing.

To fix these, you should only visualize one component of the Synthetic Schlieren array at a time, and you should visualize using the X-ray color preset:

The results look pretty realistic:

Horizontal Knife Edge

Vertical Knife Edge

Now how about ShadowGraph?

The process of computing the shadowgraph field is very similar. However, recall that shadowgraph visualizes the Laplacian of the density field. BUT THERE IS NO LAPLACIAN CALCULATOR IN PARAVIEW!?! Haha no big deal. Just remember the basic vector calculus identity:

\nabla^2 \phi = \nabla \cdot \left( \nabla \phi \right)

Therefore, in order for us to get the Shadowgraph image, we just need to take the Divergence of the Synthetic Schlieren vector field!

To do this, we just have to use the Gradient of Unstructured DataSet tool again:

This time, Deselect “Compute Gradient” and the select “Compute Divergence” and change the Divergence array name to Shadowgraph.

Visualized in black and white, we get a very realistic looking synthetic Shadowgraph image:

Shadowgraph Image

So what do the values mean?

Now this is an important question, but a simple one to answer. And the answer is... not much. Physically, we know exactly what these quantities are: Schlieren is the gradient of the density field in one direction, and Shadowgraph is the Laplacian of the density field. But what you need to remember is that both Schlieren and Shadowgraph are qualitative images. The position of the knife edge, the brightness of the light, etc., all affect how a real experimental Schlieren or Shadowgraph image will look.

This means, very often, in order to get the synthetic Schlieren to closely match an experiment, you will likely have to change the scale of your synthetic images. In the end though, you can end up with extremely realistic and accurate synthetic Schlieren images.
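The gradient-then-divergence recipe also works outside ParaView. As a sketch (my own, with a synthetic Gaussian density field standing in for CFD data), NumPy reproduces both images and lets us sanity-check the divergence-of-gradient identity against the analytic Laplacian:

```python
import numpy as np

# Synthetic density field: a smooth Gaussian "blob" on a uniform grid
x = np.linspace(-3.0, 3.0, 201)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
rho = np.exp(-(X**2 + Y**2))

# "Synthetic Schlieren": one gradient component per knife-edge orientation
drho_dx, drho_dy = np.gradient(rho, dx, dx)
schlieren_x = drho_dx  # horizontal-knife-edge image

# "Shadowgraph": Laplacian, computed as the divergence of the gradient
ddx, _ = np.gradient(drho_dx, dx, dx)
_, ddy = np.gradient(drho_dy, dx, dx)
shadowgraph = ddx + ddy

# Sanity check against the analytic Laplacian of the Gaussian
lap_exact = (4.0 * (X**2 + Y**2) - 4.0) * rho
err = np.max(np.abs(shadowgraph - lap_exact))
print("max Laplacian error:", err)
```

On real data you would simply substitute the solver's density array for `rho` (and use the actual cell spacing); the images are then just grayscale plots of `schlieren_x` and `shadowgraph`.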

Hopefully this post will be helpful to some of you out there. Cheers!

► Solving for your own Sutherland Coefficients using Python
  24 Apr, 2019

Sutherland’s equation is a useful model for the temperature dependence of the viscosity of gases. I give a few details about it in this post:

The law is given by:

\mu=\mu_o\frac{T_o + C}{T+C}\left(\frac{T}{T_o}\right)^{3/2}

It is also often simplified (as it is in OpenFOAM) to:

\mu=\frac{C_1 T^{3/2}}{T+C}=\frac{A_s T^{3/2}}{T+T_s}
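The two forms are the same law: with T_s = C, the coefficients are related by A_s = \mu_o (T_o + C)/T_o^{3/2}. A quick numerical check (the reference values below are commonly quoted air values, used purely for illustration, not data from this post):

```python
def sutherland_two_constant(T, mu0, T0, C):
    return mu0 * (T0 + C) / (T + C) * (T / T0) ** 1.5

def sutherland_one_constant(T, As, Ts):
    return As * T ** 1.5 / (T + Ts)

mu0, T0, C = 1.716e-5, 273.15, 110.4   # illustrative air reference values
As = mu0 * (T0 + C) / T0 ** 1.5        # conversion between the two forms

for T in (250.0, 500.0, 1000.0):
    assert abs(sutherland_two_constant(T, mu0, T0, C)
               - sutherland_one_constant(T, As, C)) < 1e-15
```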

In order to use these equations, obviously, you need to know the coefficients. Here, I'm going to show you how you can simply create your own Sutherland coefficients using least-squares fitting in Python 3.

So why would you do this? Basically, there are two main reasons. First, if you are not using air, the Sutherland coefficients can be hard to find. If you happen to find them, they can be hard to reference, and you may not know how accurate they are. So creating your own Sutherland coefficients makes a ton of sense from an academic point of view. Second, in your thesis or paper, you can say that you created them yourself, and, not only that, you can give an exact number for the error in the temperature range you are investigating.

So let’s say we are looking for a viscosity model of Nitrogen N2 – and we can’t find the coefficients anywhere – or for the second reason above, you’ve decided its best to create your own.

By far the simplest way to achieve this is using Python and the scipy.optimize package.

Step 1: Get Data

The first step is to find some well known, and easily cited, source for viscosity data. I usually use the NIST webbook (, but occasionally the temperatures there aren’t high enough. So you could also pull the data out of a publication somewhere. Here I’ll use the following data from NIST:

Temperature (K) Viscosity (Pa.s)
400 0.000022217
600 0.000029602
800 0.000035932
1000 0.000041597
1200 0.000046812
1400 0.000051704
1600 0.000056357
1800 0.000060829
2000 0.000065162

This data is the dynamic viscosity of nitrogen (N2) pulled from the NIST database at 0.101 MPa. (Note that in these ranges viscosity should be temperature dependent only.)

Step 2: Use python to fit the data

If you are unfamiliar with Python, this may seem a little foreign to you, but Python is extremely simple.

First, we need to load the necessary packages (here, we’ll load numpy, scipy.optimize, and matplotlib):

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

Now we define the Sutherland function:

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

Next we input the data from Step 1:

T = [400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]
mu = [0.000022217, 0.000029602, 0.000035932, 0.000041597, 0.000046812, 0.000051704, 0.000056357, 0.000060829, 0.000065162]

Then we fit the data using the curve_fit function from scipy.optimize. This function uses nonlinear least-squares minimization to solve for the unknown coefficients. It returns two arrays: popt, which contains our desired coefficients As and Ts, and pcov, their estimated covariance.

popt, pcov = curve_fit(sutherland, T, mu)

Now we can output the coefficients to the screen and plot the results if we so wish:

print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')

plt.plot(T, mu, 'o')
plt.plot(T, sutherland(np.array(T), *popt))
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()
Overall the entire code looks like this:

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

T = [400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]
mu = [0.000022217, 0.000029602, 0.000035932, 0.000041597, 0.000046812, 0.000051704, 0.000056357, 0.000060829, 0.000065162]

popt, pcov = curve_fit(sutherland, T, mu)
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')

plt.plot(T, mu, 'o')
plt.plot(T, sutherland(np.array(T), *popt))
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()

And the results for nitrogen gas in this range are As = 1.55902E-6 and Ts = 168.766 K. Now we have our own coefficients that we can quantify the error on and use in our academic research! Wahoo!
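Quantifying that error is what makes self-fit coefficients citable. As a sketch, using the coefficients reported above and the NIST table from Step 1, the maximum relative deviation over the fitted range can be computed directly:

```python
# NIST N2 data from Step 1
T = [400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]
mu = [0.000022217, 0.000029602, 0.000035932, 0.000041597, 0.000046812,
      0.000051704, 0.000056357, 0.000060829, 0.000065162]

As, Ts = 1.55902e-6, 168.766  # fitted coefficients reported above

# relative error of the fit at each data point
errors = [abs(As * t**1.5 / (t + Ts) - m) / m for t, m in zip(T, mu)]
print('max relative error = {:.2%}'.format(max(errors)))
```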


In this post, we looked at how to take a database of viscosity-temperature data and use the Python package SciPy to solve for the unknown Sutherland viscosity coefficients. Data was pulled from the NIST database, loaded into Python, and curve-fit using the scipy.optimize curve_fit function.

This task could also easily be accomplished using the Matlab curve-fitting toolbox, or perhaps in Excel. However, I have not had good success using the Excel solver to solve for unknown coefficients.

► Tips for tackling the OpenFOAM learning curve
  23 Apr, 2019

The most common complaint I hear, and the most common problem I observe, with OpenFOAM is its supposed “steep learning curve”. I would argue, however, that for those who want to practice CFD effectively, the learning curve of any other software is just as steep.

There is a distinction that should be made between “user friendliness” and the learning curve required to do good CFD.

While I concede that other commercial programs have better basic user friendliness (a nice graphical interface, drop-down menus, point-and-click options, etc.), it is just as likely (if not more likely) that you will get bad results in those programs as in OpenFOAM. In fact, to some extent, the high user friendliness of commercial software can encourage a dangerous level of ignorance. Additionally, once you are comfortable operating in the OpenFOAM world, the possibilities become endless, and things like code modification and bash and Python scripting can make OpenFOAM workflows EXTREMELY efficient and powerful.

Anyway, here are a few tips to more easily tackle the OpenFOAM learning curve:

(1) Understand CFD

This may seem obvious… but it’s not to some. Troubleshooting bad simulation results, or unstable simulations that crash, is impossible if you don’t have at least a basic understanding of what is happening under the hood. My favorite books on CFD are:

(a) The Finite Volume Method in Computational Fluid Dynamics: An Advanced Introduction with OpenFOAM® and Matlab by
F. Moukalled, L. Mangani, and M. Darwish

(b) An Introduction to Computational Fluid Dynamics: The Finite Volume Method by H. K. Versteeg and W. Malalasekera

(c) Computational Fluid Dynamics: The Basics with Applications by John D. Anderson

(2) Understand fluid dynamics

Again, this may seem obvious and not very insightful. But if you are going to assess the quality of your results, and understand and appreciate the limitations of the various assumptions you are making – you need to understand fluid dynamics. In particular, you should familiarize yourself with the fundamentals of turbulence, and turbulence modeling.

(3) Avoid building cases from scratch

Whenever I start a new case, I find the tutorial case that most closely matches what I am trying to accomplish. This greatly speeds things up. It will take you a super long time to set up any case from scratch – and you’ll probably make a bunch of mistakes, forget key variable entries etc. The OpenFOAM developers have done a lot of work setting up the tutorial cases for you, so use them!

As you continue to work in OpenFOAM on different projects, you should be compiling a library of your own templates based on previous work.

(4) Using Ubuntu makes things much easier

This is strictly my opinion, but I have found it to be true. Yes, Ubuntu has its own learning curve, but I have found that OpenFOAM works seamlessly in Ubuntu or any Ubuntu-like Linux environment. OpenFOAM now has Windows flavors using Docker and the like, but I can’t really speak to how well they work – mostly because I’ve never bothered. Once you unlock the power of Linux, the only reason to use Windows is Microsoft Office (unless you’re a gamer – and even then, more and more games are now on Linux). Not only that, the VAST majority of the OpenFOAM forum posts and troubleshooting you’ll find on the internet are from Ubuntu users.

I much prefer to use Ubuntu with a virtual Windows environment inside it. My current office setup is a primary desktop running Ubuntu, plus a Windows VirtualBox VM, plus a laptop running Windows that I use for traditional Windows-type stuff. Dual booting is another option, but moving seamlessly between the environments is easier with a VM.

(5) If you’re struggling, simplify

Unless you know exactly what you are doing, you probably shouldn’t dive into the most complicated version of whatever you are trying to solve/study. It is best to start simple, and layer the complexity on top. This way, when something goes wrong, it is much easier to figure out where the problem is coming from.

(6) Familiarize yourself with the cfd-online forum

If you are having trouble, the cfd-online forum is super helpful. Most likely, someone else has had the same problem you have. If not, the people there are extremely helpful, and overall the forum is an extremely positive environment for working out the kinks in your simulations.

(7) The results from checkMesh matter

If you run checkMesh and your mesh fails – fix your mesh. This is important. Especially if you are not planning on familiarizing yourself with the available numerical schemes in OpenFOAM, you should at least have a beautiful mesh. In particular, if your mesh is highly non-orthogonal, you will have serious problems. If you insist on using a bad mesh, you will probably need to manipulate the numerical schemes. A great source for how schemes should be manipulated based on mesh non-orthogonality is:
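As a rough sketch of the kind of scheme manipulation meant here (keyword syntax varies between OpenFOAM versions, so treat this as illustrative rather than a drop-in file), non-orthogonality is usually handled with limited snGrad/Laplacian schemes in system/fvSchemes and extra corrector loops in system/fvSolution:

```
// system/fvSchemes -- sketch for a moderately non-orthogonal mesh
snGradSchemes
{
    default         limited 0.5;   // blend of corrected and uncorrected
}

laplacianSchemes
{
    default         Gauss linear limited 0.5;
}

// system/fvSolution -- inside the SIMPLE/PIMPLE sub-dictionary:
//     nNonOrthogonalCorrectors 2;
```

The limiting coefficient (0 to 1) and the number of correctors generally need to grow as the mesh non-orthogonality gets worse; check the syntax against your OpenFOAM version.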

(8) CFL Number Matters

If you are running a transient case, the Courant–Friedrichs–Lewy (CFL) number matters… a lot. Not just for accuracy (if you are trying to capture a transient event) but for stability. If your time step is too large, you are going to have problems. There is a solid mathematical basis for this stability criterion for advection-diffusion problems. Additionally, the Navier-Stokes equations are very non-linear, and the complexity of the problem, the quality of your grid, etc. can make the simulation even less stable. When I have a transient simulation crash, if I know my mesh is OK, I decrease the time step by a factor of 2. More often than not, this solves the problem.
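The rule of thumb above can be written down directly: for pure advection the Courant number is Co = u*dt/dx, so the largest acceptable time step follows immediately (the numbers here are made up for illustration):

```python
def max_timestep(co_target, dx, u):
    """Largest dt giving Courant number Co = u*dt/dx <= co_target."""
    return co_target * dx / u

dt = max_timestep(0.5, 1.0e-3, 10.0)  # e.g. 1 mm cells, 10 m/s flow
print(dt)  # -> 5e-05 s

# The crash-recovery heuristic from the text: halve the time step
dt_retry = dt / 2.0
```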

For larger time steps, you can add outer loops to solvers based on the PIMPLE algorithm, but you may end up losing important transient information. An excellent explanation of how to do this is given in the book by T. Holzmann:

For the record, this point falls under point (1), Understand CFD.

(9) Work through the OpenFOAM Wiki “3 Week” Series

If you are starting OpenFOAM for the first time, it is worth it to work through an organized program of learning. One such example (and there are others) is the “3 Weeks Series” on the OpenFOAM wiki:

If you are a graduate student whose only job is to learn OpenFOAM, it will not even take 3 weeks. The series touches on all the necessary points you need to get started.

(10) OpenFOAM is not a second-tier software – it is top tier

I know some people who have started out with the attitude, from the get-go, that they should be using a different software. They think that somehow open source means it is not good. This is a pretty silly attitude. Many top researchers around the world are now using OpenFOAM or some other open-source package, and the number of OpenFOAM citations has grown consistently every year.

In my opinion, the only place where mainstream commercial CFD packages will persist is in industry labs where cost is no concern and changing software is more trouble than it’s worth. OpenFOAM has been widely benchmarked and widely validated, from fundamental flows to hypersonics (see any of my 17 publications using it for this). If your results aren’t good, you are probably doing something wrong. If you have the attitude that you would rather be using something else, and are bitter that your supervisor wants you to use OpenFOAM, then when something goes wrong you will immediately think there is something wrong with the program… which is silly – and you may quit.

(11) Meshing… Ugh Meshing

For the record, meshing is an art in any software. But meshing is the only area where I will concede any limitation in OpenFOAM. HOWEVER, as I have outlined in my previous post, most things can be accomplished in OpenFOAM, and there are enough third-party meshing programs out there that you should have no problem.


Basically, if you are starting out in CFD or OpenFOAM, you need to put in the time. If you are expecting to be able to just sit down and produce magnificent results, you will be disappointed. You might quit. And frankly, that’s a pretty stupid attitude. However, if you accept that CFD and fluid dynamics in general are massive fields under constant development, and are willing to get up to speed, there are few limits to what you can accomplish.

Please take the time! If you want to do CFD, learning OpenFOAM is worth it. Seriously worth it.

This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software, and owner of the OPENFOAM® and OpenCFD® trade marks.

► Automatic Airfoil C-Grid Generation for OpenFOAM – Rev 1
  22 Apr, 2019
Airfoil Mesh Generated with

Here I will present something I’ve been experimenting with regarding a simplified workflow for meshing airfoils in OpenFOAM. If you’re like me (and who knows if you are), you simulate a lot of airfoils. In my case it’s partly because of my involvement in various UAV projects, partly through consulting projects, and also for testing and benchmarking OpenFOAM.

Because there is so much data out there on airfoils, they are a good way to test your setups and benchmark solver accuracy. But going from an airfoil .dat coordinate file to a mesh can be a bit of a pain, especially if you are starting from scratch.

The main ways that I have meshed airfoils to date have been:

(a) Mesh it as a C- or O-grid in blockMesh (I have a few templates kicking around for this)
(b) Generate a “ribbon” geometry and mesh it with cfMesh
(c) Or, back in the day when I was a PhD student, I could use Pointwise – oh how I miss it.

But getting the mesh to look good was always somewhat tedious. So I attempted to come up with a Python script that takes the airfoil data file and minimal inputs, and outputs a blockMeshDict file that you just have to run.

The goals were as follows:
(a) Create a C-Grid domain
(b) be able to specify boundary layer growth rate
(c) be able to set the first layer wall thickness
(e) be mostly automatic (few user inputs)
(f) have good mesh quality – pass all checkMesh tests
(g) Quality is consistent – meaning when I make the mesh finer, the quality stays the same or gets better
(h) be able to do both closed and open trailing edges
(i) be able to handle most airfoils (up to high cambers)
(j) automatically handle hinge and flap deflections

In Rev 1 of this script, I believe I have accomplished (a) through (g). Presently, it can only handle airfoils with closed trailing edges. Hinge and flap deflections are not possible, and highly cambered airfoils do not give very satisfactory results.

There are existing tools and scripts for automatically meshing airfoils, but I personally wasn’t happy with the results. I also thought this would be a good opportunity to illustrate one of the ways Python can be used to interface with OpenFOAM. So please view this as both a potentially useful script and something you can dissect to learn how to use Python with OpenFOAM. This first version of the script leaves a lot open for improvement, so some may take it and tailor it to their needs!

Hopefully, this is useful to some of you out there!


You can download the script here:

Here you will also find a template based on the airfoil2D OpenFOAM tutorial.


(1) Copy to the root directory of your simulation case.
(2) Copy your airfoil coordinates in Selig .dat format into the same folder location.
(3) Modify to your desired values. Specifically, make sure that the string variable airfoilFile is referring to the right .dat file
(4) In the terminal run: python3
(5) If no errors – run blockMesh

You need to run this with Python 3, and you need to have numpy installed.


The inputs for the script are very simple:

ChordLength: This is simply the airfoil chord length, if not equal to 1. The airfoil .dat file should have a chord length of 1; this variable allows you to scale the domain to a different size.

airfoilFile: This is a string with the name of the airfoil .dat file. It should be in the same folder as the Python script, and both should be in the root folder of your simulation directory. The script writes a blockMeshDict to the system folder.

DomainHeight: This is the height of the domain in multiples of chords.

WakeLength: Length of the wake domain in multiples of chords

firstLayerHeight: This is the height of the first layer. To estimate the requirement for this size, you can use the curiosityFluids y+ calculator

growthRate: Boundary layer growth rate

MaxCellSize: This is the max cell size along the centerline from the leading edge of the airfoil. Some cells will be larger than this depending on the gradings used.

The following inputs are used to improve the quality of the mesh. I have had pretty good results messing around with these to get checkMesh compliant grids.

BLHeight: This is the height of the boundary layer block off of the surfaces of the airfoil

LeadingEdgeGrading: Grading from the 1/4 chord position to the leading edge

TrailingEdgeGrading: Grading from the 1/4 chord position to the trailing edge

inletGradingFactor: This is a grading factor that modifies the grading along the inlet as a multiple of the leading edge grading, and can help improve mesh uniformity.

trailingBlockAngle: This is an angle in degrees that expresses the angles of the trailing edge blocks. This can reduce the aspect ratio of the boundary cells at the top and bottom of the domain, but can make other mesh parameters worse.
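Putting the inputs together, the user section of the script would look something like this (the variable names follow the descriptions above, but every value and the airfoil file name are placeholders I made up, not recommendations):

```python
# Illustrative user inputs for the airfoil C-grid generator script.
# All values and the file name are placeholders, not recommendations.
ChordLength         = 1.0           # scale factor; the .dat file should be unit chord
airfoilFile         = 'myAirfoil.dat'  # Selig-format coordinates (hypothetical name)
DomainHeight        = 20.0          # in chords
WakeLength          = 20.0          # in chords
firstLayerHeight    = 1.0e-5        # first wall-normal cell height
growthRate          = 1.2           # boundary layer expansion ratio
MaxCellSize         = 0.05          # max centerline cell size

# Mesh-quality tuning inputs
BLHeight            = 0.1           # boundary layer block height
LeadingEdgeGrading  = 0.5           # grading from 1/4 chord to leading edge
TrailingEdgeGrading = 2.0           # grading from 1/4 chord to trailing edge
inletGradingFactor  = 1.0           # multiplier on the leading edge grading at the inlet
trailingBlockAngle  = 5.0           # degrees
```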


12% Joukowski Airfoil


With the above inputs, the grid looks like this:

Mesh Quality:

These are some pretty good mesh statistics. We can also view them in paraView:

Clark-y Airfoil

The Clark-Y has some camber, so I thought it would be a logical next test after the previous symmetric one. The inputs I used are basically the same as for the previous airfoil:

With these inputs, the result looks like this:

Mesh Quality:

Visualizing the mesh quality:

MH60 – Flying Wing Airfoil

Here is an example of a flying wing airfoil (tested since the trailing edge is tilted upwards).


Again, these inputs are basically the same as the others. I have found that with these settings I get pretty consistently good results. When you change MaxCellSize, firstLayerHeight, and the gradings, some modification may be required. However, if you simply halve MaxCellSize and halve firstLayerHeight, you “should” get a similar grid quality, just much finer.

Grid Quality:

Visualizing the grid quality


Hopefully some of you find this tool useful! I plan to release a Rev 2 soon that will have the ability to handle highly cambered airfoils, and open trailing edges, as well as control surface hinges etc.

The long term goal will be an automatic mesher with an H-grid in the spanwise direction so that the readers of my blog can easily create semi-span wing models extremely quickly!

Comments and bug reporting encouraged!

DISCLAIMER: This script is intended as an educational and productivity tool and starting point. You may use and modify it how you wish. But I make no guarantee of its accuracy, reliability, or suitability for any use. This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Normal Shock Calculator
  20 Feb, 2019

Here is a useful little tool for calculating the properties across a normal shock.
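For reference, the relations behind such a calculator are the standard perfect-gas normal-shock equations. A minimal stand-alone version (written from the textbook formulas, independent of the calculator itself) looks like:

```python
def normal_shock(M1, gamma=1.4):
    """Perfect-gas normal-shock jump relations for upstream Mach M1 > 1."""
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0)        # p2/p1
    rho_ratio = (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)  # rho2/rho1
    T_ratio = p_ratio / rho_ratio               # T2/T1 from the ideal gas law
    M2 = (((gamma - 1.0) * M1**2 + 2.0)
          / (2.0 * gamma * M1**2 - (gamma - 1.0))) ** 0.5              # downstream Mach
    return p_ratio, rho_ratio, T_ratio, M2

# Classic check case: M1 = 2 in air gives p2/p1 = 4.5 and M2 ≈ 0.577
p21, rho21, T21, M2 = normal_shock(2.0)
print(p21, rho21, T21, M2)
```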

If you found this useful and have the need for more, visit STF Solutions. One of STF Solutions’ specialties is providing clients with custom software developed for their needs, ranging from custom CFD codes to simpler targeted codes, scripts, macros and GUIs for a wide range of specific engineering purposes such as pipe sizing, pressure loss calculations, heat transfer calculations, 1D flow transients, optimization and more. Visit STF Solutions for more information!

Disclaimer: This calculator is for educational purposes and is free to use. STF Solutions and curiosityFluids make no guarantee of the accuracy of the results, or their suitability or outcome for any given purpose.

Hanley Innovations top

► Accurate Aircraft Performance Predictions using Stallion 3D
  26 Feb, 2020

Stallion 3D uses your CAD design to simulate the performance of your aircraft.  This enables you to verify your design and compute quantities such as cruise speed, power required and range at a given cruise altitude. Stallion 3D is used to optimize the design before moving forward with building and testing prototypes.

The table below shows the results of Stallion 3D around the cruise angles of attack of the Cessna 402c aircraft.  The CAD design can be obtained from the OpenVSP hangar.

The results were obtained by simulating 5 angles of attack in Stallion 3D on an ordinary laptop computer running MS Windows 10. Given the aircraft geometry and flight conditions, Stallion 3D computed CL, CD, L/D and other aerodynamic quantities. With these accurate aerodynamic results, preliminary performance data such as cruise speed, power, range and endurance can be obtained.

Lift Coefficient versus Angle of Attack computed with Stallion 3D

Lift to Drag Ratio versus True Airspeed at 10,000 feet

Power Required versus True Airspeed at 10,000 feet

The Stallion 3D results show good agreement with the published data for the Cessna 402. For example, the cruise speed of the aircraft at 10,000 feet is around 140 knots. This coincides with the speed at the maximum L/D (best range) shown in the graph and table above.

 More information about Stallion 3D can be found at the following link.

About Hanley Innovations
Hanley Innovations is a pioneer in developing user-friendly and accurate software that is accessible to engineers, designers and students. For more information, please visit our website.

► 5 Tips For Excellent Aerodynamic Analysis and Design
    8 Feb, 2020
Stallion 3D analysis of Uber Elevate eCRM-100 model

Being the best aerodynamics engineer requires meticulous planning and execution. Here are 5 steps you can follow to start your journey toward being one of the best aerodynamicists.

1.  Airfoil analysis (VisualFoil) - the wing will not be better than the airfoil. Start with the best airfoil for the design.

2.  Wing analysis (3Dfoil) - know the benefits/limits of taper, geometric & aerodynamic twist, dihedral angles, sweep, induced drag and aspect ratio.

3. Stability analysis (3Dfoil) - longitudinal & lateral static & dynamic stability analysis.  If the airplane is not stable, it might not fly (well).

4. High Lift (MultiElement Airfoils) - airfoil arrangements can do wonders for takeoff, climb, cruise and landing.

5. Analyze the whole arrangement (Stallion 3D) - this is the best information you will get until you flight test the design.

About Hanley Innovations
Hanley Innovations is a pioneer in developing user-friendly and accurate software that is accessible to engineers, designers and students. For more information, please visit our website.

► Accurate Aerodynamics with Stallion 3D
  17 Aug, 2019

Stallion 3D is an extremely versatile tool for 3D aerodynamics simulations.  The software solves the 3D compressible Navier-Stokes equations using novel algorithms for grid generation, flow solutions and turbulence modeling. 

The proprietary grid generation and immersed boundary methods find objects arbitrarily placed in the flow field and then automatically place an accurate grid around them without user intervention. 

Stallion 3D’s algorithms are fine-tuned to analyze inviscid flow with minimal losses. The above figure shows the surface pressure of the BD-5 aircraft (obtained from the OpenVSP hangar) using the compressible Euler algorithm.

Stallion 3D solves the Reynolds Averaged Navier-Stokes (RANS) equations using a proprietary implementation of the k-epsilon turbulence model in conjunction with an accurate wall function approach.

Stallion 3D can be used to solve aerodynamics problems about complex geometries in subsonic, transonic and supersonic flows. The software computes and displays the lift, drag and moments for complex geometries in the STL file format. Actuator discs (up to 100) can be added to simulate prop wash for propeller and VTOL/eVTOL aircraft analysis.

Stallion 3D is a versatile and easy-to-use software package for aerodynamic analysis. It can be used for computing performance and stability (both static and dynamic) of aerial vehicles including drones, eVTOL aircraft, light airplanes and dragons (above graphics via Thingiverse).

More information about Stallion 3D can be found at:

► Hanley Innovations Upgrades Stallion 3D to Version 5.0
  18 Jul, 2017
The CAD for the King Air was obtained from Thingiverse

Stallion 3D is a 3D aerodynamics analysis software package developed by Dr. Patrick Hanley of Hanley Innovations in Ocala, FL. Starting with only an STL file, Stallion 3D is an all-in-one digital tool that rapidly validates conceptual and preliminary aerodynamic designs of aircraft, UAVs, hydrofoils and road vehicles.

  Version 5.0 has the following features:
  • Built-in automatic grid generation
  • Built-in 3D compressible Euler Solver for fast aerodynamics analysis.
  • Built-in 3D laminar Navier-Stokes solver
  • Built-in 3D Reynolds Averaged Navier-Stokes (RANS) solver
  • Multi-core flow solver processing on your Windows laptop or desktop using OpenMP
  • Inputs STL files for processing
  • Built-in wing/hydrofoil geometry creation tool
  • Enables stability derivative computation using quasi-steady rigid body rotation
  • Up to 100 actuator discs (RANS solver only) for simulating jets and prop wash
  • Reports the lift, drag and moment coefficients
  • Reports the lift, drag and moment magnitudes
  • Plots surface pressure, velocity, Mach number and temperatures
  • Produces 2-D plots of Cp and other quantities along constant-coordinate lines on the structure
The introductory price of Stallion 3D 5.0 is $3,495 for the yearly subscription or $8,000.  The software is also available in Lab and Class Packages.

 For more information, please visit our website or call us at (352) 261-3376.
► Airfoil Digitizer
  18 Jun, 2017

Airfoil Digitizer is a software package for extracting airfoil data files from images. The software accepts images in the jpg, gif, bmp, png and tiff formats. Airfoil data can be exported as AutoCAD DXF files (line entities), UIUC airfoil database format and Hanley Innovations VisualFoil Format.

The following tutorial show how to use Airfoil Digitizer to obtain hard to find airfoil ordinates from pictures.

More information about the software can be found at the following url:

Thanks for reading.

► Your In-House CFD Capability
  15 Feb, 2017

Have you ever wished for the power to solve your 3D aerodynamics analysis problems within your company at the push of a button? Stallion 3D gives you this very power using your MS Windows laptop or desktop computer. The software provides accurate CL, CD and CM numbers directly from CAD geometries without the need for user grid generation and costly cloud computing.

Stallion 3D v4 is the only MS Windows software that enables you to solve turbulent compressible flows on your PC. It utilizes the power hidden in your personal computer (64-bit and multi-core technologies). The software simultaneously solves seven unsteady non-linear partial differential equations on your PC. Five of these equations (the Reynolds-averaged Navier-Stokes, RANS) ensure conservation of mass, momentum and energy for a compressible fluid. Two additional equations capture the dynamics of the turbulent flow field.

Unlike other CFD software that requires you to purchase a grid generation package (and spend days generating a grid), grid generation is automatic and included within Stallion 3D. Results are often obtained within a few hours of opening the software.

 Do you need to analyze upwind and downwind sails? Do you need data for wings and ship stabilizers at 10, 40, 80, 120 degrees angle of attack and beyond? Do you need accurate lift, drag and temperature predictions at subsonic, transonic and supersonic speeds? Stallion 3D can handle all flow speeds for any geometry, all on your ordinary PC.

Tutorials, videos and more information about Stallion 3D version 4.0 can be found at:

If you have any questions about this article, please call me at (352) 261-3376 or visit our website.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.

CFD and others... top

► Facts, Myths and Alternative Facts at an Important Juncture
  21 Jun, 2020
We live in an extraordinary time in modern human history. A global pandemic did the unthinkable to billions of people: a nearly total lock-down for months. Like many universities in the world, KU closed its doors to students in early March of 2020, and all courses were moved online.

Millions watched in horror when George Floyd was murdered, and when a 75 year old man was shoved to the ground and started bleeding from the back of his skull...

Meanwhile, Trump and his allies routinely ignore facts, fabricate alternative facts, and advocate often-debunked conspiracy theories to push his agenda. The political system designed by the founding fathers is assaulted from all directions. The rule of law and the free press are attacked on a daily basis. One often wonders how we managed to get to this point, and if the political system can survive the constant sabotage...It appears the struggle between facts, myths and alternative facts hangs in the balance.

In any scientific discipline, conclusions are drawn, and decisions are made based on verifiable facts. Of course, we are humans, and honest mistakes can be made. There are others, who push alternative facts or misinformation with ulterior motives. Unfortunately, mistaken conclusions and wrong beliefs are sometimes followed widely and become accepted myths. Fortunately, we can always use verifiable scientific facts to debunk them.

There have been many myths in CFD, and quite a few have been rebutted. Some have continued to persist. I'd like to refute several in this blog. I understand some of the topics can be very controversial, but I welcome fact-based debate.

Myth No. 1 - My LES/DNS solution has no numerical dissipation because a central-difference scheme is used.

A central finite difference scheme is indeed free of numerical dissipation in space. However, the time integration scheme inevitably introduces both numerical dissipation and dispersion. Since DNS/LES is unsteady in nature, the solution is not free of numerical dissipation.  

Myth No. 2 - You should use non-dissipative schemes in LES/DNS because upwind schemes have too much numerical dissipation.

It sounds reasonable, but it is far from true. We all agree that fully upwind schemes (the stencil shown in Figure 1) are bad. Upwind-biased schemes, on the other hand, are not necessarily bad at all. In fact, in a numerical test with the Burgers equation [1], the upwind-biased scheme performed better than the central difference scheme because of its smaller dispersion error. In addition, the numerical dissipation in the upwind-biased scheme makes the simulation more robust, since under-resolved high-frequency waves are naturally damped.

Figure 1. Various discretization stencils for the red point
The Riemann solver used in the DG/FR/CPR scheme also introduces a small amount of dissipation. However, because of its small dispersion error, it outperforms the central difference and upwind-biased schemes. This study shows that the dissipation and dispersion characteristics are equally important. Higher order schemes clearly perform better than a low order non-dissipative central difference scheme.
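The dissipation/dispersion behavior discussed above can be made concrete with a modified-wavenumber (Fourier) analysis of the spatial stencils for u_t + a u_x = 0. This sketch compares the 2nd-order central and 1st-order upwind stencils (the simplest representatives, not the exact schemes tested in [1]):

```python
import numpy as np

# Fourier symbols of the spatial operators acting on exp(i*k*x_j), with dx = 1.
# For du/dt = -a * D u, a mode decays like exp(-a * Re(symbol) * t):
# Re(symbol) > 0 means numerical dissipation; Im(symbol) governs dispersion.

def symbol_central(theta):
    # (u[j+1] - u[j-1]) / (2*dx): purely imaginary symbol, i*sin(theta)
    return (np.exp(1j * theta) - np.exp(-1j * theta)) / 2.0

def symbol_upwind(theta):
    # (u[j] - u[j-1]) / dx: symbol (1 - cos(theta)) + i*sin(theta)
    return 1.0 - np.exp(-1j * theta)

theta = np.linspace(0.01, np.pi, 200)  # resolved wavenumbers up to Nyquist
print(np.max(np.abs(symbol_central(theta).real)))  # ~0: no dissipation at all
print(np.min(symbol_upwind(theta).real))           # > 0: every mode is damped
```

The central stencil's symbol has zero real part (no damping, only phase error), while the upwind stencil damps every mode, most strongly near the Nyquist wavenumber, which is exactly the robustness-versus-accuracy trade-off described above.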

Myth No. 3 - The Smagorinsky model is a physics-based sub-grid-scale (SGS) model.

There have been numerous studies based on experimental or DNS data showing that the SGS stress produced by the Smagorinsky model does not correlate with the true SGS stress. The role of the model is instead to add numerical dissipation to stabilize the simulations. The model coefficient is usually determined by matching a certain turbulent energy spectrum. This suggests that the model is purely numerical in nature, calibrated for certain numerical schemes using a particular turbulent energy spectrum. The calibration is not universal, because many simulations have produced worse results with the model.

► What Happens When You Run a LES on a RANS Mesh?
  27 Dec, 2019

Surely, you will get garbage because there is no way your LES will have any chance of resolving the turbulent boundary layer. As a result, your skin friction will be way off. Therefore, your drag and lift will be a total disaster.

To actually demonstrate this point of view, we recently embarked upon a numerical experiment to run an implicit large eddy simulation (ILES) of the NASA CRM high-lift configuration from the 3rd AIAA High-Lift Prediction Workshop. The flow conditions are: Mach = 0.2, Reynolds number = 3.26 million based on the mean aerodynamic chord, and the angle of attack = 16 degrees.

A quadratic (Q2) mesh was generated by Dr. Steve Karman of Pointwise, and is shown in Figure 1.

 Figure 1. Quadratic mesh for the NASA CRM high-lift configuration (generated by Pointwise)

The mesh has roughly 2.2 million mixed elements, and is highly clustered near the wall with an average equivalent y+ value smaller than one. A p-refinement study was conducted to assess the mesh sensitivity using our high-order LES tool based on the FR/CPR method, hpMusic. Simulations were performed with solution polynomial degrees of p = 1, 2 and 3, corresponding to 2nd, 3rd and 4th orders in accuracy respectively. No wall-model was used. Needless to say, the higher order simulations captured finer turbulence scales, as shown in Figure 2, which displays the iso-surfaces of the Q-criteria colored by the Mach number.    

p = 1

p = 2

p = 3
Figure 2. Iso-surfaces of the Q-criterion colored by the Mach number

Clearly the flow is mostly laminar on the pressure side, and transitional/turbulent on the suction side of the main wing and the flap. Although the p = 1 simulation captured the least scales, it still correctly identified the laminar and turbulent regions. 

The drag and lift coefficients from the present p-refinement study are compared with experimental data from NASA in Table I. Although the 2nd order (p = 1) results are quite different from those of the higher orders, the 3rd and 4th order results are very close, demonstrating very good p-convergence in both the lift and drag coefficients. The lift agrees better with the experimental data than the drag, bearing in mind that the experiment includes wind tunnel wall effects and other small instruments that are not present in the computational model.

Table I. Comparison of lift and drag coefficients with experimental data

p = 1
p = 2
p = 3

This exercise seems to contradict the common-sense logic stated at the beginning of this blog. So what happened? The answer is that in this high-lift configuration, the dominant force is due to pressure rather than friction. In fact, 98.65% of the drag and 99.98% of the lift are due to the pressure force. For such flow problems, running a LES on a RANS mesh (with sufficient accuracy) may produce reasonable predictions of drag and lift. More studies are needed to draw any definite conclusion. We would like to hear from you if you have done something similar.

This study will be presented in the forthcoming AIAA SciTech conference, to be held on January 6th to 10th, 2020 in Orlando, Florida. 

► Not All Numerical Methods are Born Equal for LES
  15 Dec, 2018
Large eddy simulations (LES) are notoriously expensive for high Reynolds number problems because of the disparate length and time scales in the turbulent flow. Recent high-order CFD workshops have demonstrated the accuracy/efficiency advantage of high-order methods for LES.

The ideal numerical method for implicit LES (with no sub-grid scale models) should have very low dissipation AND dispersion errors over the resolvable range of wave numbers, but be dissipative at the non-resolvable high wave numbers. In this way, the simulation will resolve a wide turbulent spectrum while damping out the non-resolvable small eddies to prevent an energy pile-up, which can drive the simulation to divergence.
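One standard way to quantify these two error types is the modified wavenumber: substituting a single Fourier mode into a difference stencil yields an effective wavenumber whose real part governs dispersion and whose imaginary part governs dissipation. A minimal sketch (using simple 2nd-order central and 1st-order upwind stencils for illustration, not the 6th-order schemes discussed in this post):

```python
import numpy as np

def modified_wavenumber_central2(kh):
    """Scaled modified wavenumber k*h of the 2nd-order central scheme.
    Purely real: dispersive error only, no numerical dissipation."""
    return np.sin(kh) + 0.0j

def modified_wavenumber_upwind1(kh):
    """Scaled modified wavenumber k*h of the 1st-order upwind scheme.
    The negative imaginary part damps the mode (numerical dissipation)."""
    return np.sin(kh) - 1j * (1.0 - np.cos(kh))
```

Plotting both over 0 ≤ kh ≤ π shows the central scheme tracking the exact line at low wavenumbers with no damping anywhere, while the upwind scheme damps precisely the poorly resolved high wavenumbers.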

We want to emphasize the equal importance of both numerical dissipation and dispersion, which can be generated from both the space and time discretizations. It is well-known that standard central finite difference (FD) schemes and energy-preserving schemes have no numerical dissipation in space. However, numerical dissipation can still be introduced by time integration, e.g., explicit Runge-Kutta schemes.     

We recently analysed and compared several 6th-order spatial schemes for LES: the standard central FD, the upwind-biased FD, the filtered compact difference (FCD), and the discontinuous Galerkin (DG) schemes, with the same time integration approach (a Runge-Kutta scheme) and the same time step. The FCD schemes have an 8th order filter with two different filtering coefficients, 0.49 (weak) and 0.40 (strong). We first show the results for the linear wave equation with 36 degrees-of-freedom (DOFs) in Figure 1. The initial condition is a Gaussian profile, and a periodic boundary condition was used. The profile traversed the domain 200 times to highlight the differences.

Figure 1. Comparison of the Gaussian profiles for the DG, FD, and CD schemes

Note that the DG scheme gave the best performance, followed closely by the two FCD schemes, then the upwind-biased FD scheme, and finally the central FD scheme. The large dispersion error of the central FD scheme caused it to miss the peak and to generate large errors elsewhere.
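A toy version of this experiment is easy to reproduce: advect a Gaussian around a periodic domain and compare a non-dissipative central scheme against a dissipative upwind one. The sketch below uses low-order (2nd-order central, 1st-order upwind) stencils with classic RK4, purely for illustration:

```python
import numpy as np

def rhs(u, c, h, scheme):
    """Semi-discrete RHS of periodic 1-D advection u_t + c u_x = 0."""
    if scheme == "central":  # 2nd-order central: dispersive, no spatial dissipation
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * h)
    else:                    # 1st-order upwind (c > 0): strongly dissipative
        ux = (u - np.roll(u, 1)) / h
    return -c * ux

def advect(scheme, n=128, cfl=0.4, periods=1.0, c=1.0):
    """Advect a Gaussian around a unit periodic domain with classic RK4."""
    h = 1.0 / n
    x = np.arange(n) * h
    u = np.exp(-200.0 * (x - 0.5) ** 2)      # Gaussian initial profile
    dt = cfl * h / c
    for _ in range(int(round(periods / (c * dt)))):
        k1 = rhs(u, c, h, scheme)
        k2 = rhs(u + 0.5 * dt * k1, c, h, scheme)
        k3 = rhs(u + 0.5 * dt * k2, c, h, scheme)
        k4 = rhs(u + dt * k3, c, h, scheme)
        u = u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return x, u
```

After one period the exact solution coincides with the initial profile; the upwind run smears the peak (dissipation), while the central run keeps most of the amplitude but develops trailing wiggles (dispersion). Both conserve the integral of u.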

Finally, simulation results for the viscous Burgers' equation are shown in Figure 2, which compares the energy spectrum computed with the various schemes against that of a direct numerical simulation (DNS).

Figure 2. Comparison of the energy spectrum

Note again that the worst performance is delivered by the central FD scheme, with a significant high-wave-number energy pile-up. Although the FCD scheme with the weak filter resolved the widest spectrum, the pile-up at high wave numbers may cause robustness issues. Therefore, the best performers are the DG scheme and the FCD scheme with the strong filter. The upwind-biased FD scheme clearly outperformed the central FD scheme, since it resolved the same range of wave numbers without the energy pile-up.
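For reference, energy spectra like those in Figure 2 are typically computed by Fourier-transforming the periodic solution field. A minimal 1-D sketch (the normalization below is one common convention, not necessarily the one used in the study):

```python
import numpy as np

def energy_spectrum(u):
    """Energy spectrum E(k) = 0.5*|c_k|^2 of a periodic 1-D field, folded onto
    non-negative wavenumbers (strictly, the Nyquist bin should not be doubled;
    it carries no energy in this example)."""
    n = u.size
    c = np.fft.rfft(u) / n          # Fourier coefficients c_k
    E = 0.5 * np.abs(c) ** 2
    E[1:] *= 2.0                    # fold in the negative wavenumbers
    return E

# a field with energy only at wavenumbers 3 and 10
n = 256
x = 2.0 * np.pi * np.arange(n) / n
u = np.sin(3 * x) + 0.5 * np.cos(10 * x)
E = energy_spectrum(u)
```

By Parseval's theorem the spectrum sums to the mean kinetic energy 0.5*mean(u**2), a useful sanity check; a high-wavenumber pile-up shows up as E(k) flattening or rising near the grid cutoff instead of decaying.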

► Are High-Order CFD Solvers Ready for Industrial LES?
    1 Jan, 2018
The potential of high-order methods (order > 2nd) is higher accuracy at lower cost than low-order (1st or 2nd order) methods. This potential has been conclusively demonstrated for benchmark scale-resolving simulations (such as large eddy simulation, or LES) by multiple international workshops on high-order CFD methods.

For industrial LES, in addition to accuracy and efficiency, there are several other important factors to consider:

  • Ability to handle complex geometries, and ease of mesh generation
  • Robustness for a wide variety of flow problems
  • Scalability on supercomputers
For general-purpose industry applications, methods capable of handling unstructured meshes are preferred because of the ease of mesh generation and of load balancing on parallel architectures. DG and related methods such as SD and FR/CPR have received much attention because of their geometric flexibility and scalability, and they have matured to become quite robust for a wide range of applications.

Our own research effort has led to the development of a high-order solver based on the FR/CPR method called hpMusic. We recently performed a benchmark LES comparison between hpMusic and a leading commercial solver, on the same family of hybrid meshes, at a transonic condition with a Reynolds number above 1M. The 3rd order hpMusic simulation has 9.6M degrees of freedom (DOFs) and costs about 1/3 the CPU time of the commercial solver's 2nd order simulation, which has 28.7M DOFs. Furthermore, the 3rd order simulation is much more accurate, as shown in Figure 1. It is estimated that hpMusic would be an order of magnitude faster to achieve a similar accuracy. This study will be presented at AIAA's SciTech 2018 conference next week.

(a) hpMusic 3rd Order, 9.6M DOFs
(b) Commercial Solver, 2nd Order, 28.7M DOFs
Figure 1. Comparison of Q-criterion and Schlieren  

I certainly believe high-order solvers are ready for industrial LES. In fact, the commercial version of our high-order solver, hoMusic (pronounced hi-o-music), has been announced by hoCFD LLC (disclaimer: I am the company founder). Give it a try for your problems, and you may be surprised. Academic and trial uses are completely free. Just visit to download the solver. A GUI has been developed to simplify problem setup. Your thoughts and comments are highly welcome.

Happy 2018!     

► Sub-grid Scale (SGS) Stress Models in Large Eddy Simulation
  17 Nov, 2017
The simulation of turbulent flow has been a considerable challenge for many decades. There are three main approaches to computing turbulence: 1) the Reynolds-averaged Navier-Stokes (RANS) approach, in which all turbulence scales are modeled; 2) the Direct Numerical Simulation (DNS) approach, in which all scales are resolved; and 3) the Large Eddy Simulation (LES) approach, in which the large scales are computed while the small scales are modeled. I really like the following picture comparing DNS, LES and RANS.

DNS (left), LES (middle) and RANS (right) predictions of a turbulent jet. - A. Maries, University of Pittsburgh

Although the RANS approach has achieved widespread success in engineering design, some applications call for LES, e.g., flow at high angles of attack. Spatially filtering a non-linear PDE results in a SGS term, which needs to be modeled based on the resolved field. The earliest SGS model was the Smagorinsky model, which relates the SGS stress to the resolved rate-of-strain tensor. The purpose of the SGS model is to dissipate energy at a physically correct rate. Later an improved version, the dynamic Smagorinsky model, was developed by Germano et al. and demonstrated much better results.
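In symbols, the Smagorinsky model sets the eddy viscosity to nu_t = (Cs*Delta)^2 * |S|, with |S| = sqrt(2*S_ij*S_ij) built from the resolved rate-of-strain tensor, and models the deviatoric SGS stress as tau_ij = -2*nu_t*S_ij. A minimal sketch (Cs = 0.17 is one commonly quoted calibration; in practice the coefficient is flow- and scheme-dependent, which is exactly what the dynamic model addresses):

```python
import numpy as np

def smagorinsky_nut(grad_u, delta, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (cs*delta)**2 * |S|, where
    S_ij = 0.5*(du_i/dx_j + du_j/dx_i) is the resolved rate of strain
    and |S| = sqrt(2 S_ij S_ij). grad_u has shape (..., 3, 3)."""
    S = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))
    Smag = np.sqrt(2.0 * np.einsum('...ij,...ij->...', S, S))
    return (cs * delta) ** 2 * Smag

def sgs_stress(grad_u, delta, cs=0.17):
    """Modeled deviatoric SGS stress tau_ij = -2 * nu_t * S_ij."""
    S = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))
    nut = smagorinsky_nut(grad_u, delta, cs)
    return -2.0 * nut[..., None, None] * S
```

For a pure shear du/dy = 1 with delta = 1, this gives nu_t = (0.17)^2 and a traceless stress tensor, as expected from the deviatoric form.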

In CFD, physics and numerics are often intertwined very tightly, and one may draw erroneous conclusions if not careful. Personally, I believe the debate regarding SGS models can offer some valuable lessons regarding physics vs numerics.

It is well known that a central finite difference scheme contains no numerical dissipation in space. However, time integration can introduce dissipation: for example, a 2nd order central difference scheme is linearly stable when coupled with the SSP RK3 scheme (subject to a CFL condition), and the combined scheme does contain some numerical dissipation. When this scheme is used to perform a LES, the simulation will still blow up without a SGS model because of a lack of dissipation for eddies at high wave numbers. It is easy to conclude that the LES succeeds because the SGS stress is properly modeled. A recent study with the Burgers' equation strongly disputes this conclusion: it was shown that the SGS stress from the Smagorinsky model does not correlate well with the physical SGS stress. Therefore, the role of the SGS model, in the above scenario, was to stabilize the simulation by adding numerical dissipation.
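The dissipation contributed by the time scheme is easy to see on the semi-discrete model problem du/dt = i*omega*u, which is what a central spatial operator gives for a single Fourier mode. For linear problems the SSP RK3 update reduces to the 3rd-order Taylor polynomial, whose amplification factor has magnitude below one for 0 < omega*dt < sqrt(3). A small sketch:

```python
import numpy as np

def rk3_amplification(theta):
    """Amplification factor of SSP RK3 for du/dt = i*omega*u, theta = omega*dt.
    For a linear problem SSP RK3 reduces to the 3rd-order Taylor polynomial."""
    z = 1j * np.asarray(theta, dtype=float)
    return 1.0 + z + z ** 2 / 2.0 + z ** 3 / 6.0

# |G| < 1 on 0 < theta < sqrt(3): the time scheme damps the mode even though
# the central spatial operator is dissipation-free
theta = np.linspace(0.0, np.sqrt(3.0), 50)
damping = np.abs(rk3_amplification(theta))
```

So even a "non-dissipative" central scheme, once marched with SSP RK3, mildly damps the well-resolved modes while still lacking the strong high-wavenumber damping an under-resolved LES needs.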

For numerical methods which have natural dissipation at high wave numbers, such as the DG, SD or FR/CPR methods, or methods with spatial filtering, a SGS model can damage the solution quality because the extra dissipation is not needed for stability. For such methods, there is overwhelming evidence in the literature to support the use of implicit LES (ILES), in which the explicit SGS stress simply vanishes. In effect, the numerical dissipation in these methods serves as the SGS model. Personally, I would prefer to call such simulations coarse DNS, i.e., DNS on coarse meshes which do not resolve all scales.

I understand this topic may be controversial. Please do leave a comment if you agree or disagree. I want to emphasize that I support physics-based SGS models.
► 2016: What a Year!
    3 Jan, 2017
2016 was undoubtedly the most extraordinary year for small-odds events. Take sports, for example:
  • Leicester won the Premier League in England defying odds of 5000 to 1
  • The Cubs won the World Series after a 108-year wait
In politics, I do not believe many people truly expected Britain to exit the EU, or Trump to become the next US president.

On a personal level, I also experienced an equally extraordinary event: the coup in Turkey.

The 9th International Conference on CFD (ICCFD9) took place on July 11-15, 2016 in the historic city of Istanbul. A terror attack on the Istanbul International airport occurred less than two weeks before ICCFD9 was to start. We were informed that ICCFD9 would still take place although many attendees cancelled their trips. We figured that two terror attacks at the same place within a month were quite unlikely, and decided to go to Istanbul to attend and support the conference. 

Given the extraordinary circumstances, the conference organizers did a fine job in pulling the conference through. More than half of the attendees withdrew their papers. Backup papers were used to form two parallel sessions though three sessions were planned originally. We really enjoyed Istanbul with the beautiful natural attractions and friendly people. 

Then on Friday evening, 12 hours before we were supposed to depart Istanbul, a military coup broke out. The government TV station was controlled by the rebels. However, the Turkish President managed to FaceTime a private TV station, essentially turning the event around. Soon after, many people went to the bridge and the squares, and overpowered the rebels with bare fists.

A Tank outside my taxi

A beautiful night in Zurich

The trip back to the US was complicated by the fact that the FAA had banned all direct flights from Turkey. I was lucky enough to find a new flight, with a stop in Zurich...

In 2016, I lost a very good friend and CFD pioneer, Professor Jaw-Yen Yang. He suffered a horrific injury playing tennis in early 2015. Many of his friends and colleagues gathered in Taipei on December 3-5, 2016 to remember him.

This is a CFD blog after all, and so it is important to show at least one CFD picture. In a validation simulation [1] with our high-order solver, hpMusic, we achieved remarkable agreement with experimental heat transfer for a high-pressure turbine configuration. Here is a flow picture.

Computational Schlieren and iso-surfaces of Q-criterion

To close, I wish all of you a very happy 2017!

  1. Laskowski GM, Kopriva J, Michelassi V, Shankaran S, Paliath U, Bhaskaran R, Wang Q, Talnikar C, Wang ZJ, Jia F. Future directions of high fidelity CFD for aerothermal turbomachinery research, analysis and design, AIAA-2016-3322.

Convergent Science Blog top

► The Collaboration Effect: Advancing Engines Through Simulation & Experimentation
    9 Nov, 2020

From the Argonne National Laboratory + Convergent Science Blog Series

Through the collaboration between Argonne National Laboratory and Convergent Science, we provide fundamental research that enables manufacturers to design cleaner and more efficient engines by optimizing combustion. 

–Doug Longman, Manager of Engine Research at Argonne National Laboratory

The internal combustion engine has come a long way since its inception—the engine in your car today is significantly quieter, cleaner, and more efficient than its 1800s-era counterpart. For many years, the primary means of achieving these advances was experimentation. Indeed, we have experiments to thank for a myriad of innovations, from fuel injection systems to turbocharging to Wankel engines.

More recently, a new tool was added to the engine designer’s toolbox: simulation. Beginning in the 1970s and ‘80s, computational fluid dynamics (CFD) opened the door to a new level of refinement and optimization.

“One of the really cool things about simulation is that you can look at physics that cannot be easily captured in an experiment—details of the flow that might be blocked from view, for example,” says Eric Pomraning, Co-Owner of Convergent Science.

Of course, experiments remain vitally important to engine research, since CFD simulations model physical processes, and experiments are necessary to validate your results and ground your simulations in reality.

Argonne National Laboratory and Convergent Science combine these two approaches—experiments and simulation—to further improve the internal combustion engine. Two of the main levers we have to control the efficiency and emissions of an engine are the fuel injection system and the ignition system, both of which have been significant areas of focus during the collaboration.

Fuel Injection

The combustion process in an internal combustion engine really begins with fuel injection. The physics of injection determine how the fuel and air in the cylinder will mix, ignite, and ultimately combust. 

Argonne National Laboratory is home to the Advanced Photon Source (APS), a DOE Office of Science User Facility. The APS provides a unique opportunity to characterize the internal passages of injector nozzles with incredibly high spatial resolution through the use of high-energy x-rays. This data is invaluable for developing accurate CFD models that manufacturers can use in their design processes.

Early on in the collaboration, Christopher Powell, Principal Engine Research Scientist at Argonne, and his team leveraged the APS to investigate needle motion in an injector.

“Injector manufacturers had long suspected that off-axis motion of the injector valve could be present. But they never had a way to measure it before, so they weren’t sure how it impacted fuel injection,” says Chris.

The x-ray studies performed at the APS were the first in the world to confirm that some injector needles do exhibit radial motion in addition to the intended axial motion, a phenomenon dubbed “needle wobble.” Argonne and Convergent Science engineers simulated this experimental data in CONVERGE, prescribing radial motion to the injector needle. They found that needle wobble can substantially impact the fuel distribution as it exits the injector. Manufacturers were able to apply the results of this research to design injectors with a more predictable spray pattern, which, in turn, leads to a more predictable combustion event.

More recently, researchers at Argonne have used the APS to investigate the shape of fuel injector flow passages and characterize surface roughness. Imperfections in the geometry can influence the spray and the subsequent downstream engine processes. 

“If we use a CAD geometry, which is smooth, we will miss out on some of the physics, like cavitation, that can be triggered by surface imperfections,” says Sameera Wijeyakulasuriya, Senior Principal Engineer at Convergent Science. “But if we use the x-ray scanned geometry, we can incorporate those surface imperfections into our numerical models, so we can see how the flow field behaves and responds.”

Argonne and Convergent Science engineers performed internal nozzle flow simulations that used the real injector geometries and that incorporated real needle motion.[1] Using the one-way coupling approach in CONVERGE, they mapped the results of the internal flow simulations to the exit of each injector orifice to initialize a multi-plume Lagrangian spray simulation. As you can see in Figure 1, the surface roughness and needle motion significantly impact the spray plume—the one-way coupling approach captures features that the standard rate of injection (ROI) method could not. In addition, the real injector parameters introduce orifice-to-orifice variability, which affects the combustion behavior down the line.

Figure 1: Comparison of the spray plume (top) and the effect of orifice-to-orifice variability on combustion behavior (bottom) simulated using the standard ROI method (left) and the one-way coupling method (right), which accounts for the real injector geometry and needle motion.

The real injector geometries not only allow for more accurate computational simulations, but they also can serve as a diagnostic tool for manufacturers to assess how well their manufacturing processes are producing the desired nozzle shape and size.

Spark Ignition

Accurately characterizing fuel injection sets the stage for the next lever we can optimize in our engine: ignition. In spark-ignition engines, the ignition event initiates the formation of the flame kernel, the growth of the flame kernel, and the flame propagation mechanism.

“In the past, ignition was just modeled as a hot source—dumping an amount of energy in a small region and hoping it transitions to a flame. The amount of physics in the process was very limited,” says Sibendu Som, Manager of the Computational Multi-Physics Section at Argonne.

These simplified models are adequate for most stable engine conditions, but you can run into trouble when you start simulating more advanced combustion concepts. In these scenarios, the simplified ignition models fall short in replicating experimental data. Over the course of their collaboration, Argonne and Convergent Science have incorporated more physics into ignition models to make them robust for a variety of engine conditions. 

For example, high-performance spark-ignition engines often feature high levels of dilution and increased levels of turbulence. These conditions can have a significant impact on the ignition process, which consequently affects combustion stability and cycle-to-cycle variation (CCV). To capture the elongation and stretch experienced by the spark channel under highly turbulent conditions, Argonne and Convergent Science engineers developed a new ignition model, the hybrid Lagrangian-Eulerian spark-ignition (LESI) model.

In Figure 2, you can see that the LESI model more accurately captures the behavior of the spark under turbulent conditions compared to a commonly used energy deposition model.[2] The LESI model will be available in future versions of CONVERGE, accessible to manufacturers to help them better understand ignition and mitigate CCV.

Figure 2: Comparison of experimental results (A) with a commonly used energy deposition model (B) and the LESI model (C) at turbulent engine-like conditions.

Cycle-to-Cycle Variation

Ideally, every cycle of an internal combustion engine would be exactly identical to ensure smooth operation. In real engines, variability in the injection, ignition, and combustion means that not every cycle will be the same. Cyclic variability is especially prevalent in high-efficiency engines that push the limits of combustion stability. Extreme cycles can cause engine knock and misfires—and they can influence emissions.

“Not every engine cycle generates significant emissions. Often they’re primarily formed only during rare cycles—maybe one or two out of a hundred,” says Keith Richards, Co-Owner of Convergent Science. “Being able to capture cyclic variability will ultimately allow us to improve our predictive capabilities for emissions.”

Modeling CCV requires simulating numerous engine cycles, which is a highly (and at times prohibitively) time-consuming process. Several years ago, Keith suggested a potential solution—starting several engine cycles concurrently, each with a small perturbation to the flow field, which allows each simulation to develop into a unique solution. 

Argonne and Convergent Science compared this approach—called the concurrent perturbation method (CPM)—to the traditional approach of simulating engine cycles consecutively. Figure 3 shows CCV results obtained using CPM compared to consecutively run cycles, which you can see match very well.[3] This means that with sufficient computational resources, you can predict CCV in the amount of time it takes to run a single engine cycle.

Figure 3: CCV results from consecutively run simulations (left) versus concurrently run simulations (right) for the same gasoline direct injection engine case.

The study described above, and the vast majority of all CCV simulation studies, use large eddy simulations (LES), because LES allows you to resolve some of the turbulence scales that lead to cyclic variability. Reynolds-Averaged Navier-Stokes (RANS), on the other hand, provides an ensemble average that theoretically damps out variations between cycles. At least this was the consensus among the engine modeling community until Riccardo Scarcelli, a Research Scientist at Argonne, noticed something strange.

“I was running consecutive engine cycle simulations to move away from the initial boundary conditions, and I realized that the cycles were never converged to an average solution—the cycles were never like the cycle before or the cycle after,” Riccardo says. “And that was strange because I was using RANS, not LES.”

Argonne and Convergent Science worked together to untangle this mystery, and they discovered that RANS is able to capture the deterministic component of CCV. RANS has long been the predominant turbulence model used in engine simulations, so how had this phenomenon gone unnoticed? In the past, most engine simulations modeled conventional combustion, which shows little cyclic variability in practice in either diesel or gasoline engines. The more complex combustion regimes simulated today—along with the use of finer grids and more accurate numerics—allow RANS to pick up on some of the cycle-to-cycle variations that these engines exhibit in the real world. While RANS will not provide as accurate a picture as LES, it can be a useful tool for capturing CCV trends. Additionally, RANS can be run on a much coarser mesh than LES, so you can get a faster turnaround on an inherently expensive problem, making CCV studies more practical for industry timelines.

Advancing Engine Technology

The gains in understanding and improved models developed during the Argonne and Convergent Science collaboration provide great benefit to the engine community. One of the primary missions of Argonne National Laboratory is to transfer knowledge and technology to industry. To that end, the models developed during the collaboration will continue to be implemented in CONVERGE, putting the technology in the hands of manufacturers, so they can create better engines. 

What can we look forward to in the future? There will continue to be a strong focus on developing high fidelity numerics, expanding and improving chemistry tools and mechanisms, integrating machine learning into the simulation process, and speeding up CFD simulations—establishing more efficient models and further increasing the scalability of CONVERGE to take advantage of the latest computational resources. Moreover, we can look forward to seeing the innovations of the last decade of collaboration incorporated into the engines of the next decade, bringing us closer to a clean transportation future.

In case you missed the other posts in the series, you can find them here:


[1] Torelli, R., Matusik, K.E., Nelli, K.C., Kastengren, A.L., Fezzaa, K., Powell, C.F., Som, S., Pei, Y., Tzanetakis, T., Zhang, Y., Traver, M., and Cleary, D.J., “Evaluation of Shot-to-Shot In-Nozzle Flow Variations in a Heavy-Duty Diesel Injector Using Real Nozzle Geometry,” SAE Paper 2018-01-0303, 2018. DOI: 10.4271/2018-01-0303

[2] Scarcelli, R., Zhang, A., Wallner, T., Som, S., Huang, J., Wijeyakulasuriya, S., Mao, Y., Zhu, X., and Lee, S.-Y., “Development of a Hybrid Lagrangian–Eulerian Model to Describe Spark-Ignition Processes at Engine-Like Turbulent Flow Conditions,” Journal of Engineering for Gas Turbines and Power, 141(9), 2019. DOI: 10.1115/1.4043397
[3] Probst, D., Wijeyakulasuriya, S., Pomraning, E., Kodavasal, J., Scarcelli, R., and Som, S., “Predicting Cycle-to-Cycle Variation With Concurrent Cycles In A Gasoline Direct Injected Engine With Large Eddy Simulations”, Journal of Energy Resources Technology, 142(4), 2020. DOI: 10.1115/1.4044766

► Exploring Offshore Wind Energy: Creating a Cleaner Future With CFD
  19 Oct, 2020

Renewable energy is being generated at unprecedented levels in the United States, and those levels will only continue to rise. The growth in renewable energy has been driven largely by wind power—over the last decade, wind energy generation in the U.S. has increased by 400%.[1] It’s easy to see why wind power is appealing. It’s sustainable, cost-effective, and offers the opportunity for domestic energy production. But, like all energy sources, wind power doesn’t come without drawbacks. Concerns have been raised about land use, noise, consequences to wildlife habitats, and the aesthetic impact of wind turbines on the landscape.[2]

However, there is a potential solution to many of these issues: what if you move wind turbines offshore? In addition to mitigating concerns over land use, noise, and visual impact, offshore wind turbines offer several other advantages. Compared to onshore, wind speeds offshore tend to be higher and steadier, leading to large gains in energy production. Also, in the U.S., a large portion of the population lives near the coasts or in the Great Lakes region, which minimizes the problems associated with transporting wind-generated electricity. But despite these advantages, only 0.03% of the U.S. wind-generating capacity in 2018 came from offshore wind plants.[1] So why hasn’t offshore wind energy become more prevalent? Well, one of the major challenges with offshore wind energy is a problem of engineering—wind turbine support structures must be designed to withstand the significant wind and wave loads offshore.

Today, there are computational tools that engineers can use to help design optimized support structures for offshore wind turbines. Namely, computational fluid dynamics (CFD) simulations can offer valuable insight into the interaction between waves and the wind turbine support structures. 

Two-phase CONVERGE simulation of a solitary wave breaking on a monopile. The water phase is shown, colored by horizontal velocity.

A CFD Case Study

Hannah Johlas, NSF Graduate Research Fellow

Hannah Johlas is an NSF Graduate Research Fellow in Dr. David Schmidt’s lab at the University of Massachusetts Amherst. Hannah uses CFD to study fixed-bottom offshore wind turbines at shallow-to-intermediate water depths (up to approximately 50 meters deep). Turbines located at these depths are of particular interest because of a phenomenon called breaking waves. As waves move from deeper to shallower water, the wavelength decreases and the wave height increases in a process called shoaling. If a wave becomes steep enough, the crest can overturn and topple forward, creating a breaking wave. Breaking waves can impart substantial forces onto turbine support structures, so if you’re planning to build a wind turbine in shallower water, it’s important to know if that turbine might experience breaking waves.

Hannah uses CONVERGE CFD software to predict if waves are likely to break for ocean characteristics common to potential offshore wind turbine sites along the east coast of the U.S. She also predicts the forces from breaking waves slamming into the wind turbine support structures. The results of the CONVERGE simulations are then used to evaluate the accuracy of simplified engineering models to determine which models best capture wave behavior and wave forces and, thus, which ones should be used when designing wind turbines.

CONVERGE Simulations

In this study, Hannah simulated 39 different wave trains in CONVERGE using a two-phase finite volume CFD model.[3] She leveraged the volume of fluid (VOF) method with the Piecewise Linear Interface Calculation scheme to capture the air-water interface. Additionally, automated meshing and Adaptive Mesh Refinement ensured accurate results while minimizing the time to set up and run the simulations.

“CONVERGE’s adaptive meshing helps simulate fluid interfaces at reduced computational cost,” Hannah says. “This feature is particularly useful for resolving the complex air-water interface in breaking wave simulations.”

Some of the breaking waves were then simulated slamming into monopiles, the large cylinders used as support structures for offshore wind turbines in shallow water. The results of these CONVERGE simulations were validated against experimental data before being used to evaluate the simplified engineering models.

Experimental setup at Oregon State University (left) and the corresponding CONVERGE simulation (right) of a wave breaking on a monopile.


Four common models for predicting whether a wave will break (McCowan, Miche, Battjes, and Goda) were assessed. The models were evaluated on how frequently they produced false positives (i.e., the model predicts a wave should break, but the simulated wave does not break) and false negatives (i.e., the model predicts a wave should not break, but the simulated wave does break), and on how well they predicted the steepness of the breaking waves. False positives are preferable to false negatives when designing a conservative support structure, since breaking wave loads are usually higher than those from non-breaking waves.
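Two of these criteria have simple closed forms. In their common textbook versions, McCowan predicts breaking when the height-to-depth ratio H/d exceeds roughly 0.78, and Miche when the steepness H/L exceeds 0.142*tanh(2*pi*d/L). A sketch with those commonly quoted coefficients (which are not necessarily the exact forms used in Hannah's study):

```python
import numpy as np

def mccowan_breaks(H, d, gamma=0.78):
    """McCowan-type criterion: the wave breaks when height/depth >= gamma."""
    return H / d >= gamma

def miche_breaks(H, L, d):
    """Miche-type criterion: the wave breaks when the steepness H/L exceeds
    0.142 * tanh(2*pi*d/L); the tanh factor makes it depth-limited in
    shallow water and steepness-limited in deep water."""
    return H / L >= 0.142 * np.tanh(2.0 * np.pi * d / L)
```

For example, a 5 m wave with a 100 m wavelength is far from breaking in deep water, but the Miche criterion flags it as breaking once the depth drops to a couple of meters.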

The study results indicate that none of the models performs well under all conditions; instead, which model you should use depends on the ocean characteristics at the site you’re considering.

“For sites with low seafloor slopes, the Goda model is the best at conservatively predicting whether a given wave will break,” Hannah says. “For higher seafloor slopes, the Battjes model is preferred.”

Four slam force models were also evaluated: Goda, Campbell-Weynberg, Cointe-Armand, and Wienke-Oumerachi. The slam models and the simulated CFD wave forces were compared for their peak total force, their force time history, and breaking wave shape. 

The results show that all four slam models are conservative (i.e., predict higher peak forces than the simulated waves) and assume the worst-case shape for the breaking wave during impact. The Goda slam model is the least conservative, while the Cointe-Armand and Wienke-Oumeraci slam models are the most conservative. All four models neglect the effects of runup on the monopiles, which was present in the CFD simulations. This could explain some of the discrepancies between the forces predicted by the engineering models and the CFD simulations.


Offshore wind energy is a promising technology for clean energy production, but to gain traction in the industry, there need to be sound engineering models to use when designing the turbines. Hannah's research provides guidelines on which engineering models should be used for a given set of ocean characteristics. Her results also highlight areas that could be improved upon.

“The slam force models don’t account for variety in wave shape at impact or for wave runup on the monopiles,” Hannah says. “Future studies should focus on incorporating these factors into the engineering models to improve their predictive capabilities.”

CONVERGE for Renewable Energy

CFD has a fundamental role to play in the development of renewable energy. CONVERGE’s combination of autonomous meshing, high-fidelity physical models, and ability to easily handle complex, moving geometries make it particularly well suited to the task. Whether it’s studying the interaction of waves with offshore turbines, optimizing the design of onshore wind farms, or predicting wind loads on solar panels, CONVERGE has the tools you need to help bring about the next generation of energy production.

Interested in learning more about Hannah’s research? Check out her paper here.


[1] Marcy, C., "U.S. renewable electricity generation has doubled since 2008," accessed on Nov 11, 2016.

[2] Center for Sustainable Systems, University of Michigan, "U.S. Renewable Energy Factsheet," accessed on Nov 11, 2016.

[3] Johlas, H.M., Hallowell, S., Xie, S., Lomonaco, P., Lackner, M.A., Arwade, S.A., Myers, A.T., and Schmidt, D.P., “Modeling Breaking Waves for Fixed-Bottom Support Structures for Offshore Wind Turbines,” ASME 2018 1st International Offshore Wind Technical Conference, IOWTC2018-1095, San Francisco, CA, United States, Nov 4–7, 2018. DOI: 10.1115/IOWTC2018-1095

► CONVERGE for Pumps & Compressors: The Engineering Solution for Design Optimization
  12 Oct, 2020

Across industries, manufacturers share many of the same goals: create quality products, boost productivity, and reduce expenses. In the pumps and compressors business, manufacturers must contend with the complexity of the machines themselves in order to reach these goals. Given the intricate geometries, moving components, and tight clearances between parts, designing pumps and compressors to be efficient and reliable is no trivial matter. 

Assessing the device's performance by building and testing a prototype can be time-consuming and costly. And when you're performing a design study, machining and switching out various components further compounds your expenses. There are also limitations on how many instruments you can place inside the device and where you can place them, which can make fully characterizing the machine difficult. New methods for testing and manufacturing can help streamline this process, but there remains room for alternative approaches.

Centrifugal pump

Computational fluid dynamics (CFD) offers significant advantages for designing pumps and compressors. Through CFD simulations, you can obtain valuable insight into the behavior of the fluid inside your machine and the interactions between the fluid and solid components—and CONVERGE CFD software is well suited for the task.

Designed to model three-dimensional fluid flows in systems with complex geometries and moving boundaries, CONVERGE is equipped to simulate any positive displacement or dynamic pump or compressor. And with a suite of advanced models, CONVERGE allows you to computationally study the physical phenomena that affect efficiency and reliability—such as surge, pressure pulsations, cavitation, and vibration—to design an optimal machine.

The Value of CONVERGE

CFD provides a unique opportunity to visualize the inner workings of your machine during operation, generating data on pressures, temperatures, velocities, and fluid properties without the limitations of physical measurements. The entire flow field can be analyzed with CFD, including areas that are difficult or impossible to measure experimentally. This additional data allows you to comprehensively characterize your pump or compressor and pinpoint areas for improvement.

Since CONVERGE leads the way in predictive CFD technology, you can analyze pump and compressor designs that have not yet been built and still be confident in your results. Compared to building and testing prototypes, simulations are fast and inexpensive, and altering a computer-modeled geometry is trivial. Iterating through designs virtually and building only the most promising candidates reduces the expenses associated with the design process. 

While three-dimensional CFD is fast compared to experimental methods, it is typically slower than one- or two-dimensional analysis tools, which are often incorporated into the design process. However, 1D and 2D methods are inherently limited in their ability to capture the 3D nature of physical flows, and thus can miss important flow phenomena that may negatively affect performance. 

CONVERGE drastically reduces the time required to set up a 3D pump or compressor simulation with its autonomous meshing capabilities. Creating a mesh by hand—which is standard practice in many CFD programs—can be a weeks-long process, particularly for cases with complex moving geometries such as pumps and compressors. With autonomous meshing, CONVERGE automatically generates an optimized Cartesian mesh based on a few simple user-defined parameters, effectively eliminating all user meshing time. 

In addition, the increased computational resources available today can greatly reduce the time requirements to run CFD simulations. CONVERGE is specifically designed to enable highly parallel simulations to run on many processors and demonstrates excellent scaling on thousands of cores. Additionally, Convergent Science partners with cloud service providers, who offer affordable on-demand access to the latest computing resources, making it simple to speed up your simulations.

Validation Cases

Accurately capturing real-world physical phenomena is critical to obtaining useful simulation results. CONVERGE features robust fluid-structure interaction (FSI) modeling capabilities. For example, you can simulate the interaction between the bulk flow and the valves to predict impact velocity, fatigue, and failure points. CONVERGE also features a conjugate heat transfer (CHT) model to resolve spatially varying surface temperature distributions, and a multi-phase model to study cavitation, oil splashing, and other free surface flows of interest. 

CONVERGE has been validated on numerous types of compressors and pumps [1-10], and we will discuss two common applications below.

Scroll Compressor

Scroll compressors are often used in air conditioning systems, and the major design goals for these machines today are reducing noise and improving efficiency. Scroll compressors consist of a stationary scroll and an orbiting scroll, which create a complex system that can be challenging to model. Some codes use a moving mesh to simulate moving boundaries, but this can introduce diffusive error that lowers the accuracy of your results. CONVERGE automatically generates a stationary mesh at each time-step to accommodate moving boundaries, which provides high numerical accuracy. In addition, CONVERGE employs a unique Cartesian cut-cell approach to perfectly represent your compressor geometry, no matter how complex. 

In this study [1], CONVERGE was used to simulate a scroll compressor with a deforming reed valve. An FSI model was used to capture the motion of the discharge reed valve. Figure 1 shows the CFD-predicted mass flow rate through the scroll compressor compared to experimental values. As you can see, there is good agreement between the simulation and experiment.

This method is particularly useful for the optimization phase of design, as parametric changes to the geometry can be easily incorporated. In addition, Adaptive Mesh Refinement (AMR) allows you to accurately capture the physical phenomena of interest while maintaining a reasonable computational expense.

Figure 1: Top: Representative cut-plane of a scroll compressor simulation with the mesh overlaid, colored by velocity. Bottom: Experimental (black square and triangles) and CONVERGE simulation (pink circles) results [1] for mass flow rate.

Screw Compressor

Next, we will look at a twin screw compressor. These compressors have two helical screws that rotate in opposite directions, and are frequently used in industrial, manufacturing, and refrigeration applications. A common challenge for designing screw compressors—and many other pumps and compressors—is the tight clearances between parts. Inevitably, there will be some leakage flow between chambers, which will affect the device’s performance.

CONVERGE offers several methods for capturing the fluid behavior in these small gaps. Using local mesh embedding and AMR, you can directly resolve the gaps. This method is highly accurate, but it can come with a high computational price tag. An alternative approach is to use one of CONVERGE’s gap models to account for the leakage flows without fully resolving the gaps. This method balances accuracy and time costs, so you can get the results you need when you need them.

Another factor that must be taken into account when designing screw compressors is thermal expansion. Heat transfer between the fluid and the solid walls means the clearances will vary down the length of the rotors. CONVERGE’s CHT model can capture the heat transfer between the solid and the fluid to account for this phenomenon.

This study [2] of a dry twin screw compressor employs a gap model to account for leakage flows, CHT modeling to capture heat transfer, and AMR to resolve large-scale flow structures. Mass flow rate, power, and discharge temperature were predicted with CONVERGE and compared to experimentally measured values. This study also investigated the effect of the base grid size on the accuracy of the results. In Figure 2, you can see there is good agreement between the experimental and simulated data, particularly for the most refined grid. The method used in this study provides accurate results in a turnaround time that is practical for engineering applications.

Figure 2: Top: Representative cut-plane of a dry twin screw compressor simulation with the mesh overlaid (colored by velocity). Bottom: Mass flow rate, power, and discharge temperature results [2] of the experiment (black squares) and the CONVERGE simulations (colored circles).


The benefits CONVERGE offers for designing pumps and compressors directly translate to a tangible competitive advantage. CFD benefits your business by reducing costs and enabling you to bring your product to market faster, and CONVERGE features tools to help you optimize your designs and produce high-quality products for your customers. To find out how CONVERGE can benefit you, contact us today!


[1] Rowinski, D., Pham, H.-D., and Brandt, T., “Modeling a Scroll Compressor Using a Cartesian Cut-Cell Based CFD Methodology with Automatic Adaptive Meshing,” 24th International Compressor Engineering Conference at Purdue, 1252, West Lafayette, IN, United States, Jul 9–12, 2018.

[2] Rowinski, D., Li, Y., and Bansal, K., “Investigations of Automatic Meshing in Modeling a Dry Twin Screw Compressor,” 24th International Compressor Engineering Conference at Purdue, 1528, West Lafayette, IN, United States, Jul 9–12, 2018.

[3] Rowinski, D., Sadique, J., Oliveira, S., and Real, M., “Modeling a Reciprocating Compressor Using a Two-Way Coupled Fluid and Solid Solver with Automatic Grid Generation and Adaptive Mesh Refinement,” 24th International Compressor Engineering Conference at Purdue, 1587, West Lafayette, IN, United States, Jul 9–12, 2018.

[4] Rowinski, D.H., Nikolov, A., and Brümmer, A., “Modeling a Dry Running Twin-Screw Expander using a Coupled Thermal-Fluid Solver with Automatic Mesh Generation,” 10th International Conference on Screw Machines, Dortmund, Germany, Sep 18–19, 2018.

[5] da Silva, L.R., Dutra, T., Deschamps, C.J., and Rodrigues, T.T., "A New Modeling Strategy to Simulate the Compression Cycle of Reciprocating Compressors," IIR Conference on Compressors, 0226, Bratislava, Slovakia, Sep 6–8, 2017. DOI: 10.18462/iir.compr.2017.0226

[6] Willie, J., “Analytical and Numerical Prediction of the Flow and Performance in a Claw Vacuum Pump,” 10th International Conference on Screw Machines, Dortmund, Germany, Sep 18–19, 2018. DOI: 10.1088/1757-899X/425/1/012026

[7] Jhun, C., Siedlecki, C., Xu, L., Lukic, B., Newswanger, R., Yeager, E., Reibson, J., Cysyk, J., Weiss, W., and Rosenberg, G., “Stress and Exposure Time on Von Willebrand Factor Degradation,” Artificial Organs, 2018. DOI: 10.1111/aor.13323

[8] Rowinski, D.H., “New Applications in Multi-Phase Flow Modeling With CONVERGE: Gerotor Pumps, Fuel Tank Sloshing, and Gear Churning,” 2018 CONVERGE User Conference–Europe, Bologna, Italy, Mar 19–23, 2018.

[9] Willie, J., “Simulation and Optimization of Flow Inside Claw Vacuum Pumps,” 2018 CONVERGE User Conference–Europe, Bologna, Italy, Mar 19–23, 2018.

[10] Scheib, C.M., Newswanger, R.K., Cysyk, J.P., Reibson, J.D., Lukic, B., Doxtater, B., Yeager, E., Leibich, P., Bletcher, K., Siedlecki, C.A., Weiss, W.J., Rosenberg, G., and Jhun, C., “LVAD Redesign: Pump Variation for Minimizing Thrombus Susceptibility Potential,” ASAIO 65th Annual Conference, San Francisco, CA, United States, Jun 26–29, 2019.

► Leveling Up Scaling with CONVERGE 3.0
  14 Aug, 2020

In a competitive market, predictive computational fluid dynamics (CFD) can give you an edge when it comes to product design and development. Not only can you predict problem areas in your product before manufacturing, but you can also optimize your design computationally and devote fewer resources to testing physical models. To get accurate predictions in CFD, you need to have high-resolution grid-convergent meshes, detailed physical models, high-order numerics, and robust chemistry—all of which are computationally expensive. Using simulation to expedite product design works only if you can run your simulations in a reasonable amount of time.

The introduction of high-performance computing (HPC) dramatically advanced our ability to obtain accurate results in shorter periods of time. By running simulations in parallel on multiple cores, we can now solve cases with millions of cells and complicated physics that otherwise would have taken a prohibitively long time to complete.

However, simply running cases on more cores doesn’t necessarily lead to a significant speedup. The speedup from HPC is only as good as your code’s parallelization algorithm. Hence, to get a faster turnaround on product development, we need to improve our parallelization algorithm.

Let’s Start With the Basics

Breaking a problem into parts and solving these parts simultaneously on multiple interlinked processors is known as parallelization. An ideally parallelized problem will scale inversely with the number of cores—twice the number of cores, half the runtime.

A common task in HPC is measuring the scalability, also referred to as scaling efficiency, of an application. Scalability is the study of how the simulation runtime is affected by changing the number of cores or processors. The scaling trend can be visualized by plotting the speedup against the number of cores.
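These two quantities fall straight out of the measured runtimes. A minimal sketch, with made-up runtimes rather than CONVERGE data:

```python
# Strong-scaling bookkeeping: speedup and efficiency relative to the
# smallest core count in the study. Runtimes below are illustrative only.

def speedup(t_base, t_n):
    """How many times faster the run is than the baseline run."""
    return t_base / t_n

def efficiency(t_base, cores_base, t_n, cores_n):
    """Fraction of the ideal speedup achieved at cores_n."""
    return speedup(t_base, t_n) / (cores_n / cores_base)

runtimes = {56: 12.0, 112: 6.2, 224: 3.3}  # cores -> wall-clock hours
base_cores = 56
for cores, t in runtimes.items():
    s = speedup(runtimes[base_cores], t)
    e = efficiency(runtimes[base_cores], base_cores, t, cores)
    print(f"{cores} cores: speedup {s:.2f}, efficiency {e:.0%}")
```

Ideal scaling means efficiency stays at 100% as cores increase; plotting the speedup column against the core count gives the scaling curve described above.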

How Does CONVERGE Parallelize?

Parallelization in CONVERGE 2.4 and Earlier

In CONVERGE versions 2.4 and earlier, parallelization is performed by partitioning the solution domain into parallel blocks, which are coarser than the base grid. CONVERGE distributes the blocks to the interlinked processors and then performs a load balance. Load balancing redistributes these parallel blocks such that each processor is assigned roughly the same number of cells.

This parallel-block technique works well unless a simulation contains high levels of embedding (regions in which the base grid is refined to a finer mesh) in the calculation domain. These cases lead to poor parallelization because the cells of a single parallel block cannot be split between multiple processors.

Figure 1 shows an example of parallel block load balancing for a test case in CONVERGE 2.4. The colors of the contour represent the cells owned by each processor. As you can see, the highly embedded region at the center is covered by only a few blocks, leading to a disproportionately high number of cells in those blocks. As a result, the cell distribution across processors is skewed. This phenomenon imposes a practical limit on the number of levels of embedding you can have in earlier versions of CONVERGE while still maintaining a reasonable load balance.

Figure 1: Parallel-block load balancing in CONVERGE 2.4.

Parallelization in CONVERGE 3.0

In CONVERGE 3.0, instead of generating parallel blocks, parallelization is accomplished via cell-based load balancing, i.e., on a cell-by-cell basis. Because each cell can belong to any processor, there is much more flexibility in how the cells are distributed, and we no longer need to worry about our embedding levels.

Figure 2 shows the cell distribution among processors using cell-based load balancing in CONVERGE 3.0 for the same test case shown in Figure 1. You can see that without the restrictions of the parallel blocks, the cells in the highly embedded region are divided between many processors, ensuring an (approximately) equal distribution of cells.
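The difference between the two schemes is easy to demonstrate with a toy load balancer. The block sizes here are invented, with one heavily embedded block holding most of the cells:

```python
# Contrast block-based vs cell-based load balancing (toy example).
# One "embedded" block holds most of the cells and cannot be split.
blocks = [100] * 7 + [1700]   # cells per parallel block; total = 2400
n_procs = 4

# Block-based: greedily hand each block to the least-loaded processor.
loads = [0] * n_procs
for block_cells in sorted(blocks, reverse=True):
    loads[loads.index(min(loads))] += block_cells
print("block-based:", sorted(loads))   # the refined block swamps one processor

# Cell-based: each cell can go to any processor, so the split is near-perfect.
total_cells = sum(blocks)
cell_loads = [total_cells // n_procs + (1 if i < total_cells % n_procs else 0)
              for i in range(n_procs)]
print("cell-based: ", cell_loads)
```

With blocks, the best achievable balance is bounded below by the largest block; distributing cell by cell removes that bound, which is exactly the flexibility CONVERGE 3.0 gains.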

Figure 2: Cell-based load balancing in CONVERGE 3.0.

The cell-based load balancing technique demonstrates significant improvements in scaling, even for large numbers of cores. And unlike previous versions, the load balancing itself in CONVERGE 3.0 is performed in parallel, accelerating the simulation start-up.

Case Studies

In order to see how well the cell-based parallelization works, we have performed strong scaling studies for a number of cases. The term strong scaling means that we ran the exact same simulation (i.e., we kept the number of cells, setup parameters, etc. constant) on different core counts.

SI8 PFI Engine Case

Figure 3 shows scaling results for a typical SI8 port fuel injection (PFI) engine case in CONVERGE 3.0. The case was run for one full engine cycle, and the core count varied from 56 to 448. The plot compares the speedup obtained running the case in CONVERGE 3.0 with the ideal speedup. With enough CPU resources, in this case 448 cores, you can simulate one engine cycle with detailed chemistry in under two hours—which is three times faster than CONVERGE 2.4!

Cores Time (h) Speedup Efficiency Cells per core Engine cycles per day
56 11.51 1 100% 12,500 2.1
112 5.75 2 100% 6,200 4.2
224 3.08 3.74 93% 3,100 7.8
448 1.91 6.03 75% 1,600 12.5
Figure 3: CONVERGE 3.0 scaling results for an SI8 PFI engine simulation run on an in-house cluster. On 448 cores, CONVERGE 3.0 scales with 75% efficiency, and you can simulate more than 12 engine cycles in a single day. Please note that the parallelization profiles will differ from one case to another.

Sandia Flame D Case

If the speedup of the SI8 PFI engine simulation impressed you, then just wait until you see the scaling study for the Sandia Flame D case! Figure 4 shows the results of a strong scaling study performed for the Sandia Flame D case, in which we simulated a methane jet flame using 170 million cells. The case was run on the Blue Waters supercomputer at the National Center for Supercomputing Applications (NCSA), and the core counts varied from 500 to 8,000. CONVERGE 3.0 demonstrates impressive near-linear scaling even on thousands of cores.

Figure 4: CONVERGE 3.0 scaling results for a combusting turbulent partially premixed flame (Sandia Flame D) case run on the Blue Waters supercomputer at the National Center for Supercomputing Applications [1]. On 8,000 cores, CONVERGE 3.0 scales with 95% efficiency.


Although earlier versions of CONVERGE show good runtime improvements with increasing core counts, speedup is limited for cases with significant local embeddings. CONVERGE 3.0 has been specifically developed to run efficiently on modern hardware configurations that have a high number of cores per node.

With CONVERGE 3.0, we have observed continued speedup in simulations with as few as approximately 1,500 cells per core. With its improved scaling efficiency, this new version empowers you to obtain simulation results quickly, even for massive cases, so you can reduce the time it takes to bring your product to market.

Contact us to learn how you can accelerate your simulations with CONVERGE 3.0.

[1] The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. The NCSA Industry Program is the largest Industrial HPC outreach in the world, and it has been advancing one third of the Fortune 50® for more than 30 years by bringing industry, researchers, and students together to solve grand computational problems at rapid speed and scale. The CONVERGE simulations were run on NCSA’s Blue Waters supercomputer, which is one of the fastest supercomputers on a university campus. Blue Waters is supported by the National Science Foundation through awards ACI-0725070 and ACI-1238993.

► The Collaboration Effect: A Decade of Innovation
    5 Aug, 2020

From the Argonne National Laboratory + Convergent Science Blog Series

The world is waiting for us to develop the tools needed to design new engine architectures, new concepts, with a finer control over the combustion process. If we can continue to make the progress we’ve achieved over the last ten years, I think society and the environment will continue to reap large rewards.

—Dr. Don Hillebrand, Division Director of the Energy Systems Division, Argonne National Laboratory

The year 2020 marks the ten-year anniversary of a fruitful collaboration between Convergent Science and the U.S. Department of Energy’s Argonne National Laboratory. Over the years, the collaboration has facilitated exciting advances in engine technology, high-performance computing and machine learning, computational methods, physical models, gas turbine and detonation engine simulations, and more. Many engineers at both Argonne and Convergent Science have contributed to these projects, but the collaboration started with one individual.

The Origin Story

Dr. Sibendu Som

Dr. Sibendu Som was introduced to CONVERGE before it was even called CONVERGE. He was a graduate student at the University of Illinois at Chicago (UIC), and in the summer of 2006 Sibendu participated in an industry internship. He worked with engineers on a computational fluid dynamics (CFD) team who were using an internal version of a code in development by a small company named Convergent Science. When Sibendu’s internship ended, he went back to UIC and continued to work with the same CFD code—at the time called MOSES.

For his thesis, Sibendu focused on improving spray models, for which he was obtaining experimental data from Argonne. Spray modeling happens to be a specialty of Dr. Kelly Senecal, Co-Owner of Convergent Science, so Kelly assisted Sibendu in his endeavors.

“Kelly helped me quite a bit,” Sibendu says, “so I actually invited him to be a part of my thesis defense committee.”

Doug Longman and Kelly Senecal

After completing his Ph.D.—and thoroughly impressing Kelly and the rest of his committee—Sibendu became a postdoc at Argonne National Laboratory in the research group of Mr. Doug Longman, Manager of Engine Research. At the time, there was only a little CFD work being done at Argonne in the combustion and spray area, so there was an opportunity to bring in a new code. Having used CONVERGE during his thesis, Sibendu was a proponent of using the software at Argonne.

Partnering with a renowned national laboratory was a big opportunity for Convergent Science. In 2010, Convergent Science had only recently switched from being a CFD consulting company to a CFD software company, and working with Argonne lent credibility to their code. Argonne also provided access to computational resources on a scale that a small company simply could not afford on their own.

“It was also a relationship thing,” Kelly says. “The partnership just started off on the right foot, and we were really happy to work with the Argonne research team.”

A Mutually Beneficial Partnership

Government and private industry have a long history of collaboration in the United States—and for good reason. These relationships are not only beneficial for both parties, but also for taxpayers. The mission of national laboratories is not to compete with industry, but to help support and enhance the missions of private companies for the benefit of the country.

“The national lab system in the United States is a national treasure,” says Dr. Don Hillebrand. “Our job is to look at big science, big physics, big chemistry, big engineering, and solve challenging problems that confront us. We make sure that knowledge or tools or technology solutions get transferred to industrial groups, who develop jobs and products and make the country competitive.”

National laboratories provide access to resources, including advanced technology and funding, that private companies are often unable to obtain on their own. For Convergent Science in particular, access to Argonne’s computational resources made it possible to test CONVERGE on large numbers of cores and to work on improving the scalability for clients who want to run highly parallel simulations. Getting access to these types of resources on the ground floor provides a huge advantage to industry partners.

Theta Supercomputer at Argonne National Laboratory

Another important function of national labs is to investigate long-term or risky areas of research. Private companies survive on the profits they make, and investing in research that does not pay off in the end can be damaging to their business. In the same vein, companies tend to focus on products that they can bring to market relatively quickly to make sure they have a consistent revenue stream. However, long-term and riskier research is critical for developing innovative technologies that have the potential to transform our lives.

“The government drives a lot of research in cutting-edge technology,” says Dr. Dan Lee, Co-Owner of Convergent Science. “They also have advanced facilities and teams of expert engineers doing fundamental research for projects that are potentially going to shape the future.”

Of course, to have an impact on society, the technology developed in national laboratories must end up in the hands of consumers. Thus the end-goal of research and development at government institutions is to transfer that technology to industry.

Ann Schlenker, Director of the Center for Transportation Research at Argonne, spent more than 30 years in industry before transitioning to Argonne. That experience gave her a deep understanding of the synergistic relationship between government and private industry.

“You need to be extremely astute at listening to the voice of the customer. And that means understanding what the challenges are, where the hurdles and difficulties are stressing the system and how best to optimize processes. Because if you can do that, you can develop timely solutions,” Ann says.

Partnering with industry helps ensure that the research at the national labs is relevant, timely, and impactful. This is one way in which these relationships benefit the taxpayer—the results of government research directly address the needs of consumers and help make the country competitive on the world stage.

Delivering Results

The collaboration between Argonne and Convergent Science has resulted in significant advances for the modeling community and the transportation industry. While the details of this research will be discussed in depth in upcoming blog posts, the projects from the past decade generally fall into two categories: advancing simulation for propulsion technologies and improving the scalability of CONVERGE on high-performance computing architectures.

Many projects have focused on modeling processes relevant to the internal combustion engine, such as studying fuel injection and sprays using experimental data from Argonne’s Advanced Photon Source, implementing state-of-the-art nozzle flow models in CONVERGE, simulating ignition, and investigating cycle-to-cycle variation.

Other key areas of focus have been modeling challenging phenomena in gas turbine combustors and breaking ground on simulating rotating detonation engines. Enhancing the scalability of CONVERGE has made it possible to run larger, more complex cases and to obtain more accurate, more relevant results from these simulations.

The overarching goal for these projects continues to be to create better models and establish techniques that will be instrumental in developing the transportation technologies of the future. Perhaps Ann sums it up best:

The day of learning is not over for combustion processes. It’s germane to our gross domestic product for U.S. economic vitality. Our transportation and combustion researchers and industry engineers work side-by-side to achieve the societal goals of better fuel economy and lower emissions. And these strong collaborations and this visionary work allow us to move fully forward with model-based system engineering, with high-fidelity, predictive capabilities that we trust.

The collaboration between Convergent Science and Argonne National Laboratory will certainly help propel us into the future. Learn more about the research performed during this collaboration in upcoming blog posts!

In case you missed the other posts in the series, you can find them on the Convergent Science blog.

► Models On Top of Models: Thickened Flames in CONVERGE
    2 Jul, 2020

Any CONVERGE user knows that our solver includes a lot of physical models. A lot of physical models! How many combinations exist? How many different ways can you set up a simulation? That’s harder to answer than you might think. There might be N turbulence models and M combustion models, but the total set of combinations isn’t N*M.

Why not? In some cases, our developers haven't implemented the combination yet! The ECFM and ECFM3Z combustion models, for example, could not be combined with a large eddy simulation (LES) turbulence model until CONVERGE version 3.0.11. We're adding more features all the time. One interesting example is the thickened flame model (TFM).

The name is descriptive, of course: TFM is designed to thicken the flame. If you’re not a combustion researcher, this notion may not be intuitive. A real flame is thin (in an internal combustion engine environment, tens or hundreds of microns). Why would we want to design a model that intentionally deviates from this reality? As is often the case with physical modeling, the answer lies in what we’re trying to study.

CONVERGE is often used to study the engineering operability of a premixed internal combustion or gas turbine engine. This requires accurate simulation of macroscopic combustion dynamics (flame properties), including the laminar flamespeed. A large eddy simulation of such an engine, however, might use cells on the order of 0.1 mm.

The problem may now be clear. The flame is much too thin to resolve on the grid we want to use. In fact, a detailed chemical kinetics solver like SAGE requires five or more cells across the flame in order to reproduce the correct laminar flamespeed. An under-resolved flame results in an underprediction of laminar flamespeed. Of course, we could simply decrease the cell size by an order of magnitude, but that makes for an impractical engineering calculation.

The thickened flame model is designed to solve this problem. The basic idea of Colin et al.1 was to simulate a flame that is thicker than the physical one but reproduces the same laminar flamespeed. From simple scaling analysis, this can be achieved by increasing the thermal and species diffusivities by a factor of F while reducing the reaction rate by the same factor. Because the flame thickening decreases the wrinkling of the flame front, and thus its surface area, an efficiency factor E is introduced so that the correct turbulent flamespeed is recovered.
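The scaling argument can be checked with the classic laminar-flame estimates s_L ∝ √(D·ω̇) and δ ∝ D/s_L. The sketch below uses nominal placeholder numbers and dropped proportionality constants; it is an illustration of the scaling, not CONVERGE's implementation:

```python
import math

def flame_scales(D, omega):
    """Laminar flame speed and thickness from simple scaling:
    s_L ~ sqrt(D * omega), delta ~ D / s_L (proportionality constants dropped)."""
    s_L = math.sqrt(D * omega)
    delta = D / s_L
    return s_L, delta

D, omega, F = 1.0e-5, 1.0e6, 10.0   # nominal diffusivity, reaction rate, thickening factor

s0, d0 = flame_scales(D, omega)            # physical flame
sF, dF = flame_scales(F * D, omega / F)    # TFM: diffusivity up by F, reaction rate down by F

print(round(sF / s0, 6))  # laminar flamespeed is preserved (ratio 1)
print(round(dF / d0, 6))  # flame is F times thicker, so it spans F times more cells
```

The thickened flame can now be resolved on a coarse LES grid while the flamespeed, the quantity that drives the macroscopic combustion dynamics, is left unchanged.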

The combination of these scaling factors allows CONVERGE to recover the correct flamespeed without actually resolving the flame itself. CONVERGE also calculates a flame sensor function so that these scaling factors are applied only at the flame front. By using TFM with SAGE detailed chemistry, a premixed combustion engineering simulation with LES becomes practical.

Hasti et al.2 evaluated one such case using CONVERGE with LES, SAGE, and TFM. This work examined the Volvo bluff-body augmentor test rig, shown below, which has been subjected to extensive study. At the conditions of interest, the flame thickness is estimated to be about 1 mm, and so SAGE without TFM should require a grid not coarser than 0.2 mm to accurately simulate combustion.

Figure 1: Volvo bluff-body augmentor test rig3.

With TFM, Hasti et al. show that CONVERGE is able to generate a grid-converged result at a minimum grid spacing of 0.3125 mm. We might expect such a calculation to take only about 40% as many core hours as a simulation with a minimum grid spacing of 0.25 mm.
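Both the 0.2 mm requirement quoted earlier and the roughly 40% cost figure follow from back-of-the-envelope arithmetic, assuming the usual rule of thumb that explicit LES cost scales as the inverse fourth power of grid spacing (three powers from the mesh, one from the CFL-limited time step):

```python
# Rule of thumb from the text: SAGE needs >= 5 cells across the flame.
flame_thickness_mm = 1.0                 # estimated flame thickness for the Volvo rig
required_dx_mm = flame_thickness_mm / 5  # grid needed without TFM
print(required_dx_mm)                    # 0.2 mm

# Core-hour ratio between the TFM grid (0.3125 mm) and a finer 0.25 mm grid,
# assuming cost scales as dx**-4 (3 space dimensions + CFL-limited time step).
cost_ratio = (0.25 / 0.3125) ** 4
print(round(cost_ratio, 4))              # ~0.41, i.e., about 40% of the core hours
```

The exponent of 4 is an assumption about the solver's scaling; adaptive time stepping or load imbalance would shift the exact ratio, but the order of the savings holds.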

Figure 2: Representative instantaneous temperature field of the bluff-body combustor.
Base grid sizes of 2 mm (above) and 3 mm (below) correspond to minimum cell sizes of 0.25 mm and 0.375 mm, respectively.
Figure 3: Representative instantaneous velocity magnitude field of the bluff-body combustor.
Base grid sizes of 2 mm (above) and 3 mm (below) correspond to minimum cell sizes of 0.25 mm and 0.375 mm, respectively.
Figure 4: Representative instantaneous vorticity magnitude field of the bluff-body combustor.
Base grid sizes of 2 mm (above) and 3 mm (below) correspond to minimum cell sizes of 0.25 mm and 0.375 mm, respectively.
Figure 5: Transverse mean temperature profiles at x/D = 3.75, 8.75, and 13.75.
Base grid sizes of 2 mm, 2.5 mm, and 3 mm correspond to minimum cell sizes of 0.25 mm, 0.3125 mm, and 0.375 mm, respectively.

Understanding the topic of study, the underlying physics, and the way those physics are affected by our choice of physical models is critical to performing accurate simulations. If you want to combine the power of the SAGE detailed chemical kinetics solver with the transient behavior of an LES turbulence model to understand the behavior of a practical engine, and to do so without bankrupting your IT department, TFM is the enabling technology.

Want to learn more about thickened flame modeling in CONVERGE? Check out these TFM case studies from recent CONVERGE User Conferences (1, 2, 3) and keep an eye out for future Premixed Combustion Modeling advanced training sessions.


[1] Colin, O., Ducros, F., Veynante, D., and Poinsot, T., “A thickened flame model for large eddy simulations of turbulent premixed combustion,” Physics of Fluids, 12(1843), 2000. DOI: 10.1063/1.870436
[2] Hasti, V.R., Liu, S., Kumar, G., and Gore, J.P., “Comparison of Premixed Flamelet Generated Manifold Model and Thickened Flame Model for Bluff Body Stabilized Turbulent Premixed Flame,” 2018 AIAA Aerospace Sciences Meeting, AIAA 2018-0150, Kissimmee, Florida, January 8-12, 2018. DOI: 10.2514/6.2018-0150
[3] Sjunnesson, A., Henrikson, P., and Lofstrom, C., “CARS measurements and visualizations of reacting flows in a bluff body stabilized flame,” 28th Joint Propulsion Conference and Exhibit, AIAA 92-3650, Nashville, Tennessee, July 6-8, 1992. DOI: 10.2514/6.1992-3650

Numerical Simulations using FLOW-3D top

► The Solid Proof: The Latest in Solidification Modeling
  16 Nov, 2020

One of the most exciting new developments offered with the release of FLOW-3D CAST v5.1 is the new chemistry-based aluminum-silicon and aluminum-copper alloy solidification model. This new model allows users to predict the microstructure and mechanical properties of as-cast and heat-treated castings. Experimental data were used to verify and validate our model predictions, as detailed in Modeling and Simulation of Microstructures and Mechanical Properties of AlSi- and AlCu-Based Alloys, a publication that recently won the Best Paper Award from the 2020 American Foundry Society Aluminum & Light Metals Division.

Test bars - FLOW-3D CAST Solidification Model

The paper highlights a casting study done in collaboration with The University of Alabama at Birmingham, in which A356 and A206 commercial ingots were used to create a wedge-shaped pattern using the lost foam method. For more detail about the study, check out our recent webinar on the new solidification model.

What does the new model do exactly?

FLOW-3D CAST's new solidification model accurately predicts grain size and secondary dendritic arm spacing (SDAS) by tracking the evolution of the alloy's chemical elements and reactions. Empirical relationships are then used to relate the resulting microstructure to mechanical properties. This calculation also allows us to output a non-dimensional Niyama criterion for improved porosity prediction.
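To illustrate the kind of empirical chain involved, here is a hypothetical sketch: a power-law SDAS-versus-cooling-rate correlation feeding a strength correlation. The coefficients A, n, a, and b below are placeholders for illustration only, not FLOW-3D CAST's calibrated, chemistry-based relationships:

```python
def sdas_um(cooling_rate_K_s, A=40.0, n=0.33):
    """Illustrative power-law correlation: SDAS = A * (cooling rate)^-n, in microns.
    A and n are placeholder values; real models use alloy-specific calibrations."""
    return A * cooling_rate_K_s ** -n

def uts_MPa(sdas, a=300.0, b=250.0):
    """Illustrative property correlation: finer SDAS -> higher tensile strength."""
    return a + b / sdas ** 0.5

# Faster cooling -> finer dendrite arms -> stronger casting.
for rate in (0.1, 1.0, 10.0):  # cooling rates in K/s
    s = sdas_um(rate)
    print(f"cooling rate {rate:5.1f} K/s -> SDAS {s:6.1f} um -> UTS {uts_MPa(s):5.1f} MPa")
```

The point of the sketch is the structure of the calculation: once the cooling history is right, microstructure follows, and mechanical properties follow from microstructure.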

Here we highlight some of our results from the aluminum silicon A356 samples. The data is very compelling!

First and foremost, accurate cooling curves are foundational to the study of microstructure. The first step was to establish that our model correctly matched thermocouple data extracted from experiments.

With this solid foundation, and with detailed knowledge of the alloy composition, an accurate prediction of the secondary dendritic arm spacing then leads to an accurate prediction of mechanical properties.

Accurate input data and a solid handle on pouring and cooling parameters are always the necessary foundation that can help us obtain accurate predictions of microstructure, porosity and mechanical properties.

Alloy composition
Cooling curve FLOW-3D CAST

Verification and Validation

We see an excellent correlation between the experimental data and the outputs of the solidification model, as shown in the following plots for SDAS, ultimate tensile strength, and elongation.

Here at Flow Science we deliver innovative developments that help our customers conceptualize, create, and analyze their casting designs with confidence. If you would like more information on the new solidification model or a personal demonstration of FLOW-3D CAST v5.1, please reach out to

Thank you and stay tuned for our next post!

Ajit D'Brass

Metal Casting Engineer at Flow Science

► Simulation of Joule Heating-based Core Drying
    4 Nov, 2020

This article was contributed by Eric Riedel 1,2

1Otto-von-Guericke-University Magdeburg, Institute of Manufacturing Technology and Quality Management, Germany

2Soplain GmbH, Germany

Modern casting production requires the use of sand cores. Growing environmental awareness as well as tougher regulations have supported the development of inorganic, emission-free binder systems, in which the cores are dried and cured by heat. In what is known as the hot box process, heat is generated in the core boxes and transferred to the sand binder mixture. However, the hot box process exhibits two major technological disadvantages.

The first disadvantage is the very low thermal conductivity of quartz sand, about 1 W/(m·K). Because heat must be transferred from the outside in, the process is time-consuming and can lead to shell formation and thus quality issues. For this reason, very high core-box temperatures of up to 523.15 K or more are applied to accelerate the heat transfer. The second disadvantage of the hot box process is that the core drying itself cannot be directly measured and digitized in real time; it can only be measured passively by recording peripheral parameters at the core box.
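The consequence of that low conductivity can be estimated from the diffusive time scale t ≈ L²/α with α = k/(ρc). The density and specific heat below are assumed round numbers for a bulk sand core, not measured values:

```python
k = 1.0        # thermal conductivity of quartz sand, W/(m K) (from the text)
rho = 1500.0   # bulk density of the sand core, kg/m^3 (assumed)
c = 800.0      # specific heat, J/(kg K) (assumed)

alpha = k / (rho * c)           # thermal diffusivity, m^2/s
for L_mm in (5.0, 10.0, 20.0):  # conduction depth from core-box wall to core center
    L = L_mm / 1000.0
    t = L ** 2 / alpha          # characteristic outside-in heating time
    print(f"L = {L_mm:4.1f} mm -> t ~ {t:6.0f} s")
```

Even a modest 20 mm conduction depth gives a time scale of several minutes, which is why hot-box cycle times are dominated by conduction through the sand rather than by the core-box temperature.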

The ACS Process

The new, patented Advanced Core Solution (ACS) process aims for time- and energy-efficient core drying and curing. The ACS process uses a property common to all inorganic binder systems: because they are water-based, they are electrically conductive. The key factor is the development of electrically conductive core box materials, whose conductivity can be adjusted to that of the sand-binder mixture. When a voltage is applied, the electrical current flows uniformly through the core box and sand-binder mixture, as demonstrated in Figure 1. Put more precisely, current flows through the electrically conductive binder bridges between the sand grains. Due to its inherent electrical resistance, the sand core heats uniformly without shell formation. The scientific principle behind it, called Joule heating, is based on Joule’s first law. In the series process, the electrically conductive core box heats up through Joule heating as well, additionally accelerating the drying process. This is a further important advantage, since for the ACS process, no complicated heating devices within the core boxes are required anymore, thus simplifying core box construction.
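Joule's first law makes the appeal of volumetric heating easy to quantify: the heating rate per unit volume is q = σE², so the core temperature rises at q/(ρc) everywhere at once. The property values below are rough assumptions for illustration, not measured ACS data:

```python
# Uniform Joule heating of a sand core (all values assumed, for illustration only).
sigma = 5.0e-3     # effective electrical conductivity of the sand-binder mix, S/m
E = 1.0e4          # applied electric field, V/m
rho = 1500.0       # bulk density of the sand core, kg/m^3
c = 900.0          # specific heat, J/(kg K)

q = sigma * E ** 2       # Joule's first law per unit volume: q = sigma * E^2, W/m^3
dT_dt = q / (rho * c)    # uniform heating rate, K/s
t_to_boiling = (373.15 - 293.15) / dT_dt
print(f"heating rate: {dT_dt:.3f} K/s, time to 373.15 K: {t_to_boiling:.0f} s")
```

Because the heating is volumetric, this rate does not depend on the core's size or shape, whereas in the hot box process the conduction path from the wall sets the time scale.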

With this new process, and for the first time, heat is generated directly where it is needed: within the core. Since the necessary heat is generated through the homogeneously-distributed binder and transferred to the adjoining sand, the low thermal conductivity of the quartz sand is no longer a limiting process factor. Additionally, for the first time, the recording of drying-specific electrical parameters allows for comprehensive real-time monitoring of the drying process itself. Using FLOW-3D, the ACS process can be simulated, fulfilling an important criterion for industrial application, including the quantification of process benefits.

Sand core joule heating setup
Figure 1: Basic comparison of the current flows: a) without, b) with adjustment of the electrical conductivity of the core box to that of the sand-binder mixture.

Model Description

The modeling is based on the work of Starobin et al. [1], but extends it with the Electro-mechanics model in FLOW-3D. Activating the electric potential (iepot = 1) takes electro-thermal effects, i.e., Joule heating (iethermo = 1), into account. Model details can be taken from [2]. Via the electrical properties of the components, the core box is assigned a dynamic potential (ioepotm = 1) with an electrical conductivity (oecond) and, if necessary, a dielectric potential (odiel); the same applies to the sand core in order to account for the electrical conductivity of the entire sand-binder mixture. The electrodes are assigned a fixed potential (ioepotm = 0), an electrical conductivity, and a negative electric potential (oepot) for one electrode and a positive electric potential for the other. Since a temperature-dependent definition of the electrical conductivity is not yet possible, we worked with restart simulations and active simulation control. This way, the average electrical conductivities of the respective temperature ranges could be considered, i.e., 293.15 to 303.15 K, 303.15 to 313.15 K, and so on. The following investigations focus on one-fluid simulation, i.e., purging was not considered.
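The restart workaround can be sketched as follows: since a temperature-dependent conductivity cannot yet be defined directly, each restart uses the band-averaged conductivity for its 10 K temperature range as the constant oecond input. The σ(T) curve below is a hypothetical placeholder, not the measured mixture data:

```python
def band_average_conductivity(sigma_of_T, T_lo, T_hi, samples=101):
    """Average an (assumed) conductivity curve over one 10 K restart band."""
    step = (T_hi - T_lo) / (samples - 1)
    return sum(sigma_of_T(T_lo + i * step) for i in range(samples)) / samples

# Hypothetical sigma(T): conductivity rising linearly with temperature (placeholder).
sigma_of_T = lambda T: 1.0e-3 * (1.0 + 0.02 * (T - 293.15))

# One restart simulation per band: 293.15-303.15 K, 303.15-313.15 K, ...
bands = [(293.15 + 10 * i, 303.15 + 10 * i) for i in range(4)]
for T_lo, T_hi in bands:
    avg = band_average_conductivity(sigma_of_T, T_lo, T_hi)
    print(f"{T_lo:.2f}-{T_hi:.2f} K: constant conductivity for this restart = {avg:.4e} S/m")
```

Active simulation control then triggers the next restart when the core reaches the upper edge of the current band, so the piecewise-constant values approximate the true temperature dependence.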


In the first step, a commercially available inorganic sand-binder mixture was used for the experimental investigation and validation of the simulation model to investigate heating and temperature-dependent electrical conductivity. The time required to reach 373.15 K as well as the power and energy input into the sand core were measured. Based on the experimental analysis and results, a basic simulation model was created. For reasons of discretion, some of the underlying results are presented only qualitatively. The results are demonstrated in Figure 2, showing high accordance between the measured values and the simulation.

Experimental vs. simulation results core drying
Figure 2: Comparison of experimental and simulation results. The measuring points mark the reaching of the specified target temperatures in steps of ten kelvin, starting at 293.15 K: a) temperature-averaged power input (average deviation from measured values: 0.95%), b) energy input (average deviation from measured values: 4.8%).

Based on the validated results, the ACS process and simulation are shown using a simple but high-volume geometry, which illustrates the fundamentals and high potential of the advanced ACS development compared to the classic hot box process. The geometric alignment can be taken from Figure 3. Three cases were simulated: (1) a classic hot box process; (2) an ACS cold start process with cold tool (293.15 K); and (3) an ACS series process accounting for the tool heating due to the Joule effect. All three-dimensional models were discretized with a cell size of 1 mm. Table 1 sums up the most important details of the calculated scenarios.

Favorizing core heating drying
Figure 3: Geometric alignment of simulation setup for conductive core heating and drying.
Core box properties table
Table 1: Overview of calculated core drying cases. Values are derived from real experiments.

Results and Discussion

Figure 4 shows the temperature and moisture development for the classic hot box process, clearly showing the outside-in heat transfer and corresponding moisture reduction. The simulation was carried out for 120 s, with moisture still present in the sand core center at the end of the simulation; in practice, cycle time targets force an early termination of the drying process, with shell formation and residual moisture in the core center. In contrast, the ACS cold start simulation (corresponding to the first shot when the core shooting machine is started up), shown in Figure 5, demonstrates the basic principle of the new process: the uniform heating of the core leads to an inside-out moisture transport. Furthermore, the sand core heats up faster than the core box. In the series process, the core box also reaches temperatures greater than 373.15 K through Joule heating, resulting in a mixture of hot box and ACS processes that further accelerates the drying. The results of the ACS series simulation are summarized in Figure 6. While the sand core is not fully cured even after 120 seconds in the hot box process, the ACS process dries the core completely after 72 or 45 seconds.

Despite the significantly lower core box temperature, the new process shows a significant acceleration in core drying and the great potential of the new approach. One major advantage is a massive reduction in cycle times, along with the associated energy requirements and CO2 emissions. The energy introduced into the sand core can be measured during the real process as well as predicted in advance using simulation, which is another great advantage in terms of process design and transparency. Additionally, the simulation clearly illustrates the geometry-independent homogeneous heating of the test specimen, which means that moisture is not trapped in the core center and shell formation is avoided.

All in all, the new process enables a significant increase in the efficiency of the process and in the quality of the inorganically bound sand cores. The process diagrams of all three cases are summarized in Figure 7.

Summary and Outlook

The demonstrated modeling shows the capability of FLOW-3D to simulate the new core drying process accurately, as well as the potential of the new process for more efficient core drying and curing compared to the conventional hot box process. Even though the new simulation setup is still in the development stage and needs more real-case experiments, it already allows for great insights into the drying behavior, with very good agreement with experimental measurements so far.

Presently, within the simulation, the electrical conductivity of the sand-binder mixture is assigned to the quartz sand, which in reality is not electrically conductive; the assigned value corresponds to the electrical conductivity of the real, measured sand-binder mixtures. This way, the electrical conductivity of the entire sand-binder mixture is accounted for in the simulation and fits the experimental results. For more precise simulations, the ability to define a temperature-dependent electrical conductivity for the solid core (i.e., the sand-binder mixture) would be helpful in order to take the actual conductivity curve into account. Further steps will concentrate on two-fluid simulation models. Initial trials show the basic feasibility with good results.

Despite the steps still to be taken, it can be said that the ability to simulate the ACS process with FLOW-3D marks an important milestone in the holistic establishment of a Joule heating-based core drying process and shows the benefits of this process for inorganic sand core manufacturing.


[1] Starobin, C.W. Hirt, H. Lang, M. Todte, "Core Drying Simulation and Validation," AFS Proceedings, Schaumburg, IL, USA, 2011.
[2] FLOW-3D from Flow Science, Inc., Santa Fe, NM, USA.
► Flow Science Receives the 2020 Flying 40
    4 Nov, 2020

Flow Science named one of the fastest growing technology companies in New Mexico for the fifth year running.

Santa Fe, NM, November 4, 2020 – Flow Science has been named a New Mexico Technology Flying 40 recipient for the fifth consecutive year. The New Mexico Technology Flying 40 awards recognize the 40 fastest growing technology companies in New Mexico each year.

“It is an honor to be recognized for the fifth year in a row by the Flying 40. As Flow Science continues to grow and expand its operation in New Mexico, we strive to appear on this list for years to come,” said Flow Science President & CEO, Dr. Amir Isfahani.

These awards are given out by the Flying 40 program based on three revenue categories: the top revenue growth companies with revenues between $1 million and $10 million, the top revenue growth companies with revenues of more than $10 million, and the top revenue-producing technology companies irrespective of revenue growth.

Sherman McCorkle, President and CEO of the Sandia Science & Technology Park Development Corporation (SSTPDC), which hosted the program in 2020, stated, “As the New Mexico economy enters an impressive growth phase, I think it is important to recognize not only the companies that laid the foundation, but also those who are leading recovery. With aggregate revenues close to $1 billion, these companies deserve our recognition. SSTPDC is excited to carry on the legacy of the program.”

More information about the Flying 40 can be found online at

About Flow Science

Flow Science, Inc. is a privately-held software company specializing in transient, free-surface CFD flow modeling software for industrial and scientific applications worldwide. Flow Science has distributors for FLOW-3D sales and support in nations throughout the Americas, Europe, and Asia. Flow Science is located in Santa Fe, New Mexico.

Media Contact

Flow Science, Inc.
683 Harkle Rd.
Santa Fe, NM 87505
Attn: Amanda Ruggles
+1 505-982-0088

► Announcing the Launch of FLOW-3D HYDRO
  29 Oct, 2020

FLOW-3D HYDRO features a streamlined, water-focused user interface and offers new simulation templates for efficient modeling workflows

Santa Fe, NM, October 29, 2020 – Flow Science has launched FLOW-3D HYDRO, the complete CFD modeling solution for the civil and environmental engineering industry. FLOW-3D HYDRO features a streamlined, water-focused user interface and offers new simulation templates for efficient modeling workflows, as well as expanded training materials geared to the needs of the civil or environmental engineer. FLOW-3D HYDRO's advanced solver developments include mine tailings, multiphase flows, and shallow water models. Parallelized for high performance computing and designed for every level of modeling proficiency, FLOW-3D HYDRO puts exceptional simulation capabilities in the hands of the user. A full description of what's new is available here.

“FLOW-3D HYDRO is the result of listening to our customers and understanding their needs. Building upon years of developing advanced CFD solutions for our water and environmental customers, as well as the widespread adoption of our general-purpose FLOW-3D product in the civil and environmental engineering industry, we wanted to develop a water-focused interface that would make the software more accessible and relevant to these users,” said Dr. Amir Isfahani, President & CEO of Flow Science. “We've seen model setup times as well as setup errors decrease significantly. We think this new product, in terms of usability and modeling success, is going to be a great asset for water and environmental practitioners.”

A series of online workshops has been scheduled which will introduce the new FLOW-3D HYDRO software through a series of guided, hands-on exercises. Workshop registration includes a 30-day evaluation license so that attendees can explore the software and its capabilities. Registration is available online.

FLOW-3D HYDRO will also be available through Flow Science’s specially expanded Academic Program. University students, researchers and professors can apply for free, short-term licenses for teaching or research at or contact for more information about an expanded Academic Program tailored to FLOW-3D HYDRO.

“We're super excited to be rolling this new product out to the academic community through our Academic Program. We're committed to the next generation, and we feel it is important to give students, researchers, and professors the latest technology so that they can advance their studies and are better prepared to engineer the future,” said Dr. Isfahani.

Committed to user success, FLOW-3D HYDRO comes with high-level support, video tutorials and access to an extensive set of example simulations. Customers can also take advantage of Flow Science’s CFD Services to augment their product experience, including customized training courses, HPC resources and flexible cloud computing options. 

A FLOW-3D HYDRO release webinar will be held on December 3. Online registration is available here


► FLOW-3D HYDRO Workshops
  18 Oct, 2020
Announcing our 2021 FLOW-3D HYDRO workshops

Our FLOW-3D HYDRO workshops introduce the new FLOW-3D HYDRO software to civil and environmental engineers through a series of guided, hands-on exercises. You will explore the hydraulics of typical dam and weir cases, municipal conveyance and wastewater problems, and river and environmental applications. By the end of the workshop, you will have learned FLOW-3D HYDRO's user interface, reviewed CFD modeling best practices, and become familiar with the steps common to three classes of hydraulic problems.

Unless otherwise noted, all workshops run from 11:00 am – 2:00 pm ET (8:00 am – 11:00 am PT) over two consecutive days.

  • December 10 – 11, 2020
  • January 20 – 21, 2021
  • February 17 – 18, 2021
  • March 16 – 17, 2021
  • April 21 – 22, 2021
  • May 18 – 19, 2021
  • June 23 – 24, 2021
  • July 14 – 15, 2021
  • August 18 – 19, 2021
  • September 22 – 23, 2021
  • October 27 – 28, 2021
  • November 9 – 10, 2021

Who should attend?

  • Practicing engineers working in the water resources, environmental, energy and civil engineering industries
  • Regulators and decision makers looking to better understand what state-of-the-art tools are available to the modeling community
  • University students interested in using CFD in their research
  • All modelers working in the field of environmental hydraulics

What will you learn?

  • How to import geometry and set up free surface hydraulic models, including meshing and initial and boundary conditions.
  • How to add complexity by including sediment transport and scour, particles, scalars and turbulence.
  • How to use sophisticated visualization tools such as FlowSight to effectively analyze and convey simulation results.

You’ve completed the workshop, now what?

We recognize that you may want to further explore the capabilities of FLOW-3D HYDRO by setting up your own problem or comparing CFD results with prior measurements in the field or in the lab. After the workshop, your license will be extended for 30 days. During this time you will have the support of one of our CFD engineers who will help you work through your specifics. You will also have access to our web-based training videos covering introductory through advanced modeling topics. 

  • Workshops are online, hosted through Zoom
  • Registration is limited to 10 attendees
  • Cost: $499 (private sector); $299 (government); $99 (academic)
  • Each workshop is broken into two 3-hour sessions
  • 30-day FLOW-3D HYDRO license*

*See our Registration and Licensing Policy

  • A Windows machine running 64-bit Windows 7 or later
  • An external mouse (not a touchpad device)
  • Dual monitor setup recommended
  • Webcam recommended
  • Dedicated graphics card; nVidia Quadro card required for remote desktop
For more info on recommended hardware, see our Supported Platforms page.

Registration: Workshop registration is available to prospective users in the US, Canada, the UK, and Ireland. Prospective users outside of these countries should contact their distributor to inquire about workshops. Existing users should contact to discuss their licensing options.

Cancellation: Flow Science reserves the right to cancel a workshop at any time, due to reasons such as insufficient registrations or instructor unavailability. In such cases, a full refund will be given, or attendees may opt to transfer their registration to another workshop. Flow Science is not responsible for any costs incurred.

Registrants who are unable to attend a workshop may cancel up to one week in advance to receive a full refund. Attendees must cancel their registration by 5:00 pm MST one week prior to the date of the workshop; after that date, no refunds will be given. If available, an attendee can also request to have their registration transferred to another workshop.

Licensing: Workshop licenses are for evaluation purposes only, and not to be used for any commercial purpose other than evaluation of the capabilities of the software.

Register for an Online FLOW-3D HYDRO Workshop

All workshops will run for two 3-hour sessions over two days.
Certificates will be in PDF format. Flow Science does not confirm that our workshops are eligible for PDHs or CEUs.

Please note: Once you click 'Register', you will be directed to our PayPal portal. If you do not have a PayPal account, choose the 'Pay with credit card' option. Your registration is not complete until you have paid.
If you need assistance with the registration process, please contact Workshop Support.

About the Instructor

Brian Fox, FLOW-3D CFD Engineer

Brian Fox is a senior applications engineer with Flow Science who specializes in water and environmental modeling. Brian received an M.S. in Civil Engineering from Colorado State University with a focus on river hydraulics and sedimentation. He has over 10 years of combined experience working within private, public and academic sectors using 1D, 2D and 3D hydraulic models for projects including fish passage, river restoration, bridge scour analysis, sediment transport modeling and analysis of hydraulic structures.

► FLOW-3D CAST Workshops
  18 Aug, 2020
FLOW-3D CAST Metal Casting Workshops
FLOW-3D CAST is a state-of-the-art metal casting simulation platform that combines extraordinarily accurate modeling with versatility, ease of use, and high-performance cloud computing capabilities. Our FLOW-3D CAST workshops use hands-on exercises to show you how to set up and run successful simulations for detailed analysis of your casting design. Workshop materials provide an introduction to the FLOW-3D CAST modeling platform and detail all the steps of a successful casting model setup, from geometry import through post-processing.

Stay tuned for new FLOW-3D CAST workshop dates!

Want to discuss an online ‘in-house’ workshop for your team? Contact our workshop instructor.

What will you learn?

  • How to import geometry and set up models, including meshing and initial and boundary conditions
  • How to apply complex physics, such as air entrainment, as well as FLOW-3D CAST‘s pioneering filling and solidification models, to analyze defects and adjust your casting design
  • Best practices for casting simulation and design analysis in FLOW-3D CAST

What happens after the workshop?

  • After the workshop, your FLOW-3D CAST license will be extended for 30 days. During this time, one of our CFD engineers will work closely with you to help you apply FLOW-3D CAST to a casting problem of your choosing. You will also have access to our web-based training videos covering introductory through advanced modeling topics. 

Who should attend?

  • Process and casting engineers working in foundry or die casting industries
  • Industry researchers working on new alloy developments, lightweighting, and other challenges in modern metal casting
  • University students interested in CFD for casting applications
  • Workshops are online, hosted through Zoom
  • Registration is limited to 6 attendees
  • Cost: $99
  • 30-day FLOW-3D CAST license

  • A Windows machine running Windows 7 or later
  • An external mouse (not a touchpad device)
  • Dual monitor setup recommended
  • Dedicated graphics card; nVidia Quadro card required for remote desktop
For more info on recommended hardware, see our Supported Platforms page.

Registration: Workshop registration is currently only available to prospective or lapsed users in the United States and Canada. Prospective users outside of these countries should contact their distributor to inquire about workshops. Existing users should contact to discuss their licensing options.

Cancellation: Flow Science reserves the right to cancel a workshop at any time, due to reasons such as insufficient registrations or instructor unavailability. In such cases, a full refund will be given, or attendees may opt to transfer their registration to another workshop. Flow Science is not responsible for any costs incurred.

Registrants who are unable to attend a workshop may cancel up to one week in advance to receive a full refund. Attendees must cancel their registration by 5:00 pm MST one week prior to the date of the workshop; after that date, no refunds will be given. If available, an attendee can also request to have their registration transferred to another workshop.

Licensing: Workshop licenses are for evaluation purposes only, and not to be used for any commercial purpose other than evaluation of the capabilities of the software.

Register for an Online FLOW-3D CAST Workshop

Register for an Online Metal Casting Workshop
Certificates will be in PDF format. Flow Science does not confirm that our workshops are eligible for PDHs or CEUs.

Please note: Once you click 'Register', you will be directed to our PayPal portal. If you do not have a PayPal account, choose the 'Pay with credit card' option. Your registration is not complete until you have paid.
If you need assistance with the registration process, please contact Workshop Support.

About the Instructor

Ajit D'Brass, CFD Engineer, Metal Casting Applications

Ajit D’Brass studied manufacturing engineering with a concentration on metal casting at Texas State University. His current work focuses on how to expedite the design phase of a casting through functional, efficient, user-friendly process simulations. Ajit helps customers use FLOW-3D CAST to create streamlined, sustainable workflows.

Mentor Blog top

► News Article: Graphcore leverages multiple Mentor technologies for its massive, second-generation AI platform
  10 Nov, 2020

Graphcore has used a range of technologies from Mentor, a Siemens business, to successfully design and verify its latest M2000 platform based on the Graphcore Colossus™ GC200 Intelligence Processing Unit (IPU) processor.

► Event: Integrated Electrical Solutions Forum (IESF) Conferences
  24 Jul, 2020

Come see Mentor Graphics automotive tools in action at the Integrated Electrical Solutions Forum. This free event also includes industry presentations, case studies, a product expo, networking events, and technical tracks of industry sessions.

► Technology Overview: Simcenter FLOEFD 2020.1 Electrical Element Overview
  20 Jul, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, Simcenter FLOEFD allows users to add a component to a direct current (DC) electro-thermal calculation using the component's electrical resistance. The corresponding Joule heat is calculated and applied to the body as a heat source. Watch this short video to learn how.
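The underlying relationship is just Ohm's-law Joule heating; a minimal sketch (illustrative only, not FLOEFD's API or function names):

```python
# Joule heat dissipated by a resistive component in a DC electro-thermal
# calculation: P = I^2 * R when the current is known, or P = V^2 / R when
# the voltage drop across the component is known. (Hypothetical helper,
# not part of Simcenter FLOEFD.)
def joule_heat_watts(resistance_ohm, current_a=None, voltage_v=None):
    if current_a is not None:
        return current_a ** 2 * resistance_ohm
    return voltage_v ** 2 / resistance_ohm

# a 0.5-ohm component carrying 2 A dissipates 2 W as heat
p = joule_heat_watts(0.5, current_a=2.0)
```

The solver then applies this power to the component body as a heat source.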

► Technology Overview: Simcenter FLOEFD 2020.1 Package Creator Overview
  20 Jul, 2020

With this release, Simcenter FLOEFD helps users create thermal models of electronics packages easily and quickly. Watch this short video to learn how.

► Technology Overview: Simcenter FLOEFD 2020.1 BCI-ROM and Thermal Netlist Overview
  17 Jun, 2020

With this release, Simcenter FLOEFD allows users to create a compact Reduced Order Model (ROM) that solves at a faster rate, while still maintaining a high level of accuracy. Watch this short video to learn how.

► Technology Overview: Simcenter FLOEFD 2020.1 Battery Model Extraction Overview
  17 Jun, 2020

With this release, the software features a new battery model extraction capability that can be used to extract the Equivalent Circuit Model (ECM) input parameters from experimental data. This enables you to get the required input parameters faster and more easily. Watch this short video to learn how.

Tecplot Blog top

► Webinar: What’s New in Tecplot 360 2020 R2
  25 Nov, 2020

Upcoming Webinar

Tecplot 360 2020 R2 – What’s New
with Product Manager Scott Fowler
Wednesday, December 16th


Here’s the Webinar Agenda:

  • Faster CFD variable calculations
  • Tecplot Chorus reintroduction
  • CGNS 4 file format compatibility
  • Save variables by name
  • New EXODUS II Data Loader

The webinar will be followed by a Q&A session; you can submit your questions ahead of time when you register or ask them after you join the session. The webinar will be recorded so that you can watch it later.

This Webinar is scheduled just a few days after the release. To receive the release notification email, subscribe to Tecplot 360 and add to your address book to ensure deliverability.

► Visualizing Isosurfaces in Higher-order Element Solutions
  25 Nov, 2020

Division Technique for Higher-Order Pyramid and Prism Isosurface Visualization

Higher-order finite-element CFD methods have the potential to reduce the computational cost of achieving a desired solution error. These techniques have been an area of research for many years and are becoming more widely available in popular CFD codes. CFD visualization software, however, is lagging behind the development of higher-order CFD analysis codes.

Visualizing Isosurfaces in Higher-order Element Solutions

In the AIAA SciTech 2021 presentation, I will discuss a technique for visualizing isosurfaces in higher-order element solutions with reduced memory usage. The technique recursively subdivides higher-order elements into smaller linear sub-elements. In these sub-elements the isosurface can be extracted using standard marching-tets or marching-cubes techniques. Memory usage is minimized by discarding unneeded sub-elements. In a previous paper this technique was demonstrated with higher-order hexahedra and tetrahedra with Lagrangian polynomial basis functions.

In this paper, the technique is extended to higher-order pyramids and prisms. The results are compared to other techniques for visualization of higher-order element isosurfaces.
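The recursive-subdivision idea can be sketched in one dimension (an illustrative toy, not the paper's implementation): subdivide the reference element, prune sub-elements whose sampled values cannot bracket the isovalue, and keep only the linear sub-elements that do.

```python
# Toy 1D version of recursive higher-order-element subdivision for
# isosurface (here, iso-point) extraction. The quadratic f stands in for
# a higher-order basis expansion; pruning non-bracketing sub-elements is
# what keeps memory usage low. (Sampling three points per sub-element is
# a heuristic; a robust code would bound the polynomial instead.)

def f(r):
    return 4.0 * r * (1.0 - r)  # example quadratic field on [0, 1]

def find_crossings(r0, r1, iso, depth):
    """Return linear sub-intervals of [r0, r1] that bracket f == iso."""
    rm = 0.5 * (r0 + r1)
    vals = (f(r0), f(rm), f(r1))
    if not (min(vals) <= iso <= max(vals)):
        return []                      # discard: cannot contain the isovalue
    if depth == 0:
        return [(r0, r1)]              # treat as a linear sub-element
    return (find_crossings(r0, rm, iso, depth - 1)
            + find_crossings(rm, r1, iso, depth - 1))

# f == 0.5 has two crossings; five levels of subdivision localize both
segments = find_crossings(0.0, 1.0, iso=0.5, depth=5)
```

In 3D the same pruning applies to sub-hexahedra or sub-tetrahedra, and the surviving linear sub-elements feed standard marching-cubes or marching-tets.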

Tecplot Blogs on Higher Order Element Solutions

Scott has written a series of blogs on visualization techniques for higher-order elements. You can link directly to the blogs below or view the list.

Scott’s Presentation

January 15, 2021 from 2:30 PM to 3:45 PM Eastern Time.
Presentation Type: Technical Paper (Completed Research)
Session: MVCE-02, Meshing Applications II
See all Technical Presentations

Technical Paper by:

Scott T. Imlay, David E. Taflin, and Craig Mackey
Tecplot Inc., Bellevue, WA, 98006

The post Visualizing Isosurfaces in Higher-order Element Solutions appeared first on Tecplot.

► Tecplot 360 – Use Python to Load Custom File Formats Part 2
    4 Nov, 2020

Tecplot 360 Basics Training: PyTecplot – Part 2

No Loader? No Problem!
Part 2 – Loading 2D Data

with Product Manager Scott Fowler

Here’s the Webinar Agenda:

  • What is PyTecplot?
  • Introduction to 2D structured data
  • Using PyTecplot to create 2D structured data
  • Putting it all together – loading data directly from a CSV file
  • Additional Resources

Download the Resources

The post Tecplot 360 – Use Python to Load Custom File Formats Part 2 appeared first on Tecplot.

► Visualizing Higher-Order Finite-Element Surfaces
  29 Oct, 2020

This blog was written by Dr. Scott Imlay, Chief Technical Officer, Tecplot, Inc.

In this series of blogs, I discuss the results of our research on visualizing higher-order finite-elements. The first blog on this topic was A Primer on Visualizing Higher-Order Elements. The second was on the Complex Nature of Higher-Order Finite-Element Data. And the third was on Visualizing Isosurface Algorithms for Higher-Order Finite-Elements.

In this 4th blog of the series, I take a look at a technique for visualizing higher-order surface elements.

Carry on, my wayward son
There’ll be peace when you are done
Lay your weary head to rest
Don’t you cry no more

– Chorus of “Carry On Wayward Son” by Kansas

Fluid Dynamics of Driving Cross Country

I’m driving through the South Yakima Valley in our “new to us” motorhome and this is playing on the stereo. On this day, these words speak to me in multiple ways. First, tomorrow would have been my daughter’s 35th birthday. This was the theme song to one of her favorite TV shows and could have been the theme song for her final weeks of life. “Carry on my sweet daughter, there’ll be peace when you are done!” Second, we are currently driving to the Grand Canyon where I’ll join some friends to run from the South rim to the North rim and back (about 50 miles). I’ll definitely need some prodding to “carry on”, I’ll be very “weary” at the end, and there may be tears. But, most immediately, I’m currently driving with a 40 mph crosswind.

Scott Imlay - Grand Canyon Runner

As I write, we are driving to the Grand Canyon where I’ll join some friends to run from the South rim to the North rim and back (about 50 miles).

When a gust hits, the motorhome leans uncomfortably to the left and I need to steer right to keep it in the lane. Then we pass into a calmer area behind a hill and the motorhome veers to the right. I contemplate stopping until the wind dies down, but I know it will subside in about 30 miles, so I “carry on.” I’m using my fluid dynamics knowledge to estimate when the crosswind will ease slightly (a high bank on the windward side) and when it will increase (gullies through the bank). I come up on a bridge over a gully and prepare for an increase in wind, but it actually drops a little. What? A PhD and nearly 40 years of experience in fluid dynamics, and it can still surprise me.

Turbulent Boundary Layers

I’m driving in a turbulent atmospheric boundary layer, and the nature of the turbulence is highly dependent on the geometry of the terrain. In the atmosphere, this boundary layer is hundreds or thousands of feet thick, but on an airplane it may be a fraction of an inch thick. When doing a CFD computation of the air flow around an airplane, especially a large-eddy simulation or a direct numerical simulation of the turbulence, the geometry must be precisely described. For standard second-order accurate CFD calculations, this is done by using an enormous number of surface elements, but for higher-order element methods, the surface elements (or the sides of the volume elements defining the surface) generally must be curved.

For higher-order finite-element surfaces, the geometry (the x,y,z coordinates of any point on the surface) is defined for each element by polynomials in the local surface coordinates (r,s). If the same polynomial type and order are used for the solution and the geometry, the element is called isoparametric. If a lower-order polynomial is used for the geometry than for the solution, it is called subparametric; if higher, superparametric. In any case, if the same type of polynomial functions are used for geometry and solution, then similar visualization techniques can be used for both.
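In generic shape-function notation (the symbols here are illustrative, not tied to any particular code), both geometry and solution are polynomial expansions over the reference coordinates:

```latex
\mathbf{x}(r,s) = \sum_{i=1}^{n_g} N_i^{g}(r,s)\,\mathbf{x}_i ,
\qquad
u(r,s) = \sum_{j=1}^{n_u} N_j^{u}(r,s)\,u_j
```

Isoparametric means the geometric basis \(\{N_i^{g}\}\) and solution basis \(\{N_j^{u}\}\) are identical; subparametric means the geometric basis is of lower order, superparametric of higher order.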

Technique for Visualizing Higher-Order Surface Elements

Our technique for visualizing higher-order surface elements is very similar to our technique for higher-order volume elements. We use hierarchical subdivision of the surface elements. On each level of subdivision, triangles are subdivided into four sub-triangles and quadrilaterals are subdivided into four sub-quadrilaterals. The hierarchical subdivision of the surface elements continues until the resulting set of linear elements is a good representation of the curved polynomial surface.
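As a concrete sketch (the node layout and shape functions are standard quadratic-triangle choices, not Tecplot's internals), each level splits a reference triangle at its edge midpoints, and the sub-triangle corners are then mapped through the element's curved geometry:

```python
# Hierarchical subdivision of a quadratic (6-node) surface triangle.
# Sub-triangles are built in (r, s) reference space; their corners are
# mapped through the quadratic shape functions, so the linear facets
# track the curved surface. Node data below is made up for illustration.

def shape_quadratic_tri(r, s):
    """Standard 6-node quadratic triangle shape functions."""
    t = 1.0 - r - s
    return [t * (2*t - 1), r * (2*r - 1), s * (2*s - 1),
            4*r*t, 4*r*s, 4*s*t]

def map_point(nodes, r, s):
    """Map a reference point through the curved-element geometry."""
    N = shape_quadratic_tri(r, s)
    return tuple(sum(Ni * p[k] for Ni, p in zip(N, nodes)) for k in range(3))

def mid(p, q):
    return (0.5 * (p[0] + q[0]), 0.5 * (p[1] + q[1]))

def subdivide(tri, levels):
    """Recursively split a reference triangle into 4 sub-triangles per level."""
    if levels == 0:
        return [tri]
    a, b, c = tri
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    children = [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return [t for child in children for t in subdivide(child, levels - 1)]

# a curved element: flat corner nodes plus mid-side nodes lifted in z
nodes = [(0, 0, 0), (1, 0, 0), (0, 1, 0),
         (0.5, 0, 0.2), (0.5, 0.5, 0.2), (0, 0.5, 0.2)]
ref = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))   # reference (r, s) corners
facets = [tuple(map_point(nodes, r, s) for (r, s) in t)
          for t in subdivide(ref, 3)]
```

After three levels each quadratic triangle yields 64 linear facets whose corners lie exactly on the curved surface.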

The figure below shows how this works for quadratic triangles defining the surface of an ONERA M6 wing. The image on the left is the surface created if you throw out the three additional high-order nodes and just treat the quadratic elements as linear elements. The image on the right is after three levels of subdivision. The refined surface is a much better representation of the actual surface.

Wing Tips with and without HOE nodes

Wing tips with and without HOE nodes. See all HOE blogs.

I will have at least one more blog on visualization of higher-order finite-element results, although I’m not yet sure what it will cover. Check back to find out! Or Subscribe to Tecplot.

The post Visualizing Higher-Order Finite-Element Surfaces appeared first on Tecplot.

► Tecplot 360 2020 Release 2 – Sneak Peek
  14 Oct, 2020

Tecplot 360 2020 R2 – Sneak Peek

This Webinar, hosted by Product Manager Scott Fowler, shows you what’s coming in the Tecplot 360 2020 R2 release.

  • Tecplot Chorus has returned with support for 4K monitors and newer operating systems.
  • Variable calculations are up to 11x faster.
  • Support added for CGNS 4 files.
  • Reference variables by name in macros, layouts, and stylesheets.

Get all the details by watching the webinar, and learn more about Tecplot Chorus.

The post Tecplot 360 2020 Release 2 – Sneak Peek appeared first on Tecplot.

► Tecplot Europe Signs Distributor Agreement with Pointwise
    7 Oct, 2020

BELLEVUE, WA (October 8, 2020) – Tecplot, Inc. has signed an agreement to market, sell and support Pointwise computational fluid dynamics (CFD) preprocessing software in Europe.

“Tecplot and Pointwise have cooperated on many projects over the years, and they consistently demonstrate the professional, friendly business approach we always try to achieve. Through this new partnership, clients in Europe will receive a responsive level of service that we cannot provide from the U.S.,” said Rick Matus, Pointwise’s executive vice president.

Pointwise, Inc.

“Pointwise and Tecplot are a winning combination used by many leading companies in aerospace and beyond. We are thrilled to enter into this distribution agreement, and for the opportunity to work with some of our best friends in industry,” said Tom Chan, Tecplot’s president. “Providing Pointwise users with the high level of sales and support that Genias Graphics, our European office, offers is an honor.”

“After visiting with many of our clients we are convinced that there is a significant need for Pointwise throughout Europe. We are excited to utilize our team of experienced engineers to help educate users, provide hands-on support and design workflows to facilitate great CFD meshing with Pointwise,” says Lothar Lippert, Tecplot Europe’s regional manager.

More information and contact Tecplot Europe.

About Tecplot, Inc.

An operating company of Vela Software International, Inc., itself an operating group of Toronto-based Constellation Software, Inc. (CSI), Tecplot is the leading independent developer of visualization and analysis software for engineers and scientists. CSI is a public company listed on the Toronto Stock Exchange (TSX:CSU). CSI acquires, manages, and builds software businesses that provide mission-critical solutions in specific vertical markets. 

About Pointwise, Inc.

Pointwise, Inc. is solving the top problem facing computational fluid dynamics (CFD) today – reliably generating high-fidelity meshes. The company’s Pointwise software generates structured, unstructured, overset and hybrid meshes; interfaces with CFD solvers such as ANSYS FLUENT® and CFX®, STAR-CCM+®, OpenFOAM®, and SU2 as well as many neutral formats, such as CGNS; runs on Windows, Linux, and Mac, and has a scripting language, Glyph, that can automate CFD meshing. Manufacturing firms and research organizations worldwide have relied on Pointwise as their complete CFD preprocessing solution since 1994.

Pointwise is a registered trademark of Pointwise, Inc. in the USA and the E.U. Pointwise Glyph, T-Rex and Let’s Talk Meshing are trademarks of Pointwise, Inc. All other trademarks are property of their respective owner.

For more information:
Margaret Connelly

The post Tecplot Europe Signs Distributor Agreement with Pointwise appeared first on Tecplot.

Schnitger Corporation, CAE Market top

► Happy Thanksgiving!
  25 Nov, 2020

Wishing you and yours a peaceful holiday.

► Schneider Electric to acquire a controlling stake in ETAP
  23 Nov, 2020

Schneider Electric to acquire a controlling stake in ETAP

Super quick: Schneider Electric announced that it will acquire a controlling stake in ETAP Automation Inc. You may be familiar with ETAP: their platform for modeling electrical power systems is integrated with just about every plant design system on the market.

ETAP will continue to operate as an independent software vendor and will remain manufacturer-agnostic (in other words, not favoring Schneider Electric’s energy solutions). That said, Schneider Electric believes that “ETAP’s solutions will strengthen Schneider Electric’s position as a major player in electrical design, by offering customers unique software capabilities to model, simulate and operate utilities and energy intensive systems as designed, and further enhance Schneider Electric’s digital twin capabilities in Power, Grid and mission-critical sectors, following the Group’s recent strategic investment in IGE+XAO and Alpi”.

The company continued, this “strategic transaction is in line with Schneider Electric’s vision to grow its suite of best in class, end-to-end software and its commitment to helping customers on their digital transformation journey to drive sustainability, efficiency and resiliency across the lifecycle from CapEx to OpEx”.

Details weren’t announced and it sounds as though there are still some regulatory hurdles to overcome.

This is interesting for many reasons, especially since the companies know one another very well. Back in 2015, Schneider Electric announced that its services business would standardize on ETAP for its projects in order to “leverage the advanced, next-generation technology of the integrated ETAP software suite to further increase its productivity through greater efficiencies. ETAP provides Schneider Electric higher design reliability and quality, rule-based analysis, and automation capabilities that will help to optimize and fast track project engineering design and analysis processes.”

We may find out more about the transaction when Schneider Electric announces Q4 results early next year.

► Aspen Tech acquires Camo Analytics for … analytics
  17 Nov, 2020

Aspen Tech acquires Camo Analytics for … analytics

It’s been a bit quiet on the acquisition front lately but Aspen Technology (AspenTech) just announced that it has acquired Camo Analytics AS, a provider of industrial analytics solutions.

Camo Analytics makes something called the Camo Analytics Unscrambler suite (awesome, awesome product name, marketing people) which mines production data to discover how a process manufacturer can increase consistency and quality in production, and improve efficiency. Camo says Unscrambler is used by 25,000 scientists, researchers, and engineers worldwide.

AspenTech CEO Antonio Pietri said, “Camo Analytics is a great addition to our portfolio and supports the expansion of our capabilities to the pharmaceuticals industry and several other industries. We were impressed by the depth of knowledge and drive for innovation from the Camo Analytics team and look forward to continuing investment in the solutions and bringing them to new customers and markets.” The company plans to add Camo’s capabilities to its Process Analytical Technology (PAT) and Overall Equipment Effectiveness (OEE) solutions.

Terms of the deal were not disclosed. From the financial statements on Camo’s website, it looks as though the company had revenue of 31.7 million Norwegian Krone, about US$3.5 million, in 2019, up 13% from 2018. Before you get all multiple-y, Camo reported a hefty net loss for both 2018 and 2019. It appears as though Camo had planned to issue stock or bonds this fall to raise more cash, a plan that was perhaps short-circuited by AspenTech’s acquisition.

► Quickies: Autodesk acquires & Dassault Systèmes updates guidance
  17 Nov, 2020

Quickies: Autodesk acquires & Dassault Systèmes updates guidance

Autodesk University starts today, virtually, so expect a lot of news. First up, Autodesk announced that it is acquiring Spacemaker, a company that makes artificial intelligence (AI) and generative design solutions for architects, urban designers, and real estate developers. With Spacemaker, Autodesk says, “design professionals can rapidly create and evaluate options for a building or urban development. With AI as a partner to the architect, the Spacemaker platform enables users to quickly generate, optimize, and iterate on design alternatives, all while considering design criteria and data like terrain, maps, wind, lighting, traffic, zoning, etc. Spacemaker quickly returns design alternatives optimized for the full potential of the site. This leads to better outcomes from the start and allows designers to focus on the creative part of their professional work.”

Autodesk is paying $240 million net of cash for Spacemaker, whose revenues were not disclosed. The transaction is expected to close during Autodesk’s FQ4, so by January 31, 2021. You can read more in Autodesk’s summary of the deal here.

One interesting note: Autodesk says that real estate developers are the primary early adopters of Spacemaker. That’s a segment Autodesk isn’t very active in today (having sold off their facilities management solutions decades ago). It’s also a market segment that’s seeing a huge wave of change as we all get used to a post-COVID world. Reconfiguring office spaces, changing shopping arcades and malls to meet new types of consumer demand, repurposing existing structures, and planning new ones: if ever there was a place for AI to help make better decisions, faster, this is it.

Next, Dassault Systèmes is hosting an investor event today at which the company announced its new 5-year plan, through 2024. CEO Bernard Charlès said “As we look to the next five years I believe we are positioned to accelerate Dassault Systèmes’ contribution to industries, the environment and to human health leveraging industry platformization and data intelligence. The Manufacturing sector is accelerating its sustainable innovation initiatives thus creating demand for data modeling and simulation, a sweet spot for Dassault Systèmes … In the Life Sciences & Healthcare sector, we are working with industry participants to move toward a patient-centric perspective. Finally, we are advancing initiatives with multiple industries, government entities and new emerging disruptors to reinvent Infrastructure & Cities to create a sustainable future.”

CFO Pascal Daloz quantified this, saying that DS is targeting non-IFRS revenue growth of about 10% at constant currencies by 2024 with non-IFRS revenue from cloud software reaching €2 billion by 2025. He sees “double-digit non-IFRS revenue growth in Life Sciences & Healthcare as well as Infrastructure & Cities…”

We’ll learn more as the investor event progresses, but the comments above would imply revenue of as much as €6.5 billion by 2024 — a massive increase from the €4.44 billion to 4.46 billion the company forecasts for 2020. Nothing in the press release about organic versus acquired growth — we’ll have to tune in for the actual session to get that.
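The €6.5 billion figure is simply the 2020 forecast midpoint compounded at the stated ~10% for four years, as a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the implied 2024 revenue: compound the
# ~10% annual growth target over four years from the midpoint of the
# 2020 forecast (EUR 4.44 billion to 4.46 billion).
rev_2020 = 4.45            # EUR billions, 2020 forecast midpoint
growth = 0.10              # ~10% per year at constant currencies
rev_2024 = rev_2020 * (1 + growth) ** 4   # roughly 6.5 (EUR billions)
```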

► It’s good to be a CAE supplier: notes on Ansys and Altair earnings
  16 Nov, 2020

It’s good to be a CAE supplier: notes on Ansys and Altair earnings

Altair and Ansys reported results for the quarter ended September 30, 2020, a few weeks ago, and it’s time to catch up. First, a quick recap of the individual results, then what investment analysts call a read-across — what it means in a bigger-picture world.

For Q3, 2020 Ansys reported

  • Total GAAP revenue of $367 million, up 7% as reported and up 5% in constant currencies (cc)
  • Software license revenue was $142 million, up 3%
  • By license type, lease revenue was $79 million, up 12% (up 10% cc); revenue from perpetual license sales was $63 million, down 6% (down 7% cc); and maintenance revenue was $212 million, up 10% (up 8% cc). Ansys said that it continues to see customers shifting from perpetual licenses to time-based licenses “as a result of the economic impacts of COVID-19, and we expect it to continue into the foreseeable future.”
  • Services revenue was $13 million, down 1% (down 3% cc) as it continues to be difficult to get people to customer sites
  • By geo, revenue from the Americas was $162 million, up 6% (up 6% cc); from Europe, $107 million, up 7% (up 2% cc); and from Asia, $99 million, up 7% (up 7% cc)
  • Looking at end-industries, commercial aviation continues to suffer as no one flies, but defense spending remained strong, boosting revenue in military aerospace and defense. High-tech, semiconductor, and automotive did well in Q3, especially in Asia
  • By channel, direct revenue was 75% of total, slightly below the year-to-date average of 76% and last year’s Q3 77% of total. Ansys announced that it signed its second-largest-ever contract in Q3, a five-year, $72 million lease deal — that follows a record $100 million, five-year deal signed in Q2 2020. We know big deals are lumpy and hard to close (and management attention-grabbing); even so, the channel recorded double-digit revenue growth in Q3
  • Ansys expects Q4 revenue between $541 million and $581 million, which means a fiscal 2020 revenue between $1,599 million and $1,638 million, up 6% at the midpoint. Ansys raised the bottom end of guidance by $40 million and the top end by $5 million from its Q2 earnings release
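As a quick cross-check (a reader's arithmetic, not from the release), the license-type lines do sum to the reported totals:

```python
# Ansys Q3 2020 figures quoted above, in USD millions
lease, perpetual, maintenance, services = 79, 63, 212, 13
license_rev = lease + perpetual                 # reported as $142 million
total = license_rev + maintenance + services    # reported as $367 million
```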

While for Q3, Altair reported:

  • Total revenue of $107 million, up 6%
  • Total software revenue of $88 million, up 13%
  • Within software, license revenue was $55 million, up 17%; maintenance and services revenue was $33 million, up 6%
  • Software-related services revenue was $6 million, down 22%, while client engineering services revenue was down 15% to $11 million, as CFO Howard Morof said, “customers continue to adjust [downward] their external project spend in response to market conditions”. He noted that both segments actually improved from Q2, when the year/year declines were 331% and 22%
  • Altair gives little data on geos and verticals, though CEO Jim Scapa did tell investors that “[auto and aero have] been a fairly stable part of the business. We have very, very high recurring revenues in those accounts [but] there has been more softness in terms of growth in those accounts. This year, I am seeing improved sentiment in the automotive and aerospace markets; there’s no doubt about that, especially in the supply base. I think it’s going to get better as time goes on”
  • Mr. Scapa also said that China and Korea have “come back very, very strongly” while “India has suffered a lot with COVID. They’ve hung in there pretty well. In general, things are operating reasonably well across the world for us and across all the different verticals. And across the different solutions that we have, everyone is coming in pretty normally a little bit muted.”
  • More broadly, he said, “the number of new customers in Q3 was very similar to the prior year. And year-to-date things are worse off because of Q2. In general, things were pretty, pretty normal in Q3″
  • For Q4, Altair guides total revenue of $112 million to $117 million, which means the full 2020 target is now $448 million to $453 million, a decline from 2019 of around 2% because of the services decreases; it expects software revenue to be between $373 million and $377 million, up 2% to 3% from 2019.

One thing you might immediately notice when we put these two earnings recaps side by side: Altair sees its strong performance in Q3 and low-balls Q4 (and therefore the year) with “prudent caution” because of the uncertainty it sees. Ansys, on the other hand, used its strong Q3 to raise guidance and expectations. I don’t know if either approach is right or wrong — it is interesting to note the differences.

OK. Now, the read-across.

CAE is becoming ever more important as its reach expands outward from traditional simulation as an offshoot of the design process. It’s being embedded into CAD for design via generative design and other techniques, used in CAM processes to predict whether imperfect products coming off the assembly line are OK or need to be scrapped, and in operations, as we see Ansys partnering with SAP for failure prediction. We’re adding data-driven or simulation-driven artificial intelligence into many processes as well. What was theoretically possible a few years ago is increasingly becoming real — and that’s exciting.

Everything these companies say (and their customers tell me) points to the fact that simulation isn’t optional. Ansys is able to close these huge deals while many people are working from home, complicating the selling process on the vendor’s end, and the purchase approval process at the buyer’s. The two sides make it work, underscoring how important these technologies are. Both of these huge deals have 5-year terms, which Ansys CEO Ajei Gopal says is due to the fact that “these particular customers [may] already [be] two or three cycles into multi-year leases, so they’ve got a lot of confidence. And they also tend to be in verticals where the product life cycles are much longer. They’ve been using simulation for much longer than other customers. The combination of all those factors drives them to have the confidence to extend the term so that they can plan and we can work with them on the successful deployment of particularly new technologies that they’re trying to roll out across their R&D teams.”

To paraphrase, companies that are deep into using simulation look ahead to what’s coming technologically and in their own product plans. They roll out technologies that can help achieve those plans, as they come onto the market. They grow both the types of simulation and the humans using them. They look at simulation as a set of essential tools that will grow and adapt to their needs and be flexible enough to support them as they change.

The biggest CAE players all boast huge product catalogs, from concept discovery to material design to IoT applications. All claim varying degrees of openness and interoperability with outside solutions. A user could opt for best-of-breed and create a custom platform specifically suited to their needs, but it’s increasingly true that it’s just simpler to standardize on one set of solutions from one of the big players to take advantage of integrations, suites/volume pricing, and other benefits.

It’s good to be Altair and ANSYS. Even with much of the world still struggling through Covid, the need for more and better simulation technologies shows no (long-term) signs of slowing down.

► Tech Soft 3D continues buildout, acquires Visual Kinematics
  11 Nov, 2020

Tech Soft 3D continues buildout, acquires Visual Kinematics

Tech Soft 3D today announced that it has acquired Visual Kinematics (VKI), which makes DevTools, a suite of component software development kits (SDKs) for CAE applications. This comes just a few weeks after Tech Soft acquired Ceetron, and began adding its visualization components to the overall offering.

I’ve heard of VKI but was not aware of its offerings to any level of detail. Tech Soft 3D CEO Ron Fritz told me that VKI’s SDKs are used by many of the main CAE players for mesh generation, pre-processing, solving, and post-processing — so across the entire CAE workflow — as well as for interoperability between workflows. Mr. Fritz told me that VKI component tools are used across many types of physics, from mechanical to fluids, electro-magnetic, and multiphysics applications.

Tech Soft’s CTO, Gavin Bridgeman said that VKI’s portfolio “is the perfect complement to our HOOPS toolkits, as well as our recently acquired Ceetron SDKs for visualization of CAE results. We will be working hard to integrate our tool sets, offering the most complete and robust CAE component technology solution on the market.”

What the combo of historical Tech Soft 3D + Ceetron + VKI brings is best shown in this slide, from Mr. Fritz’s briefing deck:

I asked Mr. Fritz about the logos across the bottom–and the fact that he’s missing Altair. Altair is a client, he told me, but the team wanted to show big players as well as new entrants — the point being that Tech Soft is an agnostic supplier of components across the full PLMish spectrum, from large to small, platform to niche. And that new entrants can perhaps more quickly come to market using Tech Soft’s components than developing everything in-house.

Terms of the deal weren’t announced, but the VKI team is already at work under the Tech Soft 3D banner. Mr. Fritz said that Tech Soft 3D will continue to maintain and support VKI’s existing customers and partners, while growing its market reach through Tech Soft’s existing partner ecosystem.

