
## CFD Blog Feeds

### Another Fine Mesh

► Recap of Six Recent CFD Success Stories with a Meshing Assist
9 Sep, 2020
No one generates a mesh just to generate a mesh. The proof of a mesh’s suitability is successful use in a CFD simulation. That success can be predicated on many factors including the availability of a broad range of mesh … Continue reading
► Use of Grand Challenge Problems to Assess Progress Toward the CFD Vision 2030
8 Sep, 2020
Join the AIAA’s CFD 2030 Integration Committee at SciTech 2021 this coming January for four invited talks and an extended Q&A session on formulation of grand challenge problems that would provide a basis for assessing progress toward the CFD Vision … Continue reading
► This Week in CFD
4 Sep, 2020
This week’s CFD news brings some excellent reading as we head into a 3-day weekend, at least here in the U.S. It begins with a research article on undergraduate education that’s certain to spark thinking if not debate. And our friends … Continue reading
► This Week in CFD
28 Aug, 2020
This week’s CFD news includes articles that pose questions about open source software. Does it have a people problem? And are people prejudiced against it? Proving that good things never get old, there’s a multi-part video series on fluid mechanics … Continue reading
► It’s all in the numbering – mesh renumbering may improve simulation speed
27 Aug, 2020
We all know that the mesh plays a vital role in CFD simulations. Yet, not many realize that renumbering (ordering) of the cells in the Finite Volume Method (FVM) can affect the performance of the linear solver and thus the … Continue reading
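The effect described here can be illustrated with a bandwidth-reducing renumbering such as reverse Cuthill-McKee, which tightens the sparsity pattern of the FVM matrix that the linear solver works on. A minimal sketch using SciPy (my own example, not the article's code):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(A):
    """Maximum distance of a nonzero entry from the diagonal."""
    rows, cols = A.nonzero()
    return int(np.max(np.abs(rows - cols)))

# A small symmetric cell-connectivity matrix with a poor ordering
A = csr_matrix(np.array([
    [1, 0, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 1, 0, 1],
]))

perm = reverse_cuthill_mckee(A, symmetric_mode=True)  # new cell numbering
B = A[perm][:, perm]  # apply the permutation to rows and columns

print(bandwidth(A), bandwidth(B))  # bandwidth shrinks after renumbering
```

A smaller bandwidth keeps nonzeros closer to the diagonal, which tends to improve cache behavior and preconditioner quality in FVM linear solves.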
► Reducing Boiler Emissions Through Shape Optimization
25 Aug, 2020
In this work, a flexible framework for discrete adjoint-based reactive flow optimization in SU2 is presented. The implementation is based on a low-Mach number solver and a flamelet progress variable model for strongly cooled laminar premixed flames. Besides the combustion … Continue reading

### F*** Yeah Fluid Dynamics

► Dendritic
17 Sep, 2020

“What happens when two scientists, a composer, a cellist, and a planetarium animator make art?” The answer is “Dendritic,” a musical composition built directly on the tree-like branching patterns found when a less viscous fluid is injected into a more viscous one sandwiched between two plates.

Normally this viscous fingering instability results in dense, branching fingers, but when there’s directional dependence in the fluid, the pattern transitions instead to one that’s dendritic. In this case, that directionality comes from liquid crystals, whose rod-like shape makes it easier for liquid to flow in the direction aligned with the rods.

For more on the science, math, and music behind the piece, check out this description from the scientists and composer. (Video, image, and submission credit: I. Bischofberger et al.)

► Bright Volcanic Clouds
16 Sep, 2020

Every day human activity pumps aerosol particles into the atmosphere, potentially altering our weather patterns. But tracking the effects of those emissions is difficult with so many variables changing at once. It’s easier to see how such particles affect weather patterns somewhere like the South Sandwich Islands, where we can observe the effects of a single, known source like a volcano.

That’s what we see in this false-color satellite image. Mount Michael has a permanent lava lake in its central crater, and so often releases sulfur dioxide and other gases. As those gases rise and mix with the passing atmosphere, they can create bright, persistent cloud trails like the one seen here. The brightening comes from the additional small cloud droplets that form around the extra particles emitted from the volcano.

As a bonus, this image includes some extra fluid dynamical goodness. Check out the wave clouds and von Karman vortices in the wake of the neighboring islands! (Image credit: J. Stevens; via NASA Earth Observatory)

► Bacterial Turbulence
15 Sep, 2020

Conventional fluid dynamical wisdom posits that any flow at the microscale should be laminar. Tiny swimmers like microorganisms live in a world dominated by viscosity; therefore, there can be no turbulence. But experiments with bacterial colonies have shown that’s not entirely true. With enough micro-swimmers moving around, even these viscous, small-scale flows become turbulent.

That’s what is shown in Image 2, where tracer particles show the complex motion of fluid around a bacterial swarm. By tracking both the bacteria motion and the fluid motion, researchers were able to describe the flow using statistical methods similar to those used for conventional turbulence. The characteristics of this bacterial turbulence are not identical to larger-scale turbulence, but they are certainly more turbulent than laminar. (Image credits: bacterium – A. Weiner, bacterial turbulence – J. Dunkel et al.; research credit: J. Dunkel et al.; submitted by Jeff M.)
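The statistical treatment mentioned above - for example, an isotropic kinetic-energy spectrum - can be sketched for any 2D velocity field. Here synthetic random data stands in for the tracer measurements (my illustration, not the researchers' analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
u = rng.standard_normal((N, N))  # synthetic velocity components,
v = rng.standard_normal((N, N))  # stand-ins for tracer/PIV data

# 2D FFTs, normalized so Parseval's theorem holds discretely
uh = np.fft.fft2(u) / N**2
vh = np.fft.fft2(v) / N**2
E2d = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2)  # spectral KE density

# Bin into an isotropic (shell-averaged) spectrum E(k)
kx = np.fft.fftfreq(N) * N
KX, KY = np.meshgrid(kx, kx, indexing="ij")
k = np.rint(np.hypot(KX, KY)).astype(int)
E = np.bincount(k.ravel(), weights=E2d.ravel())

# Sanity check: total spectral KE equals mean physical-space KE
mean_ke = 0.5 * np.mean(u**2 + v**2)
print(E.sum(), mean_ke)
```

For a real measurement, the shape of E(k) is what gets compared against classical turbulence scalings.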

► How Canal Locks Work
14 Sep, 2020

For thousands of years, boats have been a critical component of trade, efficiently enabling transport of goods over large distances. But water’s self-leveling creates challenges when moving up and downstream through rivers and canals. To get around this, engineers use locks, which act as a sort of gravity-driven elevator to lift and lower boats to the appropriate water level. In this video from Practical Engineering, we learn about the basic physics behind locks as well as some of the methods engineers use to limit water loss through the lock. (Image and video credit: Practical Engineering)

► Fluorescent Dancing Droplets
11 Sep, 2020

These fluorescent droplets of glowstick liquid jiggle and dance in a solution of sodium hydroxide. Some droplets jitter. Some rotate. And some undergo one coalescence after another. It’s always fun to see how fluid dynamics and chemistry combine! (Image and video credit: Beauty of Science)

► Why Slicing Tomatoes Works
10 Sep, 2020

Picture it: a nice, ripe tomato. Your not-so-recently sharpened kitchen knife. You press the blade down into the soft flesh and… it explodes. Soft solids – like a tomato – don’t react well to cutting, but they slice just fine. Examining why that’s the case is at the heart of this model.

Tomatoes are essentially a gel encased in a thin skin. Gels are a kind of hybrid material — not quite liquid and not quite solid. They consist of a network of particles or polymers bonded together and immersed in a liquid. To cut that network apart, the downward force of the blade has to strain the gel past its limits, which squeezes out the surrounding liquid.

The researchers found that this liquid layer is key to how force from the knife’s motion gets transmitted. In particular, they found that the horizontal motion of a slice is necessary to initiate a cut, and that the gel parts most easily when the downward knife velocity is no more than 24% of the horizontal cutting speed. Press down any faster and the strain propagation fluctuates, creating that unfortunate tomato explosion. (Image credit: G. Fring; research credit: S. Mora and Y. Pomeau; via Ars Technica; submitted by Kam-Yung Soh)
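The 24% threshold amounts to a simple kinematic check; a toy encoding of the reported criterion (the function name and interface are mine):

```python
def cut_initiates(v_down, v_slice, threshold=0.24):
    """Toy check of the reported criterion: the gel parts cleanly when
    the downward blade speed is at most ~24% of the slicing speed."""
    return v_slice > 0 and v_down <= threshold * v_slice

# Slow press with a long slicing stroke vs. a fast chop
print(cut_initiates(0.1, 1.0), cut_initiates(0.5, 1.0))
```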

### Symscape

► CFD Simulates Distant Past
25 Jun, 2019

There is an interesting new trend in using Computational Fluid Dynamics (CFD). Until recently CFD simulation was focused on existing and future things, think flying cars. Now we see CFD being applied to simulate fluid flow in the distant past, think fossils.

CFD shows Ediacaran dinner party featured plenty to eat and adequate sanitation

► Background on the Caedium v6.0 Release
31 May, 2019

Let's first address the elephant in the room - it's been a while since the last Caedium release. The multi-substance infrastructure for the Conjugate Heat Transfer (CHT) capability was a much larger effort than I anticipated and consumed a lot of resources. This led to the relative quiet you may have noticed on our website. However, with the new foundation laid and solid, we can look forward to a bright future.

Conjugate Heat Transfer Through a Water-Air Radiator
Simulation shows separate air and water streamline paths colored by temperature

► Long-Necked Dinosaurs Succumb To CFD
14 Jul, 2017

It turns out that Computational Fluid Dynamics (CFD) has a key role to play in determining the behavior of long-extinct creatures. In a previous post, we described a CFD study of Parvancorina, and now Pernille Troelsen at Liverpool John Moores University is using CFD for insights into how long-necked plesiosaurs might have swum and hunted.

CFD Water Flow Simulation over an Idealized Plesiosaur: Streamline Vectors (illustration only, not part of the study)

► CFD Provides Insight Into Mystery Fossils
23 Jun, 2017

Fossilized imprints of Parvancorina from over 500 million years ago have puzzled paleontologists for decades. What makes it difficult to infer their behavior is that Parvancorina have none of the familiar features we might expect of animals, e.g., limbs, mouth. In an attempt to shed some light on how Parvancorina might have interacted with their environment researchers have enlisted the help of Computational Fluid Dynamics (CFD).

CFD Water Flow Simulation over a Parvancorina: Forward Direction (illustration only, not part of the study)

► Wind Turbine Design According to Insects
14 Jun, 2017

Some of nature's smallest aerodynamic specialists - insects - have provided a clue to more efficient and robust wind turbine design.

Dragonfly: Yellow-winged Darter (license: CC BY-SA 2.5, André Karwath)

► Runners Discover Drafting
1 Jun, 2017

The recent attempt to break the 2 hour marathon came very close at 2:00:24, with various aids that would be deemed illegal under current IAAF rules. The bold and obvious aerodynamic aid appeared to be a Tesla fitted with an oversized digital clock leading the runners by a few meters.

2 Hour Marathon Attempt

### CFD Online

► RANS Grid Sensitivity Divergence on LES Grid
31 Aug, 2020
Reference on not changing y+ while doing a grid sensitivity study:

Quote:
 Originally Posted by sbaffini Indeed, if y+ =4 is relative to the finest grid, it is confirmed to be a wall function problem. I can't double check now, but I'm pretty sure that the k-omega sst model in CFX uses an all y+ wall function, which means that a wall function is always active. While, in theory, such wall functions should be insensitive to the specific y+ value, they are not perfect and your case is very far from the typical wall function scenario (equilibrium boundary layer), so what you obtain is actually expected. The only viable solution here, and I suggest you to investigate it also for your other models, is to redistribute cells in your grid to be always within y+ = 1-2, but no more. In any case, the important thing is that you can't have y+ changing between the grids when doing a grid refinement. EDIT: I know, it sucks...
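As a rough companion to this advice, the first-cell height needed for a target y+ can be estimated before meshing. The sketch below uses the flat-plate skin-friction correlation Cf = 0.026·Re^(-1/7) - a common y+-calculator assumption, not something from the thread:

```python
import math

def first_cell_height(y_plus, U, L, nu=1.5e-5, rho=1.2):
    """Estimate the wall-normal height of the first cell for a target y+,
    using the flat-plate correlation Cf = 0.026*Re**(-1/7) (rule of thumb)."""
    Re = U * L / nu                   # length-based Reynolds number
    cf = 0.026 * Re ** (-1.0 / 7.0)   # skin-friction coefficient estimate
    tau_w = 0.5 * cf * rho * U ** 2   # wall shear stress
    u_tau = math.sqrt(tau_w / rho)    # friction velocity
    return y_plus * nu / u_tau

# Air at 10 m/s over a 1 m plate, targeting y+ = 1
h = first_cell_height(1.0, 10.0, 1.0)
print(h)
```

Keeping this height fixed across the refinement levels is what keeps y+ constant between grids during a grid sensitivity study.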
► Y+ value for Large Eddy Simulation
31 Aug, 2020
Explanation of Y+ as it relates to viscous sublayer and advection scheme:

Quote:
 Originally Posted by cfdnewbie yes, at least in the viscous sublayer. The size of your grid cell (or the number of points per unit length) determine the smallest scale you can catch on a given grid. From information theory, the Nyquist theorem tells us that we need at least 2 points per wavelength to represent a frequency (we need to be able to detect the sign change). However, 2 points per wavelength is just for Fourier-type approximations. For other schemes like O1 FV you need a lot more, maybe 6 to 10 to accurately capture a wavelength. Let's assume that you have the same grid in all of the flow (i.e. high resolution everywhere, no grid stretching or such). Then the smallest scale you can capture is determined by your grid and scheme, the better/finer, the smaller the scale. OF course, most grids will coarsen away from the wall, so the smallest scale will "grow bigger" away from the wall as well Ha, that's the crux of LES :) of course, the bigger y+, the fewer the small scales you will catch, but does that change the result of the bigger scales? The answer is not straight forward, but I'll try to make it short: Let's talk about NS-equations (or any non-linear conservation eqns). The scales represented in the equations are coupled by the non-linearity of the equations, i.e. what happens on one scale will (eventually) reach all other scales (also known as the butterfly effect). So the NS eqns represent the full "nature" with all its scales and interactions. We now truncate our "nature" by resolving only the larger scales, since our grid is too coarse.... what will happen? Will the large scales be influenced by the lack of small scales? Hell, yeah, they will. We are lacking the balancing interaction of the small scales, since we don't have these scales. We are also lacking the physical effects that take place at small scales (dissipation).... 
so we have production of turbulence at large scales, the energy is handed down through the medium scales but is NOT dissipated at the small scales, since they are simply not present in our computation. Will that influence the large scales? Definitely! That's why LES people add some type of viscosity (effect of small scales) to their computations, otherwise, their simulations would very likely just blow up! hope this help! cheers
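The points-per-wavelength argument in the quote can be made concrete with the modified wavenumber of a second-order central difference - my own illustration, not the poster's:

```python
import math

def resolved_fraction(ppw):
    """Fraction of the true wavenumber that a second-order central
    difference 'sees' for a sine sampled at ppw points per wavelength:
    the modified wavenumber ratio sin(k*dx)/(k*dx), with k*dx = 2*pi/ppw."""
    kdx = 2.0 * math.pi / ppw
    return math.sin(kdx) / kdx

for ppw in (2, 4, 10, 20):
    print(ppw, resolved_fraction(ppw))
```

At exactly 2 points per wavelength the scheme sees nothing of the wave, and even 10 points per wavelength recovers only about 94% of the wavenumber - consistent with the 6-10 points quoted above for low-order schemes.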
► Rans
31 Aug, 2020
Quote:
 Originally Posted by vinerm That's a wrong notion that RANS or EVM models are introduced to get faster results or are expected to be used with a coarse mesh. There is no such assumption behind the development of these models. The only assumption in EVM is that the turbulence is isotropic; non-EVM RANS models, such as RSM, don't even have that assumption. And when it comes to wall treatment, it is not directly linked with the turbulence model; even LES requires wall treatment. y+ is a non-dimensional (Reynolds) number, and for almost all industrial fluids it is found, theoretically as well as experimentally, that the velocity profile is linear up to a y+ of 5. And if it is linear within this limit, it does not matter if you have 10 points or just 1 point; the line would be the same. So, y+ smaller than 1 is overkill and does not help with anything. The boundary condition for both k and ω at the wall is 0.
► What I've done in the past years and may need someone else to pick it back up
18 Aug, 2020
This is a blog post aimed at passing on the baton of the work I've done in the past to anyone who wants to pick it back up, partially or completely, which I was still doing (or trying to do) until writing "Hanging my volunteer gloves and moving to a new phase of my life".

This blog post could potentially be edited as time goes on and I remember about things I've done in the past and which should be picked up by someone else:
1. Generating version template pages and logos for said versions at openfoamwiki.net - this is explained here: https://openfoamwiki.net/index.php/F...n_templates.3F and here https://openfoamwiki.net/index.php/F...AM_versions.3F
2. Writing and testing installation instructions at https://openfoamwiki.net/index.php/Installation/Linux - The objective was to ensure that the less knowledgeable user would still be able to compile and install OpenFOAM from source code with a much higher success rate than by following the succinct instructions available at the official websites.
3. Updating the release version links at the top right-most corner of openfoamwiki.net
4. Uh... several other things listed at openfoamwiki.net, mostly listed here: http://openfoamwiki.net/index.php?ti...arget=Wyldckat
5. Contributing to bug reports and fixes at openfoam.com
6. Moderator work here at the forum, including:
1. Hunting down spam, which nowadays is mostly automated, but not fully automated.
2. Moving threads to the correct sub-forums.
3. Re-arranging forums to make it easier for people to ask and answer questions, as well as finding existing answers.
4. Warning forum members when they've not followed the rules...
5. I wanted to have pruned all of the threads on the main OpenFOAM forum and place them in their correct sub-forums, but never got around to it. There is a thread on the moderator forum that explains how to streamline the process.
6. I wanted to have finished moving posts into independent threads out of this still large thread: https://www.cfd-online.com/Forums/op...ed-topics.html
7. Also out of this one: https://www.cfd-online.com/Forums/op...am-extend.html
7. Had a list of posts/threads I wanted to look into... which is now written on this wiki page on my central repository for these kinds of notes: What I wanted to still have done for the OpenFOAM community, but never managed to find the time for it
8. And had a list of bugs I wanted to solve: Bugs on OpenFOAM's bug tracker I wanted to tackle, but never managed to find the time for it
9. I have over 50 repositories at https://github.com/wyldckat - most of them related to OpenFOAM and which will be left as-is for the years to come. If you want to continue working on them and even take over maintenance, open an issue on the respective repository.
► Hanging my volunteer gloves and moving to a new phase of my life
18 Aug, 2020
TL;DR: As of 2020, I can only help during office hours, at work, if paid and/or affects our projects, namely what we use in OpenFOAM itself and blueCFD-Core.

Full post:
So nearly 2 years after my blog post Why I contribute to the OpenFOAM forum(s), wiki(s) and the public community, I'm writing this blog post you are reading now.

My last 3 thread posts at the forums on CFD-Online this year were on May 7th, February 27th and January 20th. Before that, it was 10 posts over my winter vacation in the last week of 2019. And before that, it averaged out to around 1 post/month. I have 10,956 posts here at the forum, and it still averages out to 2.62 posts/day.

I'm currently on vacation, mid-August 2020, and am writing this since I'm unable to help the way I used to in the past.

So what happened?
In short: borderline burnout + ~30 kg overweight.

In other words, I was still able to work, but I was having difficulty maintaining a stable life, which hadn't been healthy for years, and I was overly stressed, even if there was not much of a reason to be stressed...

What am I doing now, since early 2020?
1. Changed my diet, namely changed my eating regimen to something I should have done over 20 years ago.
2. Increased my physical activity to a much healthier dosage.
3. Am moving on with my life to a new phase where I actually have to behave as a grown-up, especially given I'm already 40 years old as I write this.

What does this mean for what I can do to help in the community?
Given my past efforts over a period of 10 years, I'm writing this blog post as an official stance on how much I will be able to help in the future:
1. The majority (~99.9%) of the public contributions will be done within working hours at my job; in other words, during office hours, at work, if paid and/or affects our projects, namely what we use in OpenFOAM itself and blueCFD-Core.
2. The remaining 0.1% outside of my job will mostly be the bug tracker at openfoam.org, given that I can't be at both openfoam.org and openfoam.com :(
3. Everything else where I've helped in the past will be once in a blue moon, be it at the forum or openfoamwiki.net.
4. I don't know how many or which community/official OpenFOAM workshops I will attend in the future. I already had to give up on the Iberian User workshop of 2018, due to health reasons, i.e. what has finally led me to this decision this year of 2020.
This has been gradually occurring since at least 2015, but it has effectively come to this stopping point.

Associated to this blog post, I'm writing another blog post which I may need to update in the near future: What I've done in the past years and may need someone else to pick it back up
edit: Aiming to wrap up writing said blog post by the end of the 19th of August 2020.

Signing off for now:
Some years ago, in a forum post where someone asked a vague question, I went on a rant: "as people grow older, the more they know and the more responsibilities they have, therefore the less free time they have to come and help here... so the less information you provide, the less likely you will get the answer you need".

In a way, my time has come and I need to move on with my life. But I was stressing out too much to notice it sooner. Fortunately I should still be in time to keep going forward and hopefully be able to help the community more in the future.

This has happened to various authors of code that is currently or was formerly in OpenFOAM: they helped people publicly over several years and ended up having to pull away from the community, because it's not easy to achieve a balance between life and working as a volunteer.

Fun fact:
Even if I don't post in the next 20 years, it would still average out to a rate of about 1 post/day... :cool::rolleyes:
► 10 crucial parameters to check before committing to a CFD software for academia
4 Aug, 2020
I have put together a comprehensive list of 10 crucial parameters that you, as a researcher or a teacher, should check with the CFD software provider, before committing to their software.


### curiosityFluids

► Creating curves in blockMesh (An Example)
29 Apr, 2019

In this post, I’ll give a simple example of how to create curves in blockMesh. For this example, we’ll look at the following basic setup:

As you can see, we’ll be simulating the flow over a bump defined by the curve:

$y = H\sin\left(\pi x\right)$

First, let’s look at the basic blockMeshDict for this blocking layout WITHOUT any curves defined:

/*--------------------------------*- C++ -*----------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:  https://openfoam.org
    \\  /    A nd           | Version:  6
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
FoamFile
{
version     2.0;
format      ascii;
class       dictionary;
object      blockMeshDict;
}

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

convertToMeters 1;

vertices
(
(-1 0 0)    // 0
(0 0 0)     // 1
(1 0 0)     // 2
(2 0 0)     // 3
(-1 2 0)    // 4
(0 2 0)     // 5
(1 2 0)     // 6
(2 2 0)     // 7

(-1 0 1)    // 8
(0 0 1)     // 9
(1 0 1)     // 10
(2 0 1)     // 11
(-1 2 1)    // 12
(0 2 1)     // 13
(1 2 1)     // 14
(2 2 1)     // 15
);

blocks
(
hex (0 1 5 4 8 9 13 12) (20 100 1) simpleGrading (0.1 10 1)
hex (1 2 6 5 9 10 14 13) (80 100 1) simpleGrading (1 10 1)
hex (2 3 7 6 10 11 15 14) (20 100 1) simpleGrading (10 10 1)
);

edges
(
);

boundary
(
inlet
{
type patch;
faces
(
(0 8 12 4)
);
}
outlet
{
type patch;
faces
(
(3 7 15 11)
);
}
lowerWall
{
type wall;
faces
(
(0 1 9 8)
(1 2 10 9)
(2 3 11 10)
);
}
upperWall
{
type patch;
faces
(
(4 12 13 5)
(5 13 14 6)
(6 14 15 7)
);
}
frontAndBack
{
type empty;
faces
(
(8 9 13 12)
(9 10 14 13)
(10 11 15 14)
(1 0 4 5)
(2 1 5 6)
(3 2 6 7)
);
}
);

// ************************************************************************* //

This blockMeshDict produces the following grid:

It is best practice in my opinion to first make your blockMesh without any edges. This lets you see if there are any major errors resulting from the block topology itself. From the results above, we can see we’re ready to move on!

So now we need to define the curve. In blockMesh, curves are added using the edges sub-dictionary. This is a simple sub-dictionary that is just a list of interpolation points:

edges
(
polyLine 1 2
(
(0	0       0)
(0.1	0.0309016994    0)
(0.2	0.0587785252    0)
(0.3	0.0809016994    0)
(0.4	0.0951056516    0)
(0.5	0.1     0)
(0.6	0.0951056516    0)
(0.7	0.0809016994    0)
(0.8	0.0587785252    0)
(0.9	0.0309016994    0)
(1	0       0)
)

polyLine 9 10
(
(0	0       1)
(0.1	0.0309016994    1)
(0.2	0.0587785252    1)
(0.3	0.0809016994    1)
(0.4	0.0951056516    1)
(0.5	0.1     1)
(0.6	0.0951056516    1)
(0.7	0.0809016994    1)
(0.8	0.0587785252    1)
(0.9	0.0309016994    1)
(1	0       1)
)
);

The sub-dictionary above is just a list of points on the curve $y=H\sin(\pi x)$. The interpolation method is polyLine (straight lines between interpolation points). An alternative interpolation method could be spline.
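Rather than typing the interpolation points by hand, they can be generated; a short Python sketch (my own, with H = 0.1 to match the listed values):

```python
import math

H = 0.1   # bump height, matching the hand-typed values above
n = 10    # number of intervals between x = 0 and x = 1

# Points on the curve y = H*sin(pi*x)
points = [(i / n, H * math.sin(math.pi * i / n)) for i in range(n + 1)]

# Emit the polyLine entry for the front plane (vertices 1 -> 2, z = 0)
print("polyLine 1 2\n(")
for x, y in points:
    print(f"    ({x:.1f} {y:.10g} 0)")
print(")")
```

The same loop with z = 1 (vertices 9 and 10) produces the back-plane entry.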

The following mesh is produced:

Hopefully this simple example will help some people looking to incorporate curved edges into their blockMeshing!

Cheers.

This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Creating synthetic Schlieren and Shadowgraph images in Paraview
28 Apr, 2019

Experimentally visualizing high-speed flow was a serious challenge for decades. Before the advent of modern laser diagnostics and velocimetry, the only real techniques for visualizing high speed flow fields were the optical techniques of Schlieren and Shadowgraph.

Today, Schlieren and Shadowgraph remain an extremely popular means to visualize high-speed flows. In particular, Schlieren and Shadowgraph allow us to visualize complex flow phenomena such as shockwaves, expansion waves, slip lines, and shear layers very effectively.

In CFD there are many reasons to recreate these types of images. First, they look awesome. Second, if you are doing a study comparing to experiments, occasionally the only full-field data you have could be experimental images in the form of Schlieren and Shadowgraph.

Without going into detail about Schlieren and Shadowgraph themselves, primarily you just need to understand that Schlieren and Shadowgraph represent visualizations of the first and second derivatives of the flow field refractive index (which is directly related to density).

In Schlieren, a knife-edge is used to selectively cut off light that has been refracted. As a result you get a visualization of the first derivative of the refractive index in the direction normal to the knife edge. So for example, if an experiment used a horizontal knife edge, you would see the vertical derivative of the refractive index, and hence the density.

For Shadowgraph, no knife edge is used, and the images are a visualization of the second derivative of the refractive index. Unlike Schlieren images, Shadowgraph has no directionality and shows you the Laplacian of the refractive index field (or density field).

In this post, I’ll use a simple case I did previously (https://curiosityfluids.com/2016/03/28/mach-1-5-flow-over-23-degree-wedge-rhocentralfoam/) as an example and produce some synthetic Schlieren and Shadowgraph images using the data.

#### So how do we create these images in ParaView?

Well, as you might expect from the introduction, we simply do this by visualizing the gradients of the density field.

In ParaView, the necessary tool for this is the Gradient of Unstructured DataSet filter:

Once you’ve selected this, we then need to set the properties so that we are going to operate on the density field:

To do this, simply set the “Scalar Array” to the density field (rho), and change the result array name to SyntheticSchlieren. Now you should see something like this:

There are a few problems with the above image: (1) Schlieren images are directional, and this is a magnitude; (2) Schlieren and Shadowgraph images are black and white. So if you really want your Schlieren images to look like the real thing, you should change to black and white. ALTHOUGH, Cold and Hot, Black-Body Radiation, and Rainbow Desaturated all look pretty amazing.

To fix these, you should only visualize one component of the Synthetic Schlieren array at a time, and you should visualize using the X-ray color preset:

The results look pretty realistic:

#### Vertical Knife Edge

The process of computing the shadowgraph field is very similar. However, recall that shadowgraph visualizes the Laplacian of the density field. BUT THERE IS NO LAPLACIAN CALCULATOR IN PARAVIEW!?! Haha no big deal. Just remember the basic vector calculus identity:

$\nabla^2\left[\right] = \nabla \cdot \nabla \left[\right]$

Therefore, in order for us to get the Shadowgraph image, we just need to take the Divergence of the Synthetic Schlieren vector field!

To do this, we just have to use the Gradient of Unstructured DataSet tool again:

This time, deselect “Compute Gradient”, then select “Compute Divergence” and change the divergence array name to Shadowgraph.

Visualized in black and white, we get a very realistic looking synthetic Shadowgraph image:

#### So what do the values mean?

Now this is an important question, but a simple one to answer. And the answer is… not much. Physically, we know exactly what these mean: Schlieren is the gradient of the density field in one direction, and Shadowgraph is the Laplacian of the density field. But what you need to remember is that both Schlieren and Shadowgraph are qualitative images. The position of the knife edge, the brightness of the light, etc. all affect how a real experimental Schlieren or Shadowgraph image will look.

This means, very often, in order to get the synthetic Schlieren to closely match an experiment, you will likely have to change the scale of your synthetic images. In the end though, you can end up with extremely realistic and accurate synthetic Schlieren images.
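For those who'd rather work outside ParaView, both fields can also be built directly from a density array with NumPy. A sketch using a synthetic Gaussian density blob (not the wedge case above):

```python
import numpy as np

# Synthetic density field: a Gaussian "blob" on a uniform grid
x = np.linspace(-1.0, 1.0, 201)
y = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, y, indexing="ij")
rho = np.exp(-(X**2 + Y**2) / 0.1)

# "Schlieren": directional first derivatives of density
drho_dx, drho_dy = np.gradient(rho, x, y)

# "Shadowgraph": Laplacian of density, via divergence of the gradient
d2x, _ = np.gradient(drho_dx, x, y)
_, d2y = np.gradient(drho_dy, x, y)
shadowgraph = d2x + d2y

print(rho.shape, shadowgraph.shape)
```

Visualize one of the directional derivatives (or the Laplacian) with a black-and-white colormap for the same qualitative effect.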

Hopefully this post will be helpful to some of you out there. Cheers!

► Solving for your own Sutherland Coefficients using Python
24 Apr, 2019

Sutherland’s equation is a useful model for the temperature dependence of the viscosity of gases. I give a few details about it in this post: https://curiosityfluids.com/2019/02/15/sutherlands-law/

The law is given by:

$\mu=\mu_o\frac{T_o + C}{T+C}\left(\frac{T}{T_o}\right)^{3/2}$

It is also often simplified (as it is in OpenFOAM) to:

$\mu=\frac{C_1 T^{3/2}}{T+C}=\frac{A_s T^{3/2}}{T+T_s}$

In order to use these equations, obviously, you need to know the coefficients. Here, I’m going to show you how you can simply create your own Sutherland coefficients using least-squares fitting in Python 3.

So why would you do this? Basically, there are two main reasons. First, if you are not using air, the Sutherland coefficients can be hard to find; if you happen to find them, they can be hard to reference, and you may not know how accurate they are. Second, creating your own Sutherland coefficients makes a ton of sense from an academic point of view: in your thesis or paper, you can say that you created them yourself, and, not only that, you can give an exact number for the error in the temperature range you are investigating.

So let’s say we are looking for a viscosity model of nitrogen (N2) – and we can’t find the coefficients anywhere – or, for the second reason above, you’ve decided it’s best to create your own.

By far the simplest way to achieve this is using Python and the Scipy.optimize package.

Step 1: Get Data

The first step is to find some well known, and easily cited, source for viscosity data. I usually use the NIST webbook (https://webbook.nist.gov/), but occasionally the temperatures there aren’t high enough. So you could also pull the data out of a publication somewhere. Here I’ll use the following data from NIST:

| Temperature (K) | Viscosity (Pa.s) |
| --- | --- |
| 200 | 0.000012924 |
| 400 | 0.000022217 |
| 600 | 0.000029602 |
| 800 | 0.000035932 |
| 1000 | 0.000041597 |
| 1200 | 0.000046812 |
| 1400 | 0.000051704 |
| 1600 | 0.000056357 |
| 1800 | 0.000060829 |
| 2000 | 0.000065162 |

This data is the dynamic viscosity of nitrogen N2 pulled from the NIST database at 0.101 MPa. (Note that in these ranges viscosity should be only temperature dependent).

Step 2: Use python to fit the data

If you are unfamiliar with Python, this may seem a little foreign to you, but python is extremely simple.

First, we need to load the necessary packages (here, we’ll load numpy, scipy.optimize, and matplotlib):

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

Now we define the Sutherland function:

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

Next we input the data:

T=[200,
400,
600,
800,
1000,
1200,
1400,
1600,
1800,
2000]

mu=[0.000012924,
0.000022217,
0.000029602,
0.000035932,
0.000041597,
0.000046812,
0.000051704,
0.000056357,
0.000060829,
0.000065162]

Then we fit the data using the curve_fit function from scipy.optimize. This function uses a least-squares minimization to solve for the unknown coefficients. curve_fit returns two outputs: popt, an array containing our desired coefficients As and Ts, and pcov, the estimated covariance of those coefficients.

popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]

Now we can just output our data to the screen and plot the results if we so wish:

print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')

xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)

plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()

Overall the entire code looks like this:

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

T=[200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]

mu=[0.000012924,
0.000022217,
0.000029602,
0.000035932,
0.000041597,
0.000046812,
0.000051704,
0.000056357,
0.000060829,
0.000065162]

popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')

xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)

plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()



And the results for nitrogen gas in this range are As = 1.55902E-6 and Ts = 168.766 K. Now we have our own coefficients, we can quantify their error, and we can use them in our academic research! Wahoo!
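Since a key benefit of fitting your own coefficients is being able to report an exact error figure for your temperature range, here is a quick sketch of one way to quantify the fit error, reusing the data and the coefficients printed by the script above:

```python
import numpy as np

# Same NIST data for N2 at 0.101 MPa as above
T = np.array([200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000], dtype=float)
mu = np.array([0.000012924, 0.000022217, 0.000029602, 0.000035932,
               0.000041597, 0.000046812, 0.000051704, 0.000056357,
               0.000060829, 0.000065162])

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

# Coefficients printed by the fitting script above
As, Ts = 1.55902e-6, 168.766

# Relative error of the fit at each data point
rel_err = np.abs(sutherland(T, As, Ts) - mu)/mu
print('Max relative error: {:.2%}'.format(rel_err.max()))
print('RMS relative error: {:.2%}'.format(np.sqrt(np.mean(rel_err**2))))
```

Note that because curve_fit minimizes absolute residuals, the relative error tends to be largest at the low-temperature end of the range, so it is worth reporting the maximum as well as the RMS value.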

## Summary

In this post, we looked at how to take a database of viscosity-temperature data and use the Python package SciPy to solve for the unknown Sutherland viscosity coefficients. The NIST WebBook was used to grab some data, which was then loaded into Python and curve-fit using the scipy.optimize curve_fit function.

This task could also easily be accomplished using the MATLAB curve-fitting toolbox, or perhaps in Excel. However, I have not had good success using the Excel solver to solve for unknown coefficients.

► Tips for tackling the OpenFOAM learning curve
23 Apr, 2019

The most common complaint I hear, and the most common problem I observe, with OpenFOAM is its supposed “steep learning curve”. I would argue, however, that for those who want to practice CFD effectively, the learning curve is just as steep in any other software.

There is a distinction that should be made between “user friendliness” and the learning curve required to do good CFD.

While I concede that other commercial programs have better basic user friendliness (a nice graphical interface, drop-down menus, point-and-click options, etc.), it is equally likely (if not more likely) that you will get bad results in those programs as with OpenFOAM. In fact, to some extent, the high user friendliness of commercial software can encourage a level of ignorance that can be dangerous. Additionally, once you are comfortable operating in the OpenFOAM world, the possibilities become endless, and things like code modification and bash and Python scripting can make OpenFOAM workflows EXTREMELY efficient and powerful.

Anyway, here are a few tips to more easily tackle the OpenFOAM learning curve:

(1) Understand CFD

This may seem obvious… but it’s not to some. Troubleshooting bad simulation results, or unstable simulations that crash, is impossible if you don’t have at least a basic understanding of what is happening under the hood. My favorite books on CFD are:

(a) The Finite Volume Method in Computational Fluid Dynamics: An Advanced Introduction with OpenFOAM® and Matlab by F. Moukalled, L. Mangani, and M. Darwish

(b) An Introduction to Computational Fluid Dynamics: The Finite Volume Method by H. K. Versteeg and W. Malalasekera

(c) Computational Fluid Dynamics: The Basics with Applications by John D. Anderson

(2) Understand fluid dynamics

Again, this may seem obvious and not very insightful. But if you are going to assess the quality of your results, and understand and appreciate the limitations of the various assumptions you are making – you need to understand fluid dynamics. In particular, you should familiarize yourself with the fundamentals of turbulence, and turbulence modeling.

(3) Avoid building cases from scratch

Whenever I start a new case, I find the tutorial case that most closely matches what I am trying to accomplish. This greatly speeds things up. It will take you a super long time to set up any case from scratch – and you’ll probably make a bunch of mistakes, forget key variable entries etc. The OpenFOAM developers have done a lot of work setting up the tutorial cases for you, so use them!

As you continue to work in OpenFOAM on different projects, you should be compiling a library of your own templates based on previous work.

(4) Using Ubuntu makes things much easier

This is strictly my opinion. But I have found this to be true. Yes, it’s true that Ubuntu has its own learning curve, but I have found that OpenFOAM works seamlessly in Ubuntu or any Ubuntu-like Linux environment. OpenFOAM now has Windows flavors using Docker and the like, but I can’t really speak to how well they work, mostly because I’ve never bothered. Once you unlock the power of Linux, the only reason to use Windows is for Microsoft Office (I guess unless you’re a gamer – and even then, more and more games are now on Linux). Not only that, the VAST majority of forums and troubleshooting resources associated with OpenFOAM that you’ll find on the internet are from Ubuntu users.

I much prefer to use Ubuntu with a virtual Windows environment inside it. My current office setup is my primary desktop running Ubuntu, plus a Windows VirtualBox, plus a laptop running Windows that I use for traditional Windows-type stuff. Dual booting is another option, but seamlessly moving between the environments is easier this way.

(5) If you’re struggling, simplify

Unless you know exactly what you are doing, you probably shouldn’t dive into the most complicated version of whatever you are trying to solve/study. It is best to start simple, and layer the complexity on top. This way, when something goes wrong, it is much easier to figure out where the problem is coming from.

(6) Familiarize yourself with the cfd-online forum

If you are having trouble, the cfd-online forum is super helpful. Most likely, someone else has had the same problem you have. If not, the people there are extremely helpful, and overall the forum is an extremely positive environment for working out the kinks in your simulations.

(7) The results from checkMesh matter

If you run checkMesh and your mesh fails – fix your mesh. This is important. Especially if you are not planning on familiarizing yourself with the available numerical schemes in OpenFOAM, you should at least have a beautiful mesh. In particular, if your mesh is highly non-orthogonal, you will have serious problems. If you insist on using a bad mesh, you will probably need to manipulate the numerical schemes. A great source for how schemes should be manipulated based on mesh non-orthogonality is:

http://www.wolfdynamics.com/wiki/OFtipsandtricks.pdf

(8) CFL Number Matters

If you are running a transient case, the Courant–Friedrichs–Lewy (CFL) number matters… a lot. Not just for accuracy (if you are trying to capture a transient event) but for stability. If your time step is too large, you are going to have problems. There is a solid mathematical basis for this stability criterion for advection-diffusion problems. Additionally, the Navier-Stokes equations are very non-linear, and the complexity of the problem, the quality of your grid, etc. can make the simulation even less stable. When I have a transient simulation crash, if I know my mesh is OK, I decrease the timestep by a factor of 2. More often than not, this solves the problem.
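As a back-of-the-envelope sketch (not an OpenFOAM utility – the helper name here is made up), the convective Courant number for a cell is Co = u·Δt/Δx, so a target Co implies a maximum time step:

```python
def max_timestep(u, dx, co_target=0.5):
    """Largest time step that keeps the convective Courant
    number Co = u*dt/dx at or below co_target."""
    return co_target*dx/abs(u)

# Example: 10 m/s flow through 1 mm cells, target Co = 0.5
dt = max_timestep(10.0, 0.001)
print(dt)
```

Halving the time step, as suggested above, halves the Courant number everywhere, which is why it so often rescues a crashing transient run.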

For large time steps, you can add outer loops to solvers based on the PIMPLE algorithm, but you may end up losing important transient information. An excellent explanation of how to do this is given in the book by T. Holzmann:

https://holzmann-cfd.de/publications/mathematics-numerics-derivations-and-openfoam

For the record, this point falls under item (1), Understand CFD.

(9) Work through the OpenFOAM Wiki “3 Week” Series

If you are starting OpenFOAM for the first time, it is worth it to work through an organized program of learning. One such example (and there are others) is the “3 Weeks Series” on the OpenFOAM wiki:

https://wiki.openfoam.com/%223_weeks%22_series

If you are a graduate student with no job to do other than learn OpenFOAM, it will not even take 3 weeks. The series touches on all the necessary points you need to get started.

(10) OpenFOAM is not a second-tier software – it is top tier

I know some people who have started out with the attitude from the get-go that they should be using different software. They think somehow open source means that it is not good. This is a pretty silly attitude. Many top researchers around the world are now using OpenFOAM or some other open source package, and the number of OpenFOAM citations has grown consistently every year.

In my opinion, the only place where mainstream commercial CFD packages will persist is in industry labs where cost is no concern and changing software is more trouble than it’s worth. OpenFOAM has been widely benchmarked and widely validated, from fundamental flows to hypersonics (see any of my 17 publications using it for this). If your results aren’t good, you are probably doing something wrong. If you have the attitude that you would rather be using something else, and are bitter that your supervisor wants you to use OpenFOAM, then when something goes wrong you will immediately think there is something wrong with the program… which is silly – and you may quit.

(11) Meshing… Ugh Meshing

For the record, meshing is an art in any software. But meshing is the only area where I will concede any limitation in OpenFOAM. HOWEVER, as I have outlined in my previous post (https://curiosityfluids.com/2019/02/14/high-level-overview-of-meshing-for-openfoam/) most things can be accomplished in OpenFOAM, and there are enough third party meshing programs out there that you should have no problem.

## Summary

Basically, if you are starting out in CFD or OpenFOAM, you need to put in the time. If you are expecting to be able to just sit down and produce magnificent results, you will be disappointed. You might quit. And frankly, that’s a pretty stupid attitude. However, if you accept that CFD and fluid dynamics in general are massive fields under constant development, and are willing to get up to speed, there are few limits to what you can accomplish.

Please take the time! If you want to do CFD, learning OpenFOAM is worth it. Seriously worth it.

This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trade marks.

► Automatic Airfoil C-Grid Generation for OpenFOAM – Rev 1
22 Apr, 2019

Here I will present something I’ve been experimenting with regarding a simplified workflow for meshing airfoils in OpenFOAM. If you’re like me, (who knows if you are) I simulate a lot of airfoils. Partly because of my involvement in various UAV projects, partly through consulting projects, and also for testing and benchmarking OpenFOAM.

Because there is so much data out there on airfoils, they are a good way to test your setups and benchmark solver accuracy. But going from an airfoil .dat coordinate file to a mesh can be a bit of pain. Especially if you are starting from scratch.

The main ways that I have meshed airfoils to date have been:

(a) Mesh it as a C or O grid in blockMesh (I have a few templates kicking around for this)
(b) Generate a “ribbon” geometry and mesh it with cfMesh
(c) Or, back in the day when I was a PhD student, use Pointwise – oh how I miss it.

But getting the mesh to look good was always somewhat tedious. So I attempted to come up with a Python script that takes the airfoil data file and minimal inputs, and outputs a blockMeshDict file that you just have to run.

The goals were as follows:
(a) Create a C-Grid domain
(b) be able to specify boundary layer growth rate
(c) be able to set the first layer wall thickness
(e) be mostly automatic (few user inputs)
(f) have good mesh quality – pass all checkMesh tests
(g) Quality is consistent – meaning when I make the mesh finer, the quality stays the same or gets better
(h) be able to do both closed and open trailing edges
(i) be able to handle most airfoils (up to high cambers)
(j) automatically handle hinge and flap deflections

In Rev 1 of this script, I believe I have accomplished (a) through (g). Presently, it can only handle airfoils with closed trailing edges. Hinge and flap deflections are not possible, and highly cambered airfoils do not give very satisfactory results.

There are existing tools and scripts for automatically meshing airfoils, but I found personally that I wasn’t happy with the results. I also thought this would be a good opportunity to illustrate one of the ways python can be used to interface with OpenFOAM. So please view this as both a potentially useful script, but also something you can dissect to learn how to use python with OpenFOAM. This first version of the script leaves a lot open for improvement, so some may take it and be able to tailor it to their needs!

Hopefully, this is useful to some of you out there!

https://github.com/curiosityFluids/curiosityFluidsAirfoilMesher

Here you will also find a template based on the airfoil2D OpenFOAM tutorial.

## Instructions

(1) Copy curiosityFluidsAirfoilMesher.py to the root directory of your simulation case.
(2) Copy your airfoil coordinates in Selig .dat format into the same folder location.
(3) Modify curiosityFluidsAirfoilMesher.py to your desired values. Specifically, make sure that the string variable airfoilFile is referring to the right .dat file
(4) In the terminal run: python3 curiosityFluidsAirfoilMesher.py
(5) If no errors – run blockMesh

PS
You need to run this with Python 3, and you need to have NumPy installed.

## Inputs

The inputs for the script are very simple:

ChordLength: This is simply the airfoil chord length if not equal to 1. The airfoil dat file should have a chordlength of 1. This variable allows you to scale the domain to a different size.

airfoilFile: This is a string with the name of the airfoil dat file. It should be in the same folder as the Python script, and both should be in the root folder of your simulation directory. The script writes a blockMeshDict to the system folder.

DomainHeight: This is the height of the domain in multiples of chords.

WakeLength: Length of the wake domain in multiples of chords

firstLayerHeight: This is the height of the first layer. To estimate the requirement for this size, you can use the curiosityFluids y+ calculator

growthRate: Boundary layer growth rate

MaxCellSize: This is the max cell size along the centerline from the leading edge of the airfoil. Some cells will be larger than this depending on the gradings used.

The following inputs are used to improve the quality of the mesh. I have had pretty good results messing around with these to get checkMesh compliant grids.

BLHeight: This is the height of the boundary layer block off of the surfaces of the airfoil

inletGradingFactor: This is a grading factor that modifies the grading along the inlet as a multiple of the leading-edge grading, and can help improve mesh uniformity.

trailingBlockAngle: This is an angle in degrees that expresses the angles of the trailing edge blocks. This can reduce the aspect ratio of the boundary cells at the top and bottom of the domain, but can make other mesh parameters worse.
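As an aside, the firstLayerHeight input above can be estimated for a target y+ from a flat-plate skin-friction correlation. This is only a sketch of the kind of estimate a y+ calculator performs; the particular correlation (Cf = 0.026/Re^(1/7)) is my assumption, not something taken from the script:

```python
import math

def first_layer_height(y_plus, U, rho, mu, L):
    """Estimate the first-cell height for a target y+ using the
    flat-plate correlation Cf = 0.026/Re**(1/7) (an assumption;
    other correlations give slightly different numbers)."""
    Re = rho*U*L/mu                 # Reynolds number based on chord L
    cf = 0.026/Re**(1/7)            # flat-plate skin-friction coefficient
    tau_w = 0.5*cf*rho*U**2         # wall shear stress
    u_tau = math.sqrt(tau_w/rho)    # friction velocity
    return y_plus*mu/(rho*u_tau)    # first cell height for the target y+

# Example: sea-level air over a 1 m chord at 30 m/s, target y+ = 1
print(first_layer_height(1.0, 30.0, 1.225, 1.81e-5, 1.0))
```

For these conditions the estimate comes out on the order of tens of micrometres, which is the kind of value you would feed to firstLayerHeight.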

## Examples

### 12% Joukowski Airfoil

Inputs:

With the above inputs, the grid looks like this:

Mesh Quality:

These are some pretty good mesh statistics. We can also view them in paraView:

### Clark-y Airfoil

The clark-y has some camber, so I thought it would be a logical next test to the previous symmetric one. The inputs I used are basically the same as the previous airfoil:

With these inputs, the result looks like this:

Mesh Quality:

Visualizing the mesh quality:

### MH60 – Flying Wing Airfoil

Here is an example of a flying wing airfoil (tested since the trailing edge is tilted upwards).

Inputs:

Again, these are basically the same as the others. I have found that with these settings I get pretty consistently good results. When you change MaxCellSize, firstLayerHeight, and the gradings, some modification may be required. However, if you just halve MaxCellSize and halve firstLayerHeight, you “should” get similar grid quality, just much finer.

Grid Quality:

Visualizing the grid quality

## Summary

Hopefully some of you find this tool useful! I plan to release a Rev 2 soon that will have the ability to handle highly cambered airfoils, and open trailing edges, as well as control surface hinges etc.

The long term goal will be an automatic mesher with an H-grid in the spanwise direction so that the readers of my blog can easily create semi-span wing models extremely quickly!

DISCLAIMER: This script is intended as an educational and productivity tool and starting point. You may use and modify it how you wish. But I make no guarantee of its accuracy, reliability, or suitability for any use. This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Normal Shock Calculator
20 Feb, 2019

Here is a useful little tool for calculating the properties across a normal shock.
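For reference, the standard normal-shock relations for a calorically perfect gas that such a calculator implements can be sketched in a few lines:

```python
def normal_shock(M1, gamma=1.4):
    """Normal-shock relations for a calorically perfect gas (M1 > 1).
    Returns the downstream Mach number and the static pressure,
    density, and temperature ratios across the shock."""
    M2 = ((1 + 0.5*(gamma - 1)*M1**2)/(gamma*M1**2 - 0.5*(gamma - 1)))**0.5
    p_ratio = 1 + 2*gamma/(gamma + 1)*(M1**2 - 1)          # p2/p1
    rho_ratio = (gamma + 1)*M1**2/((gamma - 1)*M1**2 + 2)  # rho2/rho1
    T_ratio = p_ratio/rho_ratio                            # T2/T1
    return M2, p_ratio, rho_ratio, T_ratio

# Example: Mach 2 shock in air
M2, p21, rho21, T21 = normal_shock(2.0)
print(M2, p21, rho21, T21)  # M2 ≈ 0.577, p2/p1 = 4.5
```
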

If you found this useful, and have the need for more, visit www.stfsol.com. One of STF Solutions specialties is providing our clients with custom software developed for their needs. Ranging from custom CFD codes to simpler targeted codes, scripts, macros and GUIs for a wide range of specific engineering purposes such as pipe sizing, pressure loss calculations, heat transfer calculations, 1D flow transients, optimization and more. Visit STF Solutions at www.stfsol.com for more information!

Disclaimer: This calculator is for educational purposes and is free to use. STF Solutions and curiosityFluids make no guarantee of the accuracy of the results, or of their suitability or outcome for any given purpose.

### Hanley Innovationstop

► Accurate Aircraft Performance Predictions using Stallion 3D
26 Feb, 2020

Stallion 3D uses your CAD design to simulate the performance of your aircraft.  This enables you to verify your design and compute quantities such as cruise speed, power required and range at a given cruise altitude. Stallion 3D is used to optimize the design before moving forward with building and testing prototypes.

The table below shows the results of Stallion 3D around the cruise angles of attack of the Cessna 402c aircraft.  The CAD design can be obtained from the OpenVSP hangar.

The results were obtained by simulating 5 angles of attack in Stallion 3D on an ordinary laptop computer running MS Windows 10. Given the aircraft geometry and flight conditions, Stallion 3D computed the CL, CD, L/D and other aerodynamic quantities. With these accurate aerodynamic results, preliminary performance data such as cruise speed, power, range and endurance can be obtained.

Lift Coefficient versus Angle of Attack computed with Stallion 3D

Lift to Drag Ratio versus True Airspeed at 10,000 feet

Power Required versus True Airspeed at 10,000 feet

The Stallion 3D results show good agreement with the published data for the Cessna 402. For example, the cruise speed of the aircraft at 10,000 feet is around 140 knots. This coincides with the speed at the maximum L/D (best range) shown in the graph and table above.

http://www.hanleyinnovations.com/stallion3d.html

Hanley Innovations is a pioneer in developing user friendly and accurate software that is accessible to engineers, designers and students.  For more information, please visit > http://www.hanleyinnovations.com

► 5 Tips For Excellent Aerodynamic Analysis and Design
8 Feb, 2020
Stallion 3D analysis of Uber Elevate eCRM-100 model

Being the best aerodynamics engineer requires meticulous planning and execution. Here are 5 steps you can follow to start your journey to becoming one of the best aerodynamicists.

1.  Airfoils analysis (VisualFoil) - the wing will not be better than the airfoil. Start with the best airfoil for the design.

2.  Wing analysis (3Dfoil) - know the benefits/limits of taper, geometric & aerodynamic twist, dihedral angles, sweep, induced drag and aspect ratio.

3. Stability analysis (3Dfoil) - longitudinal & lateral static & dynamic stability analysis.  If the airplane is not stable, it might not fly (well).

4. High Lift (MultiElement Airfoils) - airfoil arrangements can do wonders for takeoff, climb, cruise and landing.

5. Analyze the whole arrangement (Stallion 3D) - this is the best information you will get until you flight test the design.

Hanley Innovations is a pioneer in developing user friendly and accurate software that is accessible to engineers, designers and students.  For more information, please visit > http://www.hanleyinnovations.com

► Accurate Aerodynamics with Stallion 3D
17 Aug, 2019

Stallion 3D is an extremely versatile tool for 3D aerodynamics simulations.  The software solves the 3D compressible Navier-Stokes equations using novel algorithms for grid generation, flow solutions and turbulence modeling.

The proprietary grid generation and immersed boundary methods find objects arbitrarily placed in the flow field and then automatically place an accurate grid around them without user intervention.

Stallion 3D’s algorithms are fine-tuned to analyze inviscid flow with minimal losses. The above figure shows the surface pressure of the BD-5 aircraft (obtained from the OpenVSP hangar) using the compressible Euler algorithm.

Stallion 3D solves the Reynolds Averaged Navier-Stokes (RANS) equations using a proprietary implementation of the k-epsilon turbulence model in conjunction with an accurate wall function approach.

Stallion 3D can be used to solve aerodynamics problems about complex geometries in subsonic, transonic and supersonic flows.  The software computes and displays the lift, drag and moments for complex geometries in the STL file format.  Actuator discs (up to 100) can be added to simulate prop wash for propeller and VTOL/eVTOL aircraft analysis.

Stallion 3D is a versatile and easy-to-use software package for aerodynamic analysis.  It can be used for computing performance and stability (both static and dynamic) of aerial vehicles including drones, eVTOL aircraft, light airplanes and dragons (above graphics via Thingiverse).

► Hanley Innovations Upgrades Stallion 3D to Version 5.0
18 Jul, 2017
The CAD for the King Air was obtained from Thingiverse

Stallion 3D is a 3D aerodynamics analysis software package developed by Dr. Patrick Hanley of Hanley Innovations in Ocala, FL. Starting with only the STL file, Stallion 3D is an all-in-one digital tool that rapidly validates conceptual and preliminary aerodynamic designs of aircraft, UAVs, hydrofoils and road vehicles.

Version 5.0 has the following features:
• Built-in automatic grid generation
• Built-in 3D compressible Euler Solver for fast aerodynamics analysis.
• Built-in 3D laminar Navier-Stokes solver
• Built-in 3D Reynolds Averaged Navier-Stokes (RANS) solver
• Multi-core flow solver processing on your Windows laptop or desktop using OpenMP
• Inputs STL files for processing
• Built-in wing/hydrofoil geometry creation tool
• Enables stability derivative computation using quasi-steady rigid body rotation
• Up to 100 actuator disc (RANS solver only) for simulating jets and prop wash
• Reports the lift, drag and moment coefficients
• Reports the lift, drag and moment magnitudes
• Plots surface pressure, velocity, Mach number and temperatures
• Produces 2-d plots of Cp and other quantities along constant coordinates line along the structure
The introductory price of Stallion 3D 5.0 is $3,495 for the yearly subscription, or $8,000. The software is also available in Lab and Class Packages.

► Airfoil Digitizer
18 Jun, 2017

Airfoil Digitizer is a software package for extracting airfoil data files from images. The software accepts images in the jpg, gif, bmp, png and tiff formats. Airfoil data can be exported as AutoCAD DXF files (line entities), UIUC airfoil database format and Hanley Innovations VisualFoil Format.

The following tutorial shows how to use Airfoil Digitizer to obtain hard-to-find airfoil ordinates from pictures.

http://www.hanleyinnovations.com/airfoildigitizerhelp.html

15 Feb, 2017

Have you ever wished for the power to solve your 3D aerodynamics analysis problems within your company at the push of a button?  Stallion 3D gives you this very power using your MS Windows laptop or desktop computers. The software provides accurate CL, CD, & CM numbers directly from CAD geometries without the need for user grid generation and costly cloud computing.

Stallion 3D v4 is the only MS Windows software that enables you to solve turbulent compressible flows on your PC.  It utilizes the power that is hidden in your personal computer (64-bit & multi-core technologies). The software simultaneously solves seven unsteady non-linear partial differential equations on your PC. Five of these equations (the Reynolds-averaged Navier-Stokes, RANS) ensure conservation of mass, momentum and energy for a compressible fluid. Two additional equations capture the dynamics of a turbulent flow field.

Unlike other CFD software that requires you to purchase grid generation software (and spend days generating a grid), grid generation is automatic and included within Stallion 3D.  Results are often obtained within a few hours of opening the software.

Do you need to analyze upwind and downwind sails?  Do you need data for wings and ship stabilizers at 10, 40, 80, 120 degree angles and beyond? Do you need accurate lift, drag & temperature predictions at subsonic, transonic and supersonic speeds? Stallion 3D can handle all flow speeds for any geometry, all on your ordinary PC.

http://www.hanleyinnovations.com/stallion3d.html

Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. in fluid dynamics from the Massachusetts Institute of Technology (MIT) department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.

### CFD and others...top

► Facts, Myths and Alternative Facts at an Important Juncture
21 Jun, 2020
We live in an extraordinary time in modern human history. A global pandemic did the unthinkable to billions of people: a nearly total lock-down for months.  Like many universities in the world, KU closed its doors to students in early March of 2020, and all courses were offered online.

Millions watched in horror when George Floyd was murdered, and when a 75 year old man was shoved to the ground and started bleeding from the back of his skull...

Meanwhile, Trump and his allies routinely ignore facts, fabricate alternative facts, and advocate often-debunked conspiracy theories to push his agenda. The political system designed by the founding fathers is assaulted from all directions. The rule of law and the free press are attacked on a daily basis. One often wonders how we managed to get to this point, and if the political system can survive the constant sabotage...It appears the struggle between facts, myths and alternative facts hangs in the balance.

In any scientific discipline, conclusions are drawn, and decisions are made based on verifiable facts. Of course, we are humans, and honest mistakes can be made. There are others, who push alternative facts or misinformation with ulterior motives. Unfortunately, mistaken conclusions and wrong beliefs are sometimes followed widely and become accepted myths. Fortunately, we can always use verifiable scientific facts to debunk them.

There have been many myths in CFD, and quite a few have been rebutted. Some have continued to persist. I'd like to refute several in this blog. I understand some of the topics can be very controversial, but I welcome fact-based debate.

Myth No. 1 - My LES/DNS solution has no numerical dissipation because a central-difference scheme is used.

A central finite difference scheme is indeed free of numerical dissipation in space. However, the time integration scheme inevitably introduces both numerical dissipation and dispersion. Since DNS/LES is unsteady in nature, the solution is not free of numerical dissipation.
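As a quick illustration (a sketch, not tied to any particular solver): for linear advection, a central-difference spatial operator has purely imaginary eigenvalues, z = iωΔt, and the amplification factor of the classical RK4 time scheme on the imaginary axis is strictly below one, so the time integration damps the mode even though the spatial scheme is dissipation-free:

```python
# Amplification factor of classical RK4 applied to dz/dt = lam*z:
# one step multiplies the solution by the degree-4 Taylor polynomial of exp(z).
def rk4_amplification(z):
    return abs(1 + z + z**2/2 + z**3/6 + z**4/24)

# Purely imaginary z, as produced by a central-difference advection operator
for w in [0.5, 1.0, 2.0]:
    print(w, rk4_amplification(1j*w))  # each value is < 1: the mode decays
```

The exact solution of the model problem has |exp(iω Δt)| = 1, so any amplification factor below one is, by definition, numerical dissipation introduced by the time scheme.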

Myth No. 2 - You should use non-dissipative schemes in LES/DNS because upwind schemes have too much numerical dissipation.

It sounds reasonable, but it is far from true. We all agree that fully upwind schemes (the stencil shown in Figure 1) are bad. Upwind-biased schemes, on the other hand, are not necessarily bad at all. In fact, in a numerical test with the Burgers equation [1], the upwind-biased scheme performed better than the central difference scheme because of its smaller dispersion error. In addition, the numerical dissipation in the upwind-biased scheme makes the simulation more robust, since under-resolved high-frequency waves are naturally damped.

 Figure 1. Various discretization stencils for the red point
The Riemann solver used in the DG/FR/CPR scheme also introduces a small amount of dissipation. However, because of its small dispersion error, it out-performs the central difference and upwind-biased schemes. This study shows that both dissipation and dispersion characteristics are equally important. Higher order schemes clearly perform better than a low order non-dissipative central difference scheme.
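These dispersion/dissipation properties can be quantified through the Fourier symbol of each stencil (a sketch, not the study cited above). The exact symbol of d/dx acting on a mode exp(ikx) is iθ with θ = kΔx; a nonzero real part of the discrete symbol indicates dissipation, and the deviation of the imaginary part from θ is the dispersion error:

```python
import numpy as np

# Fourier symbols of two discrete d/dx stencils (grid spacing dx = 1)
def central2(theta):
    # second-order central: (u[j+1] - u[j-1])/2
    return (np.exp(1j*theta) - np.exp(-1j*theta))/2

def upwind_biased2(theta):
    # second-order upwind-biased: (3u[j] - 4u[j-1] + u[j-2])/2
    return (3 - 4*np.exp(-1j*theta) + np.exp(-2j*theta))/2

theta = np.pi/2  # a poorly resolved mode: 4 points per wavelength
for name, D in [('central', central2), ('upwind-biased', upwind_biased2)]:
    s = D(theta)
    print(name, 'dissipation:', s.real, 'dispersion error:', s.imag - theta)
```

At this wavenumber the central symbol has zero real part (no dissipation) but the larger dispersion error, while the upwind-biased symbol trades a positive real part (damping of the under-resolved mode) for a smaller dispersion error, consistent with the Burgers-equation test mentioned above.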

Myth No. 3 - The Smagorinsky model is a physics-based sub-grid-scale (SGS) model.

There have been numerous studies based on experimental or DNS data showing that the SGS stress produced by the Smagorinsky model does not correlate with the true SGS stress. The role of the model is instead to add numerical dissipation to stabilize the simulation. The model coefficient is usually determined by matching a certain turbulent energy spectrum. This suggests that the model is purely numerical in nature, calibrated for certain numerical schemes using a particular turbulent energy spectrum. The calibration is not universal: many simulations have produced worse results with the model.
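For reference, the Smagorinsky model computes the SGS eddy viscosity from the resolved strain rate as

$$\nu_t = (C_s \Delta)^2 \left|\bar{S}\right|, \qquad \left|\bar{S}\right| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}$$

where $\Delta$ is the filter width and $C_s$ is the calibrated model coefficient discussed above.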

► What Happens When You Run a LES on a RANS Mesh?
27 Dec, 2019

Surely, you will get garbage because there is no way your LES will have any chance of resolving the turbulent boundary layer. As a result, your skin friction will be way off. Therefore, your drag and lift will be a total disaster.

To actually demonstrate this point of view, we recently embarked upon a numerical experiment to run an implicit large eddy simulation (ILES) of the NASA CRM high-lift configuration from the 3rd AIAA High-Lift Prediction Workshop. The flow conditions are: Mach = 0.2, Reynolds number = 3.26 million based on the mean aerodynamic chord, and the angle of attack = 16 degrees.

A quadratic (Q2) mesh was generated by Dr. Steve Karman of Pointwise, and is shown in Figure 1.

Figure 1. Quadratic mesh for the NASA CRM high-lift configuration (generated by Pointwise)

The mesh has roughly 2.2 million mixed elements and is highly clustered near the wall, with an average equivalent y+ value smaller than one. A p-refinement study was conducted to assess mesh sensitivity using our high-order LES tool based on the FR/CPR method, hpMusic. Simulations were performed with solution polynomial degrees of p = 1, 2 and 3, corresponding to 2nd, 3rd and 4th order accuracy, respectively. No wall model was used. Needless to say, the higher order simulations captured finer turbulence scales, as shown in Figure 2, which displays iso-surfaces of the Q-criterion colored by the Mach number.

 p = 1

 p = 2

 p = 3
Figure 2. Iso-surfaces of the Q-criterion colored by the Mach number

Clearly the flow is mostly laminar on the pressure side, and transitional/turbulent on the suction side of the main wing and the flap. Although the p = 1 simulation captured the fewest scales, it still correctly identified the laminar and turbulent regions.

The drag and lift coefficients from the present p-refinement study are compared with experimental data from NASA in Table I. Although the 2nd order (p = 1) results are quite different from those of the higher orders, the 3rd and 4th order results are very close, demonstrating good p-convergence in both the lift and drag coefficients. The lift agrees better with the experimental data than the drag, bearing in mind that the experiment includes wind tunnel wall effects and small instruments that are not present in the computational model.

Table I. Comparison of lift and drag coefficients with experimental data
|            | CL    | CD    |
|------------|-------|-------|
| p = 1      | 2.020 | 0.293 |
| p = 2      | 2.411 | 0.282 |
| p = 3      | 2.413 | 0.283 |
| Experiment | 2.479 | 0.252 |

This exercise seems to contradict the common-sense logic stated at the beginning of this post. So what happened? The answer is that in this high-lift configuration, the dominant force is due to pressure rather than friction. In fact, 98.65% of the drag and 99.98% of the lift are due to the pressure force. For such flow problems, running a LES on a RANS mesh (with sufficient accuracy) may produce reasonable predictions of drag and lift. More studies are needed before drawing any definite conclusions. We would like to hear from you if you have done something similar.

This study will be presented in the forthcoming AIAA SciTech conference, to be held on January 6th to 10th, 2020 in Orlando, Florida.

► Not All Numerical Methods are Born Equal for LES
15 Dec, 2018
Large eddy simulations (LES) are notoriously expensive for high Reynolds number problems because of the disparate length and time scales in the turbulent flow. Recent high-order CFD workshops have demonstrated the accuracy/efficiency advantage of high-order methods for LES.

The ideal numerical method for implicit LES (with no sub-grid scale models) should have very low dissipation AND dispersion errors over the resolvable range of wave numbers, but be dissipative for non-resolvable high wave numbers. In this way, the simulation resolves a wide turbulent spectrum while damping out the non-resolvable small eddies to prevent energy pile-up, which can cause the simulation to diverge.

We want to emphasize the equal importance of numerical dissipation and dispersion, both of which can be generated by the space and time discretizations. It is well known that standard central finite difference (FD) schemes and energy-preserving schemes have no numerical dissipation in space. However, numerical dissipation can still be introduced by the time integration, e.g., by explicit Runge-Kutta schemes.
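The dissipation and dispersion of a spatial operator can be made concrete with a standard modified-wavenumber (Fourier) analysis. The sketch below uses the classical 6th-order central stencil and, for contrast, a 5th-order upwind-biased stencil; these are textbook schemes chosen for illustration, not necessarily the exact ones used in our study.

```python
import numpy as np

# Semi-discrete Fourier analysis of two first-derivative stencils for
# u_t + u_x = 0.  K(theta) is the modified wavenumber times dx:
#   Re(K) -> dispersion (the exact operator has Re(K) = theta)
#   Im(K) -> dissipation (0 for a non-dissipative scheme, < 0 for damping)

def modified_wavenumber(offsets, coeffs, theta):
    return -1j * sum(c * np.exp(1j * m * theta) for m, c in zip(offsets, coeffs))

theta = np.linspace(1e-3, np.pi, 200)   # scaled wavenumber k*dx

# Classical 6th-order central stencil (antisymmetric coefficients)
central = modified_wavenumber(
    [-3, -2, -1, 1, 2, 3],
    [-1/60, 3/20, -3/4, 3/4, -3/20, 1/60],
    theta)

# 5th-order upwind-biased stencil (three points upwind, two downwind)
upwind = modified_wavenumber(
    [-3, -2, -1, 0, 1, 2],
    np.array([-2, 15, -60, 20, 30, -3]) / 60,
    theta)

print("max |Im K|, central :", np.abs(central.imag).max())  # ~0: no dissipation
print("min  Im K , upwind  :", upwind.imag.min())           # < 0: dissipative
```

The imaginary part of K is identically zero for the central scheme (no spatial dissipation) and negative for the upwind-biased one, while the departure of the real part from the line Re(K) = theta measures each scheme's dispersion error.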

We recently analyzed and compared several 6th-order spatial schemes for LES: the standard central FD, the upwind-biased FD, the filtered compact difference (FCD), and the discontinuous Galerkin (DG) schemes, all with the same time integration approach (a Runge-Kutta scheme) and the same time step. The FCD schemes have an 8th order filter with two different filtering coefficients, 0.49 (weak) and 0.40 (strong). We first show results for the linear wave equation with 36 degrees of freedom (DOFs) in Figure 1. The initial condition is a Gaussian profile, and a periodic boundary condition was used. The profile traversed the domain 200 times to highlight the differences.

Figure 1. Comparison of the Gaussian profiles for the DG, FD, and CD schemes

Note that the DG scheme gave the best performance, followed closely by the two FCD schemes, then the upwind-biased FD scheme, and finally the central FD scheme. The large dispersion error from the central FD scheme caused it to miss the peak, and also generate large errors elsewhere.

Finally, simulation results for the viscous Burgers' equation are shown in Figure 2, which compares the energy spectrum computed with the various schemes against that of a direct numerical simulation (DNS).

Figure 2. Comparison of the energy spectrum

Note again that the worst performance is delivered by the central FD scheme, with a significant high-wave-number energy pile-up. Although the FCD scheme with the weak filter resolved the widest spectrum, its pile-up at high wave numbers may cause robustness issues. Therefore, the best performers are the DG scheme and the FCD scheme with the strong filter. Note also that the upwind-biased FD scheme outperformed the central FD scheme, since it resolved the same range of wave numbers without the energy pile-up.

► Are High-Order CFD Solvers Ready for Industrial LES?
1 Jan, 2018
The potential of high-order methods (order > 2nd) is higher accuracy at lower cost than low order methods (1st or 2nd order). This potential has been conclusively demonstrated for benchmark scale-resolving simulations (such as large eddy simulation, or LES) by multiple international workshops on high-order CFD methods.

For industrial LES, in addition to accuracy and efficiency, there are several other important factors to consider:

• Ability to handle complex geometries, and ease of mesh generation
• Robustness for a wide variety of flow problems
• Scalability on supercomputers
For general-purpose industry applications, methods capable of handling unstructured meshes are preferred because of the ease of mesh generation and of load balancing on parallel architectures. DG and related methods such as SD and FR/CPR have received much attention because of their geometric flexibility and scalability, and they have matured to become quite robust for a wide range of applications.

Our own research effort has led to the development of a high-order solver based on the FR/CPR method called hpMusic. We recently performed a benchmark LES comparison between hpMusic and a leading commercial solver on the same family of hybrid meshes, at a transonic condition with a Reynolds number above 1 million. The 3rd order hpMusic simulation has 9.6M degrees of freedom (DOFs) and costs about 1/3 the CPU time of the commercial solver's 2nd order simulation, which has 28.7M DOFs. Furthermore, the 3rd order simulation is much more accurate, as shown in Figure 1. We estimate that hpMusic would be an order of magnitude faster at achieving similar accuracy. This study will be presented at AIAA's SciTech 2018 conference next week.

(a) hpMusic 3rd Order, 9.6M DOFs
(b) Commercial Solver, 2nd Order, 28.7M DOFs
Figure 1. Comparison of Q-criterion and Schlieren

I certainly believe high-order solvers are ready for industrial LES. In fact, the commercial version of our high-order solver, hoMusic (pronounced hi-o-music), has been announced by hoCFD LLC (disclaimer: I am the company founder). Give it a try on your problems, and you may be surprised. Academic and trial use is completely free. Just visit hocfd.com to download the solver. A GUI has been developed to simplify problem setup. Your thoughts and comments are highly welcome.

Happy 2018!

► Sub-grid Scale (SGS) Stress Models in Large Eddy Simulation
17 Nov, 2017
The simulation of turbulent flow has been a considerable challenge for many decades. There are three main approaches to computing turbulence: 1) the Reynolds-averaged Navier-Stokes (RANS) approach, in which all turbulence scales are modeled; 2) the Direct Numerical Simulation (DNS) approach, in which all scales are resolved; and 3) the Large Eddy Simulation (LES) approach, in which large scales are computed while the small scales are modeled. I really like the following picture comparing DNS, LES and RANS.

DNS (left), LES (middle) and RANS (right) predictions of a turbulent jet. - A. Maries, University of Pittsburgh

Although the RANS approach has achieved widespread success in engineering design, some applications call for LES, e.g., flow at high angles of attack. Spatial filtering of a non-linear PDE results in a SGS term, which needs to be modeled based on the resolved field. The earliest SGS model was the Smagorinsky model, which relates the SGS stress to the rate-of-strain tensor. The purpose of the SGS model is to dissipate energy at a rate that is physically correct. Later, an improved version called the dynamic Smagorinsky model was developed by Germano et al. and demonstrated much better results.

In CFD, physics and numerics are often intertwined very tightly, and one may draw erroneous conclusions if not careful. Personally, I believe the debate regarding SGS models can offer some valuable lessons regarding physics vs numerics.

It is well known that a central finite difference scheme contains no numerical dissipation in space. However, the time integration can introduce dissipation. For example, a 2nd order central difference scheme combined with the SSP RK3 scheme is linearly stable (subject to a CFL condition) and does contain numerical dissipation. When this scheme is used to perform a LES, the simulation will blow up without a SGS model because of a lack of dissipation for eddies at high wave numbers. It is easy to conclude that the LES succeeds because the SGS stress is properly modeled. A recent study with the Burgers' equation strongly disputes this conclusion: it was shown that the SGS stress from the Smagorinsky model does not correlate well with the physical SGS stress. Therefore, the role of the SGS model, in the above scenario, was to stabilize the simulation by adding numerical dissipation.
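The dissipation supplied by the time integration is easy to verify with a von Neumann calculation. For linear advection, any 3-stage, 3rd-order explicit Runge-Kutta scheme (SSP RK3 included) has the amplification polynomial G(z) = 1 + z + z^2/2 + z^3/6, and combined with the purely imaginary symbol of the 2nd order central scheme its magnitude drops below one. A minimal sketch:

```python
import numpy as np

# Fully discrete von Neumann analysis: 2nd order central difference in space,
# 3-stage 3rd-order Runge-Kutta (same linear amplification as SSP RK3) in time,
# for u_t + a u_x = 0.  The spatial symbol alone is purely imaginary (no
# dissipation); |G| < 1 shows the time integration supplies the damping.

def amplification(nu, theta):
    z = -1j * nu * np.sin(theta)        # z = lambda * dt for the central scheme
    return 1 + z + z**2 / 2 + z**3 / 6  # truncated exponential of any RK3

theta = np.linspace(0, np.pi, 400)      # scaled wavenumber k*dx
nu = 1.0                                # CFL number (linearly stable for nu <= sqrt(3))
G = np.abs(amplification(nu, theta))

print("max |G| =", G.max())  # <= 1: linearly stable
print("min |G| =", G.min())  # < 1: dissipative away from sin(theta) = 0
```

|G| equals one only where sin(theta) = 0, including the grid-cutoff sawtooth mode that the central stencil cannot see; everywhere else the fully discrete scheme damps, which is exactly the dissipation attributed above to the time integration.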

For numerical methods which have natural dissipation at high wave numbers, such as the DG, SD or FR/CPR methods, or methods with spatial filtering, the SGS model can degrade the solution quality because the extra dissipation is not needed for stability. For such methods, there is overwhelming evidence in the literature to support the use of implicit LES (ILES), in which the explicit SGS model is simply omitted. In effect, the numerical dissipation in these methods serves as the SGS model. Personally, I would prefer to call such simulations coarse DNS, i.e., DNS on coarse meshes which do not resolve all scales.

I understand this topic may be controversial. Please do leave a comment if you agree or disagree. I want to emphasize that I support physics-based SGS models.
► 2016: What a Year!
3 Jan, 2017
2016 was undoubtedly the most extraordinary year for long-odds events. Take sports, for example:
• Leicester City won the Premier League in England, defying odds of 5000 to 1
• The Cubs won the World Series after a 108-year wait
In politics, I do not believe many people truly expected that Britain would exit the EU, or that Trump would become the next US president.

From a personal level, I also experienced an equally extraordinary event: the coup in Turkey.

The 9th International Conference on CFD (ICCFD9) took place on July 11-15, 2016 in the historic city of Istanbul. A terror attack on the Istanbul International airport occurred less than two weeks before ICCFD9 was to start. We were informed that ICCFD9 would still take place although many attendees cancelled their trips. We figured that two terror attacks at the same place within a month were quite unlikely, and decided to go to Istanbul to attend and support the conference.

Given the extraordinary circumstances, the conference organizers did a fine job in pulling the conference through. More than half of the attendees withdrew their papers. Backup papers were used to form two parallel sessions though three sessions were planned originally. We really enjoyed Istanbul with the beautiful natural attractions and friendly people.

Then on Friday evening, 12 hours before we were supposed to depart Istanbul, a military coup broke out. The government TV station was controlled by the rebels. However, the Turkish President managed to FaceTime a private TV station, essentially turning the event around. Soon after, many people went to the bridges and the squares, and overpowered the rebels with bare fists.

A tank outside my taxi

A beautiful night in Zurich

The trip back to the US was complicated by the fact that the FAA had banned all direct flights from Turkey. I was lucky enough to find a new flight, with a stop in Zurich...

In 2016, I lost a very good friend and CFD pioneer, Professor Jaw-Yen Yang. He suffered a horrific injury while playing tennis in early 2015. Many of his friends and colleagues gathered in Taipei on December 3-5, 2016 to remember him.

This is a CFD blog after all, and so it is important to show at least one CFD picture. In a validation simulation [1] with our high-order solver, hpMusic, we achieved remarkable agreement with experimental heat transfer data for a high-pressure turbine configuration. Here is a flow picture.

Computational Schlieren and iso-surfaces of Q-criterion

To close, I wish all of you a very happy 2017!

1. Laskowski GM, Kopriva J, Michelassi V, Shankaran S, Paliath U, Bhaskaran R, Wang Q, Talnikar C, Wang ZJ, Jia F. Future directions of high fidelity CFD for aerothermal turbomachinery research, analysis and design, AIAA-2016-3322.

### Convergent Science Blogtop

► Leveling Up Scaling with CONVERGE 3.0
14 Aug, 2020

In a competitive market, predictive computational fluid dynamics (CFD) can give you an edge when it comes to product design and development. Not only can you predict problem areas in your product before manufacturing, but you can also optimize your design computationally and devote fewer resources to testing physical models. To get accurate predictions in CFD, you need to have high-resolution grid-convergent meshes, detailed physical models, high-order numerics, and robust chemistry—all of which are computationally expensive. Using simulation to expedite product design works only if you can run your simulations in a reasonable amount of time.

The introduction of high-performance computing (HPC) drastically furthered our ability to obtain accurate results in shorter periods of time. By running simulations in parallel on multiple cores, we can now solve cases with millions of cells and complicated physics that otherwise would have taken a prohibitively long time to complete.

However, simply running cases on more cores doesn’t necessarily lead to a significant speedup. The speedup from HPC is only as good as your code’s parallelization algorithm. Hence, to get a faster turnaround on product development, we need to improve our parallelization algorithm.
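The way a serial bottleneck caps the achievable speedup is usually illustrated with Amdahl's law; a quick sketch (generic arithmetic, not CONVERGE data):

```python
# Amdahl's-law illustration: if a fraction s of the work is inherently
# serial, the speedup on n cores is capped at 1 / (s + (1 - s) / n).
def amdahl_speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (8, 64, 512):
    print(f"{cores:4d} cores: "
          f"1% serial -> {amdahl_speedup(0.01, cores):6.1f}x, "
          f"5% serial -> {amdahl_speedup(0.05, cores):6.1f}x")
```

Even a 5% serial fraction caps the speedup below 20x no matter how many cores are added, which is why the parallelization algorithm itself has to improve.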

Breaking a problem into parts and solving these parts simultaneously on multiple interlinked processors is known as parallelization. An ideally parallelized problem will scale inversely with the number of cores—twice the number of cores, half the runtime.

A common task in HPC is measuring the scalability, also referred to as scaling efficiency, of an application. Scalability is the study of how the simulation runtime is affected by changing the number of cores or processors. The scaling trend can be visualized by plotting the speedup against the number of cores.
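Concretely, the speedup and efficiency come straight from measured wall-clock times at each core count. A minimal sketch; the runtimes below are invented placeholders, not measured CONVERGE data:

```python
# Strong-scaling metrics from measured runtimes (illustrative numbers only).
runtimes = {      # cores -> wall-clock hours for the same case
    56: 16.0,
    112: 8.4,
    224: 4.6,
    448: 2.9,
}

base_cores = min(runtimes)
base_time = runtimes[base_cores]

for cores, t in sorted(runtimes.items()):
    speedup = base_time / t            # measured speedup vs. the smallest run
    ideal = cores / base_cores         # ideal (linear) speedup
    efficiency = speedup / ideal       # scaling efficiency
    print(f"{cores:4d} cores: speedup {speedup:5.2f}x "
          f"(ideal {ideal:5.1f}x), efficiency {efficiency:5.1%}")
```

Plotting `speedup` against `cores` and comparing with `ideal` gives exactly the scaling trend described above.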

### How Does CONVERGE Parallelize?

#### Parallelization in CONVERGE 2.4 and Earlier

In CONVERGE versions 2.4 and earlier, parallelization is performed by partitioning the solution domain into parallel blocks, which are coarser than the base grid. CONVERGE distributes the blocks to the interlinked processors and then performs a load balance. Load balancing redistributes these parallel blocks such that each processor is assigned roughly the same number of cells.

This parallel-block technique works well unless a simulation contains high levels of embedding (regions in which the base grid is refined to a finer mesh) in the calculation domain. These cases lead to poor parallelization because the cells of a single parallel block cannot be split between multiple processors.

Figure 1 shows an example of parallel block load balancing for a test case in CONVERGE 2.4. The colors of the contour represent the cells owned by each processor. As you can see, the highly embedded region at the center is covered by only a few blocks, leading to a disproportionately high number of cells in those blocks. As a result, the cell distribution across processors is skewed. This phenomenon imposes a practical limit on the number of levels of embedding you can have in earlier versions of CONVERGE while still maintaining a reasonable load balance.

#### Parallelization in CONVERGE 3.0

In CONVERGE 3.0, instead of generating parallel blocks, parallelization is accomplished via cell-based load balancing, i.e., on a cell-by-cell basis. Because each cell can belong to any processor, there is much more flexibility in how the cells are distributed, and we no longer need to worry about our embedding levels.

Figure 2 shows the cell distribution among processors using cell-based load balancing in CONVERGE 3.0 for the same test case shown in Figure 1. You can see that without the restrictions of the parallel blocks, the cells in the highly embedded region are divided between many processors, ensuring an (approximately) equal distribution of cells.
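The difference between the two strategies can be seen in a toy partitioning exercise; the block and cell counts below are invented for illustration and are not CONVERGE internals:

```python
import numpy as np

# Toy comparison of block-based vs cell-based load balancing.  A domain of
# 16 parallel blocks, one of which sits in an embedded region and has been
# refined to 640 cells while the rest hold 40 cells each (numbers invented).
cells_per_block = np.full(16, 40)
cells_per_block[8] = 640           # the highly embedded block
n_ranks = 4

# Block-based: whole blocks are assigned to ranks (greedy, largest first).
rank_load = np.zeros(n_ranks)
for c in sorted(cells_per_block, reverse=True):
    rank_load[rank_load.argmin()] += c
imbalance_block = rank_load.max() / rank_load.mean()

# Cell-based: every cell can go to any rank, so loads are (nearly) even.
total = int(cells_per_block.sum())
cell_loads = np.full(n_ranks, total // n_ranks)
cell_loads[: total % n_ranks] += 1
imbalance_cell = cell_loads.max() / cell_loads.mean()

print(f"block-based imbalance: {imbalance_block:.2f}")  # busiest rank vs mean
print(f"cell-based  imbalance: {imbalance_cell:.2f}")
```

With whole blocks as the unit of distribution, the single refined block pins its rank far above the average load; distributing individual cells removes that constraint entirely.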

The cell-based load balancing technique demonstrates significant improvements in scaling, even for large numbers of cores. And unlike previous versions, the load balancing itself in CONVERGE 3.0 is performed in parallel, accelerating the simulation start-up.

### Case Studies

In order to see how well the cell-based parallelization works, we have performed strong scaling studies for a number of cases. The term strong scaling means that we ran the exact same simulation (i.e., we kept the number of cells, setup parameters, etc. constant) on different core counts.

#### SI8 PFI Engine Case

Figure 3 shows scaling results for a typical SI8 port fuel injection (PFI) engine case in CONVERGE 3.0. The case was run for one full engine cycle, and the core count varied from 56 to 448. The plot compares the speedup obtained running the case in CONVERGE 3.0 with the ideal speedup. With enough CPU resources, in this case 448 cores, you can simulate one engine cycle with detailed chemistry in under two hours—which is three times faster than CONVERGE 2.4!

#### Sandia Flame D Case

If the speedup of the SI8 PFI engine simulation impressed you, then just wait until you see the scaling study for the Sandia Flame D case! Figure 4 shows the results of a strong scaling study performed for the Sandia Flame D case, in which we simulated a methane flame jet using 170 million cells. The case was run on the Blue Waters supercomputer at the National Center for Supercomputing Applications (NCSA), and the core counts vary from 500 to 8,000. CONVERGE 3.0 demonstrates impressive near-linear scaling even on thousands of cores.

### Conclusion

Although earlier versions of CONVERGE show good runtime improvements with increasing core counts, speedup is limited for cases with significant local embeddings. CONVERGE 3.0 has been specifically developed to run efficiently on modern hardware configurations that have a high number of cores per node.

With CONVERGE 3.0, we have observed an increase in speedup in simulations with as few as approximately 1,500 cells per core. With its improved scaling efficiency, this new version empowers you to obtain simulation results quickly, even for massive cases, so you can reduce the time it takes to bring your product to market.

[1] The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. The NCSA Industry Program is the largest Industrial HPC outreach in the world, and it has been advancing one third of the Fortune 50® for more than 30 years by bringing industry, researchers, and students together to solve grand computational problems at rapid speed and scale. The CONVERGE simulations were run on NCSA’s Blue Waters supercomputer, which is one of the fastest supercomputers on a university campus. Blue Waters is supported by the National Science Foundation through awards ACI-0725070 and ACI-1238993.

► The Collaboration Effect: A Decade of Innovation
5 Aug, 2020

From the Argonne National Laboratory + Convergent Science Blog Series

The world is waiting for us to develop the tools needed to design new engine architectures, new concepts, with a finer control over the combustion process. If we can continue to make the progress we’ve achieved over the last ten years, I think society and the environment will continue to reap large rewards.

—Dr. Don Hillebrand, Division Director of the Energy Systems Division, Argonne National Laboratory

The year 2020 marks the ten-year anniversary of a fruitful collaboration between Convergent Science and the U.S. Department of Energy’s Argonne National Laboratory. Over the years, the collaboration has facilitated exciting advances in engine technology, high-performance computing and machine learning, computational methods, physical models, gas turbine and detonation engine simulations, and more. Many engineers at both Argonne and Convergent Science have contributed to these projects, but the collaboration started with one individual.

### The Story Origin

Dr. Sibendu Som was introduced to CONVERGE before it was even called CONVERGE. He was a graduate student at the University of Illinois at Chicago (UIC), and in the summer of 2006 Sibendu participated in an industry internship. He worked with engineers on a computational fluid dynamics (CFD) team who were using an internal version of a code in development by a small company named Convergent Science. When Sibendu’s internship ended, he went back to UIC and continued to work with the same CFD code—at the time called MOSES.

For his thesis, Sibendu focused on improving spray models, for which he was obtaining experimental data from Argonne. Spray modeling happens to be a specialty of Dr. Kelly Senecal, Co-Owner of Convergent Science, so Kelly assisted Sibendu in his endeavors.

“Kelly helped me quite a bit,” Sibendu says, “so I actually invited him to be a part of my thesis defense committee.”

After completing his Ph.D.—and thoroughly impressing Kelly and the rest of his committee—Sibendu became a postdoc at Argonne National Laboratory in the research group of Mr. Doug Longman, Manager of Engine Research. At the time, there was only a little CFD work being done at Argonne in the combustion and spray area, so there was an opportunity to bring in a new code. Having used CONVERGE during his thesis, Sibendu was a proponent of using the software at Argonne.

Partnering with a renowned national laboratory was a big opportunity for Convergent Science. In 2010, Convergent Science had only recently switched from being a CFD consulting company to a CFD software company, and working with Argonne lent credibility to their code. Argonne also provided access to computational resources on a scale that a small company simply could not afford on their own.

“It was also a relationship thing,” Kelly says. “The partnership just started off on the right foot, and we were really happy to work with the Argonne research team.”

### A Mutually Beneficial Partnership

Government and private industry have a long history of collaboration in the United States—and for good reason. These relationships are not only beneficial for both parties, but also for taxpayers. The mission of national laboratories is not to compete with industry, but to help support and enhance the missions of private companies for the benefit of the country.

“The national lab system in the United States is a national treasure,” says Dr. Don Hillebrand. “Our job is to look at big science, big physics, big chemistry, big engineering, and solve challenging problems that confront us. We make sure that knowledge or tools or technology solutions get transferred to industrial groups, who develop jobs and products and make the country competitive.”

National laboratories provide access to resources, including advanced technology and funding, that private companies are often unable to obtain on their own. For Convergent Science in particular, access to Argonne’s computational resources made it possible to test CONVERGE on large numbers of cores and to work on improving the scalability for clients who want to run highly parallel simulations. Getting access to these types of resources on the ground floor provides a huge advantage to industry partners.

Another important function of national labs is to investigate long-term or risky areas of research. Private companies survive on the profits they make, and investing in research that does not pay off in the end can be damaging to their business. In the same vein, companies tend to focus on products that they can bring to market relatively quickly to make sure they have a consistent revenue stream. However, long-term and riskier research is critical for developing innovative technologies that have the potential to transform our lives.

“The government drives a lot of research in cutting-edge technology,” says Dr. Dan Lee, Co-Owner of Convergent Science. “They also have advanced facilities and teams of expert engineers doing fundamental research for projects that are potentially going to shape the future.”

Of course, to have an impact on society, the technology developed in national laboratories must end up in the hands of consumers. Thus the end-goal of research and development at government institutions is to transfer that technology to industry.

Ann Schlenker, Director of the Center for Transportation Research at Argonne, spent more than 30 years in industry before transitioning to Argonne. That experience gave her a deep understanding of the synergistic relationship between government and private industry.

“You need to be extremely astute at listening to the voice of the customer. And that means understanding what the challenges are, where the hurdles and difficulties are stressing the system and how best to optimize processes. Because if you can do that, you can develop timely solutions,” Ann says.

Partnering with industry helps ensure that the research at the national labs is relevant, timely, and impactful. This is one way in which these relationships benefit the taxpayer—the results of government research directly address the needs of consumers and help make the country competitive on the world stage.

### Delivering Results

The collaboration between Argonne and Convergent Science has resulted in significant advances for the modeling community and the transportation industry. While the details of this research will be discussed in depth in upcoming blog posts, the projects from the past decade generally fall into two categories: advancing simulation for propulsion technologies and improving the scalability of CONVERGE on high-performance computing architectures.

Many projects have focused on modeling processes relevant to the internal combustion engine, such as studying fuel injection and sprays using experimental data from Argonne’s Advanced Photon Source, implementing state-of-the-art nozzle flow models in CONVERGE, simulating ignition, and investigating cycle-to-cycle variation.

Other key areas of focus have been modeling challenging phenomena in gas turbine combustors and breaking ground on simulating rotating detonation engines. Enhancing the scalability of CONVERGE has made it possible to run larger, more complex cases and to obtain more accurate, more relevant results from these simulations.

The overarching goal for these projects continues to be to create better models and establish techniques that will be instrumental in developing the transportation technologies of the future. Perhaps Ann sums it up best:

The day of learning is not over for combustion processes. It’s germane to our gross domestic product for U.S. economic vitality. Our transportation and combustion researchers and industry engineers work side-by-side to achieve the societal goals of better fuel economy and lower emissions. And these strong collaborations and this visionary work allow us to move fully forward with model-based system engineering, with high-fidelity, predictive capabilities that we trust.

The collaboration between Convergent Science and Argonne National Laboratory will certainly help propel us into the future. Learn more about the research performed during this collaboration in upcoming blog posts!

► Models On Top of Models: Thickened Flames in CONVERGE
2 Jul, 2020

Any CONVERGE user knows that our solver includes a lot of physical models. A lot of physical models! How many combinations exist? How many different ways can you set up a simulation? That’s harder to answer than you might think. There might be N turbulence models and M combustion models, but the total set of combinations isn’t N*M.

Why not? In some cases, our developers haven't implemented a given combination yet! The ECFM and ECFM3Z combustion models, for example, could not be combined with a large eddy simulation (LES) turbulence model until CONVERGE version 3.0.11. We're adding more features all the time. One interesting example is the thickened flame model (TFM).

The name is descriptive, of course: TFM is designed to thicken the flame. If you’re not a combustion researcher, this notion may not be intuitive. A real flame is thin (in an internal combustion engine environment, tens or hundreds of microns). Why would we want to design a model that intentionally deviates from this reality? As is often the case with physical modeling, the answer lies in what we’re trying to study.

CONVERGE is often used to study the engineering operability of a premixed internal combustion or gas turbine engine. This requires accurate simulation of macroscopic combustion dynamics (flame properties), including the laminar flamespeed. A large eddy simulation, however, might use cells on the order of 0.1 mm.

The problem may now be clear. The flame is much too thin to resolve on the grid we want to use. In fact, a detailed chemical kinetics solver like SAGE requires five or more cells across the flame in order to reproduce the correct laminar flamespeed. An under-resolved flame results in an underprediction of laminar flamespeed. Of course, we could simply decrease the cell size by an order of magnitude, but that makes for an impractical engineering calculation.

The thickened flame model is designed to solve this problem. The basic idea of Colin et al. [1] was to simulate a flame that is thicker than the physical one, but which reproduces the same laminar flamespeed. From simple scaling analysis, this can be achieved by increasing the thermal and species diffusivity while reducing the reaction rate by a factor of F. Because the flame thickening effect decreases the wrinkling of the flame front, and thus its surface area, an efficiency factor E is introduced so that the correct turbulent flamespeed is recovered.
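The scaling argument can be checked with the classical laminar flame estimates, s_L ~ sqrt(D*w) for the flame speed and delta ~ D/s_L for the thickness. This is a standard-theory sketch, not the actual model implementation:

```python
import math

# Thickened flame scaling check.  From laminar flame theory, the flame speed
# scales as s_L ~ sqrt(D * w) and the thickness as delta ~ D / s_L, where D is
# the diffusivity and w the reaction rate.  Thickening by a factor F multiplies
# D by F and divides w by F.  Values below are arbitrary units, illustration only.
D, w, F = 2.0e-5, 1.0e4, 10.0

def flame(D, w):
    s_L = math.sqrt(D * w)   # laminar flamespeed estimate
    delta = D / s_L          # flame thickness estimate
    return s_L, delta

s0, d0 = flame(D, w)           # physical flame
s1, d1 = flame(F * D, w / F)   # thickened flame

print(f"flame speed ratio:     {s1 / s0:.3f}")   # 1: flamespeed preserved
print(f"flame thickness ratio: {d1 / d0:.3f}")   # F: flame F times thicker
```

Multiplying the diffusivity by F and dividing the reaction rate by F leaves s_L untouched while thickening the flame by exactly F, which is what lets the thickened flame sit on a coarser grid.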

The combination of these scaling factors allows CONVERGE to recover the correct flamespeed without actually resolving the flame itself. CONVERGE also calculates a flame sensor function so that these scaling factors are applied only at the flame front. By using TFM with SAGE detailed chemistry, a premixed combustion engineering simulation with LES becomes practical.

Hasti et al. [2] evaluated one such case using CONVERGE with LES, SAGE, and TFM. This work examined the Volvo bluff-body augmentor test rig, shown below, which has been subjected to extensive study. At the conditions of interest, the flame thickness is estimated to be about 1 mm, and so SAGE without TFM should require a grid not coarser than 0.2 mm to accurately simulate combustion.

With TFM, Hasti et al. show that CONVERGE is able to generate a grid-converged result at a minimum grid spacing of 0.3125 mm. We might expect such a calculation to take only about 40% as many core hours as a simulation with a minimum grid spacing of 0.25 mm.
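Those two numbers are easy to sanity-check. The 0.2 mm figure follows from the five-cells-across-the-flame guideline applied to a 1 mm flame, and the "about 40%" figure follows if we assume, as a common back-of-the-envelope rule for CFL-limited explicit LES (not a figure from the paper), that cost scales as Δx⁻⁴ (three spatial dimensions plus the time step):

```python
def required_dx(flame_thickness_mm, cells_across=5):
    """Finest grid spacing needed to resolve the flame with a
    given number of cells across it (the 5-cell guideline)."""
    return flame_thickness_mm / cells_across

def relative_cost(dx, dx_ref, exponent=4):
    """Core-hour ratio of a run at spacing dx vs. dx_ref,
    assuming cost ~ dx**-exponent (3 space dims + CFL time step)."""
    return (dx_ref / dx) ** exponent

print(required_dx(1.0))                       # 0.2 mm for a 1 mm flame
print(round(relative_cost(0.3125, 0.25), 2))  # 0.41 -> about 40% of the core hours
```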

Understanding the topic of study, the underlying physics, and the way those physics are affected by our choice of physical models is critical to performing accurate simulations. If you want to combine the power of the SAGE detailed chemical kinetics solver with the transient behavior of an LES turbulence model to understand the behavior of a practical engine–and to do so without bankrupting your IT department–TFM is the enabling technology.

Want to learn more about thickened flame modeling in CONVERGE? Check out these TFM case studies from recent CONVERGE User Conferences (1, 2, 3) and keep an eye out for future Premixed Combustion Modeling advanced training sessions.

References
[1] Colin, O., Ducros, F., Veynante, D., and Poinsot, T., “A thickened flame model for large eddy simulations of turbulent premixed combustion,” Physics of Fluids, 12(1843), 2000. DOI: 10.1063/1.870436
[2] Hasti, V.R., Liu, S., Kumar, G., and Gore, J.P., “Comparison of Premixed Flamelet Generated Manifold Model and Thickened Flame Model for Bluff Body Stabilized Turbulent Premixed Flame,” 2018 AIAA Aerospace Sciences Meeting, AIAA 2018-0150, Kissimmee, Florida, January 8-12, 2018. DOI: 10.2514/6.2018-0150
[3] Sjunnesson, A., Henrikson, P., and Lofstrom, C., “CARS measurements and visualizations of reacting flows in a bluff body stabilized flame,” 28th Joint Propulsion Conference and Exhibit, AIAA 92-3650, Nashville, Tennessee, July 6-8, 1992. DOI: 10.2514/6.1992-3650

► The Search for Soot-free Diesel: Modeling Ducted Fuel Injection With CONVERGE
26 Mar, 2020

At the upcoming CONVERGE User Conference, which will be held online from March 31–April 1, Andrea Piano will present results from experimental and numerical studies of the effects of ducted fuel injection on fuel spray characteristics. Dr. Piano is a Research Assistant in the e3 group, coordinated by Prof. Federico Millo at Politecnico di Torino, and these are the first results to be reported from their ongoing collaboration with Prof. Lucio Postrioti at Università degli Studi di Perugia, Andrea Bianco at Powertech Engineering, and Francesco Pesce and Alberto Vassallo at General Motors Global Propulsion Systems. This work is a great example of how CONVERGE can be used in tandem with experimental methods to advance research at the cutting edge of engine technology. Keep reading for a preview of the results that Dr. Piano will discuss in greater detail in his online presentation.

The idea behind ducted fuel injection (DFI), originally conceived by Charles Mueller at Sandia National Laboratories, is to suppress soot formation in diesel engines by allowing the fuel to mix more thoroughly with air before it ignites [1]. Soot forms when a fuel doesn’t burn completely, which happens when the fuel-to-air ratio is too high. In DFI, a small tube, or duct, is placed near the nozzle of the fuel injector and directed along the axis of the fuel stream toward the autoignition zone. The fuel spray that travels through this duct is better mixed than it would be in a ductless configuration. Experiments at Sandia have shown that DFI can reduce soot formation by as much as 95%, demonstrating the enormous potential of this technology for curtailing harmful emissions from diesel engines.

While the Sandia researchers have focused on heavy-duty diesel applications, Dr. Piano and his collaborators are targeting smaller engines, such as those found in passenger cars and light-duty trucks. To understand how the fuel spray evolves in the presence of a duct, they first performed imaging and phase Doppler anemometry analyses of non-reacting sprays in a constant-volume test vessel. Figure 1 shows a sample of the experimental results. The video on the left corresponds to a free spray configuration with no duct, while the video on the right corresponds to a ducted configuration. Observe how the dark liquid breaks up and evaporates more quickly in the ducted configuration—this is the enhanced mixing that occurs in DFI.

Their next step was to develop a CFD model of the fuel spray that could be calibrated against the experimental results. Dr. Piano and his colleagues reproduced the geometry of the experimental setup in a CONVERGE environment, using physical models available in CONVERGE to simulate the processes of spray breakup, evaporation, and boiling, as well as the interactions between the spray and the duct. With fixed embedding and Adaptive Mesh Refinement, they were able to increase the grid resolution in the vicinity of the spray and the duct without a significant increase in computational cost. They simulated the spray penetration for both the free spray and the ducted configuration over a range of operating conditions and validated those results against the experimental data.

With a calibrated spray model in hand, the researchers were then able to run predictive simulations of DFI for reacting fuel sprays. They combined their spray model with the SAGE detailed chemical kinetics solver for combustion modeling, along with the Particulate Mimic model of soot formation. They ran simulations at different rail pressures and vessel temperatures to see how DFI would affect the amount of soot mass produced under engine-like operating conditions. Figures 2 and 3 show examples of the simulation results for a rail pressure of 1200 bar and a vessel temperature of 1000 K. Consistent with the findings of Mueller et al. [1], these results show a dramatic reduction in the mass of soot produced during combustion in the ducted configuration as compared to the free spray configuration.

While these early results are promising, Dr. Piano and his collaborators are just getting started. They will continue using CONVERGE to investigate phenomena such as the duct thermal behavior and to explore the effects of different geometries and operating conditions, with the long-term goal of incorporating DFI into the design of a real engine. If you are interested in learning more about this work, be sure to sign up for the CONVERGE User Conference today!

References

[1] Mueller, C.J., Nilsen, C.W., Ruth, D.J., Gehmlich, R.K., Pickett, L.M., and Skeen, S.A., “Ducted fuel injection: A new approach for lowering soot emissions from direct-injection engines,” Applied Energy, 204, 206-220, 2017. DOI: 10.1016/j.apenergy.2017.07.001

► An Evening With the Experts: Scaling CFD With High-Performance Computing
25 Feb, 2020
Listen to the full audio of the panel discussion.

As computing technology continues to advance rapidly, running simulations on hundreds and even thousands of cores is becoming standard practice in the CFD industry. Likewise, CFD software is continually evolving to keep pace with the advances in hardware. For example, CONVERGE 3.0, the latest major release of our software, is specifically designed to scale well in parallel on modern high-performance computing (HPC) systems. It’s clear that HPC is the future of CFD, so how does this shift affect those of us running simulations and how can we make the most of the increased availability of computational resources? At the 2019 CONVERGE User Conference–North America, we assembled a panel of engineers from industry and government to share their expertise.

In the panel discussion, which you can listen to above, you’ll learn about the computing resources available on the cloud and at the U.S. national laboratories and how to take advantage of them. The panelists discuss the types of novel, one-of-a-kind studies that HPC enables and how to handle post-processing data from massive cases run across many cores. Additionally, you’ll get a look at where post-processing is headed in the future to manage the ever-increasing amounts of data generated from large-scale simulations. Listen to the full panel discussion above!

### Panelists

Alan Klug, Vice President of Customer Development, Tecplot

Sibendu Som, Manager of the Computational Multi-Physics Section, Argonne National Laboratory

Joris Poort, CEO and Founder, Rescale

Kelly Senecal, Co-Founder and Owner, Convergent Science

### Moderator

Tiffany Cook, Partner & Public Relations Manager, Convergent Science

19 Dec, 2019

2019 proved to be an exciting and eventful year for Convergent Science. We released the highly anticipated major rewrite of our software, CONVERGE 3.0. Our United States, European, and Indian offices all saw significant increases in employee count. We have also continued to forge ahead in new application areas, strengthening our presence in the pump, compressor, biomedical, aerospace, and aftertreatment markets, and breaking into the oil and gas industry. Of course, we remain dedicated to simulating internal combustion engines and developing new tools and resources for the automotive community. In particular, we are expanding our repertoire to encompass batteries and electric motors in addition to conventional engines. Our team at Convergent Science continues to be enthusiastic about advancing simulation capabilities and providing unmatched customer support to empower our users to tackle hard CFD problems.

### CONVERGE 3.0

As I mentioned above, this year we released a major new version of our software, CONVERGE 3.0. We have frequently discussed 3.0 in the past few months, including in my recent blog post, so I’ll keep this brief. We set out to make our code more flexible, enable massive parallel scaling, and expand CONVERGE’s capabilities. The results have been remarkable. CONVERGE 3.0 scales with near-ideal efficiencies on thousands of cores, and the addition of inlaid meshes, new physical models, and enhanced chemistry capabilities has opened the door to new applications. Our team invested a lot of effort into making 3.0 a reality, and we’re very proud of what we’ve accomplished. Of course, now that CONVERGE 3.0 has been released, we can all start eagerly anticipating our next major release, CONVERGE 3.1.

### Computational Chemistry Consortium

2019 was a big year for the Computational Chemistry Consortium (C3). In July, the first annual face-to-face meeting took place at the Convergent Science World Headquarters in Madison, Wisconsin. Members of industry and researchers from the National University of Ireland Galway, Lawrence Livermore National Laboratory, RWTH Aachen University, and Politecnico di Milano came together to discuss the work done during the first year of the consortium and establish future research paths. The consortium is working on the C3 mechanism, a gasoline and diesel surrogate mechanism that includes NOx and PAH chemistry to model emissions. The first version of the mechanism was released this fall for use by C3 members, and the mechanism will be refined over the coming years. Our goal is to create the most accurate and consistent reaction mechanism for automotive fuels. Stay tuned for future updates!

### Third Annual European User Conference

Barcelona played host to this year’s European CONVERGE User Conference. CONVERGE users from across Europe gathered to share their recent work in CFD on topics including turbulent jet ignition, machine learning for design optimization, urea thermolysis, ammonia combustion in SI engines, and gas turbines. The conference also featured some exciting networking events—we spent an evening at the beautiful and historic Poble Espanyol and organized a kart race that pitted attendees against each other in a friendly competition.

### Inaugural CONVERGE User Conference–India

This year we hosted our first-ever CONVERGE User Conference–India in Bangalore and Pune. The conference consisted of two events, each covering different application areas. The event in Bangalore focused on applications such as gas turbines, fluid-structure interaction, and rotating machinery. In Pune, the emphasis was on IC engines and aftertreatment modeling. We saw presentations from both companies and universities, including General Electric, Cummins, Caterpillar, and the Indian Institutes of Technology Bombay, Kanpur, and Madras. We had a great turnout for the conference, with more than 200 attendees across the two events.

### CONVERGE in the Big Easy

The sixth annual CONVERGE User Conference–North America took place in New Orleans, Louisiana. Attendees came from industry, academic institutions, and national laboratories in the U.S. and around the globe. The technical presentations covered a wide variety of topics, including flame spray pyrolysis, rotating detonation engines, machine learning, pre-chamber ignition, blood pumps, and aerodynamic characterization of unmanned aerial systems. This year, we hosted a panel of CFD and HPC experts to discuss scaling CFD across thousands of processors; how to take advantage of clusters, supercomputers, and the cloud to run large-scale simulations; and how to post-process large datasets. For networking events, we took a dinner cruise down the Mississippi River and encouraged our guests to explore the vibrant city of New Orleans.

### KAUST Workshop

In 2019, we hosted the First CONVERGE Training Workshop and User Meeting at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia. Attendees came from KAUST and other Saudi Arabian universities and companies for two days of keynote presentations, hands-on CONVERGE tutorials, and networking opportunities. The workshop focused on leveraging CONVERGE for a variety of engineering applications, and running CONVERGE on local workstations, clusters, and Shaheen II, a world-class supercomputer located at KAUST.

### Best Use of HPC in Automotive

We and our colleagues at Argonne National Laboratory and Aramco Research Center – Detroit received this year’s 2019 HPCwire Editors’ Choice Award in the category of Best Use of HPC in Automotive. We were incredibly honored to receive this award for our work using HPC and AI to quickly optimize the design of a clean, highly efficient gasoline compression ignition engine. Using CONVERGE, we tested thousands of engine design variations in parallel to improve fuel efficiency and reduce emissions. We ran the simulations in days, rather than months, on an IBM Blue Gene/Q supercomputer located at Argonne National Laboratory and employed machine learning to further reduce design time. After running the simulations, the best-performing engine design was built in the real world. The engine demonstrated a reduction in CO2 of up to 5%. Our work shows that pairing HPC and AI to rapidly optimize engine design has the potential to significantly advance clean technology for heavy-duty transportation.

### Convergent Science Around the Globe

2019 was a great year for CONVERGE and Convergent Science around the world. In the United States, we gained nearly 20 employees. We added a new Convergent Science office in Houston, Texas, to serve the oil and gas industry. In addition, we have continued to increase our market share in other areas, including automotive, gas turbine, and pumps and compressors.

In Europe, we had a record year for new license sales, up 70% from 2018. A number of new employees joined our European team, including new engineers, sales personnel, and office administrators. We attended and exhibited at tradeshows on a breadth of topics all over Europe, and we expanded our industry and university clientele.

Our Indian office celebrated its second anniversary in 2019. The employee count nearly doubled in size from 2018, with the addition of several new software developers and marketing and support engineers. The first Indian CONVERGE User Conference was a huge success–we had to increase the maximum number of registrants to accommodate everyone who wanted to attend. We have also grown our client base in the transportation sector, bringing new customers in the automotive industry on board.

In Asia, our partners at IDAJ continue to do a fantastic job supporting CONVERGE. CONVERGE sales significantly increased in 2019 compared to 2018. And at this year’s IDAJ CAE Solution Conference, speakers from major corporations presented CONVERGE results, including Toyota, Daihatsu, Mazda, and DENSO.

While we like to recognize the successes of the past year, we’re always looking toward the future. Computing technology is constantly evolving, and we are eager to keep advancing CONVERGE to make the most of the increased availability of computational resources. With the expanded functionality that CONVERGE 3.0 offers, we’re also looking forward to delving into untapped application areas and breaking into new markets. In the upcoming year, we are excited to form new collaborations and strengthen existing partnerships to promote innovation and keep CONVERGE on the cutting-edge of CFD software.

### Numerical Simulations using FLOW-3D

► FLOW-3D CAST Workshops
18 Aug, 2020
FLOW-3D CAST is a state-of-the-art metal casting simulation modeling platform that combines extraordinarily accurate modeling with versatility, ease of use, and high-performance cloud computing capabilities. Our FLOW-3D CAST workshops use hands-on exercises to show you how to set up and run successful simulations for detailed analysis of your casting design. Workshop materials provide an introduction to the FLOW-3D CAST modeling platform and detail all the steps of a successful casting model setup, from geometry import through post-processing.

#### Thursday, September 10, 2020 (US & Canada only)

• 2:00pm – 5:00pm ET

#### Thursday, September 24, 2020 (US & Canada only)

• 2:00pm – 5:00pm ET

## What will you learn?

• How to import geometry and set up models, including meshing and initial and boundary conditions
• How to apply complex physics such as air entrainment, along with FLOW-3D CAST’s pioneering filling and solidification models, to analyze defects and adjust your casting design
• Best practices for casting simulation and design analysis in FLOW-3D CAST

## What happens after the workshop?

• After the workshop, your FLOW-3D CAST license will be extended for 30 days. During this time, one of our CFD engineers will work closely with you to help you apply FLOW-3D CAST to a casting problem of your choosing. You will also have access to our web-based training videos covering introductory through advanced modeling topics.

## Who should attend?

• Process and casting engineers working in foundry or die casting industries
• Industry researchers working on new alloy developments, lightweighting, and other challenges in modern metal casting
• University students interested in CFD for casting applications
• Workshops are online, hosted through Zoom
• Registration is limited to 6 attendees

Let’s put that in some context. Bentley in 2019 was roughly half the size of Autodesk’s AEC business but nearly $200 million larger than Nemetschek. Why does that matter? Because some buyers still follow the Jack Welch maxim and only work with the #1 or #2 player in a space — Bentley is clearly that #2. But that may be the wrong comparison since the companies are now heading in different directions. Autodesk, in AEC, focuses on the design and make parts of a project, while Bentley looks more at design and operate — and, increasingly, design in the context of operations and maintenance. It’s looking to answer questions like, how would I design this better if I knew I had this maintenance plan/budget? Over the asset’s lifecycle? (More on this in another post.)

Indeed, Bentley cites reports that show it holds the #1 position in several industry and application area slices, as determined by The ARC Advisory Group: “In August 2019, for Engineering Design Tools for Plants, Infrastructure, and BIM (building information modeling), ARC ranked us #2 overall, as well as #1 in each of Electric Transmission & Distribution and Communications and Water/Wastewater Distribution … [and] Collaborative BIM. In December 2019, for Asset Reliability Software & Services, ARC ranked us #1 overall for software, as well as #1 in each of Transportation, Oil and Gas, and Electric Power Transmission and Distribution”.

Bentley’s always been proud of its R&D — saying things like, “We’ve invested over a billion dollars in acquisitions and R&D in the last 10 years.” We can’t verify that exactly since we’ve only got three years of data, but it’s likely true — Bentley spent $184 million on R&D (25% of revenue, on par or slightly ahead of other PLMish companies) and another $34 million on acquisitions in 2019. On the topic of acquisitions: Bentley has bought LOTS of small companies over the years, more than technology tuck-ins but nothing as splashy as arch-rival Autodesk.
In 2019, it completed four acquisitions for the $34 million I mentioned above; through June 30, it has acquired four more companies for nearly $70 million. The filing says that “[Bentley’s] average historical annual revenue growth rate from acquisitions over the last six years has been approximately 1.1%” — it’s clearly not acquiring revenue, but rather technology and, perhaps, specific customer accounts.

If you’re keeping track of the various companies’ race to recurring revenue, Bentley says that in 2019, “subscriptions represented 83% of our revenues, and together with [recurring] professional services revenues bring the proportion of our recurring revenues to 86% of total revenues.” That’s on par with other companies that haven’t gone all-subs-all-the-time.

I also found this fascinating: “In 2019, 96 accounts each contributed over $1 million to our revenues, representing 32% of our revenues. 53% of our 2019 revenues came from 424 accounts, each contributing over $250,000 to our revenues. During 2019, we served 34,127 accounts. No single account provided more than 2.5% of our 2019 revenues. Additionally, we believe that we have a loyal account base, with 80% of our 2018 and 2019 total revenues coming from accounts of more than ten years’ standing, and 87% of our 2018 and 2019 total revenues coming from accounts of more than five years’ standing.” We often wonder if any one account pulls the strings at a vendor, and in Bentley’s case, at least, that’s a no. But the ability to keep 80% of its accounts for 10 years or more — I find that impressive. I am often asked how sticky these tools are — here you see: very sticky.

Let’s talk Siemens. I’ve met with Bentley and Siemens separately and together, and they are 100% in on their technical partnership. The business relationship, perhaps not so smooth.
Here, according to the S-1, is the backstory and then some present/future stuff (I edited this for readability): “In September 2016, we and [some of] the Bentley brothers entered into a Common Stock Purchase Agreement with Siemens, pursuant to which Siemens was authorized, and agreed, to acquire up to $100 million of our Class B common stock from our existing stockholders. Subsequent amendments increased this amount to $250 million (which, if reached), increases by $20 million on each subsequent anniversary of the date of the Common Stock Purchase Agreement so long as the Strategic Collaboration Agreement remains in effect on each subsequent anniversary. The next increase is set to occur on September 23, 2020. … As of June 30, 2020, Siemens beneficially owned 34,764,592 shares of our Class B common stock” and had paid a total of about $250 million for these shares.

A bit of math shows us that this is 14% of the total Class B shares. Why, you say, does this matter? Because each Class B share carries one vote at a shareholder meeting. Class A shares, mostly owned by the Bentley family, have 29 votes each, and there are 11.6 million Class A shares in total. 35 million-ish votes for what Siemens wants versus 336 million-ish votes for what the Bentley family wants (assuming they agree). Siemens does not drive this bus, though it clearly has input.

But the real thing about Bentley+Siemens is the strategic element mentioned above: “In conjunction with the Common Stock Purchase Agreement, we entered into a Strategic Collaboration Agreement with Siemens … The initial term of the agreement lasts until December 31, 2026 and automatically renews for successive one year terms unless either party elects to terminate the agreement … In addition, Siemens has the right to terminate the agreement and any related collaboration projects if the Bentleys no longer own a majority of our voting power or if we otherwise undergo a change of control”.
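The voting arithmetic above is quick to verify (share counts are as quoted from the S-1; votes per share are as stated in the paragraph):

```python
# Bentley family Class A: 11.6 million shares, 29 votes each (per the S-1)
class_a_shares = 11_600_000
family_votes = class_a_shares * 29

# Siemens' Class B stake: one vote per share
siemens_votes = 34_764_592

print(family_votes)   # 336400000 -> the "336 million-ish" votes
print(siemens_votes)  # the "35 million-ish" votes
```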
Note that last sentence: Siemens can walk away if Bentley Systems changes ownership. And there’s more (again edited down with bold added by me for emphasis): “we … entered into the Common Stock Purchase Agreement with Siemens in September 2016, pursuant to which we … granted Siemens a right of first refusal with respect to certain deemed liquidation events, offers, sales or certain issuances of our capital stock, … Pursuant to the terms of the Common Stock Purchase Agreement, Siemens’ right of first refusal expires upon the effectiveness of a registration statement in connection with an underwritten initial public offering. Siemens contends that this right of first refusal applies to sales of common stock in an initial public offering by the Company or the Bentley family members party to the Common Stock Purchase Agreement. While we disagree with Siemens’ contention, our initial public offering of Class B common stock will be exclusively by existing holders whose transfers of capital stock are not subject to Siemens’ right of first refusal, and we have not included any shares to be issued by the Company or any shares held by the Bentley family members party to the Common Stock Purchase Agreement in the offering pursuant to this prospectus. Following the effectiveness of the registration statement … Siemens’ right of first refusal will terminate. Following the completion of this offering, we intend to evaluate opportunities to then undertake a primary offering of our Class B common stock by the Company, subject to [a bunch of stuff] … We have not engaged in any formal discussions regarding any such offering and we have not undertaken any steps to pursue such an offering. 
The Company lock-up contained in the underwriting agreement to be entered into by us with the underwriters in this offering will permit us and selling stockholders to sell shares of Class B common stock in an aggregate amount equal to up to 20% of our total Class B common stock outstanding at such time beginning on December 1, 2020, and such lock-up agreement expires 180 days following the date of this prospectus.”

I may be the only person who finds this interesting. I draw no conclusions, but I would think armies of lawyers would have ironed this out in 2016 … And I’ll tune in on December 1, 2020, to see what happens then. After all, the rumors persist that Siemens may want to acquire all of Bentley, to add it to the Digital Industries part of the AG.

Leaving aside whatever that is with Siemens, it’s important to note that this IPO is about creating liquidity for existing shareholders and not raising money for the company. Who those “selling shareholders” are isn’t clear to me, but very explicitly does not include the four Bentley brothers who have, for years, been the face of the company. This is explained in an unexpectedly funny bit of the S-1, on page 107, where they write, “Barry, Keith, and Ray are respectively chemical, electrical, and mechanical engineers who have spent their entire careers in software. Even Greg, prior to joining the rest of us, was a successful developer of software for what he characterizes as “financial engineering.” Having engineer types in charge seems to have worked for us, perhaps because of the correspondence to our end market of infrastructure engineering.”

And this important bit follows: “The four of us are not selling shares in this offering, nor do we contemplate any “exit” other than (as we are all aged in our early 60s) in due course following the example of Barry, who retired at the beginning of this year but remains active on our Board.
We plan to continue our modest regular dividend, which will serve to encourage this orderly progression.”

OK. So what did I learn? That Bentley is a significant and thriving software vendor, confidently stepping out into new areas like asset operations and maintenance. That many of the people who made it so plan to stay on. That it’s profitable and generating lots of cash. None of that is surprising — we didn’t have the details before, and now we do. And that ethos of “by engineers for engineers” is 100% true to the company’s character, and has been for decades.

What happens next? The offer needs to be priced, meaning the underwriters and Bentley figure out what the market will bear and line up buyers. There’s no date for that yet — but I’ll write about it once it is set.

► More details on AVEVA + OSIsoft — including about that rights offer
31 Aug, 2020

We’ve learned a bit more about the proposed combination of AVEVA and OSI since the announcement early last Tuesday. Much more will come out in six weeks or so when the official paperwork is filed, but here’s what I learned while listening to the investor and industry analyst calls, as a follow-up to my post from last week:

• AVEVA’s CEO Craig Hayman, Deputy CEO/CFO James Kidd, and OSIsoft founder Dr. Pat Kennedy all seem genuinely excited about the potential to work together and to create separate and combined offerings that leapfrog both companies into new markets and customers.
• Dr. Kennedy will remain involved in a BIG way — he will become AVEVA’s single largest individual shareholder and take the title of Chairman Emeritus. I had missed that in the original announcement.
• The product set will be additive rather than subtractive — yes, there is a bit of product overlap, but nothing is expected to change for customers of PI System or AVEVA Historian. As Mr. Hayman said, it’s extremely unlikely that customers will willingly rip and replace; over time, AVEVA Historian customers may choose PI instead. This is the product map AVEVA shared with investors:
• A big impetus for the transaction is to further diversify AVEVA away from oil and gas, the former company’s main market. OSIsoft has customers in power (not just generation, but also in transmission and distribution), oil and gas, chemicals, mining, metals and minerals, pulp and paper, and pharmaceutical manufacturing. In nearly all of these industries, PI is used throughout — as in mining, where Mr. Hayman said it is used “from pit to port”. In all, Mr. Kidd estimates, AVEVA’s oil and gas exposure would go from around 40% of revenue to 25%, as adding in OSIsoft “broadens out our end-market exposure”. He later added, “we see potential in power generation/transmission/distribution, especially with Schneider Electric, as the build-out of power from high voltage, medium voltage to low voltage and distribution. We also see opportunity in buildings, data centers, everywhere where electricity flows, we see opportunity for PI.”
• The other thing, too, of course, will be the opportunity to cross-sell to one another’s customer base and to start selling more offerings into their joint base, as you can see below. They come to their industrial customers from different angles and see lots of opportunity to turn those differences into revenue.
• The 200-plus “whitelabeled” PI System-based products aren’t expected to be affected by the combination.
• The companies are remarkably similar in many metrics — see the image below from the investor slide deck — and seem like a good cultural fit, too.

All of that leads AVEVA to confidence: 1. that it can get the deal done and 2. that the deal is a positive development for AVEVA, its employees, and customers — and for OSIsoft’s as well. As Mr.
Hayman said, “the combination significantly increases the depth and breadth of the Company’s portfolio [and] brings together various sources of design assets and operational data … in the middle here is the information [layer], the basis of the process and production, which will be further [enriched] through applications and data from the portfolio”.

Mr. Kidd pointed out that OSI today largely sells perpetual licenses and maintenance agreements, with just a small proportion of revenue coming from subscriptions. He said that “given AVEVA’s track record in the last couple of years [of transitioning customers to subscriptions], this is an area that we believe we can accelerate and help to create new subscription offerings, particularly using AVEVA Flex.” So expect to see (again) the bump in revenue from perpetuals give way to a slower but more consistent growth curve as OSI undergoes the same transition we’ve seen over and over again in this space.

And Mr. Kidd made one thing very clear, for all you OSI employees: “Like the Schneider-AVEVA merger, this deal is much more about future growth than cost-saving. But that said, we do expect there to be some level of cost synergies, mainly through consolidation of offices, combining IT systems, and integrating the back office.” So don’t expect the success of this deal to be judged on cost-cutting.

AVEVA is paying $5 billion for OSIsoft: $4.4 billion in cash and $0.6 billion in shares issued to Dr. Kennedy. That $4.4 billion in cash will come from a $3.5 billion rights issue, $0.9 billion in cash on the balance sheet, and new debt facilities. I wasn’t sure what a rights issue is — thanks to all who helped me learn — but now understand that it’s a legal mechanism in the UK where shares are issued in a way that gives current shareholders a first right of refusal, so they can determine if and how their holding might be diluted.
Existing shareholders can subscribe to the issue in proportion to their current ownership stake — so when Schneider Electric says it supports the issue, we can presume they’ll buy 60%-ish of the new shares. Of course, shareholders don’t have to ante up if they don’t want to; they can sell their rights if they choose.

We had, months ago, learned that Schneider Electric was interested in acquiring OSIsoft. Mr. Kidd explained it this way: “When you’re trying to navigate strategic value, you have to think about what each company does. Schneider Electric engages in projects around industrial solutions or the power or building solutions; it’s around the build-out of those facilities. If you think about where AVEVA is, with a small 10% exception around the Greenfield CapEx in oil and gas, mostly it’s around the operational side running those facilities and providing the tools to operate those facilities. And once you think about that, then you can understand how certain acquisitions make perfect sense for AVEVA and certain acquisitions make sense for other companies, including Schneider. OSIsoft is an operational system. It’s an OpEx model. Its usage is aligned with the consumption model of the customers. And it works with many different industrial firms including Rockwell Automation, Emerson, ABB [all of which compete with Schneider Electric] in many end markets. And so AVEVA is a perfect fit for OSIsoft.” Now we know.

Last thing: AVEVA announced that it was discussing this with OSIsoft nearly a month ago. That gave customers plenty of time to weigh in and ask questions. Mr. Hayman said that “customers were unbelievably positive: [we got] ad hoc emails from customers telling us that they were so very excited, that it was a great strategic choice, how it was a great cultural fit, and that PI System is a great product.
I remember being on one Zoom call with over a dozen people from all walks of life at this customer, who have one thing in common, which is our relationship with them. And when someone asked about me and PI, everyone on the Zoom call stopped, turned, looked, and gave us big thumbs up and all smiles: that’s a great product, that’s a great choice, that’s a great thing”.

Next up, regulatory filings in many geos and more comprehensive info for shareholders. The deal is still on target to close around the end of 2020.

The post More details on AVEVA + OSIsoft — including about that rights offer appeared first on Schnitger Corporation.

► Autodesk’s Q2 revenue up 15%, comes out swinging on AEC 26 Aug, 2020

# Autodesk’s Q2 revenue up 15%, comes out swinging on AEC

Autodesk has lost none of its swagger, yesterday reporting that total revenue was up 15%, with results ahead of consensus estimates across the metrics that investors look at. Even so, the company’s guidance for its fiscal third quarter disappointed, leaving Autodesk’s share price down 3% after hours. First, the details, then quotes and comments:

• Total revenue was $913 million, up 15% as reported and up 16% on a constant currency basis (cc)
• Design revenue was $821 million, up 15% (up 16% cc). Autodesk defines this bucket as the maintenance and product subscriptions related to the design products — so including AutoCAD, AutoCAD LT, Industry Collections, Revit, Inventor, Maya and 3ds Max. For reasons that I don’t quite understand, this category also includes the CAM solutions that incorporate both design and make functionality; and all EBAs
• Make revenue was $71 million, up 37% (up 38% cc). Make includes cloud products such as Assemble, BIM 360, BuildingConnected, PlanGrid, Fusion 360, and Shotgun — in the case of AEC, clearly used to execute (“make”) AEC assets. It’s more confusing in the case of Fusion 360, which is lumped into this category even though it includes significant design capabilities
• We also got the more traditional breakdown: Revenue from the AEC products was $397 million, up 19%
• Manufacturing product revenue was $186 million, up 6%
• Media & Entertainment revenue was $53 million, up 5%
• Revenue from AutoCAD and AutoCAD LT was $272 million, up 18%
• Finally, in the catchall category Other, revenue was $5 million, down 8%
• Subscription plan revenue was $841 million, up 27% (up 28% cc)
• Maintenance plan revenue was $51 million, down 51% (down 49% cc)
• By geo, revenue from the Americas was $372 million, up 14% (up 14% cc)
• From EMEA, $355 million, up 12% (up 16% cc)
• From APAC, $187 million, up 21% (up 21% cc)

CEO Andrew Anagnost started the call by talking about COVID, and what Autodesk saw as the world slowly reopened during fiscal Q2: “We closely monitored the usage patterns of our products across the globe. In China, Korea, and Japan, we are seeing usage above pre-COVID levels. In some areas of Europe, we continue to see a recovery as well. In the Americas, we experienced a slight uptick in usage for most key products in July. We see a positive correlation between usage trends and new business performance, which gives us confidence that the green shoots we see in usage will translate to improved new business performance in subsequent quarters.” CFO Scott Herren added that “business is recovering in the markets that were impacted by the pandemic earlier on. Some of our major markets like the US and UK have stabilized, but are yet to show meaningful improvement … Second quarter new business activity was more impacted [by COVID-related issues] than Q1, with new business declining in the mid-teens percent. We think the second quarter will be the most impacted by the pandemic.”

Autodesk said it continues to see success in bringing non-compliant (aka pirated) and legacy (ie lapsed/very old version) users into the fold. The company says it signed 3 license compliance deals worth over $1M in APAC.

Autodesk reports revenue, with all of the accounting treatments of subscription revenue, as well as billings — the sum of revenue and the net change in deferred revenue from the beginning to the end of the period. In other words, roughly the total amount invoiced to customers during the period. And that’s where investors were disappointed: in FQ2, billings were down 12% from a year ago to $787 million, and the company forecast billings for the year to be down by as much as 3%. Since invoices sent out this quarter turn into revenue in some future quarter, declining billings are a cause for concern.

Why did Autodesk say its billings would go down? Because it saw a dip in the contribution from multiyear contracts when compared to prior quarters, a trend that Mr. Herren said was beginning to reverse itself towards the end of fiscal Q2. My take on two possible reasons: first, customers have less confidence in Autodesk’s ability to deliver value through its subscriptions, meaning a growing “show me” attitude even in the face of discounts for longer terms. Second, customers have less confidence in their need for the software in the far-off future — they don’t need subscriptions for workers they aren’t sure they’ll still have. We’ll have to tune into this metric in FQ3 to see what develops.
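That billings definition is simple enough to sketch in a few lines of Python. The revenue and billings figures below are the reported FQ2 numbers; the deferred-revenue balances are hypothetical, picked only to make the arithmetic consistent:

```python
# Sketch of the billings definition:
#   billings = revenue + (deferred revenue at end - deferred revenue at start)
# Revenue ($913M) and the result ($787M) match the reported FQ2 figures;
# the deferred-revenue balances are hypothetical, chosen to be consistent.

def billings(revenue: float, deferred_start: float, deferred_end: float) -> float:
    """Billings for a period: revenue plus the net change in deferred revenue."""
    return revenue + (deferred_end - deferred_start)

# Deferred revenue shrinking over the quarter means billings trail revenue.
print(billings(913.0, 3100.0, 2974.0))  # 787.0
```

When the deferred balance grows instead, billings exceed recognized revenue — which is why investors watch billings as a leading indicator.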

Mr. Anagnost also commented on where billings are coming from — online versus indirect sales versus direct sales. He said that “we saw strong double-digit billings growth through the online channel during the [fiscal second] quarter. Our online sales are helping attract new customers to the Autodesk family, as nearly three out of four new customers in the quarter came in through e-commerce”. In general, he said, “We’re still trying to get that direct online business up to 25% of our total business.” That’s been the stated goal for quite a while — and since Autodesk is taking more of the high-end / large account business direct, this squeezes the reseller channel.

Autodesk didn’t release channel performance data for FQ2, but Mr. Herren said that Autodesk saw a strong quarter among smaller accounts, which matches what other PLMish companies told us — smaller decision teams and faster cycles are possible at smaller companies. The mid-market was “tepid” as it waited for Autodesk’s multi-user to named-user deals to kick in at the start of FQ3, and paused to assess prospects for the rest of the year. The named accounts (biggest prospects) “didn’t have a big Q2 [but] that seems to be heavier in the second half of the year. We’ve got a very full pipeline of large transactions, large accounts, EBA [Enterprise Business Agreement — token-based access to a pool of products over a defined period] renewals that are coming up.” [UPDATE: My bad. Autodesk did release that 30% of revenue was direct and 70% came from indirect sources, on par with prior quarters.]

Mr. Herren told investors that large deals are still hard to close and that the (re)opening activities around the world create a complex mix of business climates. He sees “varying degrees of demand in the Americas, which includes our largest end market. At the upper end of our guidance range, we are modeling meaningful recovery in the region in the third quarter, with continued improvement in the fourth quarter. At the low end of the range, we anticipate a slower recovery in the third quarter and improvement in Q4.” That translates to a forecast of FQ3 revenue between $930 million and $945 million. For fiscal 2021 (ending January 31, 2021) Autodesk sees revenue of $3,715 million to $3,765 million, up 13.5% to 15%. That’s a slight increase in the midpoint of the guidance and a narrowing of the range — as usually happens as we get closer to year-end.

And now, the elephant in the room: Mr. Anagnost listed a lot of AEC wins in his opening remarks, and spoke further about the unhappy architect customers who are causing such a kerfuffle by answering an investor question this way:

“[These customers] have legitimate concerns about the functionality in Revit and we take those incredibly seriously. And the fact is that, from an architectural standpoint, Revit hasn’t gotten a lot of incremental investment. A lot of [our] AEC investments have gone to construction, to revenue enhancements targeting the engineering component and workflows — structural workflows, in particular. So, there are some real, legitimate concerns there.

The other concern they have is the move from multi-user to named users. These are large multi-user clients and they’ve seen multi-user prices drift up. They really want a pay-per-use model. We want them to have a pay-per-use model, which they would prefer to cloud licensing. We’re all on the same page.

But that said, these customers come from a highly privileged group — roughly 20% of our subscription base — that moved from maintenance to subscription and has pretty deep price protections relative to the rest of the base. And if you look at their expenditures over a five-year period, frankly, even moving out another five years, as they add seats, they are actually paying less to Autodesk than they would have under the old perpetual model. And that was a deliberate part of the transition, even as multi-user prices go up across the board. If you add up what they would have paid us for adding users over time, they actually end up paying less over a five-year period and, frankly, over a 10-year period as they add users.

We’re not concerned about that. We said very early on that we were going to take care of these maintenance customers … We did that. Lots of debates with all of you [investors] about the maintenance subscription program and 10-year price lock. It wasn’t exactly something that all of you were behind. But we think it was right. And yes, it has resulted in this.

We’re never going to be on the same page with this audience [meaning, investors] about that particular part of the equation. But remember, this is a shrinking bit of our subscription base — the protected 20% now; there’ll be less than that later. But, over time, they pay less than they used to in the old perpetual model.”
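The five-year arithmetic Mr. Anagnost describes can be sanity-checked with a toy model. All prices below are hypothetical illustrations, not Autodesk list prices or the actual price locks, which haven’t been disclosed:

```python
# Toy comparison (hypothetical prices) of the two ways a customer could pay:
# a perpetual license with annual maintenance versus a price-locked subscription.

def five_year_cost_perpetual(license_price: float, annual_maintenance: float) -> float:
    """Up-front perpetual license plus five years of maintenance."""
    return license_price + 5 * annual_maintenance

def five_year_cost_subscription(locked_annual_price: float) -> float:
    """Five years at a price-locked annual subscription rate."""
    return 5 * locked_annual_price

# Hypothetical seat: a $4,200 perpetual license carrying 20% annual
# maintenance, versus a locked $1,200/year subscription.
print(five_year_cost_perpetual(4200.0, 0.20 * 4200.0))  # 8400.0
print(five_year_cost_subscription(1200.0))              # 6000.0
```

On these made-up numbers the protected subscriber does come out ahead over five years; whether that holds for real customers depends entirely on the actual license prices, maintenance rates, and locked subscription rates.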

That started out so well, got lost a bit in the middle, and perhaps recovered towards the end. But it didn’t address the fundamental question: when will Autodesk provide more value to these disaffected customers? Admittedly, Mr. Anagnost is in a tough spot in speaking to investors who want one thing (more revenue and profit) about customers who want lower prices (meaning, less revenue to Autodesk). But it’s completely of his own making: framing the transition to subs as a way to raise revenue per customer was never going to have any other outcome than this unless Autodesk over-delivered on product-related promises.

We did get a glimpse into Autodesk’s thought process on R&D. Later in the call, answering a question about roadmaps, Mr. Anagnost added,

“[The question] is, where we put new dollars. So, for instance, at the beginning of this year, this whole concern around architecture and architects is something we saw coming because this has been a five-plus year kind of tension. We actually increased investment in AutoCAD Architecture at the beginning of this year. We used incremental R&D dollars to increase investment in that space.

Moving forward we will deliberately choose where we add incremental investment, and we’ve been very forthright with the construction space in terms of incremental investment. We’re not going to shift money away from that. But as we add incremental investment into next year and year after that, we’ll probably add more incremental investment into other places over time.

“We’re in the enviable position to be able to [add incremental investment], we’re spending more in R&D than we ever had in our history. And we still have room to invest more; we’re just going to choose deliberately to add incremental investment in certain spaces, like we did at the beginning of this year for architecture.”

Notice it’s not Revit. And if someone saw it coming, why let it get to this point?

Topic switch to manufacturing, where revenue grew 6% in FQ2. Mr. Anagnost told investors that “we’re growing faster than our biggest competitor in the space. We had good strong growth coming into the year so we’re comparing 6% to a good year last year — we’re actually happy with the performance we’re seeing right now. And it’s only going to continue to get better”.

Last thing: AutoCAD and AutoCAD LT. We don’t hear much about it, but it’s likely the most used CAD product on the planet. Mr. Anagnost said that “We used to talk about that as the canary in the coal mine for market dislocation at the low end but subscription changes everything. The subscription price point for LT is very attractive and most of the customers that are buying it are small to medium businesses. It does what they need.” Autodesk used to talk about AutoCAD and LT as entry points into the Autodesk product family; it clearly has value on its own, too.

Well. Verticals and geos. Subs and maintenance. Low- to high-end. A very wide-ranging call with investors, a solid FQ2 and decent outlook.

Note: the quotes are from my notes, checked against the recording of the earnings call, which you can get to here. Listen for yourself!

The post Autodesk’s Q2 revenue up 15%, comes out swinging on AEC appeared first on Schnitger Corporation.
