Visualization is such a big part of CFD (for better or worse) that an article included in this week’s compilation of CFD news should make for interesting reading: lessons learned from developing open-source visualization software. The financial news from the … Continue reading
The post This Week in CFD first appeared on Another Fine Mesh.
NASA’s CFD Vision 2030 Study stated that “most standard CFD analysis processes for the simulation of geometrically complex configurations are onerous.” A major factor contributing to this perception is the preparation of geometry models for mesh generation, a task deemed … Continue reading
The post Geometry Modeling and Mesh Generation – Part 1 first appeared on Another Fine Mesh.
This week’s CFD news, while formatted differently, includes all the usual suspects including an article about whether CAD files are going the way of the dodo. There’s a tasty CFD application involving gelato. The application case study about cars driving … Continue reading
The post This Week in CFD first appeared on Another Fine Mesh.
Pointwise Version 18.4 R3 is now available for download and production use. V18.4 R3 is primarily a maintenance release and includes a new native interface to the AzoreCFD flow solver. “Pointwise is committed to empowering flow solver development of any … Continue reading
The post Pointwise V18.4 R3 Now Available for CFD Mesh Generation first appeared on Another Fine Mesh.
The Y+ Calculator app is a handy tool for calculating the grid spacing to achieve a target y+ value for viscous computational fluid dynamics (CFD) computations. Simply specify the flow conditions, the desired y+ value, and compute your grid spacing. … Continue reading
The post The Handiest CFD App – the Y+ Calculator first appeared on Another Fine Mesh.
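The calculation behind such a tool is short. Here is a minimal Python sketch of the usual flat-plate approach (my own illustration, not the app's actual code; the 1/7th-power skin-friction correlation is one common choice among several):

import math

def wall_spacing(rho, U, L, mu, y_plus=1.0):
    """Estimate first-cell wall spacing for a target y+ (flat-plate correlation)."""
    Re_x = rho * U * L / mu                  # Reynolds number at reference length L
    Cf = 0.026 / Re_x ** (1.0 / 7.0)         # empirical turbulent skin-friction estimate
    tau_w = 0.5 * Cf * rho * U ** 2          # wall shear stress
    u_tau = math.sqrt(tau_w / rho)           # friction velocity
    return y_plus * mu / (rho * u_tau)       # y+ = rho*u_tau*y/mu, solved for y

# Example: air at sea level, 30 m/s, 1 m reference length, target y+ = 1
print(wall_spacing(rho=1.225, U=30.0, L=1.0, mu=1.81e-5))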
It’s been an interesting week if record-setting cold, the lack of electricity, and untreated tap water are things you find interesting. Yes, Texas sure has been putting on a show. Despite that, we were able to release a new version … Continue reading
The post This Week in CFD first appeared on Another Fine Mesh.
Sandro Bocci’s short film “Flux Capacitor” explores the geometry and dynamics of soap films. When you dip wire models into soapy solution, the films that cling to the model can form complicated shapes as surface tension works to minimize the overall surface area. Bocci’s macro photography highlights the intense flows going on in the narrow regions where films meet. It’s a different take on soap films and neat to see! (Image, video, and submission credit: S. Bocci et al.)
In deserts around the world, plants have adapted to collect as much moisture as they can. Geometry aids them in this endeavor because droplets on the tip of a cone will move toward its thicker base. The motion takes place due to an imbalance in surface tension forces on either end of the droplet.
As the droplet moves up a cone, it changes shape from a barrel-like drop that fully covers the conical surface to a clamshell-shaped droplet that hangs only from the bottom of the cone. (Image and research credit: J. Van Hulle et al.)
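A quick way to see why the drop migrates (my note, not part of the study): the Laplace pressure inside a drop scales as

$$\Delta P \sim \frac{2\gamma}{R}$$

so the end of the drop on the narrow part of the cone (smaller local radius R) sits at higher pressure than the end on the thick part, and the imbalance pushes liquid toward the base.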
When we walk, the ground’s resistance helps propel us. Similarly, flying or swimming near a surface is easier due to ground effect. Most of the time swimmers don’t get that extra help, but a new study shows that jellyfish create their own walls to get that boost.
Of course, these walls aren’t literal, but fluid dynamically speaking, they are equivalent. Over the course of its stroke, the jellyfish creates two vortices, each with opposite rotation. One of these, the stopping vortex, lingers beneath the jellyfish until the next stroke’s starting vortex collides with it. When two vortices of equal strength and opposite rotation meet, the flow between them stagnates — it comes to a halt — just as if a wall were there.
In fact, mathematically, this is how scientists represent a wall: as the stagnation line between a real vortex and a virtual one of equal strength and opposite rotation. It just turns out that jellyfish use the same trick to make virtual walls they can push off! (Image and research credit: B. Gemmell et al.; via NYTimes; submitted by Kam-Yung Soh)
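To make that last idea concrete, here is the textbook potential-flow construction (my own sketch, not from the paper): a point vortex of circulation Γ at height h above the line y = 0, together with an image vortex of circulation −Γ at −h, has stream function

$$\psi(x, y) = -\frac{\Gamma}{4\pi} \ln \frac{x^2 + (y - h)^2}{x^2 + (y + h)^2}$$

On y = 0 the numerator and denominator are equal, so ψ = 0 all along that line: it is a streamline with no flow through it, which is exactly the "virtual wall" the jellyfish pushes off.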
The same dynamic forces that make coastlines fascinating create perennial headaches for engineers trying to maintain coastlines against erosion. This Practical Engineering video discusses some of the challenges of coastal erosion and how engineers counter them.
In a completely undeveloped coastline, waves and storms erode the shoreline while rivers and currents replenish sand through sedimentation. Manmade structures tend to strengthen erosion processes while disrupting the sedimentation that would normally counter it. Beach nourishment — where sand gets dredged up and deposited on a beach — is an engineered attempt to replace natural sedimentation.
Dunes, mangrove forests, and wetlands are all nature’s way of protecting and maintaining coastlines. We engineers are still learning how to both utilize and protect shorelines. (Image and video credit: Practical Engineering)
Whether you’re cooking with ceramic, Teflon, or a well-seasoned cast iron pan, it seems like food always wants to stick. It’s not your imagination: it’s fluid dynamics.
As the thin layer of oil in your pan heats up, it doesn’t heat evenly. The oil will be hotter near the center of the burner, which lowers the surface tension of the oil there. The relatively higher surface tension toward the outside of the pan then pulls the oil away from the hotter center, creating a hot dry spot where food can stick.
To avoid this fate, the authors recommend a thicker layer of oil, keeping the burner heat moderate, using a thicker-bottomed pan (to better distribute heat), and stirring regularly. (Image and research credit: A. Fedorchenko and J. Hruby)
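A standard back-of-the-envelope statement of the mechanism (my note, not the paper's): the thermocapillary (Marangoni) stress along the oil surface is

$$\tau = \frac{d\gamma}{dT}\,\nabla T$$

and since surface tension γ falls with temperature (dγ/dT < 0) for common liquids, the stress points from hot to cold, dragging oil away from the center of the burner.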
In Thomas Blanchard’s “Mini Planets,” oil-coated paint droplets swirl on colorful backgrounds. With band-like streaks, they truly do look like miniature planets rotating. I love that a few of them even have distinctive vortices! (Image and video credit: T. Blanchard)
There is an interesting new trend in the use of Computational Fluid Dynamics (CFD). Until recently, CFD simulation focused on existing and future things (think flying cars). Now we see CFD being applied to simulate fluid flow in the distant past (think fossils).
CFD shows Ediacaran dinner party featured plenty to eat and adequate sanitation
Let's first address the elephant in the room - it's been a while since the last Caedium release. The multi-substance infrastructure for the Conjugate Heat Transfer (CHT) capability was a much larger effort than I anticipated and consumed a lot of resources. This led to the relative quiet you may have noticed on our website. However, with the new foundation laid and solid, we can look forward to a bright future.
Conjugate Heat Transfer Through a Water-Air Radiator
Simulation shows separate air and water streamline paths colored by temperature
It turns out that Computational Fluid Dynamics (CFD) has a key role to play in determining the behavior of long extinct creatures. In a previous post, we described a CFD study of Parvancorina, and now Pernille Troelsen at Liverpool John Moores University is using CFD for insights into how long-necked plesiosaurs might have swum and hunted.
CFD Water Flow Simulation over an Idealized Plesiosaur: Streamline Vectors (illustration only, not part of the study)
Fossilized imprints of Parvancorina from over 500 million years ago have puzzled paleontologists for decades. What makes it difficult to infer their behavior is that Parvancorina have none of the familiar features we might expect of animals, e.g., limbs or a mouth. In an attempt to shed some light on how Parvancorina might have interacted with their environment, researchers have enlisted the help of Computational Fluid Dynamics (CFD).
CFD Water Flow Simulation over a Parvancorina: Forward Direction (illustration only, not part of the study)
Some of nature's smallest aerodynamic specialists - insects - have provided a clue to more efficient and robust wind turbine design.
Dragonfly: Yellow-winged Darter (license: CC BY-SA 2.5, André Karwath)
The recent attempt to break the 2 hour marathon came very close at 2:00:24, with various aids that would be deemed illegal under current IAAF rules. The bold and obvious aerodynamic aid appeared to be a Tesla fitted with an oversized digital clock leading the runners by a few meters.
2 Hour Marathon Attempt
• RANS (https://www.cfd-online.com/Forums/bl...1&d=1610557096)
• MRF
• Compressible
• K-Omega SST
• Subsonic
• Inlet T = 300 K
• Inlet p = 1 atm
• Mass flow = 0.1 kg/s
• Rotation Speed = 50 000 rpm
Hi Alexey,
I have a problem (again) when following the instructions given in https://github.com/mrklein/openfoam-...ase-&-Homebrew. In particular, I followed the steps without any problem until I had to apply the patch with git: git apply OpenFOAM-v1912.patch. When I opened the patch file, I saw the message: 404: Not Found. Where can I find the patch? When I visited your site, I saw that you have patches for different versions of OpenFOAM, but not for v1912. If I download the most recent one, "OpenFOAM-7-0ebbff061.patch", and execute "git apply OpenFOAM-7-0ebbff061.patch" instead, do you think it will be OK?
Filippo Maria Denaro added an answer December 7, 2017: Lalit, the Nyquist theorem says that for a sampling step dt you can describe the smallest wavelength 2*dt (three samples describe a sine). For a given period length T, the ratio T/(2*dt) gives the maximum wavenumber you can represent.
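In symbols (a standard restatement of the same criterion, not part of the original answer): with sampling step $\Delta t$, the smallest resolvable wavelength is $\lambda_{\min} = 2\,\Delta t$, so over a record of length $T$ the largest representable wavenumber is $k_{\max} = T / (2\,\Delta t)$. Halving the sampling step doubles the highest wavenumber the samples can represent.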
In this post, I’ll give a simple example of how to create curves in blockMesh. For this example, we’ll look at the following basic setup:
As you can see, we’ll be simulating the flow over a bump defined by the curve y = 0.1 sin(πx) for 0 ≤ x ≤ 1:
First, let’s look at the basic blockMeshDict for this blocking layout WITHOUT any curves defined:
/*--------------------------------*- C++ -*----------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:  https://openfoam.org
    \\  /    A nd           | Version:  6
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
FoamFile
{
version 2.0;
format ascii;
class dictionary;
object blockMeshDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
convertToMeters 1;
vertices
(
(-1 0 0) // 0
(0 0 0) // 1
(1 0 0) // 2
(2 0 0) // 3
(-1 2 0) // 4
(0 2 0) // 5
(1 2 0) // 6
(2 2 0) // 7
(-1 0 1) // 8
(0 0 1) // 9
(1 0 1) // 10
(2 0 1) // 11
(-1 2 1) // 12
(0 2 1) // 13
(1 2 1) // 14
(2 2 1) // 15
);
blocks
(
hex (0 1 5 4 8 9 13 12) (20 100 1) simpleGrading (0.1 10 1)
hex (1 2 6 5 9 10 14 13) (80 100 1) simpleGrading (1 10 1)
hex (2 3 7 6 10 11 15 14) (20 100 1) simpleGrading (10 10 1)
);
edges
(
);
boundary
(
inlet
{
type patch;
faces
(
(0 8 12 4)
);
}
outlet
{
type patch;
faces
(
(3 7 15 11)
);
}
lowerWall
{
type wall;
faces
(
(0 1 9 8)
(1 2 10 9)
(2 3 11 10)
);
}
upperWall
{
type patch;
faces
(
(4 12 13 5)
(5 13 14 6)
(6 14 15 7)
);
}
frontAndBack
{
type empty;
faces
(
(8 9 13 12)
(9 10 14 13)
(10 11 15 14)
(1 0 4 5)
(2 1 5 6)
(3 2 6 7)
);
}
);
// ************************************************************************* //
This blockMeshDict produces the following grid:
It is best practice in my opinion to first make your blockMesh without any edges. This lets you see if there are any major errors resulting from the block topology itself. From the results above, we can see we’re ready to move on!
So now we need to define the curve. In blockMesh, curves are added using the edges sub-dictionary. This is a simple sub-dictionary that is just a list of interpolation points:
edges
(
polyLine 1 2
(
(0 0 0)
(0.1 0.0309016994 0)
(0.2 0.0587785252 0)
(0.3 0.0809016994 0)
(0.4 0.0951056516 0)
(0.5 0.1 0)
(0.6 0.0951056516 0)
(0.7 0.0809016994 0)
(0.8 0.0587785252 0)
(0.9 0.0309016994 0)
(1 0 0)
)
polyLine 9 10
(
(0 0 1)
(0.1 0.0309016994 1)
(0.2 0.0587785252 1)
(0.3 0.0809016994 1)
(0.4 0.0951056516 1)
(0.5 0.1 1)
(0.6 0.0951056516 1)
(0.7 0.0809016994 1)
(0.8 0.0587785252 1)
(0.9 0.0309016994 1)
(1 0 1)
)
);
The sub-dictionary above is just a list of points on the curve. The interpolation method is polyLine (straight lines between interpolation points). An alternative interpolation method could be spline.
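Since all of the interpolation points above lie on y = 0.1 sin(πx), you can generate lists like these instead of typing them by hand. Here is a short Python sketch (my own addition, assuming numpy is installed) that prints both polyLine entries:

import numpy as np

# Print the two blockMesh polyLine entries for the bump y = 0.1*sin(pi*x).
# Vertices 1-2 are the front plane (z = 0); vertices 9-10 are the back plane (z = 1).
for v0, v1, z in [(1, 2, 0), (9, 10, 1)]:
    print(f"polyLine {v0} {v1}")
    print("(")
    for x in np.linspace(0.0, 1.0, 11):
        print(f"    ({x:.1f} {0.1 * np.sin(np.pi * x):.10f} {z})")
    print(")")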
The following mesh is produced:
Hopefully this simple example will help some people looking to incorporate curved edges into their blockMeshing!
Cheers.
This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.
Experimentally visualizing high-speed flow was a serious challenge for decades. Before the advent of modern laser diagnostics and velocimetry, the only real techniques for visualizing high speed flow fields were the optical techniques of Schlieren and Shadowgraph.
Today, Schlieren and Shadowgraph remain an extremely popular means to visualize high-speed flows. In particular, Schlieren and Shadowgraph allow us to visualize complex flow phenomena such as shockwaves, expansion waves, slip lines, and shear layers very effectively.
In CFD there are many reasons to recreate these types of images. First, they look awesome. Second, if you are doing a study comparing to experiments, occasionally the only full-field data you have could be experimental images in the form of Schlieren and Shadowgraph.
Without going into detail about Schlieren and Shadowgraph themselves, the main thing to understand is that they represent visualizations of the first and second derivatives of the flow field's refractive index (which is directly related to density).
In Schlieren, a knife-edge is used to selectively cut off light that has been refracted. As a result you get a visualization of the first derivative of the refractive index in the direction normal to the knife edge. So for example, if an experiment used a horizontal knife edge, you would see the vertical derivative of the refractive index, and hence the density.
For Shadowgraph, no knife edge is used, and the images are a visualization of the second derivative of the refractive index. Unlike Schlieren images, Shadowgraph has no direction and shows you the Laplacian of the refractive index field (or density field).
In this post, I’ll use a simple case I did previously (https://curiosityfluids.com/2016/03/28/mach-1-5-flow-over-23-degree-wedge-rhocentralfoam/) as an example and produce some synthetic Schlieren and Shadowgraph images using the data.
Well, as you might expect from the introduction, we simply do this by visualizing the gradients of the density field.
In ParaView the necessary tool for this is:
Gradient of Unstructured DataSet:
Once you’ve selected this, we then need to set the properties so that we are going to operate on the density field:
To do this, simply set the “Scalar Array” to the density field (rho), and change the result array name to SyntheticSchlieren. Now you should see something like this:
There are a few problems with the above image: (1) Schlieren images are directional and this is a magnitude, and (2) Schlieren and Shadowgraph images are black and white. So if you really want your Schlieren images to look like the real thing, you should change to black and white. ALTHOUGH, Cold and Hot, Black-Body Radiation, and Rainbow Desaturated all look pretty amazing.
To fix these, you should only visualize one component of the Synthetic Schlieren array at a time, and you should visualize using the X-ray color preset:
The results look pretty realistic:
The process of computing the shadowgraph field is very similar. However, recall that shadowgraph visualizes the Laplacian of the density field. BUT THERE IS NO LAPLACIAN CALCULATOR IN PARAVIEW!?! Haha no big deal. Just remember the basic vector calculus identity:

$$\nabla^2 \rho = \nabla \cdot \left( \nabla \rho \right)$$
Therefore, in order for us to get the Shadowgraph image, we just need to take the Divergence of the Synthetic Schlieren vector field!
To do this, we just have to use the Gradient of Unstructured DataSet tool again:
This time, Deselect “Compute Gradient” and the select “Compute Divergence” and change the Divergence array name to Shadowgraph.
Visualized in black and white, we get a very realistic looking synthetic Shadowgraph image:
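Both filter steps can also be scripted. Here is a minimal pvpython sketch (my own addition, assuming ParaView 5.x, where the filter is exposed in paraview.simple as GradientOfUnstructuredDataSet and the density field is named rho; property and filter names may differ in other ParaView versions):

from paraview.simple import GetActiveSource, GradientOfUnstructuredDataSet

# Assumes the case is already loaded and selected in the pipeline browser.
src = GetActiveSource()

# First derivative of density -> synthetic Schlieren (a vector field; color
# by one component and use the X Ray preset for a realistic look).
schlieren = GradientOfUnstructuredDataSet(Input=src)
schlieren.ScalarArray = ['POINTS', 'rho']
schlieren.ResultArrayName = 'SyntheticSchlieren'

# Divergence of that gradient = Laplacian of density -> synthetic Shadowgraph.
shadowgraph = GradientOfUnstructuredDataSet(Input=schlieren)
shadowgraph.ScalarArray = ['POINTS', 'SyntheticSchlieren']
shadowgraph.ComputeGradient = 0
shadowgraph.ComputeDivergence = 1
shadowgraph.DivergenceArrayName = 'Shadowgraph'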
Now this is an important question, but a simple one to answer. And the answer is… not much. Physically, we know exactly what these mean: Schlieren is the gradient of the density field in one direction, and Shadowgraph is the Laplacian of the density field. But what you need to remember is that both Schlieren and Shadowgraph are qualitative images. The position of the knife edge, the brightness of the light, etc. all affect how a real experimental Schlieren or Shadowgraph image will look.
This means, very often, in order to get the synthetic Schlieren to closely match an experiment, you will likely have to change the scale of your synthetic images. In the end though, you can end up with extremely realistic and accurate synthetic Schlieren images.
Hopefully this post will be helpful to some of you out there. Cheers!
Sutherland’s equation is a useful model for the temperature dependence of the viscosity of gases. I give a few details about it in this post: https://curiosityfluids.com/2019/02/15/sutherlands-law/
The law is given by:

$$\mu = \mu_0 \left( \frac{T}{T_0} \right)^{3/2} \frac{T_0 + S}{T + S}$$

where $\mu_0$ is the viscosity at the reference temperature $T_0$ and $S$ is the Sutherland temperature.
It is also often simplified (as it is in OpenFOAM) to:

$$\mu = \frac{A_s \, T^{3/2}}{T + T_s}$$
In order to use these equations, obviously, you need to know the coefficients. Here, I’m going to show you how you can simply create your own Sutherland coefficients using least-squares fitting in Python 3.
So why would you do this? Basically, there are two main reasons. First, if you are not using air, the Sutherland coefficients can be hard to find. Second, even if you find them, they can be hard to reference, and you may not know how accurate they are. So creating your own Sutherland coefficients makes a ton of sense from an academic point of view: in your thesis or paper, you can say that you created them yourself, and you can give an exact number for the error in the temperature range you are investigating.
So let’s say we are looking for a viscosity model of nitrogen (N2) and we can’t find the coefficients anywhere, or, for the second reason above, you’ve decided it’s best to create your own.
By far the simplest way to achieve this is using Python and the scipy.optimize package.
Step 1: Get Data
The first step is to find some well-known, and easily cited, source for viscosity data. I usually use the NIST WebBook (https://webbook.nist.gov/), but occasionally the temperatures there aren’t high enough. So you could also pull the data out of a publication somewhere. Here I’ll use the following data from NIST:
Temperature (K) | Viscosity (Pa.s)
200 | 0.000012924
400 | 0.000022217
600 | 0.000029602
800 | 0.000035932
1000 | 0.000041597
1200 | 0.000046812
1400 | 0.000051704
1600 | 0.000056357
1800 | 0.000060829
2000 | 0.000065162
This data is the dynamic viscosity of nitrogen N2 pulled from the NIST database at 0.101 MPa. (Note that in this range viscosity should be only temperature dependent.)
Step 2: Use Python to fit the data
If you are unfamiliar with Python, this may seem a little foreign to you, but Python is extremely simple.
First, we need to load the necessary packages (here, we’ll load numpy, scipy.optimize, and matplotlib):
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
Now we define the Sutherland function:
def sutherland(T, As, Ts):
return As*T**(3/2)/(Ts+T)
Next we input the data:
T=[200,
400,
600,
800,
1000,
1200,
1400,
1600,
1800,
2000]
mu=[0.000012924,
0.000022217,
0.000029602,
0.000035932,
0.000041597,
0.000046812,
0.000051704,
0.000056357,
0.000060829,
0.000065162]
Then we fit the data using the curve_fit function from scipy.optimize. This function uses a least-squares minimization to solve for the unknown coefficients, returning them in the array popt (along with the covariance matrix pcov). popt contains our desired variables As and Ts.
popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]
Now we can just output our data to the screen and plot the results if we so wish:
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')
xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)
plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()
Overall the entire code looks like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def sutherland(T, As, Ts):
return As*T**(3/2)/(Ts+T)
T=[200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]
mu=[0.000012924, 0.000022217, 0.000029602, 0.000035932, 0.000041597,
    0.000046812, 0.000051704, 0.000056357, 0.000060829, 0.000065162]
popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')
xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)
plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()
And the results for nitrogen gas in this range are As=1.55902E-6, and Ts=168.766 K. Now we have our own coefficients that we can quantify the error on and use in our academic research! Wahoo!
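To actually put a number on that error, here is a short follow-on sketch (my own addition, reusing sutherland, T, mu, As, and Ts from the script above):

import numpy as np

# Compare the fitted Sutherland curve against the NIST data points.
mu_fit = sutherland(np.array(T), As, Ts)
rel_err = np.abs(mu_fit - np.array(mu)) / np.array(mu)
print('max relative error: {:.2f}%'.format(100 * rel_err.max()))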
In this post, we looked at how we can simply take a database of viscosity-temperature data and use the Python package SciPy to solve for our unknown Sutherland viscosity coefficients. The NIST database was used to grab some data, and the data was then loaded into Python and curve-fit using the scipy.optimize curve_fit function.
This task could also easily be accomplished using the MATLAB curve-fitting toolbox, or perhaps in Excel. However, I have not had good success using the Excel solver to solve for unknown coefficients.
The most common complaint I hear, and the most common problem I observe, with OpenFOAM is its supposed “steep learning curve”. I would argue, however, that for those who want to practice CFD effectively, the learning curve is just as steep for any other software.
There is a distinction that should be made between “user friendliness” and the learning curve required to do good CFD.
While I concede that other commercial programs have better basic user friendliness (a nice graphical interface, drop-down menus, point-and-click options, etc.), it is equally likely (if not more likely) that you will get bad results in those programs as with OpenFOAM. In fact, to some extent, the high user friendliness of commercial software can encourage a level of ignorance that can be dangerous. Additionally, once you are comfortable operating in the OpenFOAM world, the possibilities become endless, and things like code modification and bash and Python scripting can make OpenFOAM workflows EXTREMELY efficient and powerful.
Anyway, here are a few tips to more easily tackle the OpenFOAM learning curve:
(1) Understand CFD
This may seem obvious… but it’s not to some. Troubleshooting bad simulation results or unstable simulations that crash is impossible if you don’t have at least a basic understanding of what is happening under the hood. My favorite books on CFD are:
(a) The Finite Volume Method in Computational Fluid Dynamics: An Advanced Introduction with OpenFOAM® and Matlab by F. Moukalled, L. Mangani, and M. Darwish
(b) An Introduction to Computational Fluid Dynamics: The Finite Volume Method by H. K. Versteeg and W. Malalasekera
(c) Computational Fluid Dynamics: The Basics with Applications by John D. Anderson
(2) Understand fluid dynamics
Again, this may seem obvious and not very insightful. But if you are going to assess the quality of your results, and understand and appreciate the limitations of the various assumptions you are making – you need to understand fluid dynamics. In particular, you should familiarize yourself with the fundamentals of turbulence, and turbulence modeling.
(3) Avoid building cases from scratch
Whenever I start a new case, I find the tutorial case that most closely matches what I am trying to accomplish. This greatly speeds things up. It will take you a super long time to set up any case from scratch – and you’ll probably make a bunch of mistakes, forget key variable entries etc. The OpenFOAM developers have done a lot of work setting up the tutorial cases for you, so use them!
As you continue to work in OpenFOAM on different projects, you should be compiling a library of your own templates based on previous work.
(4) Using Ubuntu makes things much easier
This is strictly my opinion, but I have found it to be true. Yes, it’s true that Ubuntu has its own learning curve, but I have found that OpenFOAM works seamlessly in Ubuntu or any Ubuntu-like Linux environment. OpenFOAM now has Windows flavors using Docker and the like, but I can’t really speak to how well they work, mostly because I’ve never bothered. Once you unlock the power of Linux, the only reason to use Windows is for Microsoft Office (I guess unless you’re a gamer, and even then more and more games are now on Linux). Not only that, but the VAST majority of forums and troubleshooting associated with OpenFOAM that you’ll find on the internet are from Ubuntu users.
I much prefer to use Ubuntu with a virtual Windows environment inside it. My current office setup is my primary desktop running Ubuntu, plus a Windows VirtualBox, plus a laptop running Windows that I use for traditional Windows-type stuff. Dual booting is another option, but seamlessly moving between the environments is easier.
(5) If you’re struggling, simplify
Unless you know exactly what you are doing, you probably shouldn’t dive into the most complicated version of whatever you are trying to solve/study. It is best to start simple, and layer the complexity on top. This way, when something goes wrong, it is much easier to figure out where the problem is coming from.
(6) Familiarize yourself with the cfd-online forum
If you are having trouble, the cfd-online forum is super helpful. Most likely, someone else has had the same problem you have. If not, the people there are extremely helpful, and overall the forum is an extremely positive environment for working out the kinks with your simulations.
(7) The results from checkMesh matter
If you run checkMesh and your mesh fails – fix your mesh. This is important. Especially if you are not planning on familiarizing yourself with the available numerical schemes in OpenFOAM, you should at least have a beautiful mesh. In particular, if your mesh is highly non-orthogonal, you will have serious problems. If you insist on using a bad mesh, you will probably need to manipulate the numerical schemes. A great source for how schemes should be manipulated based on mesh non-orthogonality is:
http://www.wolfdynamics.com/wiki/OFtipsandtricks.pdf
(8) CFL Number Matters
If you are running a transient case, the Courant–Friedrichs–Lewy (CFL) number matters… a lot. Not just for accuracy (if you are trying to capture a transient event) but for stability. If your time-step is too large, you are going to have problems. There is a solid mathematical basis for this stability criterion for advection-diffusion problems. Additionally, the Navier-Stokes equations are very non-linear, and the complexity of the problem, the quality of your grid, etc. can make the simulation even less stable. When I have a transient simulation crash, if I know my mesh is OK, I decrease the timestep by a factor of 2. More often than not, this solves the problem.
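For reference, the Courant number in a cell is defined as (the standard definition, not specific to OpenFOAM)

$$Co = \frac{|u|\,\Delta t}{\Delta x}$$

so cutting the time step in half halves Co in every cell, which is why the factor-of-2 trick above so often stabilizes a crashing run.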
For large time stepping, you can add outer loops to solvers based on the PIMPLE algorithm, but you may end up losing important transient information. An excellent explanation of how to do this is given in the book by T. Holzmann:
https://holzmann-cfd.de/publications/mathematics-numerics-derivations-and-openfoam
For the record, this point falls into point (1) of Understanding CFD.
(9) Work through the OpenFOAM Wiki “3 Week” Series
If you are starting OpenFOAM for the first time, it is worth it to work through an organized program of learning. One such example (and there are others) is the “3 Weeks Series” on the OpenFOAM wiki:
https://wiki.openfoam.com/%223_weeks%22_series
If you are a graduate student, and have no job to do other than learn OpenFOAM, it will not take 3 weeks. The series touches on all the necessary points you need to get started.
(10) OpenFOAM is not a second-tier software – it is top tier
I know some people who have started out with the attitude from the get-go that they should be using a different software. They think somehow open source means that it is not good. This is a pretty silly attitude. Many top researchers around the world are now using OpenFOAM or some other open-source package. The number of OpenFOAM citations has grown consistently every year (https://www.linkedin.com/feed/update/urn:li:groupPost:1920608-6518408864084299776/?commentUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518932944235610112%29&replyUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518956058403172352%29).
In my opinion, the only place where mainstream commercial CFD packages will persist is in industry labs where cost is no concern and changing software is more trouble than it’s worth. OpenFOAM has been widely benchmarked and widely validated, from fundamental flows to hypersonics (see any of my 17 publications using it for this). If your results aren’t good, you are probably doing something wrong. If you have the attitude that you would rather be using something else, and are bitter that your supervisor wants you to use OpenFOAM, then when something goes wrong you will immediately think there is something wrong with the program… which is silly – and you may quit.
(11) Meshing… Ugh Meshing
For the record, meshing is an art in any software. But meshing is the only area where I will concede any limitation in OpenFOAM. HOWEVER, as I have outlined in my previous post (https://curiosityfluids.com/2019/02/14/high-level-overview-of-meshing-for-openfoam/) most things can be accomplished in OpenFOAM, and there are enough third party meshing programs out there that you should have no problem.
Basically, if you are starting out in CFD or OpenFOAM, you need to put in time. If you are expecting to be able to just sit down and produce magnificent results, you will be disappointed. You might quit. And frankly, that’s a pretty stupid attitude. However, if you accept that CFD and fluid dynamics in general are massive fields under constant development, and are willing to get up to speed, there are few limits to what you can accomplish.
Please take the time! If you want to do CFD, learning OpenFOAM is worth it. Seriously worth it.
This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trade marks.
Here I will present something I’ve been experimenting with regarding a simplified workflow for meshing airfoils in OpenFOAM. If you’re like me (who knows if you are), you simulate a lot of airfoils. Partly because of my involvement in various UAV projects, partly through consulting projects, and also for testing and benchmarking OpenFOAM.
Because there is so much data out there on airfoils, they are a good way to test your setups and benchmark solver accuracy. But going from an airfoil .dat coordinate file to a mesh can be a bit of a pain, especially if you are starting from scratch.
The three main ways that I have meshed airfoils to date have been:
(a) Mesh it as a C- or O-grid in blockMesh (I have a few templates kicking around for this)
(b) Generate a “ribbon” geometry and mesh it with cfMesh
(c) Or, back in the day when I was a PhD student, use Pointwise – oh how I miss it.
But getting the mesh to look good was always sort of tedious. So I attempted to come up with a Python script that takes the airfoil data file and minimal inputs, and outputs a blockMeshDict file that you just have to run.
The goals were as follows:
(a) Create a C-Grid domain
(b) be able to specify boundary layer growth rate
(c) be able to set the first layer wall thickness
(e) be mostly automatic (few user inputs)
(f) have good mesh quality – pass all checkMesh tests
(g) Quality is consistent – meaning when I make the mesh finer, the quality stays the same or gets better
(h) be able to do both closed and open trailing edges
(i) be able to handle most airfoils (up to high cambers)
(j) automatically handle hinge and flap deflections
In Rev 1 of this script, I believe I have accomplished (a) through (g). Presently, it can only handle airfoils with a closed trailing edge. Hinge and flap deflections are not possible, and highly cambered airfoils do not give very satisfactory results.
There are existing tools and scripts for automatically meshing airfoils, but I personally found that I wasn’t happy with the results. I also thought this would be a good opportunity to illustrate one of the ways Python can be used to interface with OpenFOAM. So please view this as both a potentially useful script and something you can dissect to learn how to use Python with OpenFOAM. This first version of the script leaves a lot open for improvement, so some may take it and be able to tailor it to their needs!
Hopefully, this is useful to some of you out there!
You can download the script here:
https://github.com/curiosityFluids/curiosityFluidsAirfoilMesher
Here you will also find a template based on the airfoil2D OpenFOAM tutorial.
(1) Copy curiosityFluidsAirfoilMesher.py to the root directory of your simulation case.
(2) Copy your airfoil coordinates in Selig .dat format into the same folder location.
(3) Modify curiosityFluidsAirfoilMesher.py to your desired values. Specifically, make sure that the string variable airfoilFile refers to the right .dat file.
(4) In the terminal run: python3 curiosityFluidsAirfoilMesher.py
(5) If no errors – run blockMesh
PS: You need to run this with Python 3, and you need to have numpy installed.
The inputs for the script are very simple:
ChordLength: This is simply the airfoil chord length if not equal to 1. The airfoil .dat file should have a chord length of 1. This variable allows you to scale the domain to a different size.
airfoilfile: This is a string with the name of the airfoil dat file. It should be in the same folder as the python script, and both should be in the root folder of your simulation directory. The script writes a blockMeshDict to the system folder.
DomainHeight: This is the height of the domain in multiples of chords.
WakeLength: Length of the wake domain in multiples of chords
firstLayerHeight: This is the height of the first layer. To estimate the requirement for this size, you can use the curiosityFluids y+ calculator
growthRate: Boundary layer growth rate
MaxCellSize: This is the max cell size along the centerline from the leading edge of the airfoil. Some cells will be larger than this depending on the gradings used.
The following inputs are used to improve the quality of the mesh. I have had pretty good results messing around with these to get checkMesh compliant grids.
BLHeight: This is the height of the boundary layer block off of the surfaces of the airfoil
LeadingEdgeGrading: Grading from the 1/4 chord position to the leading edge
TrailingEdgeGrading: Grading from the 1/4 chord position to the trailing edge
inletGradingFactor: This is a grading factor that modifies the grading along the inlet as a multiple of the leading edge grading and can help improve mesh uniformity.
trailingBlockAngle: This is an angle in degrees that expresses the angles of the trailing edge blocks. This can reduce the aspect ratio of the boundary cells at the top and bottom of the domain, but can make other mesh parameters worse.
Inputs:
With the above inputs, the grid looks like this:
Mesh Quality:
These are some pretty good mesh statistics. We can also view them in paraView:
The Clark-Y has some camber, so I thought it would be a logical next test after the previous symmetric one. The inputs I used are basically the same as for the previous airfoil:
With these inputs, the result looks like this:
Mesh Quality:
Visualizing the mesh quality:
Here is an example of a flying wing airfoil (tested since the trailing edge is tilted upwards).
Inputs:
Again, these are basically the same as the others. I have found that with these settings, I get pretty consistently good results. When you change the MaxCellSize, firstLayerHeight, and gradings, some modification may be required. However, if you just halve the maxCell and halve the firstLayerHeight, you “should” get a similar grid quality, just much finer.
Grid Quality:
Visualizing the grid quality
Hopefully some of you find this tool useful! I plan to release a Rev 2 soon that will have the ability to handle highly cambered airfoils, and open trailing edges, as well as control surface hinges etc.
The long term goal will be an automatic mesher with an H-grid in the spanwise direction so that the readers of my blog can easily create semi-span wing models extremely quickly!
Comments and bug reporting encouraged!
DISCLAIMER: This script is intended as an educational and productivity tool and starting point. You may use and modify how you wish. But I make no guarantee of its accuracy, reliability, or suitability for any use. This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.
Here is a useful little tool for calculating the properties across a normal shock.
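To give a sense of what such a tool computes, here is a minimal Python sketch of the standard normal-shock relations for a calorically perfect gas (my own illustration, not the STF Solutions calculator itself):

def normal_shock(M1, gamma=1.4):
    """Downstream Mach number and static ratios across a normal shock."""
    M2 = ((1 + 0.5 * (gamma - 1) * M1**2) /
          (gamma * M1**2 - 0.5 * (gamma - 1))) ** 0.5
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (M1**2 - 1)            # p2/p1
    rho_ratio = (gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)    # rho2/rho1
    T_ratio = p_ratio / rho_ratio                                  # T2/T1 (ideal gas)
    return M2, p_ratio, rho_ratio, T_ratio

# Example: Mach 2 in air gives M2 ~ 0.577 and p2/p1 = 4.5
print(normal_shock(2.0))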
If you found this useful and have the need for more, visit www.stfsol.com. One of STF Solutions' specialties is providing our clients with custom software developed for their needs, ranging from custom CFD codes to simpler targeted codes, scripts, macros, and GUIs for a wide range of specific engineering purposes such as pipe sizing, pressure loss calculations, heat transfer calculations, 1D flow transients, optimization, and more. Visit STF Solutions at www.stfsol.com for more information!
Disclaimer: This calculator is for educational purposes and is free to use. STF Solutions and curiosityFluids make no guarantee of the accuracy of the results, or of suitability or outcome for any given purpose.
Happy 2021!
The year 2020 will be remembered in history even more than 1918, when the last great pandemic hit the globe. As we speak, daily new cases in the US are on the order of 200,000, while the daily death toll oscillates around 3,000. According to many infectious disease experts, the darkest days may still be to come. In the next three months, we all need to do our very best by wearing a mask, practicing social distancing, and washing our hands. We are also seeing a glimmer of hope with several recently approved COVID vaccines.
2020 will be remembered more for what Trump tried, and is still trying, to do: overturn the results of a fair election. His accusations of widespread election fraud were proven wrong in Georgia and Wisconsin through multiple hand recounts. If there were any truth to the accusations, the paper recounts would have uncovered the fraud, because computer hackers or software cannot change paper votes.
Trump's dictatorial habits were there for the world to see in the last four years. Given another 4-year term, he might just turn a democracy into a Trump dictatorship. That's precisely why so many voted in the middle of a pandemic. Biden won the popular vote by over 7 million, and won the electoral college in a landslide. Many churchgoers support Trump because they dislike Democrats' stances on abortion, LGBT rights, et al. However, if a Trump dictatorship becomes reality, religious freedom may not exist any more in the US.
Is the darkest day going to be January 6th, 2021, when Trump will make a last-ditch effort to overturn the election results in the Electoral College certification process? Everybody knows it is futile, but it will give Trump another opportunity to extort money from his supporters.
But, the dawn will always come. Biden will be the president on January 20, 2021, and the pandemic will be over, perhaps as soon as 2021.
The future of CFD is, however, as bright as ever. On the front of large eddy simulation (LES), high-order methods and GPU computing are making LES more efficient and affordable. See a recent story from GE.
Figure 1. Various discretization stencils for the red point
Figure panels: computed solutions for p = 1, p = 2, and p = 3
 | CL | CD
p = 1 | 2.020 | 0.293
p = 2 | 2.411 | 0.282
p = 3 | 2.413 | 0.283
Experiment | 2.479 | 0.252
We’ve reached the end of 2020, and I think it’s fair to say this year did not go as planned. The coronavirus pandemic disrupted our lives and brought on unexpected challenges and hardships. However, this difficult time has also highlighted the resiliency of people all around the globe—we have adapted and innovated to meet these challenges head on. At Convergent Science, that meant finding new ways to communicate and collaborate to ensure we could continue to deliver the best possible software and support to our users, all while keeping our employees safe.
Despite the pandemic, we experienced exciting opportunities, advancements, and milestones at Convergent Science this past year. We hosted two virtual conferences, continued to expand into new markets and new application areas, began new collaborations, increased our employee count, and, of course, continued to improve and develop CONVERGE.
We have spent much of 2020 developing the next major release of our CONVERGE CFD software: version 3.1. There’s a lot to look forward to in CONVERGE 3.1, which will be released next year. In CONVERGE 3.0, we added the ability to incorporate stationary inlaid meshes into a simulation. In 3.1, these inlaid meshes will be able to move within the underlying Cartesian grid. For example, you will be able to create an inlaid mesh around each of the intake valves in an IC engine simulation, and the mesh will move with the valve as it opens and closes. With this method, you can achieve high grid resolution normal to the valve surface using significantly fewer cells than with traditional fixed embedding.
Another enhancement will allow you to use different solvers, meshes, physical models, and chemical mechanisms for different streams (i.e., portions of the domain). This means you will be able to tailor your simulation settings to each stream, which will improve solver speed and numerical performance. CONVERGE 3.1 will also feature new sealing capabilities that enable you to have any objects come into contact with one another in your simulation or have objects enter or leave your simulation.
Furthermore, CONVERGE 3.1 will support solid- and gas-phase parcels in addition to the traditional liquid-phase parcels. This can be useful when modeling, for example, soot or injectors operating at flash-boiling conditions. CONVERGE 3.1 will also feature an improved steady-state solver that will provide significant improvements in speed, and we have enhanced our fluid-structure interaction, volume of fluid, combustion, and emissions modeling capabilities. There are many more exciting features and enhancements coming in 3.1, so stay tuned for more information!
Improving the scalability of CONVERGE continues to be a strong focus of our development efforts. We work with several companies and institutions, testing CONVERGE on different high-performance computing (HPC) architectures and optimizing our software to ensure good scaling. To that end, we were thrilled to begin a new collaboration this year with Oracle, a leader in cloud computing and enterprise software. In our benchmark testing, we have seen near perfect scaling of CONVERGE on Oracle Cloud Infrastructure on thousands of cores. This collaboration presents a great opportunity for CONVERGE users to take advantage of Oracle’s advanced HPC resources to efficiently run large-scale simulations in the cloud.
For the second year in a row, we were honored to win an HPCwire award for research performed with our colleagues at Aramco Research Center–Detroit and Argonne National Laboratory. This year, we received the HPCwire Readers’ Choice Award for Best Use of HPC in Industry for our work using HPC and machine learning to accelerate injector design optimization for next-generation high-efficiency, low-emissions engines. Our collaborative work is forging the way to leverage HPC, novel experimental measurements, and CFD to perform rapid optimization studies and reduce our carbon footprint from transportation.
In another collaborative effort, the Computational Chemistry Consortium (C3) made significant progress in 2020. Co-founded by Convergent Science, C3 is working to create the most accurate and comprehensive chemical reaction mechanism for automotive fuels that includes NOx and PAH chemistry to model emissions. The first version of the mechanism was completed last year and is currently available to C3’s industry sponsors. Once the mechanism is published, it will be released to the public on fuelmech.org. This past year, C3 has continued to refine the mechanism, which has now reached version 2.1. The results of these efforts have been rewarding—we’ve seen a significant decrease in error in selected validation cases. The next year of the consortium will focus on increasing the accuracy of the NOx and PAH chemistry. To that end, C3 welcomed a new member this year, Dr. Stephen Klippenstein from Argonne National Laboratory. Dr. Klippenstein will perform high-level ab initio calculations of rate constants in NOx chemistry. Ultimately, the C3 mechanism is expected to be the first publicly available mechanism that includes everything from hydrogen chemistry all the way up to PAH chemistry in a single high-fidelity mechanism.
In 2020, we celebrated our 10-year anniversary of collaboration with Argonne National Laboratory. Over the past decade, this collaboration has helped us extend CONVERGE’s capabilities and broach new application areas. We have performed cutting-edge research in the transportation field, developing new methods and models that are proving to be instrumental in designing the next generation of engines. In the aerospace field, we’ve broken ground in applying CFD to gas turbines, rotating detonation engines, drones, and more. We’ve made great strides in the last ten years, and we’re looking forward to the next decade of collaboration!
Every year, we look forward to getting together with our users, discussing the latest exciting CONVERGE research and having some fun at our user conferences. When the pandemic struck and countries began locking down earlier this year, we were determined to still hold our 2020 CONVERGE User Conference–Europe, even if it looked a bit different. Our conference was scheduled for the end of March, so we didn’t have much time to transition from an in-person to an online event, but our team was up for the challenge. In less than three weeks, we planned a whole new event and successfully held one of the first pandemic-era virtual conferences. We were so pleased with the result! More than 400 attendees from around the world tuned in for an excellent lineup of technical presentations, which spanned topics from IC engines to compressors to electric motors and battery packs.
While we hoped to hold our North American user conference in Detroit later in the year, the continued pandemic made that impossible. Once again, we took to the internet. We incorporated some more networking opportunities, including various social groups and discussion topics, and created some fun polls to help attendees get to know one another. We were also able to offer our usual slate of conference-week CONVERGE training and virtual exhibit booths for our sponsors. The presentations at this conference showcased the breadth and diversity of applications for which CONVERGE is suited, with speakers discussing rockets, gas turbines, exhaust aftertreatment, biomedical applications, renewable energy, and electromobility in addition to a host of IC engine-related topics.
It’s hard to know what 2021 will look like, but rest assured we will be hosting more conferences, virtual or otherwise. We’re looking forward to the day we can get together in person once again!
Even with the pandemic, 2020 was an exciting and productive year for Convergent Science around the globe. We gained nearly a dozen new employees, including bringing on team members in newly created roles to help expand our relationships with universities and to increase our in-house CAD design capabilities. We also continued to find new markets for CONVERGE as we entered the emobility, rocket, and burner industries.
Our Indian office flourished in 2020. Since its creation three years ago, Convergent Science India has grown to more than 20 employees, adding nine new team members this year alone. To accommodate our growing team, we moved to a spacious new building in Pune. Our team in India expanded our global reach, bringing new academic and industry clients on board. In addition, we continued to work on growing our presence in new applications such as gas turbines, aftertreatment, motor cooling, battery failure, oil churning, and spray painting.
In Europe, despite the challenging circumstances, we increased our client base and our license sales considerably, and we were able to successfully and seamlessly support our customers to help them achieve their CFD goals. In addition to moving our European CONVERGE user conference online in record time, we attended and exhibited at many virtual tradeshows and events and are looking forward to attending in-person conferences as soon as it is safe to do so.
Our partners at IDAJ continued to do excellent work supporting our customers in Japan, China, and Korea. Due to the pandemic, they held their first-ever IDAJ Conference Online 2020, where they had both live lectures and Q&A sessions as well as on-demand streaming content. While they support many IC engine clients, they are also supporting clients working on other applications such as motor cooling, battery failure, oil churning, and spray painting.
2020 was a difficult year for many of us, but I am impressed and inspired by the way the CFD community and beyond has come together to make the most of a challenging situation. And the future looks bright! We’re looking forward to releasing CONVERGE 3.1 and helping our users take advantage of the increased functionality and new features that will be available. We’re excited to expand our presence in electromobility, renewable energy, aerospace, and other new fields. In the upcoming year, we look forward to forming new collaborations and strengthening existing partnerships to promote innovation and keep CONVERGE on the cutting-edge of CFD software.
Can we help you meet your 2021 CFD goals? Contact us today!
In my first year of graduate school, a friend always filled up her water bottle, dropped some ice cubes into it, and then shook it up in order to cool the water faster. If she had added the ice cubes and let the water bottle sit, eventually all the water would equilibrate to the same temperature, but that would take a while without any movement—the water next to the ice cubes would cool down quickly, but the water farther away would cool down at a much slower rate. By shaking it up, she agitated the water and ice so that the ice came into contact with more of the warm water that needed to be cooled. This “cocktail shaker effect,” I would later find out, also applies to cooling engines.
Combustion in an internal combustion (IC) engine occurs on top of the piston, which means that there is an extraordinary amount of heat generated on the piston crown. If left unmediated, this heat can cause the piston to break. The threat of piston damage is particularly high in diesel engines because more heat is generated in the cylinder than in a traditional gasoline engine. Unlike a bottle of warm water, though, we can’t just drop a few ice cubes into the cylinder to act as a heat sink.
Here we see how engineers can use CONVERGE to efficiently solve the problem of cooling the piston so that it isn’t damaged by heat. The idea is simple—use engine oil as a heat sink—but the implementation is complex since the piston is constantly moving and nothing can be in contact with the piston crown inside the cylinder.
Since the heat sink can’t be inside the cylinder on the piston crown, there is an oil gallery in contact with the undercrown of the piston, as shown in Figure 1. Engine oil is taken through a pump, pressurized, and constantly sprayed at the oil gallery inlet hole. In the video below, you will see how the oil enters the gallery, and, as the piston motion continues, the oil sloshes inside the oil gallery, absorbing heat from the piston before exiting the outlet hole on the other side of the gallery.
There are several factors that are important to consider when designing this type of cooling system, all of which CONVERGE is well-equipped to handle. What size and shape should the inlet and outlet holes be to capture the stream of oil? How much oil will enter the gallery compared to how much was sprayed (i.e., capture ratio)? What is the best design of the gallery so that the oil effectively absorbs heat from the piston? What ratio of the gallery volume should be occupied (i.e., fill ratio) to ensure that the oil can move and absorb heat efficiently? CONVERGE provides answers to these questions and others through a volume of fluid (VOF) simulation.
Because a simple boundary condition is not predictive of the heat transfer throughout the entire piston, we use conjugate heat transfer (CHT) to more accurately predict the piston cooling by solving the heat distribution inside the piston. Understanding how heat transfer affects the whole piston is an essential step toward designing a geometry that will effectively cool more than just the piston surface. While CHT can be computationally expensive due to the difference in time-scales of heat transfer in the solid and fluid regions, CONVERGE provides the option to use super-cycling, which can significantly reduce the computational cost of this type of simulation.
In the video below, you will see how the above factors have been optimized to dissipate heat from the piston crown and throughout the piston as a whole. In the video on the left, you can watch the temperature contours change during the simulation as heat dissipates. The second view shows how CONVERGE’s Adaptive Mesh Refinement (AMR) is in action throughout the simulation, providing increased grid resolution near the inlet and around the oil gallery, where it is needed most.
Ready to run your own simulations to optimize oil jet piston cooling? Contact us today!
From the Argonne National Laboratory + Convergent Science Blog Series
Through the collaboration between Argonne National Laboratory and Convergent Science, we provide fundamental research that enables manufacturers to design cleaner and more efficient engines by optimizing combustion.
–Doug Longman, Manager of Engine Research at Argonne National Laboratory
The internal combustion engine has come a long way since its inception—the engine in your car today is significantly quieter, cleaner, and more efficient than its 1800s-era counterpart. For many years, the primary means of achieving these advances was experimentation. Indeed, we have experiments to thank for a myriad of innovations, from fuel injection systems to turbocharging to Wankel engines.
More recently, a new tool was added to the engine designer’s toolbox: simulation. Beginning in the 1970s and ‘80s, computational fluid dynamics (CFD) opened the door to a new level of refinement and optimization.
“One of the really cool things about simulation is that you can look at physics that cannot be easily captured in an experiment—details of the flow that might be blocked from view, for example,” says Eric Pomraning, Co-Owner of Convergent Science.
Of course, experiments remain vitally important to engine research, since CFD simulations model physical processes, and experiments are necessary to validate your results and ground your simulations in reality.
Argonne National Laboratory and Convergent Science combine these two approaches—experiments and simulation—to further improve the internal combustion engine. Two of the main levers we have to control the efficiency and emissions of an engine are the fuel injection system and the ignition system, both of which have been significant areas of focus during the collaboration.
The combustion process in an internal combustion engine really begins with fuel injection. The physics of injection determine how the fuel and air in the cylinder will mix, ignite, and ultimately combust.
Argonne National Laboratory is home to the Advanced Photon Source (APS), a DOE Office of Science User Facility. The APS provides a unique opportunity to characterize the internal passages of injector nozzles with incredibly high spatial resolution through the use of high-energy x-rays. This data is invaluable for developing accurate CFD models that manufacturers can use in their design processes.
Early on in the collaboration, Christopher Powell, Principal Engine Research Scientist at Argonne, and his team leveraged the APS to investigate needle motion in an injector.
“Injector manufacturers had long suspected that off-axis motion of the injector valve could be present. But they never had a way to measure it before, so they weren’t sure how it impacted fuel injection,” says Chris.
The x-ray studies performed at the APS were the first in the world to confirm that some injector needles do exhibit radial motion in addition to the intended axial motion, a phenomenon dubbed “needle wobble.” Argonne and Convergent Science engineers simulated this experimental data in CONVERGE, prescribing radial motion to the injector needle. They found that needle wobble can substantially impact the fuel distribution as it exits the injector. Manufacturers were able to apply the results of this research to design injectors with a more predictable spray pattern, which, in turn, leads to a more predictable combustion event.
More recently, researchers at Argonne have used the APS to investigate the shape of fuel injector flow passages and characterize surface roughness. Imperfections in the geometry can influence the spray and the subsequent downstream engine processes.
“If we use a CAD geometry, which is smooth, we will miss out on some of the physics, like cavitation, that can be triggered by surface imperfections,” says Sameera Wijeyakulasuriya, Senior Principal Engineer at Convergent Science. “But if we use the x-ray scanned geometry, we can incorporate those surface imperfections into our numerical models, so we can see how the flow field behaves and responds.”
Argonne and Convergent Science engineers performed internal nozzle flow simulations that used the real injector geometries and that incorporated real needle motion [1]. Using the one-way coupling approach in CONVERGE, they mapped the results of the internal flow simulations to the exit of each injector orifice to initialize a multi-plume Lagrangian spray simulation. As you can see in Figure 1, the surface roughness and needle motion significantly impact the spray plume—the one-way coupling approach captures features that the standard rate of injection (ROI) method could not. In addition, the real injector parameters introduce orifice-to-orifice variability, which affects the combustion behavior down the line.
The real injector geometries not only allow for more accurate computational simulations, but they also can serve as a diagnostic tool for manufacturers to assess how well their manufacturing processes are producing the desired nozzle shape and size.
Accurately characterizing fuel injection sets the stage for the next lever we can optimize in our engine: ignition. In spark-ignition engines, the ignition event initiates the formation and growth of the flame kernel and the subsequent flame propagation.
“In the past, ignition was just modeled as a hot source—dumping an amount of energy in a small region and hoping it transitions to a flame. The amount of physics in the process was very limited,” says Sibendu Som, Manager of the Computational Multi-Physics Section at Argonne.
These simplified models are adequate for most stable engine conditions, but you can run into trouble when you start simulating more advanced combustion concepts. In these scenarios, the simplified ignition models fall short in replicating experimental data. Over the course of their collaboration, Argonne and Convergent Science have incorporated more physics into ignition models to make them robust for a variety of engine conditions.
For example, high-performance spark-ignition engines often feature high levels of dilution and increased levels of turbulence. These conditions can have a significant impact on the ignition process, which consequently affects combustion stability and cycle-to-cycle variation (CCV). To capture the elongation and stretch experienced by the spark channel under highly turbulent conditions, Argonne and Convergent Science engineers developed a new ignition model, the hybrid Lagrangian-Eulerian spark-ignition (LESI) model.
In Figure 2, you can see that the LESI model more accurately captures the behavior of the spark under turbulent conditions compared to a commonly used energy deposition model [2]. The LESI model will be available in future versions of CONVERGE, accessible to manufacturers to help them better understand ignition and mitigate CCV.
Ideally, every cycle of an internal combustion engine would be exactly identical to ensure smooth operation. In real engines, variability in the injection, ignition, and combustion means that not every cycle will be the same. Cyclic variability is especially prevalent in high-efficiency engines that push the limits of combustion stability. Extreme cycles can cause engine knock and misfires—and they can influence emissions.
“Not every engine cycle generates significant emissions. Often they’re primarily formed only during rare cycles—maybe one or two out of a hundred,” says Keith Richards, Co-Owner of Convergent Science. “Being able to capture cyclic variability will ultimately allow us to improve our predictive capabilities for emissions.”
Modeling CCV requires simulating numerous engine cycles, which is a highly (and at times prohibitively) time-consuming process. Several years ago, Keith suggested a potential solution—starting several engine cycles concurrently, each with a small perturbation to the flow field, which allows each simulation to develop into a unique solution.
Argonne and Convergent Science compared this approach—called the concurrent perturbation method (CPM)—to the traditional approach of simulating engine cycles consecutively. Figure 3 shows CCV results obtained using CPM compared to consecutively run cycles, which you can see match very well [3]. This means that with sufficient computational resources, you can predict CCV in the amount of time it takes to run a single engine cycle.
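The intuition behind CPM can be sketched with a toy chaotic system (an analogy only, not an engine or CFD model): in a chaotic flow, tiny perturbations grow exponentially, so each concurrently started realization quickly becomes an effectively independent cycle.

# Toy analogy: a chaotic logistic map standing in for turbulent in-cylinder flow.
# Tiny initial perturbations (like CPM's flow-field perturbations) diverge into
# distinct outcomes after enough steps.
def toy_cycle(perturbation, steps=60):
    x = 0.5 + perturbation          # perturbed initial "flow state"
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)     # chaotic update rule
    return x

outcomes = [toy_cycle(1e-9 * i) for i in range(8)]
print([round(x, 3) for x in outcomes])  # eight distinct values from near-identical starts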
The study described above, and the vast majority of all CCV simulation studies, use large eddy simulations (LES), because LES allows you to resolve some of the turbulence scales that lead to cyclic variability. Reynolds-Averaged Navier-Stokes (RANS), on the other hand, provides an ensemble average that theoretically damps out variations between cycles. At least this was the consensus among the engine modeling community until Riccardo Scarcelli, a Research Scientist at Argonne, noticed something strange.
“I was running consecutive engine cycle simulations to move away from the initial boundary conditions, and I realized that the cycles were never converged to an average solution—the cycles were never like the cycle before or the cycle after,” Riccardo says. “And that was strange because I was using RANS, not LES.”
Argonne and Convergent Science worked together to untangle this mystery, and they discovered that RANS is able to capture the deterministic component of CCV. RANS has long been the predominant turbulence model used in engine simulations, so how had this phenomenon gone unnoticed? In the past, most engine simulations modeled conventional combustion, which shows little cyclic variability in practice in either diesel or gasoline engines. The more complex combustion regimes simulated today—along with the use of finer grids and more accurate numerics—allow RANS to pick up on some of the cycle-to-cycle variations that these engines exhibit in the real world. While RANS will not provide as accurate a picture as LES, it can be a useful tool to capture CCV trends. Additionally, RANS can be run on a much coarser mesh than LES, so you can get a faster turnaround on an inherently expensive problem, making CCV studies more practical for industry timelines.
The gains in understanding and improved models developed during the Argonne and Convergent Science collaboration provide great benefit to the engine community. One of the primary missions of Argonne National Laboratory is to transfer knowledge and technology to industry. To that end, the models developed during the collaboration will continue to be implemented in CONVERGE, putting the technology in the hands of manufacturers, so they can create better engines.
What can we look forward to in the future? There will continue to be a strong focus on developing high fidelity numerics, expanding and improving chemistry tools and mechanisms, integrating machine learning into the simulation process, and speeding up CFD simulations—establishing more efficient models and further increasing the scalability of CONVERGE to take advantage of the latest computational resources. Moreover, we can look forward to seeing the innovations of the last decade of collaboration incorporated into the engines of the next decade, bringing us closer to a clean transportation future.
[1] Torelli, R., Matusik, K.E., Nelli, K.C., Kastengren, A.L., Fezzaa, K., Powell, C.F., Som, S., Pei, Y., Tzanetakis, T., Zhang, Y., Traver, M., and Cleary, D.J., “Evaluation of Shot-to-Shot In-Nozzle Flow Variations in a Heavy-Duty Diesel Injector Using Real Nozzle Geometry,” SAE Paper 2018-01-0303, 2018. DOI: 10.4271/2018-01-0303
[2] Scarcelli, R., Zhang, A., Wallner, T., Som, S., Huang, J., Wijeyakulasuriya, S., Mao, Y., Zhu, X., and Lee, S.-Y., “Development of a Hybrid Lagrangian–Eulerian Model to Describe Spark-Ignition Processes at Engine-Like Turbulent Flow Conditions,” Journal of Engineering for Gas Turbines and Power, 141(9), 2019. DOI: 10.1115/1.4043397
[3] Probst, D., Wijeyakulasuriya, S., Pomraning, E., Kodavasal, J., Scarcelli, R., and Som, S., “Predicting Cycle-to-Cycle Variation With Concurrent Cycles In A Gasoline Direct Injected Engine With Large Eddy Simulations”, Journal of Energy Resources Technology, 142(4), 2020. DOI: 10.1115/1.4044766
Renewable energy is being generated at unprecedented levels in the United States, and those levels will only continue to rise. The growth in renewable energy has been driven largely by wind power—over the last decade, wind energy generation in the U.S. has increased by 400% [1]. It’s easy to see why wind power is appealing. It’s sustainable, cost-effective, and offers the opportunity for domestic energy production. But, like all energy sources, wind power doesn’t come without drawbacks. Concerns have been raised about land use, noise, consequences to wildlife habitats, and the aesthetic impact of wind turbines on the landscape [2].
However, there is a potential solution to many of these issues: what if you move wind turbines offshore? In addition to mitigating concerns over land use, noise, and visual impact, offshore wind turbines offer several other advantages. Compared to onshore, wind speeds offshore tend to be higher and steadier, leading to large gains in energy production. Also, in the U.S., a large portion of the population lives near the coasts or in the Great Lakes region, which minimizes problems associated with transporting wind-generated electricity. But despite these advantages, only 0.03% of the U.S. wind-generating capacity in 2018 came from offshore wind plants [1]. So why hasn’t offshore wind energy become more prevalent? Well, one of the major challenges with offshore wind energy is a problem of engineering—wind turbine support structures must be designed to withstand the significant wind and wave loads offshore.
Today, there are computational tools that engineers can use to help design optimized support structures for offshore wind turbines. Namely, computational fluid dynamics (CFD) simulations can offer valuable insight into the interaction between waves and the wind turbine support structures.
Hannah Johlas is an NSF Graduate Research Fellow in Dr. David Schmidt’s lab at the University of Massachusetts Amherst. Hannah uses CFD to study fixed-bottom offshore wind turbines at shallow-to-intermediate water depths (up to approximately 50 meters deep). Turbines located at these depths are of particular interest because of a phenomenon called breaking waves. As waves move from deeper to shallower water, the wavelength decreases and the wave height increases in a process called shoaling. If a wave becomes steep enough, the crest can overturn and topple forward, creating a breaking wave. Breaking waves can impart substantial forces onto turbine support structures, so if you’re planning to build a wind turbine in shallower water, it’s important to know if that turbine might experience breaking waves.
Hannah uses CONVERGE CFD software to predict if waves are likely to break for ocean characteristics common to potential offshore wind turbine sites along the east coast of the U.S. She also predicts the forces from breaking waves slamming into the wind turbine support structures. The results of the CONVERGE simulations are then used to evaluate the accuracy of simplified engineering models to determine which models best capture wave behavior and wave forces and, thus, which ones should be used when designing wind turbines.
In this study, Hannah simulated 39 different wave trains in CONVERGE using a two-phase finite volume CFD model [3]. She leveraged the volume of fluid (VOF) method with the Piecewise Linear Interface Calculation (PLIC) scheme to capture the air-water interface. Additionally, automated meshing and Adaptive Mesh Refinement ensured accurate results while minimizing the time to set up and run the simulations.
“CONVERGE’s adaptive meshing helps simulate fluid interfaces at reduced computational cost,” Hannah says. “This feature is particularly useful for resolving the complex air-water interface in breaking wave simulations.”
Some of the breaking waves were then simulated slamming into monopiles, the large cylinders used as support structures for offshore wind turbines in shallow water. The results of these CONVERGE simulations were validated against experimental data before being used to evaluate the simplified engineering models.
Four common models for predicting whether a wave will break (McCowan, Miche, Battjes, and Goda) were assessed. The models were evaluated by how frequently they produced false positives (i.e., the model predicts a wave should break, but the simulated wave does not break) and false negatives (i.e., the model predicts a wave should not break, but the simulated wave does break) and how well they predicted the steepness of the breaking waves. False positives are preferable to false negatives when designing a conservative support structure, since breaking wave loads are usually higher than those from non-breaking waves.
The study results indicate that none of the models perform well under all conditions; instead, the best choice depends on the ocean characteristics at the site you’re considering.
“For sites with low seafloor slopes, the Goda model is the best at conservatively predicting whether a given wave will break,” Hannah says. “For higher seafloor slopes, the Battjes model is preferred.”
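For orientation, the simplest of these breaking criteria reduce to one-line formulas. The sketch below implements textbook forms of the McCowan (depth-limited) and Miche (steepness-limited) criteria; the exact variants evaluated in the study may differ, and the Battjes and Goda criteria additionally account for seafloor slope, which is why the preferred model changes with slope.

import math

def mccowan_limit(d):
    # McCowan: a wave breaks when its height exceeds roughly 0.78 times the depth
    return 0.78 * d

def miche_limit(d, L):
    # Miche: steepness-limited breaking height, H_b = 0.142 * L * tanh(2*pi*d/L)
    return 0.142 * L * math.tanh(2.0 * math.pi * d / L)

# Illustrative values: 20 m water depth, 160 m local wavelength
d, L = 20.0, 160.0
print(f"McCowan limit: {mccowan_limit(d):.1f} m, Miche limit: {miche_limit(d, L):.1f} m")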
Four slam force models were also evaluated: Goda, Campbell-Weynberg, Cointe-Armand, and Wienke-Oumeraci. The slam models and the simulated CFD wave forces were compared on peak total force, force time history, and breaking wave shape.
The results show that all four slam models are conservative (i.e., they predict higher peak forces than the simulated waves) and assume the worst-case shape for the breaking wave during impact. The Goda slam model is the least conservative, while the Cointe-Armand and Wienke-Oumeraci slam models are the most conservative. All four models neglect the effects of runup on the monopiles, which was present in the CFD simulations. This could explain some of the discrepancies between the forces predicted by the engineering models and the CFD simulations.
Offshore wind energy is a promising technology for clean energy production, but to gain traction in the industry, there needs to be sound engineering models to use when designing the turbines. Hannah’s research provides guidelines on which engineering models should be used for a given set of ocean characteristics. Her results also highlight the areas that could be improved upon.
“The slam force models don’t account for variety in wave shape at impact or for wave runup on the monopiles,” Hannah says. “Future studies should focus on incorporating these factors into the engineering models to improve their predictive capabilities.”
CFD has a fundamental role to play in the development of renewable energy. CONVERGE’s combination of autonomous meshing, high-fidelity physical models, and ability to easily handle complex, moving geometries make it particularly well suited to the task. Whether it’s studying the interaction of waves with offshore turbines, optimizing the design of onshore wind farms, or predicting wind loads on solar panels, CONVERGE has the tools you need to help bring about the next generation of energy production.
Interested in learning more about Hannah’s research? Check out her paper here.
[1] Marcy, C., “U.S. renewable electricity generation has doubled since 2008,” https://www.eia.gov/todayinenergy/detail.php?id=38752, accessed on Nov 11, 2016.
[2] Center for Sustainable Systems, University of Michigan, “U.S. Renewable Energy Factsheet”, http://css.umich.edu/factsheets/us-renewable-energy-factsheet, accessed on Nov 11, 2016.
[3] Johlas, H.M., Hallowell, S., Xie, S., Lomonaco, P., Lackner, M.A., Arwade, S.A., Myers, A.T., and Schmidt, D.P., “Modeling Breaking Waves for Fixed-Bottom Support Structures for Offshore Wind Turbines,” ASME 2018 1st International Offshore Wind Technical Conference, IOWTC2018-1095, San Francisco, CA, United States, Nov 4–7, 2018. DOI: 10.1115/IOWTC2018-1095
Across industries, manufacturers share many of the same goals: create quality products, boost productivity, and reduce expenses. In the pumps and compressors business, manufacturers must contend with the complexity of the machines themselves in order to reach these goals. Given the intricate geometries, moving components, and tight clearances between parts, designing pumps and compressors to be efficient and reliable is no trivial matter.
First, assessing the device’s performance by building and testing a prototype can be time-consuming and costly. And when you’re performing a design study, machining and switching out various components further compounds your expenses. There are also limitations in how many instruments you can place inside the device and where you can place them, which can make fully characterizing the machine difficult. New methods for testing and manufacturing can help streamline this process, but there remains room for alternative approaches.
Computational fluid dynamics (CFD) offers significant advantages for designing pumps and compressors. Through CFD simulations, you can obtain valuable insight into the behavior of the fluid inside your machine and the interactions between the fluid and solid components—and CONVERGE CFD software is well suited for the task.
Designed to model three-dimensional fluid flows in systems with complex geometries and moving boundaries, CONVERGE is equipped to simulate any positive displacement or dynamic pump or compressor. And with a suite of advanced models, CONVERGE allows you to computationally study the physical phenomena that affect efficiency and reliability—such as surge, pressure pulsations, cavitation, and vibration—to design an optimal machine.
CFD provides a unique opportunity to visualize the inner workings of your machine during operation, generating data on pressures, temperatures, velocities, and fluid properties without the limitations of physical measurements. The entire flow field can be analyzed with CFD, including areas that are difficult or impossible to measure experimentally. This additional data allows you to comprehensively characterize your pump or compressor and pinpoint areas for improvement.
Since CONVERGE leads the way in predictive CFD technology, you can analyze pump and compressor designs that have not yet been built and still be confident in your results. Compared to building and testing prototypes, simulations are fast and inexpensive, and altering a computer-modeled geometry is trivial. Iterating through designs virtually and building only the most promising candidates reduces the expenses associated with the design process.
While three-dimensional CFD is fast compared to experimental methods, it is typically slower than one- or two-dimensional analysis tools, which are often incorporated into the design process. However, 1D and 2D methods are inherently limited in their ability to capture the 3D nature of physical flows, and thus can miss important flow phenomena that may negatively affect performance.
CONVERGE drastically reduces the time required to set up a 3D pump or compressor simulation with its autonomous meshing capabilities. Creating a mesh by hand—which is standard practice in many CFD programs—can be a weeks-long process, particularly for cases with complex moving geometries such as pumps and compressors. With autonomous meshing, CONVERGE automatically generates an optimized Cartesian mesh based on a few simple user-defined parameters, effectively eliminating all user meshing time.
In addition, the increased computational resources available today can greatly reduce the time requirements to run CFD simulations. CONVERGE is specifically designed to enable highly parallel simulations to run on many processors and demonstrates excellent scaling on thousands of cores. Additionally, Convergent Science partners with cloud service providers, who offer affordable on-demand access to the latest computing resources, making it simple to speed up your simulations.
Accurately capturing real-world physical phenomena is critical to obtaining useful simulation results. CONVERGE features robust fluid-structure interaction (FSI) modeling capabilities. For example, you can simulate the interaction between the bulk flow and the valves to predict impact velocity, fatigue, and failure points. CONVERGE also features a conjugate heat transfer (CHT) model to resolve spatially varying surface temperature distributions, and a multi-phase model to study cavitation, oil splashing, and other free surface flows of interest.
CONVERGE has been validated on numerous types of compressors and pumps [1-10], and we will discuss two common applications below.
Scroll compressors are often used in air conditioning systems, and the major design goals for these machines today are reducing noise and improving efficiency. Scroll compressors consist of a stationary scroll and an orbiting scroll, which create a complex system that can be challenging to model. Some codes use a moving mesh to simulate moving boundaries, but this can introduce diffusive error that lowers the accuracy of your results. CONVERGE automatically generates a stationary mesh at each time-step to accommodate moving boundaries, which provides high numerical accuracy. In addition, CONVERGE employs a unique Cartesian cut-cell approach to perfectly represent your compressor geometry, no matter how complex.
In this study [1], CONVERGE was used to simulate a scroll compressor with a deforming reed valve. An FSI model was used to capture the motion of the discharge reed valve. Figure 1 shows the CFD-predicted mass flow rate through the scroll compressor compared to experimental values. As you can see, there is good agreement between the simulation and experiment.
This method is particularly useful for the optimization phase of design, as parametric changes to the geometry can be easily incorporated. In addition, Adaptive Mesh Refinement (AMR) allows you to accurately capture the physical phenomena of interest while maintaining a reasonable computational expense.
Next, we will look at a twin screw compressor. These compressors have two helical screws that rotate in opposite directions, and are frequently used in industrial, manufacturing, and refrigeration applications. A common challenge for designing screw compressors—and many other pumps and compressors—is the tight clearances between parts. Inevitably, there will be some leakage flow between chambers, which will affect the device’s performance.
CONVERGE offers several methods for capturing the fluid behavior in these small gaps. Using local mesh embedding and AMR, you can directly resolve the gaps. This method is highly accurate, but it can come with a high computational price tag. An alternative approach is to use one of CONVERGE’s gap models to account for the leakage flows without fully resolving the gaps. This method balances accuracy and time costs, so you can get the results you need when you need them.
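Gap models commonly approximate leakage as orifice-like flow driven by the pressure difference across the clearance. The sketch below uses the generic textbook orifice relation with made-up numbers; it is not CONVERGE's actual gap model.

import math

# Generic orifice-flow estimate of leakage through a clearance gap
# (illustrative only; not CONVERGE's gap model).
def leakage_mass_flow(cd, area, rho, delta_p):
    # cd: discharge coefficient, area: gap area [m^2],
    # rho: gas density [kg/m^3], delta_p: pressure difference [Pa]
    return cd * area * math.sqrt(2.0 * rho * delta_p)

# Made-up example: 50-micron gap along a 100 mm sealing line, 2 bar differential
gap_area = 50e-6 * 0.1
print(f"{leakage_mass_flow(0.7, gap_area, 2.4, 2.0e5):.2e} kg/s")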
Another factor that must be taken into account when designing screw compressors is thermal expansion. Heat transfer between the fluid and the solid walls means the clearances will vary down the length of the rotors. CONVERGE’s CHT model can capture the heat transfer between the solid and the fluid to account for this phenomenon.
This study [2] of a dry twin screw compressor employs a gap model to account for leakage flows, CHT modeling to capture heat transfer, and AMR to resolve large-scale flow structures. Mass flow rate, power, and discharge temperature were predicted with CONVERGE and compared to experimentally measured values. This study also investigated the effects of the base grid size on the accuracy of the results. In Figure 2, you can see there is good agreement between the experimental and simulated data, particularly for the most refined grid. The method used in this study provides accurate results in a turn-around time that is practical for engineering applications.
The benefits CONVERGE offers for designing pumps and compressors directly translate to a tangible competitive advantage. CFD benefits your business by reducing costs and enabling you to bring your product to market faster, and CONVERGE features tools to help you optimize your designs and produce high-quality products for your customers. To find out how CONVERGE can benefit you, contact us today!
[1] Rowinski, D., Pham, H.-D., and Brandt, T., “Modeling a Scroll Compressor Using a Cartesian Cut-Cell Based CFD Methodology with Automatic Adaptive Meshing,” 24th International Compressor Engineering Conference at Purdue, 1252, West Lafayette, IN, United States, Jul 9–12, 2018.
[2] Rowinski, D., Li, Y., and Bansal, K., “Investigations of Automatic Meshing in Modeling a Dry Twin Screw Compressor,” 24th International Compressor Engineering Conference at Purdue, 1528, West Lafayette, IN, United States, Jul 9–12, 2018.
[3] Rowinski, D., Sadique, J., Oliveira, S., and Real, M., “Modeling a Reciprocating Compressor Using a Two-Way Coupled Fluid and Solid Solver with Automatic Grid Generation and Adaptive Mesh Refinement,” 24th International Compressor Engineering Conference at Purdue, 1587, West Lafayette, IN, United States, Jul 9–12, 2018.
[4] Rowinski, D.H., Nikolov, A., and Brümmer, A., “Modeling a Dry Running Twin-Screw Expander using a Coupled Thermal-Fluid Solver with Automatic Mesh Generation,” 10th International Conference on Screw Machines, Dortmund, Germany, Sep 18–19, 2018.
[5] da Silva, L.R., Dutra, T., Deschamps, C.J., and Rodrigues, T.T., “A New Modeling Strategy to Simulate the Compression Cycle of Reciprocating Compressors,” IIR Conference on Compressors, 0226, Bratislava, Slovakia, Sep 6–8, 2017. DOI: 10.18462/iir.compr.2017.0226
[6] Willie, J., “Analytical and Numerical Prediction of the Flow and Performance in a Claw Vacuum Pump,” 10th International Conference on Screw Machines, Dortmund, Germany, Sep 18–19, 2018. DOI: 10.1088/1757-899X/425/1/012026
[7] Jhun, C., Siedlecki, C., Xu, L., Lukic, B., Newswanger, R., Yeager, E., Reibson, J., Cysyk, J., Weiss, W., and Rosenberg, G., “Stress and Exposure Time on Von Willebrand Factor Degradation,” Artificial Organs, 2018. DOI: 10.1111/aor.13323
[8] Rowinski, D.H., “New Applications in Multi-Phase Flow Modeling With CONVERGE: Gerotor Pumps, Fuel Tank Sloshing, and Gear Churning,” 2018 CONVERGE User Conference–Europe, Bologna, Italy, Mar 19–23, 2018. https://api.convergecfd.com/wp-content/uploads/David-Rowinski_Multiphase-Modeling-Gearbox-Power-Losses-Oil-Pump-Cavitation-and-Fuel-Tank-Sloshing.pdf
[9] Willie, J., “Simulation and Optimization of Flow Inside Claw Vacuum Pumps,” 2018 CONVERGE User Conference–Europe, Bologna, Italy, Mar 19–23, 2018. https://api.convergecfd.com/wp-content/uploads/james-willie-simulation-and-optimization-of-flow-inside-claw-vacuum-pumps.pdf
[10] Scheib, C.M., Newswanger, R.K., Cysyk, J.P., Reibson, J.D., Lukic, B., Doxtater, B., Yeager, E., Leibich, P., Bletcher, K., Siedlecki, C.A., Weiss, W.J., Rosenberg, G., and Jhun, C., “LVAD Redesign: Pump Variation for Minimizing Thrombus Susceptibility Potential,” ASAIO 65th Annual Conference, San Francisco, CA, United States, Jun 26–29, 2019.
In a competitive market, predictive computational fluid dynamics (CFD) can give you an edge when it comes to product design and development. Not only can you predict problem areas in your product before manufacturing, but you can also optimize your design computationally and devote fewer resources to testing physical models. To get accurate predictions in CFD, you need to have high-resolution grid-convergent meshes, detailed physical models, high-order numerics, and robust chemistry—all of which are computationally expensive. Using simulation to expedite product design works only if you can run your simulations in a reasonable amount of time.
The introduction of high-performance computing (HPC) drastically furthered our ability to obtain accurate results in shorter periods of time. By running simulations in parallel on multiple cores, we can now solve cases with millions of cells and complicated physics that otherwise would have taken a prohibitively long time to complete.
However, simply running cases on more cores doesn’t necessarily lead to a significant speedup. The speedup from HPC is only as good as your code’s parallelization algorithm. Hence, to get a faster turnaround on product development, we need to improve our parallelization algorithm.
Breaking a problem into parts and solving these parts simultaneously on multiple interlinked processors is known as parallelization. An ideally parallelized problem will scale inversely with the number of cores—twice the number of cores, half the runtime.
A common task in HPC is measuring the scalability, also referred to as scaling efficiency, of an application. Scalability is the study of how the simulation runtime is affected by changing the number of cores or processors. The scaling trend can be visualized by plotting the speedup against the number of cores.
In CONVERGE versions 2.4 and earlier, parallelization is performed by partitioning the solution domain into parallel blocks, which are coarser than the base grid. CONVERGE distributes the blocks to the interlinked processors and then performs a load balance. Load balancing redistributes these parallel blocks such that each processor is assigned roughly the same number of cells.
This parallel-block technique works well unless a simulation contains high levels of embedding (regions in which the base grid is refined to a finer mesh) in the calculation domain. These cases lead to poor parallelization because the cells of a single parallel block cannot be split between multiple processors.
Figure 1 shows an example of parallel block load balancing for a test case in CONVERGE 2.4. The colors of the contour represent the cells owned by each processor. As you can see, the highly embedded region at the center is covered by only a few blocks, leading to a disproportionately high number of cells in those blocks. As a result, the cell distribution across processors is skewed. This phenomenon imposes a practical limit on the number of levels of embedding you can have in earlier versions of CONVERGE while still maintaining a reasonable load balance.
In CONVERGE 3.0, instead of generating parallel blocks, parallelization is accomplished via cell-based load balancing, i.e., on a cell-by-cell basis. Because each cell can belong to any processor, there is much more flexibility in how the cells are distributed, and we no longer need to worry about our embedding levels.
Figure 2 shows the cell distribution among processors using cell-based load balancing in CONVERGE 3.0 for the same test case shown in Figure 1. You can see that without the restrictions of the parallel blocks, the cells in the highly embedded region are divided between many processors, ensuring an (approximately) equal distribution of cells.
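A back-of-the-envelope example shows why this matters (the cell counts below are made up): when whole blocks are indivisible, a single heavily embedded block caps how evenly the work can be spread, whereas per-cell assignment can always approach the ideal share.

# Made-up cell counts: four parallel blocks, one covering a highly embedded region.
blocks = [1_000, 1_000, 50_000, 1_000]
n_procs = 4
ideal = sum(blocks) / n_procs     # ideal share: 13,250 cells per processor

# Block-based balancing: a block cannot be split across processors, so the
# busiest processor carries at least the largest block.
worst = max(blocks)

print(f"ideal load: {ideal:.0f} cells/processor")
print(f"block-based worst case: {worst} cells ({worst / ideal:.1f}x the ideal)")
# Cell-based balancing can hand every processor (approximately) the ideal share.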
The cell-based load balancing technique demonstrates significant improvements in scaling, even for large numbers of cores. And unlike previous versions, the load balancing itself in CONVERGE 3.0 is performed in parallel, accelerating the simulation start-up.
In order to see how well the cell-based parallelization works, we have performed strong scaling studies for a number of cases. The term strong scaling means that we ran the exact same simulation (i.e., we kept the number of cells, setup parameters, etc. constant) on different core counts.
Figure 3 shows scaling results for a typical SI8 port fuel injection (PFI) engine case in CONVERGE 3.0. The case was run for one full engine cycle, and the core count varied from 56 to 448. The plot compares the speedup obtained running the case in CONVERGE 3.0 with the ideal speedup. With enough CPU resources, in this case 448 cores, you can simulate one engine cycle with detailed chemistry in under two hours—which is three times faster than CONVERGE 2.4!
Cores | Time (h) | Speedup | Efficiency | Cells per core | Engine cycles per day
------|----------|---------|------------|----------------|----------------------
56    | 11.51    | 1.00    | 100%       | 12,500         | 2.1
112   | 5.75     | 2.00    | 100%       | 6,200          | 4.2
224   | 3.08     | 3.74    | 93%        | 3,100          | 7.8
448   | 1.91     | 6.03    | 75%        | 1,600          | 12.5
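The derived columns in the table above follow directly from the measured runtimes; here is the arithmetic as a short sketch (small rounding differences aside):

# Reproducing the derived columns from the measured SI8 PFI runtimes above.
runtimes = {56: 11.51, 112: 5.75, 224: 3.08, 448: 1.91}   # cores -> hours
base_cores, base_time = 56, runtimes[56]

for cores, hours in runtimes.items():
    speedup = base_time / hours                  # relative to the 56-core run
    efficiency = speedup / (cores / base_cores)  # fraction of ideal speedup
    cycles_per_day = 24.0 / hours
    print(f"{cores:4d} cores: speedup {speedup:.2f}, "
          f"efficiency {efficiency:.0%}, {cycles_per_day:.1f} cycles/day")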
If the speedup of the SI8 PFI engine simulation impressed you, then just wait until you see the scaling study for the Sandia Flame D case! Figure 4 shows the results of a strong scaling study performed for the Sandia Flame D case, in which we simulated a methane flame jet using 170 million cells. The case was run on the Blue Waters supercomputer at the National Center for Supercomputing Applications (NCSA), and the core counts varied from 500 to 8,000. CONVERGE 3.0 demonstrates impressive near-linear scaling even on thousands of cores.
Although earlier versions of CONVERGE show good runtime improvements with increasing core counts, speedup is limited for cases with significant local embeddings. CONVERGE 3.0 has been specifically developed to run efficiently on modern hardware configurations that have a high number of cores per node.
With CONVERGE 3.0, we have observed an increase in speedup in simulations with as few as approximately 1,500 cells per core. With its improved scaling efficiency, this new version empowers you to obtain simulation results quickly, even for massive cases, so you can reduce the time it takes to bring your product to market.
Contact us to learn how you can accelerate your simulations with CONVERGE 3.0.
[1] The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. The NCSA Industry Program is the largest Industrial HPC outreach in the world, and it has been advancing one third of the Fortune 50® for more than 30 years by bringing industry, researchers, and students together to solve grand computational problems at rapid speed and scale. The CONVERGE simulations were run on NCSA’s Blue Waters supercomputer, which is one of the fastest supercomputers on a university campus. Blue Waters is supported by the National Science Foundation through awards ACI-0725070 and ACI-1238993.
Graphcore has used a range of technologies from Mentor, a Siemens business, to successfully design and verify its latest M2000 platform based on the Graphcore Colossus™ GC200 Intelligence Processing Unit (IPU) processor.
Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, Simcenter FLOEFD helps users create thermal models of electronics packages easily and quickly.
The same release series adds several further capabilities. Users can add a component into a direct current (DC) electro-thermal calculation by specifying the component’s electrical resistance; the corresponding Joule heat is calculated and applied to the body as a heat source. A new battery model extraction capability extracts Equivalent Circuit Model (ECM) input parameters from experimental data, so you can get to the required inputs faster and more easily. And users can now create a compact Reduced Order Model (ROM) that solves at a faster rate while still maintaining a high level of accuracy.
High semiconductor temperatures may lead to component degradation and ultimately failure. Proper semiconductor thermal management is key to design safety and reliability, especially in mission-critical applications.
This webinar welcomes Dr. Scott Imlay, Tecplot CTO. He will discuss his research on visualization of higher-order CFD results. CFD code developers are adopting higher-order finite-element CFD methods due to their potential to reduce computation cost while maintaining accuracy. These techniques have been an area of research for many years and are becoming more widely available in popular CFD codes. The webinar content is based on Scott’s technical presentation at AIAA SciTech 2021. Note that this is preliminary research on adding higher-order element visualization to Tecplot 360.
We are looking for partners to try our prototype add-on or to provide data for testing. If you are interested, please contact us! The best way is through our website contact form, or email scottimlay@tecplot.com.
This is early research, and we haven’t made definite plans for higher-order element (HOE) support in the Tecplot file formats. We’re initially targeting reading the CGNS format because it has an HOE specification.
We are implementing this now as a Tecplot 360 add-on. The add-on is functional for showing the isosurfaces and the surface data. We are looking for research partners to collaborate with because we do believe that the industry will need HOE. You can contact me at scottimlay@tecplot.com.
To rephrase your question: is our assumption that we allow curved elements of the same order as the basis functions for position (which would be iso-parametric)? The answer is yes, but that isn’t the long-term solution. Our goal is not to require iso-parametric elements. We know that some people use linear geometry with higher-order basis functions for the solution, and sometimes higher-order basis functions for the geometry than for the solution. It can go either way, and we would like to support both. In the add-on we do use iso-parametric elements; if you want a linear element in this case, place the edge-center nodes at locations that make the element linear.
I’m not super familiar with how GMSH does it, but my understanding is that GMSH also uses a subdivision technique, and it does not do it selectively for the isosurfaces. I’ll leave it there because I don’t want to claim I know more about GMSH than I do. Here is a paper about GMSH.
It is a very good idea. We’ve thought about it, but we haven’t gone further than thinking about it. It is true that if you were to use a Bézier representation, the coefficients wouldn’t be nodal values anymore; they would be points in space used to adjust the shape of the curve, as with a B-spline’s control points. The minimum and the maximum are bounded by those points, so that way we could guarantee it.
The customers we have talked to are not using those as their basis functions, so we would have to find a way to convert from more common basis functions first. If any of you are using those Bézier (or Bernstein polynomial) basis functions for not just the geometry, but for your solution data, I would really like to hear from you and learn more about it. Please contact me at scottimlay@tecplot.com.
The add-on does not currently change the interpolations within Tecplot 360. If you were to interpolate to another grid, it wouldn’t take advantage of that at this time, but in the long run we certainly intend to do that.
In the future, the underlying basis functions would be utilized exactly. And your interpolation to any new nodes would be based on the higher-order basis functions. In terms of fidelity, that would mean it has the same fidelity as the higher-order solution.
The second part of your question is about cost. The cost per cell is going to be higher because, with nonlinear geometry, the interpolation requires solving a nonlinear system of equations. But a higher-order mesh generally has far fewer elements than a linear mesh, so it’s quite possible that the interpolation would be cheaper overall than going from a linear mesh to your new mesh.
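As a one-dimensional toy of that nonlinear solve (illustrative only, not Tecplot's implementation), consider inverting a quadratic element's mapping x(xi) to recover the reference coordinate xi for a given physical point; even this simple case calls for Newton iteration:

# 1D toy: find the reference coordinate xi with x(xi) = x_target for a
# quadratic Lagrange element with nodes at xi = -1, 0, 1.
def invert_mapping(x_target, x_nodes, tol=1e-12):
    x0, x1, x2 = x_nodes
    xi = 0.0                                   # initial guess at the element center
    for _ in range(50):
        # quadratic Lagrange shape functions and their derivatives
        N  = (0.5 * xi * (xi - 1), 1 - xi * xi, 0.5 * xi * (xi + 1))
        dN = (xi - 0.5, -2 * xi, xi + 0.5)
        f = N[0] * x0 + N[1] * x1 + N[2] * x2 - x_target
        if abs(f) < tol:
            break
        df = dN[0] * x0 + dN[1] * x1 + dN[2] * x2
        xi -= f / df                           # Newton update
    return xi

# Curved element: the mid-node is not at the geometric midpoint
print(invert_mapping(0.6, (0.0, 0.55, 1.0)))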
We currently support only the CGNS format with quadratic Lagrangian elements. If your data is converted into CGNS, for instance, the file will specify an element type for each of the zones, and when we read the CGNS file, that’s how we know. The basis functions are effectively defined by the file format and element type.
The post Webinar: Visualizing Isosurfaces of Higher Order Elements appeared first on Tecplot.
Welcome back for another installment of our blog series where we talk about ways to improve your visual communication in plots & presentations. Our first post examined the importance of consistency in your plots. Today we’re going to discuss how to tailor your plots and presentations based on your audience. Creating a presentation that delves just deep enough to give your audience context and confidence in your conclusions and recommendations is the key to keeping your audience engaged but not overwhelmed.
When you spend days, weeks, or even months creating a thorough test or simulation, it can be tempting to showcase every aspect of your work. But depending on who you are presenting to, that approach may not work. There are endless ways to categorize and describe different audiences – and each one is unique to an extent. For the purposes of this blog we’ll explore three very broad categories: technical, generally technical, and non-technical audiences.
Each audience has different goals, background knowledge, and interest in the material. Even when discussing the exact same project or research you may wish to present different plots, takeaways, and recommendations. Let’s explore some examples of what this might look like.
When your audience is predominantly folks who have an equal or greater knowledge of your discipline – it is worth taking the time to make sure they believe your results. To put it simply – technical audiences are interested in understanding the “how” for a set of analyses. For a simulation or test engineer this may take the form of presenting low-level details of how the simulation or test was set up, what assumptions were made, and any possible sources of error.
In the world of CFD one might communicate the “how” by including an explanation of your solver settings (limiters, turbulence models, etc.), plots of any computational mesh sensitivity studies that were performed, and a graph of your force & moment residuals to highlight how well your solver converged. After you have shown that your simulation or experiment followed best practices, you can dive into the relevant results. Another great way to ensure your audience has confidence in your simulation results is to show a comparison to experimental data for a particular case, like in the example below:
When presenting simulation results it can be useful to present alongside empirical data, when available. You might not have test data for every point of interest – but showing agreement to experiment at a few key control points can give your audience greater confidence in your results. The image above shows that the chordwise pressure coefficient distributions for the simulation closely match the measurements from experiment at multiple spanwise locations.
“Generally technical” is a very vague definition – so what this audience looks like will vary widely depending on your role and the other disciplines that you interface with. For purposes of this blog post though – we’ll assume that the data you want to present from a simulation or test has implications for one or more technical groups that are working on the same project. If the technical folks wanted to know “how”, the generally technical folks want to know “What were your results?”
If we look at the development of a gas turbine engine as an example – a CFD analysis by the turbine aero team might be important to the heat transfer team, and BOTH the CFD analysis and the thermal analysis might be important to the structures team for their finite element analysis. To take things a step further – the results of the finite element analysis may be very important to the service engineering department. As you are presenting your findings to adjacent teams you will want to avoid diving too deep into the nuances of your discipline and instead focus on presenting the assumptions & the results that are relevant to downstream activities. Look below for an example.
The plot above shows the Cartesian forces along the span of a trapezoidal wing. A loads specialist might use a similar plot to communicate to downstream engineers, such as those in the structures group. It also provides the integrated quantities of interest without going into too much detail on how the values were computed or validated.
For non-technical audiences it’s not about the detailed data or your assumptions – it’s about how the project, program, or business will be affected, usually in terms of cost or schedule, by what you’ve discovered. A non-technical audience may also be interested in the results of your study as it pertains to a future state projection or desired outcome. Non-technical audiences are generally less interested in the “how” or “what”, but instead in the “why” or “so-what” (why does this matter?).
As the engineer or scientist, it is perfectly acceptable, expected even, for you to communicate some technical data in your presentation – but keep things high level and avoid using too much jargon or trade-specific symbols & abbreviations. Were you on a project to reduce the weight of a component or system? Communicate what your results say about the weight reduction efforts in terms of performance to goal. Did you contribute to a preliminary design study by performing CFD analyses on the design candidates? Consider showing a Pareto diagram that highlights the design point most likely to satisfy the customer requirements. You can always keep more detailed plots in your backup slides to address any specific questions.
The image below serves as an interesting example of how to use technical plots in a way that is meaningful to a non-technical audience.
The image above shows a contour plot of ice-thickness data for a glacier. Perhaps, if juxtaposed with a plot of past measurements or future predictions, this technical plot would serve as a valuable illustration of the dire impact of climate change on glacial melt. In the context of a broader presentation about climate change this could be a powerful way to communicate the “so-what” to a non-technical audience (i.e., “So, if climate change is not reversed, we will lose glaciers, a vital part of the ecosystem, within X number of years”).
At the end of the day, nobody is going to be able to understand your audience better than you. If you have the opportunity, reach out to members of your audience before and after your presentation to learn about what they are expecting to see, and get feedback on how well they felt the important information was communicated. Take note of any questions you are asked at the end of your presentation; they may help you to better prepare for the next time around.
Learning great presentation skills – both in the building of the plots and slides, and in the live presentation itself, is a life-long process that can always be improved. If you take the time to understand what data and visualizations will be most interesting to your audience, you will reap the benefits by becoming a more effective engineer. Stay tuned for future blog posts in this series on effective visual communication to learn more ways to improve.
The post Know Your Audience: Visual Communication appeared first on Tecplot.
This post is the first in a series about running Tecplot 360 on Amazon Web Services (AWS) cloud compute resources. As high-performance computing (HPC) workloads and computational fluid dynamics (CFD) computations increase in size, the need for resources grows. Use of cloud compute resources is a natural path to satisfying this growing demand. Fortunately, you aren’t limited to running your simulations in the cloud; you can do your postprocessing there as well.
The goal of this article is to help you attain a level of comfort working in the cloud, with emphasis on ensuring your experience is at least as good as on a local machine: installing the needed software quickly and tailoring RAM, SSD, CPU and GPU resources to your needs.
The setup showcased in this guide consists of a small AWS compute instance functioning as license server, a larger AWS compute instance for running Tecplot 360, and a GUI accessible via NICE DCV.
Astrid Walle, the author of this article, is a mechanical engineer with a PhD in CFD and more than a decade of experience in applied fluid mechanics. She has held several positions in gas turbine aeromechanics, R&D, and AI development at Siemens Energy, Vattenfall SE, and Rolls-Royce. As a recognized industry expert she has recently taken on the challenge of starting her own business, CFD Solutions. As a freelancer, Astrid is following her professional determination to combine numerical simulation and data analytics.
Before you can start, you need to create a user account on AWS. Your account will come with 12 months of free tier access, which you can also use for your postprocessing jobs. Also, you need to install and make yourself familiar with the AWS CLI on your local machine. Additionally, you might want to:
This how-to guide requires a Tecplot 360 network license (single or multi-facility), as network licenses are required when running on virtual machines. We will get the license server information needed for issuing the license file later in this guide.
To access your AWS virtual machines later on without having to enter credentials, you need to create an SSH key pair on your local machine. On Linux/Mac this works with ssh-keygen. On Windows you can use a client like PuTTY. More information can be found in the Tecplot 360 User’s Manual and in the AWS documentation.
Once you’ve created your public and private keys, following the steps in the Tecplot 360 User’s Manual or AWS documentation, you’re ready to import the SSH key pair to AWS.
Now go to the AWS Console (Figure 1), check the settings for user account (1) and region (2), and select Services (3) -> EC2 -> Network & Security (4) -> Key Pairs (5) -> Import Key Pair (6).
Figure 1: AWS Console EC2 – Network & Security – Key pairs
Figure 2: AWS console Import key pair
In the new dialogue (Figure 2), give the key pair a unique, recognizable name (1), paste the content of the public key file you created on your local machine (default path .ssh/id_rsa.pub or C:\Users\<username>\.ssh\id_rsa.pub), and import it (2).
To run Tecplot 360 on AWS you first need to create a small compute instance that will act as your license server. This does not need much power and can even take advantage of the AWS free tier of compute instances.
To set up the compute instance for your license server (Figure 3), go to Services (1), select EC2, and then Launch Instance (2). If you need to start instances frequently, you can of course script the procedure, but for clarity I will describe the manual process here.
Figure 3: AWS Console EC2 Launch Instance
First, select the AMI you want installed on your instance (Figure 4). There are many AMIs available; you can search for them (1) and then select the desired one (2). Keep in mind that some license managers have strict OS requirements, so check that upfront. For the Tecplot 360 license manager, Ubuntu is fine.
Figure 4: AWS Console AMI Selection
In the next step, you need to select the instance type. Here you can find a table listing the specifications and prices. As the license manager does not need much compute power and because we want to make use of the free tier, select the t2.micro instance (1).
Figure 5: AWS Console Instance Type
In the next step for configuring the instance details (EC2), stick to the default settings.
Figure 6: AWS Console Configure Instance Details
For the storage selection (Figure 7), 8GB (1) is sufficient for our needs. You will just need to select encryption (2) to increase data security.
Figure 7: AWS Console Add Storage
Adding tags (Figure 8) is recommended for any cloud project because these tags are searchable and can help a lot with the bookkeeping.
Figure 8: AWS Console Add Tags
In the last step (Figure 9) we create a new security group to which your instance is added. This is for the ssh access from your local machine. Later on, we will also add another security group to ensure communication between the license server and the instance on which Tecplot 360 will run.
Figure 9: AWS Console Configure Security Group
Figure 10: AWS Console Select Key Pair
When launching the instance, you will be asked for a key pair (Figure 10). Here you select the key pair generated earlier (1).
Now you are done with the setup! In the EC2 Instances view (Figure 11) you can see the details for your license server. To connect to the VM you will need the Public DNS (1).
Figure 11: AWS Console EC2 Instances Overview
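As mentioned above, the launch can also be scripted. Here is a minimal sketch with the AWS CLI; the AMI ID, key pair name, and security group ID are placeholders for the values chosen in the walkthrough:
> aws ec2 run-instances --image-id <AMI ID> --instance-type t2.micro --key-name <key pair name> --security-group-ids <SG ID> --count 1 --tag-specifications 'ResourceType=instance,Tags=[{Key=project,Value=Tecplot}]'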
Now that your license server instance is running, you can install and run the Tecplot 360 license server.
First, connect to the VM via ssh. Depending on the selected AMI, the username varies. In our case it is ubuntu. Launch a command prompt and ssh to the machine.
> ssh ubuntu@ec2-<public DNS>
Now you need to get the installation files for the license manager. You can download them directly:
> wget https://download.tecplot.com/rlm/12.4/rlm12.4_linux64.sh
Once downloaded, switch to the root user, change into the download directory, make the installer executable, and run it.
> sudo -i
> cd /home/ubuntu
> chmod ug+x rlm12.4_linux64.sh
> ./rlm12.4_linux64.sh
After the RLM installation, the installation directory will contain a file named myhostids.txt.
This file contains all the information required for issuing the network license file. Send the contents of myhostids.txt to support@tecplot.com and they will generate a license file for you. Once you receive it, copy it into the RLM directory.
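If the license file arrived on your local machine, one way to copy it up is scp (the license file name is illustrative, and <RLM directory> stands in for wherever you installed the RLM):
> scp tecplot.lic ubuntu@<public DNS>:<RLM directory>
Then, on the license server, start the RLM: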
> ./rlm_process start
Check the logfile teclmd.log to ensure that the license manager is running correctly.
This guide focuses on simplicity and user experience, so the selected setup is based on AWS ParallelCluster. Not only does this service make it easy to deploy your compute fleet in the cloud, it also comes with NICE DCV – high performance remote desktop and application streaming – and that's what we want to take advantage of.
A good overview and workshop for getting started with ParallelCluster can be found here.
This requires a number of setup steps, described below.
First, we go back to the AWS console (Figure 12) to create a new security group that allows all inbound and outbound traffic from and to the security group itself. As this is not totally straightforward, the required settings are shown in Figure 12.
Figure 12: AWS Console Additional SG
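For reference, the same group can also be created from the command line; this is a sketch with placeholder names and IDs. The group references itself as the traffic source, and security groups allow all outbound traffic by default:
> aws ec2 create-security-group --group-name tecplot-internal --description "Tecplot internal traffic" --vpc-id <VPC ID>
> aws ec2 authorize-security-group-ingress --group-id <SG ID> --protocol -1 --source-group <SG ID>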
Then we add our license server EC2 instance to this security group. We also add the compute instances on which we will launch Tecplot 360 (Figure 13). By doing so, we ensure secure communication between our instances.
Figure 13: AWS Console Change SG
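From the command line, this assignment could look as follows; note that --groups replaces the instance's entire security group list, so the existing SSH group must be listed as well (IDs are placeholders):
> aws ec2 modify-instance-attribute --instance-id <instance ID> --groups <SSH SG ID> <new SG ID>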
A Cloud9 instance gives you a web-based console that can be used to configure your AWS ParallelCluster. To create this instance, use the AWS search to find Cloud9 (Figure 14), then click "Create environment" (Figure 15).
Figure 14: AWS Console All Services
Figure 15: Cloud9 Create Environment
To create an environment, we need to provide a name and description (not pictured). Apart from that, we can stick to the default settings (Figure 16). These include the t2.micro instance (1) again, which is free tier eligible; the instance will also be stopped automatically when idle for more than 30 minutes (2). Remember to add tags for this instance as well (3), something like "project=Tecplot".
Figure 16: AWS Console Settings for Cloud9 Instance
Figure 17: AWS Cloud9
Once you've confirmed creation of the Cloud9 instance, it will open the IDE in the browser (Figure 17). To configure our Cloud9 instance and prepare the launch of our ParallelCluster, we start by installing the AWS CLI and ParallelCluster and creating an SSH key pair:
> pip3 install awscli -U --user
> pip3 install aws-parallelcluster -U --user
> aws ec2 create-key-pair --key-name lab-2-your-key --query KeyMaterial --output text > ~/.ssh/id_rsa
> chmod 600 ~/.ssh/id_rsa
(If pip3 is not already installed on your Cloud9 instance, you can call python3 -m pip instead.)
To access a parallel cluster on AWS effectively requires three steps:
The first step in creating an AWS parallel cluster is creating a config file that defines how the cluster is to be set up. A step-by-step guide can be found here, and the documentation for all options here. There is also a sample file attached to this article, or available on GitHub, to get you started. A good starting point can be obtained by running:
> pcluster configure
pcluster configure will prompt for information such as Region, VPC, Subnet, Linux OS, and head/compute node instance type. Read on to understand these settings.
The important information about your Region, VPC, and Subnet, which needs to be set in the config file, can be looked up in the AWS Console in the description of your Cloud9 instance.
In the EC2 Instances overview (Figure 18) (1), you should ensure that your Cloud9 instance (2) is in the same VPC and subnet (4) as your license server. These are exactly the values you also need to provide for your parallel cluster and for the head node on which we will install Tecplot 360.
You can also get these values directly from the command line in the Cloud9 console using the following commands, as described here:
> IFACE=$(curl --silent http://169.254.169.254/latest/meta-data/network/interfaces/macs/)
> SUBNET_ID=$(curl --silent http://169.254.169.254/latest/meta-data/network/interfaces/macs/${IFACE}/subnet-id)
> VPC_ID=$(curl --silent http://169.254.169.254/latest/meta-data/network/interfaces/macs/${IFACE}/vpc-id)
> REGION=$(curl --silent http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
Once you’ve run these commands you can check the contents using the echo command:
> echo $IFACE
> echo $SUBNET_ID
> echo $VPC_ID
> echo $REGION
We also want to use the NICE DCV viewer later on to access a graphical desktop session, so we need to add a dcv_settings line to the cluster section and a corresponding dcv section to the config file:
dcv_settings = default

[dcv default]
enable = master
Depending on your priorities, pay special attention to the selection of the instance types for the head node and compute fleet. You can go for a lot of compute power, or you can stay free with t2.micro. Although t2.micro is not the recommended instance type for DCV Viewer, it does work under low compute load, so it's a good choice if you just want to give this a try at no cost. Here you can find more examples and explanation of using DCV Viewer with AWS instances. The default location for the config files is ~/.parallelcluster/.
Figure 18: AWS Console check Cloud9 Instance
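For orientation, here is a minimal sketch of what ~/.parallelcluster/config might look like with these settings in place. It assumes ParallelCluster version 2 syntax; the region, VPC, subnet, and key name are placeholders for the values gathered above:
[aws]
aws_region_name = <REGION>

[global]
cluster_template = default

[cluster default]
key_name = lab-2-your-key
base_os = ubuntu1804
master_instance_type = t2.micro
compute_instance_type = t2.micro
initial_queue_size = 0
vpc_settings = default
dcv_settings = default

[vpc default]
vpc_id = <VPC_ID>
master_subnet_id = <SUBNET_ID>

[dcv default]
enable = master
With t2.micro for both head node and compute fleet, the whole cluster stays free tier eligible, matching the low-cost option described above.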
So, with the ParallelCluster config file ready, we can start the head node as simply as:
> pcluster create my-cluster -c ~/.parallelcluster/config
The creation will take a few minutes; you can check its status with:
> pcluster status my-cluster
Figure 19: Head node access via Nice DCV
When the creation is complete, you can connect to the head node via ssh:
> pcluster ssh my-cluster
But we want to connect via NICE DCV to be able to access the graphical desktop session (don’t forget to enable DCV in ~/.parallelcluster/config).
> pcluster dcv connect my-cluster
This command will either open a browser window on your machine or display a URL in the terminal, which you can copy and paste into your browser (Figure 19).
You can further improve the user experience, particularly the handling of keyboard, mouse, and shortcut input, by using the NICE DCV native client, which is available for Linux, Windows, and Mac and connects with the same URL.
Now we can start the Tecplot 360 installation just as if we were on a local machine with a clean OS, but with the AWS CLI and the right credentials already installed to access your AWS services, e.g., S3.
Since this is a clean OS, you might want to start by installing a browser to download the Tecplot 360 installation files; then perform the installation as described in the Tecplot 360 Installation Guide. After starting Tecplot 360, provide the hostname of your license server, which can be found in myhostids.txt (Figure 20) or by running the hostname command on the license server VM (Figure 21). And now you are ready to go!
Figure 20: Tecplot 360 licensing window
Figure 21: license server VM hostname
If you have all your postprocessing scripted and don't need a GUI, stay tuned for the next part, where we will discuss that possibility. Besides other possible setups, we will look at best practices for uploading and downloading your installation files, simulation data, and postprocessing results, and discuss how the entire process can be designed to save cost and time.
The post Postprocessing on AWS – Part 1 appeared first on Tecplot.
This video will take you step-by-step through using the Tecplot 360 Excel Add-in.
Get the Excel data used in the video, and download the slide deck.
You didn’t show how to plot the temperature with a different number of points? Was it because they were separated? (23:10)
They would show up as separate zones. Therefore, we would need to create a new linemap to reference the zone with temperature.
When you save the layout, where is the source data stored? (24:12)
When you save a layout, you also need to save the data as a *.dat or *.plt data file alongside the *.lay file. Alternatively, you could save a "packaged layout" (*.lpk); LPK files save the data together with the layout. See this video on Tecplot file types.
Is there a way to write month as names instead of numbers? (11:54)
Yes, it is possible to load custom labels within Tecplot 360. Have a look at CustomLabels.plt and CustomLabels.lay in the installation /examples/SimpleData folder.
Is there any way to show 5 variables with the same x axis range but different y axis range in only one graph? (29:03)
Yes! You will need to define a different Y-axis. Tecplot supports up to 5 Y-axes! This can get visually complex but coloring the axis lines helps.
On Tecplot contour plots, the legend does not extend to the actual min/max of the data. Is there a way to annotate the actual min/max from the dataset next to the legend? (29:54)
Yes. There are two ways to do this. In the first, use Dynamic Text (check out the Tecplot 360 User Manual). The second way is to reset the contour levels to the actual min/max of the data.
I have a problem in importing streamline contours from COMSOL.
From my understanding the best way to import COMSOL data is using the VTU format. I’m not sure about streamlines specifically. Go ahead and email support@tecplot.com and we can help you more directly there.
Can you select part of a worksheet to Send to Tecplot, or does the whole sheet go at once?
Yes, you can select part of a worksheet. Additional sheets will show up as multiple regions.
The post Tecplot 360 Excel Add-in appeared first on Tecplot.
Tecplot 360 2020 R2 adds multi-threading for variable calculations: all functions listed under the Analyze > Calculate Variables dialog are now fully multi-threaded. In previous versions, multi-threading was used only when there were multiple zones; it is now used within a zone as well. In the video example, the computation is 8 times faster than in earlier versions of Tecplot 360. This test was done on an 8-core Windows machine computing Q criterion on a dataset of 8.6 million polyhedral cells.
I’ve loaded the dataset, calculated Q criterion, and generated an isosurface. In Tecplot 360 2020 R2 you can see that all 8 cores are used for the computation. In Tecplot 360 2020 R1 only one CPU core is effectively being used. The full computation takes 284 seconds in 2020 R1, and by the time I finish this sentence the computation in 2020 R2 will already be done. And it took only 36 seconds.
The next calculation was tested on a 32-core Windows machine; it was over 11 times faster in Tecplot 360 2020 R2 than in the previous release. The improvement was not as large for other datasets with multiple zones, because Tecplot 360 2020 R1 already multi-threads across zones. But multi-threading within a zone still results in a faster computation: 1.3 times faster for the OpenFOAM dataset, and over two times faster for the PLOT3D data.
The post Multi-threaded Variable Calculations appeared first on Tecplot.
It may come as a shock to some of you, but here at Tecplot we have a thing for well-made plots and the presentations that they occupy. There is something eminently satisfying about looking at a set of data that has been thoughtfully reduced into the relevant facts necessary to enable sound engineering judgment. It is in service of that ideal that we have decided to put together a short blog series on how to make plots & presentations that tell a clear, concise, and convincing story.
Our goal isn't to present anything profound – only to share practical reminders of the importance of effective communication in engineering. An expert engineer doesn't simply need to understand the science of their discipline – they also need to know how to convey the relevant facts to their colleagues and stimulate productive discussion.
In this first post we’ll tackle something quite tangible – consistency. Consistency is important for comparisons between datasets (or between regions of the same dataset) because it enables the audience to identify significant differences more easily. If your audience is presented with plots that convey similar datasets but use varying format, scale, color, orientation, etc., it distracts them and takes the focus away from what really matters – the story behind the data.
The image at right is an example of two pressure coefficient distribution plots at discrete spanwise locations; we’ll dive into a few of the ways this plot uses consistency to enhance its readability.
How do you decide what to keep consistent and what to vary? That will depend on the data you are presenting, the type of plot you’re using, and the differences you wish to highlight.
In this example, the two Cp plots at different spanwise locations demonstrate the relative position of the pressure change due to the lambda shock structure that is characteristic of the Onera M6 wing.
Line plots are critical for deriving actionable insights from most engineering analyses, but they aren’t the only types of plots that can be useful. The 2D contour plot at right is a great example of how consistency can be used to compare changes in a dataset over time.
This 2D contour plot compares the pressure of fluid flow around a rotating cylinder at two different time steps. Also included is a single streamline to demonstrate the change in the vortex shedding over time.
We have kept our axis labels, markers, and annotations consistent for easy readability. In addition, we have kept the contour levels, colormap, and streamline seed location the same between the two time steps.
If the contour levels or the streamline location varied between the two plots, you could easily make some inaccurate assumptions about the fluid flow.
Our last example is a 3D contour plot using the Onera M6 wing. The image below is similar to the 2D contour plot in that we’ve kept our contour levels, colormap, and labels consistent between frames. However, when comparing 3D plots it’s important to consider how the 3D perspective can affect your ability to make an unbiased assessment.
The image shows a fixed perspective, which includes the pan, zoom, and rotation settings. You’ll notice that because the volume slices are at different locations the wing appears to move between frames. But the wing is fixed in place. With this approach, it is immediately clear that the plot is not comparing flow states at the same slice location for two different solutions.
Ultimately there are few hard and fast rules when it comes to formatting plots. And it is certainly worth taking the time to ensure that the critical insights from your analyses are not obscured by inconsistency across your plots. Consistency has the most impact when comparing two plots side by side. But don’t underestimate the value of maintaining consistency throughout your presentations – and even between different presentations. Consistency will establish a presentation style that your audience will have an easier time digesting.
Consistency is one tactic to making your plots and your presentations easier to understand. We will explore more plotting tips and tricks in future posts.
Read Blog #2: Know Your Audience
The post Consistency is Key: Visual Communication appeared first on Tecplot.
Last week, Altair announced results for Q4 that closed 2020 on a high note, beating revenue guidance by a lot, and setting up a solid 2021.
The details:
For the full year, total revenue was up 2% to $470 million while software product revenue was up 7% to $392 million. From the company's 10-K (annual report), for the year, revenue from the Americas was $246 million, up 5%; from EMEA, $113 million, down 3%; and from Asia, $111 million, up 2%. Also from the 10-K, we learn that the automotive industry is still big for Altair, though decreasing in importance — representing ~36%, 40%, and 45% of 2020, 2019, and 2018 revenue, respectively (no other verticals are mentioned).
What does it all mean? A couple of takeaways:
Altair had guided to total revenue of $112 million to $117 million for Q4 — so reporting total revenue of $133 million was a major blowout. That implies two things: first, that it's really difficult to forecast revenue in this climate. CFO Howard Morof said that the revenue beat was due to a combination of conservative forecasts and more new and expansion revenue than is typical in Q4. Plus, "the quarter continued to improve as it went along, reflecting growth investments [or expansion of existing installations], strength in our customer base, and continued use and growth in adoption of technologies that are ever so critical to our customers."
Lesson: guidance is good, but it’s not infallible.
Second, Altair had thought software revenue for the quarter would be $95 million to $99 million, and reported $114 million — meaning that at least some of the unanticipated revenue upside was from other sources. Said another way, the software-related services and client engineering services did better than expected. That’s good news since a lot of Altair customers engage in smaller pilot projects to test out technology before committing to bigger engagements. CEO Jim Scapa characterizes this as customer intimacy, saying “we are very, very actively engaged with a lot of customers in very advanced projects, in electric motors, batteries, additive manufacturing, simulation, all of that … And, [this intimacy] advances our products as well as also advancing these relationships.”
And, for those keeping score, Q4 was the second consecutive quarter where software revenue was up in the double digits and services saw a modest recovery. Lesson: 2021 should be at least OK, if not good.
We also learned a little more about Altair's acquisition of Flow Simulator from GE Aviation. (But nothing financial — the deal terms are still undisclosed.) A quick refresher: Flow Simulator integrates flow, heat transfer, and combustion for mixed fidelity simulations, originally to optimize aircraft engines, with "thousands of users inside GE", per Mr. Scapa. Altair had been selling Flow Simulator for a few years and is now also responsible for all aspects of its R&D. This is good — Flow Simulator's system-level design capabilities are critical in early-stage design. Too, the expanded partnership with GE Aviation will help continue Altair's diversification away from automotive.
Mr. Scapa also spoke about how excited GE is to have a “commercial software company take responsibility for it and there’s a lot of opportunities, to leverage this, to grow the partnership in many different directions with GE, around all of our software … we’ve been doing other projects with GE, in the area of rotating machinery and others, independent of the Flow Simulator project … And it is a model for things that we think we can do with some other customers as well.” — so perhaps we’ll see more announcements of this type.
Altair is also looking to expand its sales reach beyond its traditional direct channel. Mr. Scapa said that Altair is making "some progress [building an indirect capacity] and it depends on the geography. From my point of view, we're not making as much progress as I would like to see, particularly in the Americas. I think the indirect is getting traction more and more in Europe and continuing to APAC. And in the Americas, I think we have some more work to do, quite frankly."
But it's by no means all indirect. Mr. Scapa said that "we are continuing to invest in [selling to] enterprise-level customers with whom we see large opportunities, so we're beginning to target our direct account managers more and more [at] those opportunities and create more focus for them. We're creating swim lanes in the market. We're getting smarter at how we're organizing and leveraging the sales resources and the capacity that we have."
Incoming CFO Matthew Brown added that “Over the past 5 years, we’ve done 23 or so acquisitions and we continue to really refine our operating model. We’re going to continue to invest in our product technology and in our sales engine moving forward. And so, this is really a refocusing is the way that I would characterize it.”
According to Altair’s 10-K, only about 10% of 2020 software revenue was generated through indirect channel partners and resellers.
Finally, Altair is all about the convergence of simulation, HPC and AI. Mr. Scapa told investors that it’s a “very natural transformation that’s really happening. It’s happening within our products, and in ways that customers and users don’t even know it’s happening to some extent. And it’s also happening for customers who really are beginning to recognize the opportunity.” From a sales perspective, he added, “almost every one of our account managers has opportunities that are really taking advantage of this convergence. And it’s really starting to engage. We’re starting to understand use cases that make sense, as we have success and point those use cases to other customers. So 5 years from now, I don’t think we’re going to be talking about sort of a difference between simulation and AI. I think it’s all going to be computational science, basically that we’re talking about.”
This convergence, plus a return to more typical seasonal patterns, leads Mr. Scapa to say that he is cautiously optimistic about 2021. As does everyone else, he sees a gradual improvement as the year goes along. Mr. Brown added that “last year Q1 was not really impacted by COVID, this year Q1 has. And as we move forward throughout the rest of the year, and we’re expecting some recovery, but net-net, our expectations on everything other than software product revenue is that the year is going to be basically flat to 2020. We’re pretty optimistic about software — we’re seeing good engagement from our customers. We’re seeing a healthy pipeline.”
In all, Altair expects revenue of $138 million to $140 million in Q1, and $502 million to $510 million for the year, which would be growth of 8% or so.
A couple of days ago, amid all of the earnings reports (and ahead of its own, on Tuesday), Bentley Systems announced that it has acquired E7, an Australia-based maker of project delivery software for heavy civil construction.
E7’s platform includes mobile and web apps that digitize workflows for daily diaries, unplanned (and planned) event tracking, timesheets, daily costs, and quantity progress measurement, among other things, which lead to better resource utilization and field productivity — the very unsexy things that keep a project on time and on budget.
Bentley plans to use E7’s apps to extend its SYNCHRO construction modeling, project management and reporting capabilities to further its vision of a comprehensive 4D construction digital twin. (You know 1-2-3D; the 4th is time.)
I had heard of E7 in passing, when it was called Envision*, but had no idea it is as widely used as it is. Bentley says E7 has been deployed on over 350 projects valued at more than AUD 50 billion (around US $35 billion). On one project, as an example, E7 is used to “deliver daily productivity insights and optimize resource deployment to drive better cost and schedule outcomes. E7 ensures that data from 115 subcontractors is efficiently captured and can be used with confidence for productivity tracking, progress measurement, and payment of invoices.” A project director is quoted as saying that E7 contributes “failsafe systems that ensure large volumes of information can be processed accurately and fast. The efficiency of E7 has saved our project time and money, as it minimizes errors and maximizes productivity.”
E7’s CEO Hugh Hofmeister and CTO Adrian Smith, join Bentley as director of product management and director of product development, respectively.
It’s an interesting combination. E7’s product (and brand) are mobile-first, starting as Software-as-a-Service rather than bolting that onto an on-prem architecture. Adding to Bentley’s recent audio capture acquisition, it further integrates mobile with 3D, scheduled with as-completed and as-designed with as-built. That appeals to the large contractors and asset owners who are probably its primary clients — but E7’s apps are available on the various app stores, making it accessible to project teams, too. That matters, since project data is best collected as close to the point of creation as possible — by the worker at the construction site. We’re increasingly hearing from large firms that they’re entertaining including in their IT infrastructure the tools that have proven successful at the project level — so this two-pronged approach is a very good idea. Finally, E7’s data (versus document) focus enables what it calls “a clear line of sight” at a very granular level — an important element as Bentley and its peers talk more and more about analyzing project data.
Terms of the deal were not disclosed but it seems like it’s completed. Expect questions about it on the earnings call on Tuesday.
*Interesting side note: When I worked backward to find out how Envision became E7, I stumbled across this. In June 2020 Mr. Hofmeister explained:
"As we secure more and more projects across the globe, we have decided to change from Envision to E7 to make our name more accessible and universal across borders and languages. 'E7' is distilled from Envision (E and the seven letters that follow), in a deliberate design cue reflecting our decade-long tradition of supporting major resource, energy, and infrastructure projects… Our new logo is inspired by our focus on the capture and analysis of data from projects."
Now we know.
It’s been another busy week of earnings and other PLMish news. Here are some bits and pieces I didn’t have the time to write about more fully:
Dassault Systèmes invested €10 million for a 15% stake in AVSimulation, joining Oktal Sydac and Renault, which own, respectively, 55.25% and 29.75% of the company. Who is AVSimulation? They make software and simulators for vehicle prototyping, development, validation, and AI training. Their SCANeR platform is used by more than 100 companies worldwide to simulate vehicle dynamics, driver-in-the-loop, sensors, and the environment. AVSimulation plans to use the added funds to accelerate its global rollout, and the companies will work together to integrate SCANeR into DS' 3DEXPERIENCE platform.
But that's just one of many deals this week. You know that Autodesk will acquire Innovyze and that Altair bought Flow Simulator from GE. Well, here's another AEC deal: Newforma acquired BIM One. Who? Newforma, one of Battery Ventures' portfolio companies, makes Project Information Management (PIM) solutions for the AEC world — its solutions manage communication between project stakeholders. BIM One has two units, BIM One Consulting and BIM Track, a SaaS issue management solution. BIM One will remain a standalone business. Details were not announced.
Cadence announced a few weeks ago that it was acquiring Numeca. We got a bit more strategic context during Cadence's Q4 earnings. Cadence CEO Lip-Bu Tan said the acquisition is part of "building out our multi-physics portfolio, offering best-in-class solutions and delivering superior results compared to legacy industry solutions. … We tripled our System Analysis TAM by adding Computational Fluid Dynamics (CFD) technology through the pending NUMECA acquisition, which will bring leading CFD technology and deep domain expertise."
CFO John Wall later spoke more about how important the System Analysis business is to Cadence: “it’s doing great; bookings and revenue grew strongly in 2020. The operating margin profile is better than EDA, which allows us to invest in the business … In relation to NUMECA, [its] impact for 2021 is pretty immaterial.”
Finally, Cadence President Anirudh Devgan said Cadence’s “customers are asking for more and more system analysis, system design capabilities, and the overall simulation … CFD is one of the biggest segments in system analysis … and has lots of vertical applications – from automotive, aero, and defense to medical. I think it’s a pretty significant expansion of our platform. We are patient and this segment is profitable so we will continue building across this.” That sounds like there may be more acquisitions in the pipeline. (The Numeca deal closed on Wednesday.)
Siemens, IBM, and Red Hat announced that the MindSphere platform will be available on Red Hat OpenShift. Remember that it started out on SAP’s cloud infrastructure, then became available on Amazon AWS and Microsoft Azure? Welp, now on Red Hat’s Kubernetes (open source, container-based) architecture. IBM says this “will enable customers to run MindSphere on-premise, unlocking speed and agility in factory and plant operations, as well as through the cloud for seamless product support, updates and enterprise connectivity.” Importantly, IBM Global Business Services and Global Technology Services will offer managed services and IoT solutions to MindSphere customers. And in case you hadn’t heard, IBM bought Red Hat in 2019 for (gulp) $34 billion.
Yes, that’s only 4 — we’ll aim for 5 next week. Enjoy your weekend!
Goodness. Not even 7:30AM here on the US East Coast and the news is flying fast.
Autodesk just announced it will acquire Innovyze, a leader in “water infrastructure software”, for $1 billion (net of cash subject to working capital and tax closing adjustments). Why? Autodesk says Innovyze will make it a “technology leader in end-to-end water infrastructure solutions from design to operations, accelerate Autodesk’s digital twin strategy, and create a clearer path to a more sustainable and digitized water industry.”
I honestly hadn’t given water utility design and operations a thought until Bentley started buying up smaller companies addressing this market (here, here, and here). Then I learned (and from speaking with my own water utility) just how complex water really is. Predicting demand, ensuring delivery capability, dealing with the infrastructure for both delivery and recovery/gathering, and meeting water quality standards is no easy task. And many utilities aren’t tech wizards, relying instead on old-school experts to make it all work. Digitalizing existing infrastructure and systems is just the first step; using modern tech like machine learning can predict and optimize many aspects of a water system.
Autodesk says, “[c]ombining Innovyze’s portfolio with the power of Autodesk’s design and analysis solutions, including Autodesk Civil 3D, Autodesk InfraWorks, and the Autodesk Construction Cloud, offers civil engineers, water utility companies and water experts the ability to better respond to issues and to improve planning.”
The transaction will be financed with cash on hand and is expected to close by April 30, 2021. You can read more about the deal here and see the FAQ here. Autodesk announces earnings tomorrow; this deal will likely be a big part of that call with investors.
Altair‘s acquisition is more modest but no less important. It will acquire Flow Simulator, an integrated flow, heat transfer, and combustion design software, from GE Aviation and, as part of the acquisition, Altair and GE Aviation have signed a memo of understanding that has Altair continue developing Flow Simulator, granting GE Aviation access to Altair’s complete software suite, along with a “deeper strategic alignment and pursue new ventures.” You can learn more here.
This is interesting and highlights a trend we’ve seen for years: industrial companies divesting their in-house software to specialists, software vendors who can better support and extend those assets. In 2018, Altair became the exclusive distributor of Flow Simulator, and today’s announcement transfers development control to Altair. Then, Flow Simulator had more than 1,500 users in aerothermal and combustion engineering — all at GE. At the time, Altair CEO Jim Scapa said a priority was to make Flow Simulator more generally applicable, but that GE’s competitors had already expressed interest in the product upon its commercial release. Presumably, this new phase of the GE/Altair relationship will make Flow SImulator even more commercially viable.
Terms of this deal were not announced — but expect it to also feature in Altair’s earnings call, this one on Friday morning.
Continuing the earnings catchup, today we tune into Hexagon, the parent company of brands such as MSC Software, Leica, and PPM (fka Intergraph PPM). In all, 2020 was a mixed year for the company, which sells quite a bit of hardware into manufacturing and other industry verticals that were affected by the shutdowns that rippled around the globe during the year.
That said, the year got progressively better, enabling the company to end with revenue down 4% as reported at €3.77 billion in 2020. The details:
For the year, Hexagon reported total revenue of €3.76 billion, down 4% cc and down 4% as reported.
All right. Lots of ups, downs, parts of the business, and different geos. What does it mean?
Q4 was good. Strong cost control led the company to report its highest quarterly earnings and cash flow ever, and it was able to return to positive organic growth overall — even if that growth was spotty across the businesses. Even there, there are signs of progress: the Manufacturing Intelligence division improved sequentially, its reported 2% organic decline an improvement over Q3.
Software continues to be an increasingly important part of the picture for Hexagon. Mr. Rollén didn't quantify this but said that "MSC, Bricsys, and our mining software portfolio [are doing very well]. Safety & Infrastructure's OnCall [the emergency dispatch solution] was very good, as well."
Acquisitions continue to be a key part of Hexagon’s strategy. The company did 12 in 2020, including 4 in Q4 alone. Mr. Rollén said that Hexagon has plenty of headroom in its debt covenant for more deals, and has a good pipeline of potential acquisitions. BUT: “Prices are at record levels. So you have to be very careful making acquisitions at this moment in time. And, it might be the peak in the pricing cycle.”
Like we’ve seen across our spectrum, 2020 got better as the year went on. Mr. Rollén, who never gives forecasts, no matter how hard analysts try to get him to commit, was optimistic. During the investor call, he said, “we believe we’re going to see a sequential recovery in auto and aero [in Q1] which hampered industrial enterprise solutions in [Q4].” And, “We expect MI to turn around before PPM. And PPM could probably see a recovery throughout the year but maybe with better numbers in the second half than the first half.” When asked if he saw any unusual seasonal patterns developing in 2021, he said “It hasn’t happened yet. But I don’t think so.” Entertaining as you can imagine, but no forecasts.
Financial analysts are modeling revenue of €4 billion for 2021, which would be an increase of 6% to 7% or so. We’ll see. Hexagon reports its Q1 results on April 29.
Earlier this month, Dassault Systèmes (DS) reported results for its fourth quarter and, therefore, the full year of 2020. First a quick recap, then a bit about what it all might mean, and then some reflections on this month’s 3DExperience World (aka SolidWorks World) virtual event.
First the earnings:
For the year, total revenue was €4.45 billion, up 11% as reported and up 12% in cc. Organic, non-IFRS revenue was down 1%. Finally, total software revenue was €4.01 billion, up 13% (up 15% cc).
Such a lot of numbers, up, down, as reported, cc, organic, and with acquired revenue. What can we make of it all?
First, if we look at the different parts of DS' business, we see that growth mostly came from the life sciences. On an organic basis, non-IFRS total revenue for 2020 was down 3% cc; but including Medidata, it was up 12%. That makes total sense given where we are right now, as drug developers hustle to get vaccines and therapies to market. But DS' largest PLMish brand, CATIA, was down 2% for the year, and its fastest-growing PLMish brand, SolidWorks, was up only 4% (versus 6% in FY2019). Both of those are worrisome.
And if we look at the bucket of "Other" in the Industrial Innovation segment (SIMULIA, DELMIA, GEOVIA, EXALEAD, 3DEXCITE), we see that their total revenue was down 9% for the quarter and down 5% for the year. For comparison, ANSYS revenue is likely to be up 7% or so for the year (not including its LSTC acquisition); if SIMULIA did something similar, the other brands declined sharply. We can't, of course, know how much of this perceived decline is due to a switch from perpetual licenses to subscriptions, but taking it at face value, it's not a good trend. (ANSYS reports results later this week.)
Of course, as CFO/COO Pascal Daloz pointed out, many aspects of Q4 came in better than expected. Software revenue was up a percentage point more than forecast, license revenue decreased less than expected and so on. M. Daloz said that some of this was due to larger deals from geos outside North America and China, which had been strongest in prior quarters.
The second major takeaway for me is that we continue to be in an unpredictable economic climate. It seems to be getting better, in general, but no one seems confident enough to say how the improvement will play out across industries and geos. The good: improving. The not-awesome: uneven.
Last, let’s talk SolidWorks. Its revenue was up 1% (up 7% cc) to €235 million in Q4, for a total of €841 million for the year. That’s just over $1 billion at today’s exchange rates, so: Congrats to the SolidWorks team on becoming a billion-dollar brand! [UPDATE: I used today’s exchange rate for this math. A stricter methodology would have used the 2020 average exchange rate of 1.14, which would get SolidWorks to $959 million. No matter how you do it, SolidWorks is thiiiiiiis close.]
The 3DExperience World (fka SolidWorks World) event last week continued DS’ evolution of the brand to appeal to a broader audience. As DS CEO Bernard Charlès told investors during the earnings call, the 3DExperience Works platform of integrated solutions is intended to “reach new types of users. They want browser-based access on mobile and so on … expanding what they are used to getting on the desktop with cloud roles. A lot of customers are now considering [how] manufacturing connects with the supply chain. That’s another area where we are moving out from pure manufacturing engineering to really manufacturing execution — not to forget that DELMIAworks is [already] part of the 3DExperience Works family, because I think we’ve got good data points on that aspect.” M. Daloz said that 3DExperience Works could ultimately see “double-digit growth … not only coming from the traditional sectors (aerospace and defense, transportation and mobility), but also from the high tech, medtech, life sciences at large and also coming from the fashion industry as well.”
That said, M. Charlès was clear that he wasn’t abandoning the current product set and its customers: “We [want to] expand the portfolio available to the current vibrant, large, SolidWorks community [with], for example, project management, integrated analysis on the cloud, collaborative innovation on the cloud.” Listening to the sessions at the user event, the main themes were “do what you do, but better/more” and reaching out to those new user types. To that end, DS announced Maker and Student editions (available later this year), that offer access to much of the platform at significant discounts.
Back to the bigger, broader DS. What's 2021 going to hold? The company is guiding for non-IFRS revenue in the range of €4.715 billion to €4.765 billion, or cc growth of 9% to 10%. For Q1, it expects non-IFRS revenue of €1.145 billion to €1.170 billion, which would be constant currency growth of 6% to 8%.