They say the only constant in life is change and that’s as true for blogs as anything else. After almost a dozen years blogging here on WordPress.com as Another Fine Mesh, it’s time to move to a new home, the … Continue reading
The post Farewell, Another Fine Mesh. Hello, Cadence CFD Blog. first appeared on Another Fine Mesh.
Welcome to the 500th edition of This Week in CFD on the Another Fine Mesh blog. Over 12 years ago we decided to start blogging to connect with CFDers across the interwebs. “Out-teach the competition” was the mantra. Almost immediately … Continue reading
The post This Week in CFD first appeared on Another Fine Mesh.
Automated design optimization is a key technology in the pursuit of more efficient engineering design. It supports the design engineer in finding better designs faster. A computerized approach that systematically searches the design space and provides feedback on many more … Continue reading
The post Create Better Designs Faster with Data Analysis for CFD – A Webinar on March 28th first appeared on Another Fine Mesh.
It’s nice to see a healthy set of events in the CFD news this week and I’d be remiss if I didn’t encourage you to register for CadenceCONNECT CFD on 19 April. And I don’t even mention the International Meshing … Continue reading
The post This Week in CFD first appeared on Another Fine Mesh.
Some very cool applications of CFD (like the one shown here) dominate this week’s CFD news including asteroid impacts, fish, and a mesh of a mesh. For those of you with access, NAFEMS’ article 100 Years of CFD is worth … Continue reading
The post This Week in CFD first appeared on Another Fine Mesh.
This week’s aggregation of CFD bookmarks from around the internet clearly exhibits the quote attributed to Mark Twain, “I didn’t have time to write a short letter, so I wrote a long one instead.” Which makes no sense in this … Continue reading
The post This Week in CFD first appeared on Another Fine Mesh.
A drone’s noisiness is one of its major downfalls. Standard drones are obnoxiously loud and disruptive for both humans and animals, one reason that they’re not allowed in many places. This flow visualization, courtesy of the Slow Mo Guys, helps show why. The image above shows a standard off-the-shelf drone rotor. As each blade passes through the smoke, it sheds a wingtip vortex. (Note that these vortices are constantly coming off the blade, but we only see them where they intersect with the smoke.) As the blades go by, a constant stream of regularly-spaced vortices marches downstream of the rotor. This regular spacing creates the dominant acoustic frequency that we hear from the drone.
To counter that, the company Wing uses a rotor with blades of different lengths (bottom image). This staggers the location of the shed vortices and causes some later vortices to spin up with their downstream neighbor. These interactions break up that regular spacing that generates the drone’s dominant acoustic frequency. Overall, that makes the drone sound quieter, likely without a large impact on the amount of lift it creates. (Image credit: The Slow Mo Guys)
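As a rough back-of-the-envelope illustration (the blade count and rotor speed here are assumed, not taken from the video), the tone produced by evenly spaced vortices sits at the blade-pass frequency:

# Blade-pass frequency of a hypothetical small drone rotor (assumed numbers).
n_blades = 2
rpm = 7500
bpf = n_blades * rpm / 60   # vortices shed past a fixed point per second
print(f"blade-pass frequency ~ {bpf:.0f} Hz")  # ~250 Hz, a clearly audible whine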
As manta rays swim, they’re constantly doing two important — but not necessarily compatible — things: getting oxygen to breathe and collecting plankton to eat. That requires some expert filtering to send food particles toward their stomach and oxygen-rich water to their gills. Manta rays do this with a built-in filter that resembles an industrial crossflow filter. Researchers built a filter inspired by a manta ray’s geometry, and found that it has three different flow states, based on the flow speed. At low speeds, flow moves freely down the filter’s channels; in a manta, this would carry both water and particles toward the gills. At medium speeds, vortices start to form at the entrance to the filter channels. This sends large particles downstream (toward a manta’s digestive system) while water passes down the channels. At even greater speeds, each channel entrance develops a vortex. That allows water to pass down the filter channels but keeps particles out. (Image credit: manta – N. Weldingh, filter – X. Mao et al.; research credit: X. Mao et al.; via Ars Technica)
A cold region of Pacific waters stretches westward along the equator from the coast of Ecuador. Known as the equatorial cold tongue, this region exists because trade winds push surface waters away from the equator and allow colder, deeper waters to surface. Previous climate models have predicted warming for this region, but instead we’ve observed cooling — or at least a resistance to warming. Now researchers using decades of data and new simulations report that the observed cooling trend is, in fact, a result of human-caused climate changes. Like the cold tongue itself, this new cooling comes from wind patterns that change ocean mixing.
As pleasant as a cooling streak sounds, this trend has unfortunate consequences elsewhere. Scientists have found that this cooling has a direct effect on drought in East Africa and southwestern North America. (Image credit: J. Shoer; via APS News)
Photographer Jonathan Knight likes capturing waterfalls about 45 minutes after sunset, creating ghostly images that emphasize the shape of the cascading water. The dim surroundings and misty shapes remind me of old daguerreotypes. See more of his images on his website and his Instagram. (Image credit: J. Knight; via Colossal)
In the Leidenfrost effect, room-temperature droplets bounce and skitter off a surface much hotter than the drop’s boiling point. With those droplets, a layer of vapor cushions them and insulates them from the hot surface. In today’s study, researchers instead used hot or burning drops (above) and observed how they impact a room-temperature surface. While room-temperature droplets hit and stuck (below), hot and burning droplets bounced (above).
In this case, the cushioning air layer doesn’t come from vaporization. Instead, the bottom of the falling drop cools faster than the rest of it, increasing the local surface tension. That increase in surface tension creates a Marangoni flow that pulls fluid down along the edges of the drop. That flow drags nearby air with it, creating the cushioning layer that lets the drop bounce. In this case, the authors called the phenomenon “self-lubricating bouncing.” (Image and research credit: Y. Liu et al.; via Ars Technica)
Drops impacting a dry hydrophilic surface flatten into a film. Drops that impact a wet film throw up a crown-shaped splash. But what happens when a drop hits the edge of a wet surface? That’s the situation explored in this video, where blue-dyed drops interact with a red-dyed film. From every angle, the impact is complex — sending up partial crown splashes, generating capillary waves that shift the contact line, and chaotically mixing the drop and film’s liquids. (Video and image credit: A. Sauret et al.)
Hi sakro,
Sadly my experience in this subject is very limited, but here are a few threads that might guide you in the right direction:
Best regards and good luck! Bruno
# Install the build dependencies (run as root, or prefix with sudo):
dnf install -y python3-pip m4 flex bison git git-core mercurial cmake cmake-gui openmpi openmpi-devel metis metis-devel metis64 metis64-devel llvm llvm-devel zlib zlib-devel ....

# Add the CUDA and OpenMPI environment settings to ~/.bashrc:
{
  echo 'export PATH=/usr/local/cuda/bin:$PATH'
  echo 'module load mpi/openmpi-x86_64'
} >> ~/.bashrc

# Clone foam-extend 4.1:
cd ~
mkdir foam && cd foam
git clone https://git.code.sf.net/p/foam-extend/foam-extend-4.1 foam-extend-4.1

# Define a convenience alias for loading the foam-extend environment:
{
  echo '#source ~/foam/foam-extend-4.1/etc/bashrc'
  echo "alias fe41='source ~/foam/foam-extend-4.1/etc/bashrc'"
} >> ~/.bashrc
pip install --user PyFoam
# Copy the example preferences file, then enable the system-installed tools in prefs.sh:
cd ~/foam/foam-extend-4.1/etc/
cp prefs.sh-EXAMPLE prefs.sh
# Specify system openmpi
# ~~~~~~~~~~~~~~~~~~~~~~
export WM_MPLIB=SYSTEMOPENMPI

# System installed CMake
export CMAKE_SYSTEM=1
export CMAKE_DIR=/usr/bin/cmake

# System installed Python
export PYTHON_SYSTEM=1
export PYTHON_DIR=/usr/bin/python

# System installed PyFoam
export PYFOAM_SYSTEM=1

# System installed ParaView
export PARAVIEW_SYSTEM=1
export PARAVIEW_DIR=/usr/bin/paraview

# System installed bison
export BISON_SYSTEM=1
export BISON_DIR=/usr/bin/bison

# System installed flex. FLEX_DIR should point to the directory where
# $FLEX_DIR/bin/flex is located
export FLEX_SYSTEM=1
export FLEX_DIR=/usr/bin/flex
#export FLEX_DIR=/usr

# System installed m4
export M4_SYSTEM=1
export M4_DIR=/usr/bin/m4
# After sourcing the foam-extend environment (the fe41 alias above), change into the
# foam-extend directory and start the build:
foam
./Allwmake.firstInstall -j
Figure 1: Automated hexahedral meshing for an axial turbine using point cloud mapping.
Word count: 1330 / 7 minutes
Discover a novel approach to automated hexahedral meshing using the CAESES-GridPro integration, leveraging topology templates and point cloud mapping for efficient, high-quality CFD meshes. Key techniques like Radial Basis Function (RBF) morphing ensure precise adaptation to shape variants.
In the realm of CFD mesh generation, scalability is crucial, especially when dealing with multiple design variants. This is where topology template-based approaches, like those offered by GridPro CFD Solutions, shine. These block-based templates are designed with scalability in mind, allowing a single carefully constructed topology to be reused across multiple parametric shapes. This significantly reduces the simulation workflow time and the effort required for meshing, while ensuring that the grid modifications remain consistent and self-similar. This consistency is invaluable for accurate comparative studies, where minor deviations in grid structure could otherwise skew results.
The block template-based approach overcomes the limitations seen in traditional structured and unstructured meshing techniques. While unstructured grids are often praised for their ease of mesh modification, they come with drawbacks like a higher number of elements, the need to constantly adjust grid size for shape changes, and compromises on cell control, simulation time, and accuracy. Structured grids, known for their cell quality and simulation accuracy, have traditionally been challenging to apply across numerous design variants due to the manual effort required.
As hexahedral meshing software, GridPro addresses these challenges by allowing simulation engineers to modify topologies manually for significant shape modifications and by using its in-house mesh smoothing algorithm, Ggrid, to automatically adapt and smooth the computational mesh for smaller deviations. However, in some cases, the block positioning may not be favourable for Ggrid to ensure good mesh quality, resulting in highly skewed or folded cells.
To further streamline the process, GridPro has developed a topology mapping feature in collaboration with Caeses that automatically maps the topology from the baseline model to its shape variants. This is achieved by using point cloud pairs to map the topology from the baseline model to the variation, ensuring that even complex design variations maintain the same level of grid quality as the original model. This optimization-based mesh morphing saves time and enhances the accuracy and reliability of simulations across multiple design iterations.
In computational fluid dynamics (CFD) and design optimization workflows, especially those involving parametric studies, mesh quality and consistency play a critical role in ensuring accurate and comparable simulation results.
Unstructured grids, while easier to adapt, bring higher element counts, weaker control over cell placement, longer simulation times, and reduced accuracy.
In contrast, structured grids offer superior cell quality, lower cell counts, and higher simulation accuracy, and they keep the grid structure consistent across design variants.
Topology morphing in CFD based on changes in geometry is a crucial concept. It involves adapting a predefined mesh topology to fit a parametrically changing geometry while preserving essential properties such as connectivity, element size, aspect ratio, and overall mesh quality. This process ensures that the computational domain remains accurate and functional as the geometric design evolves.
In practice, topology adjustment can be achieved through various methods. One common approach is the spring analogy, where topology elements are connected by imaginary springs. When the geometry deforms, these springs adjust the blocks automatically, helping maintain a smooth transition. Additionally, smoothing algorithms can be applied to refine block quality after the boundary nodes have been adjusted.
A more advanced technique involves using Radial Basis Function (RBF) Interpolation to fine-tune node positioning in response to shape deformation. This method is particularly effective for ensuring that the topology conforms precisely to the deformed design variants.
In our workflow, two similar parametric models are compared: we identify a random set of nodes on the initial and deformed geometries and create a map between them. This map is then used to morph the topology from one design to the other.
By leveraging these techniques, we can effectively morph the topology to accommodate changes in geometry, ensuring consistent grid generation for accurate and reliable simulations.
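For readers who want to see the idea concretely, here is a minimal sketch of RBF-based morphing. It is illustrative only, assuming SciPy's generic RBFInterpolator rather than GridPro's or CAESES's actual implementation: displacements known at paired surface points are interpolated to the block-topology corner nodes.

# A minimal sketch (not the GridPro/CAESES API) of RBF-based topology morphing:
# displacements defined by paired point clouds are interpolated to topology nodes.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Paired point clouds: the same surface points on the baseline and on the variant.
baseline_pts = rng.uniform(size=(200, 3))                        # points sampled on the baseline geometry
variant_pts = baseline_pts + 0.05 * rng.normal(size=(200, 3))    # the same points on the deformed variant

# Topology corner nodes positioned around the baseline geometry.
topo_nodes = rng.uniform(-0.1, 1.1, size=(50, 3))

# Fit an RBF to the displacement field defined by the point-cloud pairs ...
displacement = variant_pts - baseline_pts
rbf = RBFInterpolator(baseline_pts, displacement, kernel="thin_plate_spline", smoothing=0.0)

# ... and evaluate it at the topology nodes to morph the template.
morphed_nodes = topo_nodes + rbf(topo_nodes)
print(morphed_nodes.shape)  # (50, 3)

The thin-plate-spline kernel is a common default for smooth shape deformation; the smoothing parameter can be raised if the point-cloud pairs are noisy.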
To test this new adaptive meshing, an axial turbine blade was selected as the first test case. Initially, a baseline wireframe topology for the turbine blade is constructed manually in GridPro meshing software using the UI. This is the only step requiring human intervention. Once the baseline topology is established, it serves as a template for automated mesh generation across various design variants within the Caeses platform.
Next, GridPro is integrated into Caeses using an integration script, creating a closed-loop system. The script is designed to manage surface mesh generation, CAD file conversion, topology adaptation, and grid generation. In this setup, Caeses parametrically modifies the axial turbine blade shape to produce different variants, while GridPro automatically generates multi-block structured meshes for each variant without further user involvement.
For the axial turbine test case, 50 parametric modeling variants were generated by varying 7 parametric variables. The baseline topology, created in approximately 45 minutes, was used as a template to generate structured grids for the remaining 49 design exploration variants. The entire process took around 350 minutes, or roughly 6 hours.
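A quick sanity check of those throughput numbers, using simple arithmetic on the figures quoted above:

baseline_topology_min = 45     # one-time manual topology construction
total_min = 350                # reported wall time for the whole exercise
n_variants = 50

print(f"total wall time:       {total_min/60:.1f} h")              # ~5.8 h
print(f"average per variant:   {total_min/n_variants:.1f} min")    # ~7 min
# If the 45-minute baseline setup is counted inside the 350 minutes:
print(f"per automated variant: {(total_min - baseline_topology_min)/(n_variants - 1):.1f} min")  # ~6.2 min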
Ready to Automate Your Meshing Workflow?
GridPro’s intelligent structured meshing automation solution reduces manual effort and maximizes accuracy—making it ideal for design optimization in CFD.
Schedule a free demo or contact us to see how GridPro can accelerate your simulation pipeline.
The adopted approach is effective in automating parametric geometric meshing. The developed workflow, which utilizes topology templates and point cloud mapping, significantly reduces the manual effort traditionally required in structured mesh modification. By leveraging techniques such as Radial Basis Function (RBF) interpolation in meshing, the method ensures that topology adapts accurately to geometric changes, preserving mesh quality and ensuring reliable simulation outcomes.
The successful application of this methodology to axial turbines, radial turbines, exit casings, compressor volutes, and centrifugal compressors demonstrates its efficiency in generating high-quality grids for multiple design variants with minimal user input. The ability to mesh 50 geometric variants within six hours highlights the approach’s scalability and robustness, reinforcing its potential for widespread adoption in industrial applications of engineering simulation.
We sincerely thank Caeses for providing all the geometries, which were crucial for generating the structured meshes. More details about Caeses’ work can be found at Caeses Shape deformation and morphing.
The post Automated Hexahedral Meshing with GridPro: Structured Meshes for Parametric Geometry Variants appeared first on GridPro Blog.
Figure 1: Structured multiblock mesh for Turbocharger compressors.
Word count: 1358 / 7 minutes
Optimizing Turbocharger Performance with CFD-Driven Compressor Design and Automated GridPro Meshing Tools
Turbochargers have transformed internal combustion engines by significantly boosting power output without increasing engine size. At the core of this innovation is the centrifugal compressor, a key component responsible for compressing and supplying air to enhance combustion efficiency. Its performance depends on careful impeller design, aerodynamic optimization, and advanced computational techniques.
CFD simulation plays a crucial role in refining compressor aerodynamics, allowing engineers to enhance compressor efficiency and turbocharger performance. Structured meshing for compressors further improves the accuracy of these simulations. GridPro’s advanced structured meshing tools, Xpress Volute and Xpress Blade, automate the hexahedral meshing process, reducing design iteration time while ensuring the high-quality grids that are essential for precise CFD analysis of compressors and performance optimization.
The compressor in a turbocharger plays a crucial role in enhancing engine performance by increasing the density of the intake air. By compressing incoming air, it ensures a higher oxygen supply, which leads to more efficient combustion, improved fuel efficiency in turbocharged engines, and greater power output. The effectiveness of the compressor directly influences the overall efficiency of the turbocharger, making its design a key aspect of performance optimization.
Several factors impact compressor performance, with the pressure ratio being one of the most significant. This determines the level of air compression achieved, directly affecting engine output. To deliver the required mass flow without instability, the compressor’s flow characteristics must be carefully designed. Achieving the right balance between these factors ensures maximum efficiency, durability, and aerodynamic performance.
Compressor efficiency plays a crucial role in determining the overall performance of both the turbocharger design and the engine. One of its most significant impacts is on fuel efficiency in turbocharged engines. In a turbocharged engine, higher compressor efficiency reduces the energy required for the pumping cycle, directly improving Brake Specific Fuel Consumption (BSFC). By minimizing energy losses, an efficient compressor ensures that more of the fuel’s energy is converted into useful work rather than being wasted.
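The link between compressor efficiency and the work the engine must supply follows from the standard isentropic-compression relations. A small illustrative calculation, where the inlet temperature, pressure ratio, and efficiencies are assumed values rather than data from any specific turbocharger:

gamma, cp = 1.4, 1005.0        # properties of air
T1, PR = 300.0, 2.5            # assumed inlet temperature [K] and total pressure ratio

for eta_c in (0.70, 0.80):     # two isentropic efficiencies to compare
    dT_ideal = T1 * (PR**((gamma - 1.0) / gamma) - 1.0)   # ideal temperature rise [K]
    w = cp * dT_ideal / eta_c                             # actual specific work [J/kg]
    print(f"eta_c = {eta_c:.2f}: specific compression work ~ {w/1000:.0f} kJ/kg")

Raising the isentropic efficiency from 0.70 to 0.80 cuts the specific work by roughly 12 percent in this example, which is the mechanism behind the BSFC improvement described above.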
A more efficient compressor also contributes to lower emissions by reducing fuel consumption, helping engines comply with increasingly stringent environmental regulations. In heavy-duty applications, where high pressure ratios are required for effective combustion, improved efficiency ensures that the necessary boost is achieved with minimal energy input. This not only enhances performance but also expands the compressor’s operating range, allowing it to function effectively across various engine speeds and loads. Additionally, optimizing compressor aerodynamics reduces noise generation, an essential consideration in modern turbomachinery design.
Designing highly efficient compressors presents a range of complex challenges that engineers must carefully address to optimize turbocharger performance. One of the most significant difficulties arises from the high tip speeds at which turbocharger compressors operate. These high speeds create intricate flow structures, including shock waves in transonic designs, which can lead to substantial efficiency losses. Managing these effects requires precise aerodynamic optimization to minimize performance penalties.
Another critical challenge is tip leakage, where airflow escapes through the gap between the blade tip and the casing. This leakage not only reduces efficiency but also increases noise levels, making it essential to develop sealing techniques and design strategies that minimize these losses. Many modern compressors incorporate splitter blades to extend their operating range, but their effectiveness depends heavily on proper design. Poorly designed splitter blades can disrupt airflow, leading to mismatches and reduced overall efficiency.
In addition to aerodynamic considerations, modern compressors must also meet stringent noise-reduction requirements. Balancing high aerodynamic efficiency with low noise emissions is a major challenge, requiring innovative aeroacoustic optimization. Engineers must also navigate the trade-offs between achieving a wide operating range and maintaining high efficiency, as improving one often comes at the cost of the other.
Furthermore, traditional manufacturing constraints can limit the ability to implement optimal blade designs, though advancements in precision manufacturing techniques, such as point milling, are helping to overcome these limitations.
Addressing these challenges demands a combination of advanced computational tools, innovative design approaches, and cutting-edge manufacturing solutions.
CFD simulation plays a vital role in modern compressor design by offering detailed insights into fluid dynamics within the impeller and volute. With CFD, engineers can analyze complex flow structures, turbulence, and loss mechanisms, such as shock waves, tip leakage, and secondary flow. This allows for the optimization of compressor aerodynamics and also helps to minimize performance losses.
Moreover, CFD analysis for compressors enables engineers to assess how design changes impact key parameters like compressor efficiency, pressure ratio, and operating range. It also provides the capability to evaluate compressor performance under various operating conditions, including off-design scenarios. By reducing the need for extensive physical testing, CFD accelerates the design process, identifies potential surge and stall conditions, and ultimately enhances the reliability and performance of the compressor.
By leveraging CFD simulations, engineers can iteratively refine designs to ensure optimal performance and reduced time-to-market.
Mesh generation is a critical component in achieving accurate CFD simulations for compressor analysis. It defines the resolution of the flow field and plays a significant role in the stability and convergence of the numerical simulation. The density of the mesh is a key consideration, as a finer mesh provides higher resolution but also increases computational costs. Mesh sensitivity studies help identify the optimal density, ensuring that the solution is independent of the mesh size.
Another important factor is the resolution of the boundary layer, which is essential for accurately capturing wall effects and predicting losses in the flow. Grid smoothness is equally crucial as it helps minimize numerical errors and ensures stable simulations. The mesh must also meet certain quality standards, such as maintaining proper aspect ratio, minimum angle, and expansion factor to guarantee reliable results.
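As a rough illustration of the boundary-layer resolution requirement, the first cell height for a target y+ can be estimated from standard flat-plate correlations; all numbers below are assumed, not tied to a particular compressor:

# Rough flat-plate estimate of the first cell height for a target y+ near the walls.
import math
rho, mu = 1.2, 1.8e-5      # air density [kg/m^3] and dynamic viscosity [Pa.s]
U, L = 150.0, 0.05         # assumed local velocity [m/s] and reference length [m]
y_plus_target = 1.0        # typical target for wall-resolved turbulence modelling

Re = rho * U * L / mu
cf = 0.058 * Re**-0.2                 # flat-plate skin-friction estimate
tau_w = 0.5 * cf * rho * U**2         # wall shear stress [Pa]
u_tau = math.sqrt(tau_w / rho)        # friction velocity [m/s]
dy1 = y_plus_target * mu / (rho * u_tau)
print(f"Re = {Re:.2e}, first cell height ~ {dy1*1e6:.2f} micron")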
By carefully designing and optimizing the mesh, CFD simulations can accurately capture complex flow phenomena like tip leakage, secondary flows, and shock waves, all of which play a vital role in determining compressor performance.
Hexahedral meshing is widely preferred in the CFD analysis of compressors because it offers superior accuracy and computational efficiency. One of the main advantages of hexahedral grids is their ability to reduce numerical diffusion, leading to more precise flow predictions, particularly in complex phenomena such as boundary layers and shock waves.
In addition to better accuracy, these meshes require fewer elements to achieve high resolution, which lowers computational costs and improves solver efficiency. Hexahedral meshes also enhance convergence stability by supporting smoother flow transitions and more accurate gradient resolution, making the simulation process more reliable. Furthermore, they provide efficient boundary layer capture, as their structured nature allows for well-aligned cells near solid walls, crucial for accurate near-wall flow predictions.
These characteristics make structured hexahedral meshing the ideal choice for critical regions in compressor design, such as the impeller passage, volute tongue and vaneless diffuser, where precise flow analysis is essential.
GridPro’s automated meshing tools simplify and accelerate the meshing process for compressor impellers and volutes. One of the key advantages of GridPro is its ability to enhance solution accuracy. With features like the Xpress Volute and Xpress Blade meshing tools, it captures complex flow fields with high precision, leading to better performance predictions.
The tool’s versatile blocking structure adapts to various geometric variations, providing flexibility in design. By automating the meshing process, GridPro minimizes manual errors, ensuring consistent mesh quality and improving the overall integrity of the simulation. This automation also accelerates the workflow, allowing for faster iterations and quicker optimization, which is essential for effective compressor design.
Additionally, GridPro seamlessly integrates with CAD tools and flow solvers, streamlining both the design and simulation phases. The software excels at capturing intricate flow dynamics, such as swirl patterns and the tongue region, which are crucial for optimizing turbomachinery performance. With features like 1-1 connected meshing, it improves accuracy in tip flow simulations, ultimately reducing CFD simulation time while maintaining high reliability and accuracy.
Compressors are fundamental to turbocharger performance, and their design requires detailed CFD analysis to ensure efficiency and reliability. Structured hexahedral meshing plays a crucial role in obtaining accurate CFD results, and GridPro’s automated meshing tools streamline the process, reducing time while maintaining precision. As turbocharger technology advances, leveraging automated meshing and high-fidelity CFD simulations will continue to be essential in achieving optimal compressor designs.
We sincerely thank CFDsupport for providing the compressor geometry, which was crucial for generating the structured mesh. The compressor model was created using CFturbo software. More details about CFDsupport’s work can be found at Centrifugal Compressor.
The post Enhancing Turbocharger Efficiency with CFD and Automated Meshing Tools appeared first on GridPro Blog.
Figure 1: Plate heat exchanger meshing using structured multiblocks.
1716 words / 9 minutes read
The intricate internal flow channels in between the corrugated plates of plate heat exchangers require top-notch, flow-aligned meshes. The systematically organized hexahedral cells found in structured grids are the ideal choice for simulating plate heat exchangers. GridPro’s efficient blocking tools facilitate rapid grid generation for these complex flow paths.
CFD is pivotal in plate heat exchanger design and development, serving dual purposes of design refinement and troubleshooting.
Firstly, CFD simulations enable engineers to optimize the design and performance of heat exchangers by analyzing the fluid flow and heat transfer within the intricate network of plates. By modeling the fluid dynamics, temperature distribution, and pressure drops, CFD helps in optimizing the plate arrangement and geometrical parameters, ultimately enhancing performance and efficiency and reducing energy consumption.
Secondly, CFD is invaluable for predicting and troubleshooting potential issues in plate heat exchangers, such as fouling, corrosion, and uneven heat transfer. Through numerical simulations, engineers can identify problematic areas with stagnant flow or high-velocity zones, aiding in preventive maintenance and extending the equipment’s lifespan.
In essence, CFD serves as a powerful tool for both optimizing the design and ensuring the reliable operation of plate heat exchangers, contributing to improved energy efficiency and cost-effectiveness in various industrial applications, from HVAC systems to chemical processing.
There are several key meshing requirements when simulating plate heat exchangers, which significantly influence the accuracy and efficiency of the analysis.
Firstly, it is essential to create a mesh that accurately captures the intricate network of plates, channels, gaskets and flow passages. This includes ensuring that the mesh resolution is sufficient to represent the intricate details, such as corrugations or surface irregularities. The mesh should also account for the boundary layers near the solid walls and plate surfaces to accurately predict heat transfer and fluid flow characteristics. Additionally, a fine mesh is needed around gaskets and seals, which can affect flow patterns and heat transfer.
Secondly, mesh quality is crucial to maintain simulation accuracy, numerical stability, and convergence. This involves ensuring that the mesh elements are of appropriate size and shape to prevent excessive skewness, stretching, or abrupt changes in element sizes, as poor mesh quality can lead to numerical instability and inaccuracies in the results.
Lastly, the mesh density should be chosen carefully, as it affects computational resources. A balance must be struck between accuracy and computational efficiency. Too fine a mesh can lead to long simulation times, while too coarse a mesh may result in inaccurate results. Therefore, mesh refinement studies and grid independence checks will be necessary to determine an optimal mesh resolution that satisfies accuracy requirements while maintaining a reasonable computational cost. Adapting the mesh based on local flow conditions and heat transfer rates can also help strike this balance and improve the efficiency of the simulation.
These considerations are essential for obtaining reliable and timely results in the analysis of heat exchanger performance.
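The grid-independence checks mentioned above are commonly quantified with Roache's grid convergence index (GCI); here is a minimal sketch with made-up values for three successively refined grids:

# Roache's grid convergence index (GCI), assuming a constant refinement ratio r.
import math
f_coarse, f_medium, f_fine = 24.6, 25.8, 26.2   # e.g. pressure drop [kPa] on three grids (made-up values)
r, Fs = 2.0, 1.25                               # refinement ratio and safety factor

p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)   # observed order of convergence
gci_fine = Fs * abs((f_medium - f_fine) / f_fine) / (r**p - 1)                 # relative error band on the fine grid
print(f"observed order p = {p:.2f}, GCI_fine = {100*gci_fine:.2f} %")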
Meshing plate heat exchangers for CFD simulations poses several significant challenges owing to the intricate geometry and complex flow patterns inherent in these systems.
One primary challenge arises from the complex internal flow channels formed by corrugated plates. These flow channels are often irregularly shaped and can change configuration as fluids pass through. Meshing such non-uniform flow paths can be challenging.
Another hurdle in meshing plate heat exchangers lies in the variability of their geometry. The plate arrangement, corrugation patterns, and the number of plates can vary significantly. Each variation may require adjustments to the meshing strategy, making it more complex to ensure uniformity in mesh quality.
Thin gaskets and seals are integral components of plate heat exchangers, employed to separate fluid channels. Meshing these delicate features accurately, while maintaining appropriate clearance and contact conditions, adds an additional layer of complexity to the meshing process.
In scenarios involving phase change, such as condensation or evaporation, accurately modeling the interface between different phases becomes a formidable challenge. Specialized meshing techniques are required to navigate the intricacies of phase transitions and ensure accurate representation of multi-phase flows.
Collectively, these challenges underscore the intricate and demanding nature of meshing for plate heat exchangers. Consequently, the careful selection of appropriate meshing strategies and software becomes imperative to overcome these obstacles and yield results that accurately reflect the underlying physics of these complex systems.
When conducting a CFD simulation in plate heat exchangers, it is crucial to design the mesh in a manner that accurately captures the underlying flow physics. This is essential to ensure the reliability of the simulation results. There are several key flow physics phenomena that the mesh must effectively capture in plate heat exchanger simulations.
Among the essential flow physics that the mesh must accurately represent are laminar and turbulent flow regimes. The mesh must be capable of modeling the transition from laminar to turbulent flow, as well as capturing fully turbulent flow phenomena. For fully turbulent flows, the mesh must capture turbulence effects, including the formation of eddies, turbulent mixing, and fluctuations in flow velocity. This is important for accurate heat transfer modeling.
Additionally, the mesh should be able to identify and resolve flow separation or recirculation zones that may occur, particularly in areas with significant velocity variations or obstructions. The mesh should resolve these zones to understand their impact on heat transfer and pressure drop.
The mesh should also be capable of resolving pressure and temperature gradients and their distribution throughout the heat exchanger. This is essential for understanding the pressure drop and how heat is transferred between the hot and cold fluids.
Furthermore, in situations where phase change processes such as condensation or evaporation are prevalent, the mesh should be proficient in modeling these transitions accurately. This includes the representation of phase interfaces and the heat transfer occurring at phase boundaries.
To effectively capture these intricate flow physics, the mesh’s density and quality must align with the specific simulation objectives. This should take into consideration the heat exchanger’s geometry and the expected flow conditions to ensure the simulation results are both reliable and meaningful.
The use of structured meshes in plate heat exchanger simulations offers a range of benefits that contribute to the accuracy and efficiency of the numerical modeling process. Structured meshes excel in accurately representing plate heat exchanger geometry, offering efficient numerical solutions and reduced computational demands. Their seamless alignment enhances accuracy and stability, establishing them as a strategic choice for reliable and resource-efficient simulations.
GridPro excels in the swift and straightforward structured meshing of corrugated plates, providing a comprehensive toolset for capturing intricate flow channels with geometry-aligned hexahedral meshes. The efficiency is further enhanced by a time-saving approach where a blocking topology designed for one pair of plates extends seamlessly to the entire heat exchanger, eliminating the need for tedious rebuilding of topological blocks for newer plates.
In comparison to conventional grid generators, GridPro’s time efficiency matches that of unstructured mesh generators while providing unmatched mesh consistency between plates. The use of the same set of blocks for the entire heat exchanger ensures uniformity and mesh consistency. This facilitates maintaining consistency in solution accuracy across all the plates.
A new innovative algorithm facilitates automatic alignment of mesh blocks to plate corrugations, which can accommodate changes in geometric patterns effortlessly. This proves invaluable for design engineers exploring various corrugation patterns. Irrespective of the complexity in corrugations, the blocks can be re-aligned to the flow channels, enabling dissipation-free numerical computations.
The software’s robust boundary layer clustering tool ensures accurate resolution of velocity profiles, transitions from laminar to turbulent flow, and fully turbulent regions. It maintains consistency in first spacing and orthogonality, even for the most subtle dimples and flow channels in the corrugated plates.
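To make that clustering concrete, here is a small sketch of the geometric wall-normal spacing distribution such a tool produces; the first spacing, growth ratio, and layer count are assumed values, not GridPro defaults:

# Geometric boundary-layer clustering: list the wall-normal spacings and total thickness.
first_spacing = 2.0e-6   # assumed first cell spacing [m]
growth = 1.15            # assumed growth ratio between successive layers
n_layers = 30

spacings = [first_spacing * growth**i for i in range(n_layers)]
total = first_spacing * (growth**n_layers - 1) / (growth - 1)   # geometric series sum
print(f"last spacing: {spacings[-1]*1e6:.1f} micron, total clustered thickness: {total*1e3:.3f} mm")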
Notably, along with the fluid flow passages, the plate thickness can be meshed efficiently in GridPro. The same set of tools used for discretising the fluid domain can be utilised for meshing the structural part for FEA analysis.
For engineers seeking to capture minute flow details, GridPro offers tools like enrichment and nesting for high-resolution local refinement. These tools enable refinement to small geometric scales without affecting the entire domain. This capability facilitates the creation of an optimized grid tailored specifically for plate heat exchanger simulations.
Achieving an effective meshing of the flow channels amidst the corrugated plates in plate heat exchangers poses a formidable challenge for any mesh generator. The intricate nature of the geometric features, the diverse corrugation patterns, and the complex flow physics collectively contribute to the daunting task of meshing for plate heat exchangers.
Structured meshes emerge as a fitting solution for discretizing the plate heat exchanger domain, offering precise geometry representation and adept capture of flow features. Moreover, they provide efficient and stable numerical solutions while demanding reduced computational resources.
In overcoming the challenges associated with conventional structured meshing software, GridPro stands out with its rapid blocking and meshing capabilities. Notably, GridPro’s advanced geometry pattern aligning algorithm addresses the limitations seen in other software available in the market. This algorithm facilitates automatic alignment of grid blocks with the patterns in corrugated plates, ensuring swift plate heat exchanger meshing and the flexibility to adapt to any variations in corrugation patterns.
The post Fulfilling Plate Heat Exchanger Meshing Needs for CFD Analysis appeared first on GridPro Blog.
Figure 1: Structured multiblock mesh for a scramjet engine.
1586 words / 8 minutes read
Over half a century has been spent trying to design a working scramjet-powered hypersonic vehicle. Considered harder than rocket engines, scramjets present a massive engineering challenge. However, with newer design innovations such as airframe integration and the REST design, scramjet-powered hypersonic flights are close to becoming a reality.
With China and Russia making all the buzz about successful scramjet-powered hypersonic flights, it looks like the game is on. The West, led by NASA, started scramjet research in the 1950s. A couple of years into the research, early scientists quickly realized the scientific difficulties of designing scramjet engines. Some say it is harder than rocketry.
This article takes you through the different aspects of scramjet technology, starting with answering the question: what is a scramjet, and how is it different from jets and rockets?
What are Scramjets? And How are They Different from Jets and Rockets:
In a jet engine, the flow inside the combustion chamber is subsonic. Even if the jet is flying at supersonic speed, the intake and the compressor slow the air down to low subsonic speed. This increases the pressure and temperature. The higher the flying speed, the higher the rise in pressure and temperature when we slow the flow down. Normally, in jets, the compressor does the job of raising the pressure and temperature. But if we are moving fast enough, the compressor can be chucked out, and so can the turbine driving it, as just slowing the flow to subsonic conditions will raise the pressure and temperature to the required levels. What is left behind without a compressor and turbine is the ramjet.
A ramjet is a simple tube with an inlet to capture the air and slow it down, a combustor to inject fuel and burn it, and an exhaust nozzle to expand the combustion products to generate thrust. Ramjets can’t start from 0 speed but need about Mach 3 to get going, and they can operate up to Mach 6. Beyond that, the rise in temperature and pressure due to the ram effect is too high for proper combustion.
The solution is to slow the flow down just a little, raising its pressure and temperature while leaving it largely supersonic, and then attempt combustion in that stream. An engine that does just that is the Scramjet (Supersonic Combustion Ramjet). Scramjets start operating around Mach 6 and can go up to Mach 12 or 14. The upper limit is up for debate because, near it, we run into the same issue of too much temperature rise from slowing the flow to maintain proper combustion. Additionally, near the upper limit, external drag forces become very high, and the heating problems become even more severe.
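The temperature-rise argument can be made concrete with the isentropic stagnation-temperature relation T0 = T*(1 + (gamma - 1)/2 * M^2); the static temperature below is an assumed high-altitude value:

# Stagnation temperature reached if the flow were brought fully to rest.
gamma = 1.4
T_static = 220.0   # assumed ambient static temperature at cruise altitude [K]
for M in (3, 6, 10):
    T0 = T_static * (1 + 0.5 * (gamma - 1) * M**2)
    print(f"Mach {M:>2}: stagnation temperature ~ {T0:.0f} K")

At Mach 6 the flow would already stagnate near 1800 K, which illustrates why a scramjet only partially slows the incoming air rather than bringing it to subsonic speed.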
Rockets, on the other hand, don’t suck air from the atmosphere but carry their own oxygen. Because of this, they are versatile and can fly in any planetary atmosphere and empty space. At the same time, carrying oxygen makes them heavy and less fuel-efficient. So, scramjet is the most attractive option if one wants to fly at hypersonic speeds in Earth’s atmosphere.
Lastly, if one wants to compare these propulsion systems with respect to fuel efficiency, turbojets are the most fuel-efficient system in the Mach 0 to Mach 3 range. Between Mach 3 and 6, ramjets are the better performers, while above Mach 6, scramjets are the best. Rockets, even though they can operate over all Mach number regimes, have the lowest fuel efficiency because they have to carry the oxidizer with them.
The first generation of scramjet engines had a pod-style design with a large axisymmetric spike for external compression. Bearing similarity to gas turbine engines, scramjet pods were designed independently of the vehicle they were meant to propel. In the end, the design was discarded because the supersonic combustion could not overcome the external drag of the spike, which lacked the much-needed airframe integration.
Hence, from the second generation onwards, the smooth integration of the engine with the vehicle was done. The vehicle is made long and slender for low-drag purposes, and the scramjet engine, with a 2D flow path, is mounted on its belly. The engine is positioned in the shadow of the vehicle’s bow shock to ensure that the vehicle’s forebody does some part of the air compression before entering the engine. In a way, one can say the vehicle is the engine, and the engine is the vehicle in this design.
Unfortunately, even this improved airframe-integrated design with 2D scramjets had its pitfalls. Ground testing of these geometries revealed that 2D scramjets were not optimal for structural efficiency and overall performance. This led to the development of the current 3rd-generation scramjets involving truly 3D geometries. In this design, along with integrating the scramjet into the airframe, the combustors started to have rounded or elliptical shapes.
One example of present-day 3D scramjets is the Rectangular-to-Elliptical Shape Transition, or REST, scramjet engine. This class of engines has a rectangular capture area that helps smooth integration with the vehicle. The rectangular cross-section gradually transitions into an elliptical cross-section as it reaches the ‘rounded’ combustor.
An elliptical shape for the combustor is preferred over a rectangular one because it offers a reduced surface area for the same amount of airflow. This reduced surface area significantly lowers the engine drag and cooling requirement compared to a rectangular shape. Further, the elliptical shape reduces structural weight due to the inherent strength of rounded structures. Also, the curved shape eliminates low-momentum corner flows, which are observed to severely limit engine performance.
The air inside a scramjet engine passes through three distinct processes of compression, combustion, and expansion in the 3 sections: intake, combustor, and exhaust nozzle.
The Intake: The front part of the engine, the intake, does the job of capturing the air and compressing it. At station 0, the flow is undisturbed by the engine. As it moves towards station 1, the air starts to experience compression due to the flow contraction caused by the vehicle’s fore-body. Further compression is done by 3 shock waves generated in the intake. The flow passing through shock waves raises the pressure and temperature of the flow. Each shock wave aligns the flow to the walls of the intake, and by the time the flow leaves the inlet at station 2, it will be uniform and parallel to the walls of the combustor.
The Combustor: At the entrance to the combustor, between stations 2 and 3, a short duct called an isolator separates the inlet operations from the pressure rise in the combustor. At station 3, the fuel is injected and ignited. It burns in the hot air that has been compressed by the inlet.
The Nozzle: Lastly, the combustion products expand through the exhaust nozzle located between stations 4 and 10. It’s here the thrust for the vehicle gets generated.
Although functioning-wise, a scramjet engine looks simple, designing a working engine that can sustain combustion for an extended period and survive under hypersonic conditions is a daunting challenge. Several engineering difficulties exist, starting with the challenge of mixing the fuel with air and igniting it in a high-velocity flow field within less than 1 millisecond.
The second issue is the high surface heat loads generated by hypersonic flight. These can be greater than those experienced by the space shuttle on re-entry, and for longer periods. The material used to build the scramjet structure needs to be lightweight and able to withstand temperatures in excess of 2000 °C. Thermal and structural design also needs to account for thermal expansion: materials grow as they get heated. So, designing a structure that does not break up as its skin heats from room temperature to 2000 °C is a major engineering challenge.
Thirdly, burning fuel in a duct can sometimes lead to choking or flow blockage. So, some mechanism needs to be built to manage it. Finally, chemical reactions can freeze in the nozzle expansion, leading to incomplete combustion.
Along with these engineering challenges, there are system-level challenges. One major issue is that scramjets don’t work below about Mach 4, so another type of propulsion system, say a ramjet or a rocket engine, is needed to get the vehicle up to speed. Lastly, the nature of scramjet operation changes considerably with Mach number, so accelerating over the large Mach range needed to reach space will be difficult.
Given their better fuel efficiency and high manoeuvrability, scramjets are preferred over rockets for hypersonic flight in the Earth’s atmosphere. They will likely find applications in hypersonic aeroplanes or cruisers and in recoverable space launchers or accelerators. A cruiser could be a vehicle that is boosted to a certain speed by a jet-ramjet combination engine and spends most of its time at constant velocity in the upper atmosphere. An accelerator, on the other hand, could be part of a multi-stage rocket-scramjet combination system for low-cost, reusable access to space.
Scramjets have come a long way over the last 60 years. The 3D scramjet design philosophy has proliferated in recent times and has been widely adopted by researchers worldwide. Also, 3D scramjets like REST have opened up the available design space, allowing newer design variants to be tested and explored. Hopefully, this will lead to better engines with improved performance and make hypersonic flight a reality in the near future.
The post Hypersonic Flights by Scramjet Engines appeared first on GridPro Blog.
Figure 1: Hexahedral mesh for accurate capturing of leakage gaps in screw compressors.
1106 words / 6 minutes read
Leakage flows stand out as the primary factor leading to decreased efficiency within screw compressors. Accurately capturing these leakage flows with refined grids plays a crucial role in achieving reliable CFD predictions of their behaviour and their consequential impact on the overall performance of the screw compressor.
Rotating volumetric machines like screw compressors or tooth compressors are used extensively in many industrial applications. It is reported that nearly 15 percent of all electric energy produced is used for powering compressors. Even a small improvement in the efficiency of these rotary compressors will result in a significant reduction in energy consumption. In fact, a small variation in rotor shape, hardly visible to the naked eye, can cause a notable change in efficiency.
Research indicates that the primary factor leading to efficiency reductions in screw compressors is leakage. This leakage occurs as a result of gaps present between rotors and between rotors and the casing. Among various thermo-fluid behaviours, internal leakage has a more substantial impact, particularly when operating at lower speeds and higher pressure ratios.
With improvement in energy efficiency becoming the main objective of design and development teams, there is a growing interest in flow patterns within screw compressors, particularly focusing on the phenomenon of leakage flows.
Screw compressors operate by altering the volume of the compression chamber, leading to corresponding variations in internal pressure and temperature. As pressure builds up during compression, the compressed gas seeks to move into lower-pressure chambers through the leakage gaps.
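As an idealised illustration of that pressure and temperature build-up, here are the polytropic relations with an assumed exponent (not data from a specific machine):

# How shrinking the chamber volume raises pressure and temperature, driving leakage.
p1, T1 = 1.0e5, 300.0     # assumed suction pressure [Pa] and temperature [K]
n = 1.3                   # assumed polytropic exponent for the compression
for vol_ratio in (2.0, 3.0, 4.0):      # V_suction / V_current
    p2 = p1 * vol_ratio**n             # p * V^n = const
    T2 = T1 * vol_ratio**(n - 1)       # T * V^(n-1) = const
    print(f"V1/V2 = {vol_ratio:.0f}: p ~ {p2/1e5:.2f} bar, T ~ {T2:.0f} K")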
Unfortunately, due to the helical nature of the compression process in positive displacement machines, it is very difficult to visually appreciate this leakage flow by any experimental methods. Also, the complex flows in screw compressors demand more detailed studies, which makes conducting physical experimentation very expensive. Hence, experimental studies in these machines have become less attractive, while CFD, with accurate prediction abilities along with detailed 3D flow measurement and visualization capabilities, has been accepted as the workable alternative.
In positive displacement machines, leakage flow is an inescapable devil. Due to the nature of the mating parts and the need for clearances between them, the compressor is bound to have several leakage paths. About 6 different leakage paths have been identified, as shown in Figure 2.
Out of these, only the cusp blow holes have a constant geometry, while the rest of the paths have a geometry and flow resistance that varies periodically in a way unique to each individual path. Further, the pressure difference driving the fluid along a leakage path also varies periodically in a manner that is unique to each leakage path.
Leakages can broadly be categorized into two groups. In the first kind, the leakage happens from the enclosed cavity or discharge chamber to the suction chamber. This causes a reduction in both volumetric and indicated efficiencies. While in the second group, leakage flow occurs from the enclosed cavity or the discharge chamber to the following enclosed cavity. Although the indicated efficiency reduces in this mode, the volumetric efficiency does not.
Each leakage path uniquely influences the performance of the compressor. Hence, it is important to understand the attributes of the leakage through each path and the percentage by which it can impact the machine’s efficiency. This is essential because it helps prioritise the design effort in general and the enhancement of the rotor lobe profile in particular.
The critical factor affecting CFD performance prediction for twin-screw compressors is the accuracy with which leakage gaps are captured by the gridding strategy. Since the working chamber of a screw machine is transient in nature, we need a grid that can accurately represent the domain deformation.
One approach is simply increasing the grid points on the rotor profile. Studies have shown that grid refinement in the circumferential direction directly influences mass flow rate prediction. In contrast, it has a lesser influence on predicting pressure and power. However, since we want to do a transient simulation in a deforming domain, this gridding approach will cause quicker deterioration in grid quality and a rapid rise in computational time.
Alternatively, another effective way to tackle this discretization challenge is to locally refine only the interlobe space region. This particular area holds utmost significance in managing leakage flows. By confining the increase in cell count to the interlobe gaps and blow-hole areas, the overall grid dimensions can be maintained under control.
The benefits of mesh refinement in the vicinity of interlobe gap and blow-hole area can be seen in improved accuracy in predicting mass flow rate and leakage flows. Interlobe refinement improves the curvature capturing of rotor profiles and also the mesh quality. This is reflected in the CFD predictions.
The difference between experimental indicated power and CFD predictions on the base grid is about 2.7% at 6000 rpm and 6.6% at 8000 rpm. With interlobe grid refinement, the difference reduces to 1.4% at 6000 rpm and 2.8% at 8000 rpm.
Interlobe grid refinement has an even larger effect on the flow rate prediction. The difference between the experimental results and the CFD predictions on the base grid is 11% at 6000 rpm and 8.7% at 8000 rpm; with grid refinement, these differences drop to approximately 5.5% and 2.9%, respectively.
The volumetric efficiency prediction on the base grid is 7% lower than the experiment. With refinement, the difference reduces to 3%. As with other variables, the difference is smaller at 8000 rpm than at 6000 rpm.
Specific indicated power, which depends on both indicated power and mass flow rate, is also sensitive to the grid. At 6000 rpm, the difference between the base-grid CFD prediction and the experimental specific indicated power is about 0.2 kW/(m³/min), which reduces to 0.15 kW/(m³/min) with refinement. At 8000 rpm, the CFD predictions match the experiment, as can be seen in Figure 7.
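For readers who want to check such comparisons themselves, the derived quantities follow the usual compressor definitions; below is a minimal sketch with placeholder numbers (not the values from this study):

```python
def volumetric_efficiency(m_dot, rho_suction, displacement_per_rev, rpm):
    """Volumetric efficiency = delivered volume flow (at suction conditions)
    divided by the swept volume flow."""
    v_dot_delivered = m_dot / rho_suction               # m^3/s
    v_dot_swept = displacement_per_rev * rpm / 60.0     # m^3/s
    return v_dot_delivered / v_dot_swept

def specific_indicated_power(indicated_power_kw, m_dot, rho_suction):
    """Specific indicated power in kW per (m^3/min) of delivered flow."""
    v_dot_m3_per_min = (m_dot / rho_suction) * 60.0
    return indicated_power_kw / v_dot_m3_per_min

# Placeholder inputs only, for illustration:
eta_v = volumetric_efficiency(m_dot=0.12, rho_suction=1.2,
                              displacement_per_rev=1.2e-3, rpm=6000)
p_spec = specific_indicated_power(indicated_power_kw=30.0, m_dot=0.12, rho_suction=1.2)
```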
The findings suggest that finer grids capture the rotor geometry better, thereby improving the representation of leakage losses. With successively refined grids, the predicted leakage losses decrease, and the CFD predictions gradually align more closely with the experimental data.
1. Challenges in Meshing Scroll Compressors
2. Automation of Hexahedral Meshing for Scroll Compressors
3. The Art and Science of Meshing Turbine Blades
1. “The Analysis of Leakage in a Twin Screw Compressor and its Application to Performance Improvement”, John Fleming et al., Proc Instn Mech Engrs, Vol 209, 1995.
2. “Analytical Grid Generation for Accurate Representation of Clearances in CFD for Screw Machines”, S. Rane et al., August 2015.
3. “Grid Generation and CFD Analysis of Variable Geometry Screw Machines”, Sham Ramchandra Rane, PhD Thesis, City University London, School of Mathematics, Computer Science and Engineering, August 2015.
4. “CFD Simulations of Single- and Twin-Screw Machines with OpenFOAM”, Nicola Casari et al., Designs, 2020.
5. “Numerical Modelling and Experimental Validation of Twin-Screw Expanders”, Kisorthman Vimalakanthan et al., Energies, 2020, 13, 4700.
6. “New Insights in Twin Screw Expander Performance for Small Scale ORC Systems from 3D CFD Analysis”, Iva Papes et al., Applied Thermal Engineering, July 15, 2015.
7. “A Grid Generator for Flow Calculations in Rotary Volumetric Compressors”, John Vande Voorde et al., European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS), 2004.
8. “CFD Simulation of a Twin Screw Expander Including Leakage Flows”, Rainer Andres et al., 23rd International Compressor Engineering Conference at Purdue, July 11-14, 2016.
9. “Calculation of Clearances in Twin Screw Compressors”, Ermin Husak et al., International Conference on Compressors and their Systems, 2019.
The post Accurate Capturing of Leakage Gaps in Screw Compressors with Hex Grids appeared first on GridPro Blog.
Figure 1: Automated gerotor pump meshing with GridPro’s structured multiblock grid generator.
An automated hexahedral mesher empowers engineers to effortlessly scrutinize the flow behaviour, vividly understand how the flow changes with the clearance gap, and explicitly bring out the differences in performance between gerotor design variants.
The unique characteristics of gerotor pumps have made them a widely used pumping device in various industries. They are compact, reliable, and inexpensive, making them a cost-effective option for fluid transfer applications. Additionally, they offer high tolerance to fluid contamination, aeration, and cavitation. By providing excellent flow control, minimal flow pulsation and low noise, they have a strong footprint in the aerospace, automotive and manufacturing sectors.
The aerospace industry uses them for cooling, lubrication, and fuel boost and transfer processes. In manufacturing, they are used for dosing, filling, dispensing, and coating applications. Gerotor pumps are also extensively used in the automotive, agriculture, and construction fields, particularly for low-pressure applications. With the progress of technology, gerotor pumps are finding new applications in the life science, industrial, and mechanical engineering sectors.
This expansion in applicability across various industries is driving the gerotor pump research for further improvement. Also, the growing environmental concern in various industries is creating a need for newer applications, which demand pumps that can improve their efficiency. Gerotor pumps, with their simple design, have presented themselves as an attractive option for these newer applications. However, the increasing demand for pumps that meet stringent specifications and shorter design cycles necessitates a cost-effective design process that can lead to optimal performance and efficiency.
This has driven further research on gerotor pumps, focusing on improving design through numerical simulation, allowing designers to identify potential performance issues and optimize their designs before building physical prototypes. By leveraging this approach, researchers are leading the way towards more efficient and reliable gerotor pump designs that meet the growing demand for pump applications in various industries.
CFD is an essential tool for the design and optimization of gerotor pumps. CFD simulations accurately predict the effect of cavitation and fluid-body interaction on performance by providing a detailed description of the fluid’s behaviour inside the pump. Due to its accuracy, CFD is often used as a reference for pump performance when no experimental comparison data is available.
However, there are certain challenges in using CFD for gerotor pump design. The CFD process requires large simulation time and memory requirements, and there is a need to re-mesh the entire domain at each angular step. Further, meshing the inter-teeth clearance and constantly changing fluid domain could be a challenging task.
These constraints can delay the design verification stage, making the process time-consuming. The design engineer must mesh the volume chambers each time the design changes and perform a time-consuming simulation. In most cases, the simulation of a geometric configuration takes up to a day to generate results. This workflow hinders the effectiveness of rapid design methodologies or the easy testing of a large number of geometric configurations of the pump in a reasonable time.
The primary focus of meshing research for positive displacement machines is the development of methodologies that support rapid simulation of any geometry. Efforts are made to develop meshing methods that automatically generate high-resolution grids with optimal cell size and high quality, without human intervention.
However, gerotor pump meshing is challenging due to the rotating and deforming fluid volumes created during the working cycle. The rapid transformation of the deforming fluid zone from a large pocket region to a narrow passage makes it extremely difficult to maintain cell resolution, cell quality, and mesh size simultaneously; improving one of these objectives typically degrades the others. On top of this, devising a meshing procedure that avoids human intervention raises the difficulty further.
Additionally, the tight clearance space, which plays a significant role in determining volumetric efficiency, presents another obstacle for CFD simulations. These clearances are extremely small, often in the range of a few microns, and impact various aspects of the pump’s performance, such as flow leakage, flow ripple, cavitation, pressure lock, torque, and power. Out of these, the flow ripple parameter is significantly affected by the design of the tip and side gaps. A high ripple in the outlet flow can cause high levels of vibration and noise in the pump.
Hence, it is critically important to represent these narrow gaps accurately with high-resolution, high-quality meshes so that their effects are captured clearly. Coarse, low-resolution grids decrease accuracy and may lead to over- or underestimation of the flow variables. Maintaining a certain mesh quality is also important, as it enables CFD to resolve variations in clearances and other trends.
Various meshing techniques have been employed over time to discretize the gerotor fluid space. Among them, overlapping meshing methods, deform and remesh methods and customised structured meshing are the most popular ones.
Overlapping meshing methods, including the overset and immersed boundary methods, are frequently used. Although they are quick to generate, they often fall short of properly resolving the boundary layer and narrow clearance gaps while also employing an excessive number of cells.
The deform and remesh method is another popular approach that offers automation but often generates grids with a large cell count. Unfortunately, these methods can cause interpolation errors and stability issues while running the CFD solvers.
While manual customised grid generation methods provide the best mesh in terms of cell quality and grid size, they demand excessive time and human effort to generate the mesh. Unlike the generic moving mesh methods, such as the immersed boundary method, manual gridding approaches, such as the structured moving/sliding methods, accurately represent the dynamic gaps.
In the structured moving/sliding mesh approach, the fluid volume of the rotor chamber is isolated from the stationary fluid volumes related to the suction and delivery port. The rotor volume is topologically similar to a ring, making it easy to create an initial structured mesh for this shape. This zone being an extrudable domain, a 2D grid is created, which is later extruded to get a 3D mesh.
The stationary fluid volumes of the suction and delivery port are meshed using unstructured approaches. They are linked to the rotor mesh volume via non-conformal interfaces.
When the inner gear surface shifts to a new position, the mesh on the surface does not simply move with it. Instead, the mesh “slides” on the inner gear surface while adjusting to conform to the new clearance between the inner and outer gear surfaces. Simultaneously, the interface connections between the rotor volume and other fluid volumes are updated. These meshing steps ensure good resolution of the clearance space while maintaining good cell quality.
GridPro addresses the gerotor pump meshing challenge with its unique single-topology multi-configuration approach. To start with, for a given instance of the inner and the outer gear position, a 2D wireframe topology is built. Since the meshing zone is 2.5D in nature, a grid in 2D is good enough, which is later extruded in the perpendicular direction to get the 3D grid. The 2D topology acts as a template, to be later used repeatedly to generate mesh for all instances of the inner and outer gear positioning.
A Python script generates the grids for all angular steps in a hands-free environment. The script rotates the inner and outer gears in user-specified angular steps of 0.1 degrees and produces a grid with consistent mesh quality at each step. Because the topology is the same, the mesh generated for each angular step is practically identical, which brings significant benefits compared to an unstructured re-meshing approach, where the cell count and connectivity differ completely from one angular step to the next.
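A rough illustration of what such a driver script might look like is given below. The batch command, file names, and tooth count are hypothetical placeholders rather than GridPro’s actual interface; the only physical assumption is that for an N/(N+1)-lobe gerotor the outer gear turns at N/(N+1) of the inner gear’s speed.

```python
import subprocess

N_INNER_TEETH = 6                                    # placeholder gerotor configuration (N and N+1 lobes)
GEAR_RATIO = N_INNER_TEETH / (N_INNER_TEETH + 1)     # outer gear turns slower than the inner gear
STEP = 0.1                                           # inner-gear angular step in degrees, as described above

for i in range(int(360 / STEP)):
    inner_angle = i * STEP
    outer_angle = inner_angle * GEAR_RATIO
    # 'gridpro_batch' and its arguments are hypothetical placeholders for whatever
    # command regenerates the grid from the single topology template.
    subprocess.run(["gridpro_batch", "--topology", "gerotor_template.fra",
                    f"--inner-angle={inner_angle:.3f}",
                    f"--outer-angle={outer_angle:.3f}",
                    f"--out=grid_{i:04d}.grd"], check=True)
```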
This consistency in the grids generated for all gear positions aids in generating superior flow field simulation results. The automated meshing environment saves time and human effort and gives the design engineer much-needed confidence in the simulated CFD data.
Engineers can enhance their workflow for 3D CFD analyses of gerotor pumps with an automated hexahedral mesher. It empowers them to effortlessly scrutinize the flow behaviour inside the working chambers, vividly understand how the flow physics changes with the clearance gap, and explicitly bring out the differences in performance between parametric design variants.
More importantly, an automated mesher brings the engineers’ focus back to the design aspects of the pump rather than on the meshing.
The post Automated Hexahedral Meshing of Gerotor Pumps appeared first on GridPro Blog.
We used 3DFoil, our vortex lattice package, to perform aerodynamic simulations of a rectangular wing based on a NACA 0012 airfoil and compared the results with a NACA experiment performed in 1938, Ref. [1]. NACA tested a full-scale wing with a span of 36 feet and a chord of 6 feet in a full-scale wind tunnel. The results show excellent agreement between 3DFoil and the experiments for the lift and drag coefficients.
References:
[1] Goett, H. J., & Bullivant, W. K. (1938). Tests of NACA 0009, 0012, and 0018 airfoils in the full-scale tunnel. Washington, DC, USA: US Government Printing Office.
3DFoil empowers engineers, designers and students alike to design and analyze 3D wings, hydrofoils, and more. The software seamlessly blends speed and accuracy, using a vortex lattice method and boundary layer solver to calculate lift, drag, moments, and even stability. Its user-friendly interface allows for flexible design with taper, twist, and sweep, making it ideal for creating winglets, kite hydrofoils, and various other aerodynamic surfaces. Notably, 3DFoil surpasses traditional 2D analysis by considering finite wing span for more realistic performance predictions, helping users optimize their designs with confidence.
See also: https://www.hanleyinnovations.com/3dwingaerodynamics.html
Visit 👉 Hanley Innovations for more information
Start the design process now with Stallion 3D. It is a complete computational fluid dynamics (CFD) tool based on RANS that quickly and accurately simulates complex designs. Simply enter your CAD geometry, in STL format from OpenVSP or other tools, to discover the full potential of your design.
Learn more 👉 https://www.hanleyinnovations.com/stallion3d.html
Stallion 3D is a tool designed for you, the designer, to successfully fly your designs on schedule:
Stallion 3D empowers you to take your designs to the next level. The picture above shows the aerodynamics of an amphibious Lockheed C-130 concept. A Windows 11 laptop was used for the complete calculation. Stallion 3D is ideal for down selecting conceptual designs so you can move to the next step with an optimized aircraft.
Do not hesitate to contact us at hanley@hanleyinnovations.com if you have any questions. Thanks 😀
VisualFoil Plus is a version of VisualFoil that has a built-in compressible flow solver for transonic and supersonic airfoil analysis. As VisualFoil Plus is currently not in active development, the perpetual license is only $189.
Learn more 👉 https://www.hanleyinnovations.com/air_16.html
VisualFoil Plus has the following features:
The picture above shows the solution of the NACA 0012 airfoil at a Mach number of 0.825.
Please visit us at https://www.hanleyinnovations.com/air_16.html for more information.
When choosing a CFD (Computational Fluid Dynamics) software for beginners, it's essential to consider factors that balance ease of use with computational power. Here are some key qualities to look for:
1. User-Friendly Interface:
Popular aerodynamics software options for beginners offered by Hanley Innovations are:
By considering these factors, you can start to work on your aerodynamics and make significant progress in a short period of time.
Here are instructions on how to import a surface CSV file from Stallion 3D into ParaView using the Point Dataset Interpolator (a pvpython sketch of these steps follows the list):
In Stallion 3D, export the surface data as a CSV file. Then:
1. Open ParaView.
2. Convert the CSV to points.
3. Load the target surface mesh.
4. Apply the Point Dataset Interpolator.
5. Visualize.
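A minimal pvpython sketch of steps 2-5 is shown below, assuming placeholder file names and column headers; the exact filter and property names (particularly the Input/Source roles of the Point Dataset Interpolator) may vary between ParaView versions.

```python
# Hedged sketch only: file names and column names are placeholders.
from paraview.simple import *  # ParaView's Python scripting interface

# Step 2: read the CSV exported from Stallion 3D and convert the table to points.
csv = CSVReader(FileName=['stallion3d_surface.csv'])                      # placeholder file name
points = TableToPoints(Input=csv, XColumn='x', YColumn='y', ZColumn='z')  # assumed column headers

# Step 3: load the target surface mesh to interpolate onto.
target = OpenDataFile('target_surface.stl')                               # placeholder file name

# Step 4: interpolate the point data onto the target surface.
# Depending on the ParaView version, Input and Source may need to be swapped.
interp = PointDatasetInterpolator(Input=target, Source=points)

# Step 5: visualize the interpolated field.
Show(interp)
Render()
```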
Take flight with your next project! Hanley Innovations offers powerful software solutions for airfoil design, wing analysis, and CFD simulations.
Here's what's taking off:
Hanley Innovations: Empowering engineers, students, and enthusiasts to turn aerodynamic dreams into reality.
Ready to soar? Visit www.hanleyinnovations.com and take your designs to new heights.
Stay tuned for more updates!
#airfoil #cfd #wingdesign #aerodynamics #iAerodynamics
A Perfect Celebration
On December 5-7, 2024, a symposium, Emerging Trends in Computational Fluid Dynamics: Towards Industrial Applications, was successfully held at Stanford University to celebrate the 90th birthday of CFD legend, Professor Antony Jameson. I am very grateful to Antony for giving Professor Chongam Kim and me an opportunity to celebrate our 60th birthdays in conjunction with his. Thus, the symposium is also called the Jameson-Kim-Wang (JKW) symposium.
An organizing committee led by Professor Siva Nadarajah (McGill University) and composed of Professors Chunlei Liang (Clarkson University), Meilin Yu (UMBC), and Hojun You (Sejong University) did a fantastic job in organizing a flawless symposium. The list of speakers includes the who's who and rising stars in CFD. A special shoutout goes to Professor Juan Alonso and the sponsors for their support of the Symposium. A photo of the attendees is shown in Figure 1. Some good-looking posters from the sponsors are shown in Figure 2.
Antony's many pioneering contributions to CFD have been well documented in the literature. His various CFD and design optimization codes have shaped the design of commercial aircraft for many decades. Several aircraft manufacturers told stories about Antony's impact. We look forward to the release of the Symposium videos next year.
Next, I'd like to touch upon my personal connection to Antony. I first heard of his name and his work in China from my graduate advisor, Academician Zhang Hanxin. I still recall reading his paper on the successes and challenges in computational aerodynamics. I believe I first met Antony at an AIAA conference when he came to my talk on conservative Chimera, but I did not get an opportunity to introduce myself. Our second meeting took place in China during an Asian CFD conference in 2000, where both of us were invited speakers. We sat at the same table with Charlotte (Mrs. Jameson) at a banquet. This time I was able to properly introduce myself.
Soon after that, we started collaborating on high-order methods, from spectral difference to flux reconstruction. I visited Antony's lab and co-organized his 70th birthday celebration at Stanford in late 2004. During a visit to his home, Antony shared his fascination with the aerodynamics of hummingbirds. I still recall receiving his phone call about proving the stability of the SD method with Gauss points as the flux points on a Saturday when I was at my son's soccer game!
The Symposium also gave me an opportunity to see many of my former students, some of whom I have not seen for more than two decades: Yanbing, Khyati, Prasad, Chunlei, Varun, Takanori, Meilin, Lei, Cheng, Feilin, Eduardo and Justin. It is very gratifying to hear their stories after so many years.
The Symposium concluded with an amazing banquet. My friend and collaborator, H.T. Huynh, did a hilarious roast of me, and I could not stop laughing the whole time. H.T. has the talent of a standup comedian. Everything went smoothly and we had a perfect symposium!
In the computation of turbulent flow, there are three main approaches: Reynolds averaged Navier-Stokes (RANS), large eddy simulation (LES), and direct numerical simulation (DNS). LES and DNS belong to the scale-resolving methods, in which some turbulent scales (or eddies) are resolved rather than modeled. In contrast to LES, all turbulent scales are modeled in RANS.
Another scale-resolving method is the hybrid RANS/LES approach, in which the boundary layer is computed with a RANS approach while some turbulent scales outside the boundary layer are resolved, as shown in Figure 1. In this figure, the red arrows denote resolved turbulent eddies and their relative size.
Depending on whether near-wall eddies are resolved or modeled, LES can be further divided into two types: wall-resolved LES (WRLES) and wall-modeled LES (WMLES). To resolve the near-wall eddies, the mesh needs enough resolution in both the wall-normal (y+ ~ 1) and wall-parallel (x+ and z+ ~ 10-50) directions in terms of the wall viscous scale, as shown in Figure 1. For high-Reynolds-number flows, the cost of resolving these near-wall eddies can be prohibitively high because of their small size.
In WMLES, the eddies in the outer part of the boundary layer are resolved while the near-wall eddies are modeled as shown in Figure 1. The near-wall mesh size in both the wall-normal and wall-parallel directions is on the order of a fraction of the boundary layer thickness. Wall-model data in the form of velocity, density, and viscosity are obtained from the eddy-resolved region of the boundary layer and used to compute the wall shear stress. The shear stress is then used as a boundary condition to update the flow variables.
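To make the idea concrete, here is a minimal sketch of an equilibrium (log-law) wall model of the kind described above: given the LES velocity at a matching point, it iterates for the friction velocity and returns the wall shear stress. The constants and inputs are generic assumptions, not the specific wall model used in any particular code.

```python
import numpy as np

def wall_shear_from_log_law(U, y, nu, rho, kappa=0.41, B=5.2, tol=1e-10, max_iter=100):
    """Solve the log law  U/u_tau = ln(y*u_tau/nu)/kappa + B  for the friction
    velocity u_tau with Newton iteration, then return tau_w = rho*u_tau**2.
    U: LES velocity sampled at wall distance y; nu: kinematic viscosity."""
    u_tau = max(np.sqrt(nu * U / y), 1e-12)          # initial guess from the linear sublayer
    for _ in range(max_iter):
        f = U / u_tau - (np.log(y * u_tau / nu) / kappa + B)
        dfdu = -U / u_tau**2 - 1.0 / (kappa * u_tau)
        du = -f / dfdu
        u_tau = max(u_tau + du, 1e-12)
        if abs(du) < tol * u_tau:
            break
    return rho * u_tau**2

# Example with generic air-like values: U = 10 m/s sampled 2 cm off the wall
tau_w = wall_shear_from_log_law(U=10.0, y=0.02, nu=1.5e-5, rho=1.2)
```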
During the past summer, AIAA successfully organized the 4th High Lift Prediction Workshop (HLPW-4) concurrently with the 3rd Geometry and Mesh Generation Workshop (GMGW-3), and the results are documented on a NASA website. For the first time in the workshop's history, scale-resolving approaches were included in addition to the Reynolds Averaged Navier-Stokes (RANS) approach. These approaches were covered by three Technology Focus Groups (TFGs): High Order Discretization; Hybrid RANS/LES; and Wall-Modeled LES (WMLES) and Lattice-Boltzmann.
The benchmark problem is the well-known NASA high-lift Common Research Model (CRM-HL), which is shown in the following figure. It contains many difficult-to-mesh features such as narrow gaps and slat brackets. The Reynolds number based on the mean aerodynamic chord (MAC) is 5.49 million, which makes wall-resolved LES (WRLES) prohibitively expensive.
The geometry of the high-lift Common Research Model
University of Kansas (KU) participated in two TFGs: High Order Discretization and WMLES. We learned a lot during the productive discussions in both TFGs. Our workshop results demonstrated the potential of high-order LES in reducing the number of degrees of freedom (DOFs) but also contained some inconsistency in the surface oil-flow prediction. After the workshop, we continued to refine the WMLES methodology. With the addition of an explicit subgrid-scale (SGS) model, the wall-adapting local eddy-viscosity (WALE) model, and the use of an isotropic tetrahedral mesh produced by the Barcelona Supercomputing Center, we obtained very good results in comparison to the experimental data.
At the angle of attack of 19.57 degrees (free-air), the computed surface oil flows agree well with the experiment with a 4th-order method using a mesh of 2 million isotropic tetrahedral elements (for a total of 42 million DOFs/equation), as shown in the following figures. The pizza-slice-like separations and the critical points on the engine nacelle are captured well. Almost all computations produced a separation bubble on top of the nacelle, which was not observed in the experiment. This difference may be caused by a wire near the tip of the nacelle used to trip the flow in the experiment. The computed lift coefficient is within 2.5% of the experimental value. A movie is shown here.
Comparison of surface oil flows between computation and experiment
Multiple international workshops on high-order CFD methods (e.g., 1, 2, 3, 4, 5) have demonstrated the advantage of high-order methods for scale-resolving simulation such as large eddy simulation (LES) and direct numerical simulation (DNS). The most popular benchmark from the workshops has been the Taylor-Green (TG) vortex case. I believe the following reasons contributed to its popularity:
Using this case, we are able to assess the relative efficiency of high-order schemes over a 2nd order one with the 3-stage SSP Runge-Kutta algorithm for time integration. The 3rd order FR/CPR scheme turns out to be 55 times faster than the 2nd order scheme to achieve a similar resolution. The results will be presented in the upcoming 2021 AIAA Aviation Forum.
Unfortunately the TG vortex case cannot assess turbulence-wall interactions. To overcome this deficiency, we recommend the well-known Taylor-Couette (TC) flow, as shown in Figure 1.
Figure 1. Schematic of the Taylor-Couette flow (r_i/r_o = 1/2)
The problem has a simple geometry and boundary conditions. The Reynolds number (Re) is based on the gap width and the inner wall velocity. When Re is low (~10), the problem has a steady laminar solution, which can be used to verify the order of accuracy for high-order mesh implementations. We choose Re = 4000, at which the flow is turbulent. In addition, we mimic the TG vortex by designing a smooth initial condition, and also employing enstrophy as the resolution indicator. Enstrophy is the integrated vorticity magnitude squared, which has been an excellent resolution indicator for the TG vortex. Through a p-refinement study, we are able to establish the DNS resolution. The DNS data can be used to evaluate the performance of LES methods and tools.
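For the low-Reynolds-number verification step mentioned above, the steady laminar solution is the classical circular Couette profile, u_theta(r) = A r + B/r; a small sketch follows, with unit radius and rotation rate as placeholder values.

```python
import numpy as np

def couette_profile(r, r_i, r_o, omega_i, omega_o=0.0):
    """Analytic laminar Taylor-Couette azimuthal velocity u_theta(r) = A*r + B/r,
    for inner/outer radii r_i, r_o rotating at omega_i, omega_o."""
    A = (omega_o * r_o**2 - omega_i * r_i**2) / (r_o**2 - r_i**2)
    B = (omega_i - omega_o) * r_i**2 * r_o**2 / (r_o**2 - r_i**2)
    return A * r + B / r

# Radius ratio 1/2 with a rotating inner wall and a stationary outer wall (placeholder units)
r_i, r_o, omega_i = 0.5, 1.0, 1.0
r = np.linspace(r_i, r_o, 11)
u_theta = couette_profile(r, r_i, r_o, omega_i)   # reference profile for order-of-accuracy checks
```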
Figure 2. Enstrophy histories in a p-refinement study
Happy 2021!
The year of 2020 will be remembered in history more than the year of 1918, when the last great pandemic hit the globe. As we speak, daily new cases in the US are on the order of 200,000, while the daily death toll oscillates around 3,000. According to many infectious disease experts, the darkest days may still be to come. In the next three months, we all need to do our very best by wearing a mask, practicing social distancing and washing our hands. We are also seeing a glimmer of hope with several recently approved COVID vaccines.
2020 will be remembered more for what Trump tried and is still trying to do, to overturn the results of a fair election. His accusations of widespread election fraud were proven wrong in Georgia and Wisconsin through multiple hand recounts. If there was any truth to the accusations, the paper recounts would have uncovered the fraud because computer hackers or software cannot change paper votes.
Trump's dictatorial habits were there for the world to see in the last four years. Given another 4-year term, he might just turn a democracy into a Trump dictatorship. That's precisely why so many voted in the middle of a pandemic. Biden won the popular vote by over 7 million, and won the electoral college in a landslide. Many churchgoers support Trump because they dislike Democrats' stances on abortion, LGBT rights, etc. However, if a Trump dictatorship becomes reality, religious freedom may not exist anymore in the US.
Is the darkest day going to be January 6th, 2021, when Trump will make a last-ditch effort to overturn the election results in the Electoral College certification process? Everybody knows it is futile, but it will give Trump another opportunity to extort money from his supporters.
But, the dawn will always come. Biden will be the president on January 20, 2021, and the pandemic will be over, perhaps as soon as 2021.
The future of CFD is, however, as bright as ever. On the front of large eddy simulation (LES), high-order methods and GPU computing are making LES more efficient and affordable. See a recent story from GE.
Figure 1. Various discretization stencils for the red point
Author:
Allie Yuxin Lin
Marketing Writer
In my first year of university, I became enamored with science fiction novels, particularly those dealing with the subgenre of time travel. During one of my literary pursuits, I came across the story of a 20th century nurse who manages to save the lives of many 16th century soldiers because she engineered a modern syringe from a viper’s hollow fang. While the modern hypodermic needle was not invented until the 1850s, the first syringe (not necessarily hypodermic) was created in 1650 based on Pascal’s Law, which states that a pressure applied at any point in a confined fluid will be directly transmitted throughout the fluid. I would later learn of another indispensable part of modern civilization that is also based on Pascal’s Law, and, you could say, transforming lives in its own way.
A piston pump is a type of reciprocating pump in which the reciprocating motion of a piston forms a chamber. When the pump expands, the chamber draws in fluid through a valve; when the pump contracts, the chamber expels fluid through a separate valve. A syringe’s plunger works by the same mechanism, as do hand soap dispensers, well pumps, bicycle pumps, and more. These machines have a simple design, which has allowed them to become a critical part of the oil and gas industry, where they are primarily used to transfer fluids at high pressures during extraction and processing operations. Their function as a positive displacement device, as well as their ability to generate high pressures and handle a wide range of fluid types, make piston pumps particularly attractive for the oil and gas industry. In particular, they are used in tasks such as well stimulation (including hydraulic fracturing and acidizing), mud pumping during drilling, chemical injection for corrosion inhibition, flow assurance, wellhead service, and high-pressure fluid transfer in pipelines and processing facilities.
Given their importance in industry, finding the right tools to model piston pumps can offer valuable insights into the design and application of these ubiquitous tools. However, piston pumps often involve complex moving boundaries, as well as intricate piston motion and valve dynamics, which may pose a challenge for simulation. These apparatuses are also prone to cavitation, which refers to the formation and collapse of vapor bubbles in the pump’s fluid. This happens when the working pressure inside the pump falls below the fluid’s vapor pressure, causing localized vaporization. When these bubbles collapse, they create shock waves that may lead to undesired vibrations, machinery damage, and reduced efficiency over time.
CONVERGE is a useful tool for piston pump simulations because it can efficiently overcome many of the challenges associated with these devices. Our solver automatically generates the computational mesh at each time-step, eliminating the need for complex re-meshing strategies. Adaptive Mesh Refinement (AMR) ensures high resolution where it is needed without incurring extensive computational costs. Fluid-structure interaction (FSI) modeling can accurately track the interaction between the piston, the valves, and surrounding fluid to predict pressure and flow behavior. Furthermore, CONVERGE includes several built-in cavitation models and multi-phase capabilities that help predict vapor formation, bubble collapse, and pressure spikes.
In this CONVERGE case study, we simulated a piston pump with plate valves to regulate the pressure and suction sides and compared our results to experimental data.1 In this geometry,2 the fluid (water) is induced by an oscillatory movement of the plunger. As the plunger reaches its minimum displacement, the pump begins its suction stroke; similarly, as the plunger reaches its maximum displacement, the pump begins its discharge stroke.
CONVERGE’s FSI modeling captured the dynamic relationship between the fluid and the plate valves, the pump chamber, and the suction and pressure pipes. The two-way coupled FSI approach predicted the rigid-body motion of the plate valves resulting from the balance between the fluid load and suction pressure on one side and the spring loads on the other. In this study, both forces were set up as 1DOF FSI objects, i.e., they could only move translationally, along the x-axis. The FSI spring feature models spring forces between a fixed object and a rigid FSI object (valve). The model approximates the force of a linear coil spring, with specified parameters for stiffness, damping constant, length, and pre-load.
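As a rough illustration of the 1DOF force balance described above (not CONVERGE's FSI spring feature itself; all values and sign conventions are placeholders), the spring-side contribution might look like:

```python
def valve_spring_force(lift, lift_velocity, k, c, preload):
    """Force from a pre-loaded linear coil spring with viscous damping acting on a
    1DOF translating valve. lift: displacement from the seat [m]; lift_velocity:
    d(lift)/dt [m/s]; k: stiffness [N/m]; c: damping [N*s/m]; preload: seating force [N].
    A negative value pushes the valve back toward its seat (assumed sign convention)."""
    return -(preload + k * lift) - c * lift_velocity

# Placeholder values: 2 mm lift, 5 kN/m spring, 10 N pre-load
f_spring = valve_spring_force(lift=0.002, lift_velocity=0.1, k=5000.0, c=2.0, preload=10.0)
```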
Other CONVERGE features that aided in this simulation include the RNG k-epsilon model, which accounted for the turbulent flow in the pump. The phase change between the liquid and vapor phases was captured using cavitation modeling, specifically, the homogenous relaxation model (HRM). HRM predicts the mass exchange between the liquid and vapor and describes the rate at which the liquid-vapor mass interchange approaches equilibrium. In this case, we used time scale coefficients for the condensation and evaporation of water to predict mass flow rate and discharge.
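Schematically, the HRM drives the local vapor quality toward its equilibrium value over a relaxation time scale; a bare-bones explicit update of that relaxation equation (an illustration, not CONVERGE's implementation) is:

```python
def hrm_quality_update(x, x_eq, theta, dt):
    """One explicit time step of the homogeneous relaxation model,
    dx/dt = (x_eq - x) / theta, where x is the instantaneous vapor mass fraction
    (quality), x_eq the local equilibrium quality, and theta the relaxation time scale."""
    return x + dt * (x_eq - x) / theta

# Example: quality relaxing toward equilibrium over a 1 ms time scale
x_new = hrm_quality_update(x=0.05, x_eq=0.2, theta=1e-3, dt=1e-5)
```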
For a more accurate simulation, velocity- and void fraction-based AMR were applied to refine and coarsen the mesh depending on the resolution requirements. In addition, fixed embedding was employed around the valves and the piston crown to maintain a fine resolution while keeping the rest of the grid coarse. Pressure-velocity coupling was captured with the Pressure Implicit Splitting of Operators (PISO) scheme, which performs the PISO algorithm in a loop until it reaches a user-specified PISO tolerance value.
Overall, there was good agreement between the experimental values and CONVERGE data, as measured by the valve lift. In addition to accurately capturing the amount of displaced volume in the pump, our simulation effectively predicted compressibility effects.
Much like the inventive syringe, piston pumps—which are rooted in the same scientific principles—are an indispensable part of modern industry. Their simple yet powerful design, based on Pascal’s Law, allows them to perform critical tasks in the oil and gas sector, in spite of challenges such as multi-phase dynamics and cavitation. In this case study, we leveraged CONVERGE’s innovative tools, including FSI and multi-phase flow modeling, to simulate two-phase flow in a reciprocating displacement pump incorporating fluid-actuated valve movement. Advanced simulations such as the one outlined in this blog help refine our understanding of piston pumps, ensuring they continue to function efficiently and effectively under all circumstances.
[1] Anciger, D., “Numerische Simulation der Fluid-Struktur-Interaktion fluidgesteuerter Ventile in oszillierenden Verdrängerpumpen.” Ph.D. thesis, Technische Universität München, Munich, Germany, 2012.
[2] Deimel, C., et al. “Numerical 3D Simulation of the Fluid-Actuated Valve Motion in a Positive Displacement Pump with Resolution of the Cavitation-Induced Shock Dynamics.” Eighth International Conference on Computational Fluid Dynamics (ICCFD8), ICCFD8-2014-0433, Chengdu, China, July 14-18, 2014. DOI: 10.13140/2.1.3443.2326
Author:
Allie Yuxin Lin
Marketing Writer
Allow me to paint a picture for you. You’re an auto manufacturer, and you realize that the increased demand for fuel efficiency is pushing the industry toward new engine designs that can reduce fuel consumption while abiding by stricter governmental regulations on emissions. To accommodate this, you must follow the industry standard and rely on both experimental prototyping and numerical modeling. As you learn more about numerical simulation, you see that there are two approaches that you could take, so you start exploring these in depth. The design of experiments (DoE) technique explores the design space through many simulations and creates a response surface to optimize outcomes. This approach allows you to run many concurrent simulations to achieve quick design times. However, traditional linear regression-based response surface methods (RSMs) are unable to capture the complex, non-linear interactions in engine combustion. The second option involves the application of genetic algorithms (GAs), which optimize designs through multiple simulations over many generations.1 Your research shows that the GA method is very effective at exploring optimal design strategies, but it typically requires many generations to converge, leading to an extended design turnaround of up to several months.
Now you’re facing a difficult predicament. You have two options in front of you, one which will solve the problem within a reasonable timeframe but might miss out on the optimal solution, and another that is robust but computationally costly.
Enter machine learning (ML) optimization. Offering rapid project turnaround, cost-efficiency, and knowledge of the full design space, ML optimization is a game-changer in the field.2 Trained on DoE data, the ML tool has access to a wealth of information across the entire design space that would not be obtained through traditional sequential optimization methods. With a sufficiently complex ML model, you can capture the non-linear relationships that a DoE alone cannot, while keeping the optimization turnaround time low.
In previous versions of our software, optimization could be accomplished through our in-house CONVERGE Genetic Optimization (CONGO) utility, which enables you to run a GA optimization or a DoE interrogation study. A GA takes a survival-of-the-fittest approach to design optimization: randomly generated input parameters form a population that is evolved over generations toward the highest user-defined merit.
In late 2024, we released an ML tool in CONVERGE Studio that enables rapid optimization. First, you will identify the parameters that you want to vary during your optimization study (e.g., injection pressure, EGR ratio), and define the performance metrics you will use to assess the merit of your simulation results (e.g., minimum fuel consumption, minimum NOx emissions). The tool will then initialize a DoE by systematically generating a set of input variables for CONVERGE simulations that span your design space. A Latin hypercube sampling approach can be used to maximize the minimum distance between DoE sample points, producing a quasi-random sample that better captures the underlying data distribution compared to a random sample. After generating input files for the DoE, CONVERGE users can run their cases concurrently on CONVERGE Horizon, our cloud computing service that provides affordable, on-demand access to the latest high-performance computing (HPC) technologies.
The results from the DoE can now serve as the training data for the ML model. Since the most appropriate ML algorithm for a particular set of data cannot be determined a priori, the ML tool will combine several different algorithms through ensemble learning: ridge regression, random forest, gradient boosting, support vector machine, and neural network. This ML meta learning model will identify the combination of the five algorithms that best emulates the CFD setup. You can then use the trained ML meta model to predict the optimal case, evaluated with your predefined performance metrics. Finally, you can run the predicted best case in CONVERGE to confirm the results.
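To make the workflow concrete, here is an illustrative sketch of a Latin hypercube DoE feeding a stacked ensemble of the five algorithm families mentioned above, written with off-the-shelf scipy/scikit-learn pieces. This is not the CONVERGE ML tool; the parameter ranges and the stand-in objective function are made-up placeholders for what would otherwise come from CONVERGE runs.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

# 1. Latin hypercube DoE over two design parameters, e.g. injection pressure [bar]
#    and EGR ratio (ranges are placeholders).
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(n=64), l_bounds=[500.0, 0.0], u_bounds=[2500.0, 0.5])

# 2. Stand-in responses; in the real workflow these come from CFD simulations.
y = np.sin(X[:, 0] / 300.0) + 2.0 * X[:, 1] ** 2

# 3. Stacked ("meta") surrogate combining the five algorithm families.
surrogate = StackingRegressor(
    estimators=[("ridge", Ridge()),
                ("forest", RandomForestRegressor(n_estimators=200, random_state=0)),
                ("boost", GradientBoostingRegressor(random_state=0)),
                ("svm", SVR()),
                ("nn", MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0))],
    final_estimator=Ridge())
surrogate.fit(X, y)

# 4. Query the surrogate densely and pick the predicted optimum to re-run in CFD.
candidates = qmc.scale(sampler.random(n=4096), l_bounds=[500.0, 0.0], u_bounds=[2500.0, 0.5])
best_design = candidates[np.argmin(surrogate.predict(candidates))]
```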
The ML tool offers a streamlined process for rapid and accurate optimization. The goal is not to replace CFD with ML, but rather to use ML in conjunction with CFD to enable fast, optimization-based design. A simplified schematic of the process can be seen in Figure 1.
While CONVERGE’s ML tool can be called within a user-defined function (UDF) for different purposes, such as reduced-order modeling, the approach is primarily targeted at design optimization. Its flexibility and ease of use enable the tool to process copious amounts of data, uncover nuanced patterns, and provide actionable insights.
To increase the efficiency of internal combustion engines, we partnered with Polaris and Oracle Cloud in 2021 to combine ML, CFD, and HPC for an exhaust port optimization study.
After identifying five exhaust port parameters to vary and parametrizing the geometry, the team used Latin hypercube sampling to set up a DoE study with 256 cases. The cases were run on CONVERGE Horizon in less than a day. We separated the wealth of data generated by the DoE study to train (using 90% of the DoE data) and test (using 10% of the DoE data) an ML emulator. This two-step process ensures the ML emulator can genuinely predict designs, rather than simply regurgitating the data from the DoE. Having confirmed the efficacy of the ML emulator, the team then used the trained emulator to predict the optimal case that minimized the exhaust port pumping work. The optimization study produced a small yet significant improvement in exhaust port efficiency. With traditional methods, an experimental optimization would have been far more expensive and taken significantly more time. However, thanks to the use of ML and HPC, this study was completed in a few days rather than several months. For more information, read our blog, which goes into detail about the design, methodology, results, and future outlooks of this study.
Harnessing wind energy is a cornerstone of the global agenda toward sustainability, since it provides a renewable power source with minimal environmental impact. Advancements in wind turbine technology enable the establishment of wind farms, which can generate significantly more power than a single turbine.
Wind farm layout can influence overall energy output, operational efficiency, and total project costs. In a poorly laid out wind farm, wake effects generated by upwind turbines may decrease the performance of downwind turbines. In such scenarios, ML can help optimize wind farm layout by accurately predicting turbine interactions to ensure each turbine receives optimal wind flow.
For a wind farm of 25 NREL 5MW wind turbines with constant wind speed and neutral atmospheric conditions, CONVERGE’s ML tool optimized the layout of the center five turbines for maximum power production. A DoE study produced the data to train the ensemble ML model, which was used to predict the optimal layout. The ML model, which was fully trained in 1 minute, returned four optimums, which were run in CONVERGE to confirm the configuration that produced the most power. Figure 2 shows the optimized wind farm layout, where the turbines in the center row are staggered.
Having concluded your research, you breathe a sigh of relief. CONVERGE’s ML tool has the potential to not only transform the engine industry, but also impart important insights in applications such as wind farm layout and reduced-order modeling. By training the model with DoE data, you have access to the entire design space and can uncover hidden patterns that were previously out of reach. With the speed and flexibility of CONVERGE’s ML tool, you no longer have to choose between quick results and accuracy—you could have both.
[1] Pei, Y., Pal, P., Zhang, Y., Traver, M., Cleary, D., Futterer, C., Brenner, M., Probst, D., and Som, S., “CFD-Guided Combustion System Optimization of a Gasoline Range Fuel in a Heavy-Duty Compression Ignition Engine Using Automatic Piston Geometry Generation and a Supercomputer,” SAE Technical Paper 2019-01-0001, 2019, doi:10.4271/2019-01-0001.
[2] Moiz, A.A., Pal, P., Probst, D., Pei, Y., Zhang, Y., Som, S., and Kodavasal, J., “A Machine Learning-Genetic Algorithm (ML-GA) Approach for Rapid Optimization Using High-Performance Computing,” SAE Technical Paper 2018-01-0190, 2018, doi:10.4271/2018-01-0190
Author:
Elizabeth Favreau
Marketing Writing Team Lead
When you’re starting a business, you need every edge you can get. Anything that can save you time, reduce your expenses, or help you design higher quality products is an advantage—and even better if you find a solution that can do all three.
When Karan Bansal founded his company Karban Envirotech Private Limited, he knew computational fluid dynamics (CFD) would be the key to getting his business off the ground. Karban is an innovative home appliance company based in India that aims to address consumer needs while also prioritizing efficiency and sustainability.
“We decided to start Karban because we found a gap in the market where we felt consumer appliances were not sustainable, especially when scaling to not just the Indian market but the worldwide market,” said Karan. “We wanted to bridge that gap and provide products that are focused on design, energy efficiency, and sustainability.”
The company’s first offering, the Karban Airzone, is a combination of a bladeless ceiling fan, air purifier, and chandelier light. The idea behind the product is that combining three appliances into one will help reduce clutter in people’s homes and offices, increase energy efficiency, and reduce the amount of plastic and packaging waste.
To come up with the initial design for their product, Karan and his team relied heavily on CFD modeling. “Hardware is hard,” said Karan. “It’s expensive and capital-intensive, and prototyping is also very expensive. But using CFD, we could design all of the CAD models that we wanted to try out and simulate them to assess their performance. Then we could optimize our initial designs using CFD to figure out how to achieve the maximum amount of air flow for the least amount of energy.”
For Karan, using CONVERGE was an obvious choice. Having previously worked at Convergent Science on the New Applications team for six years, he was well acquainted with CONVERGE’s benefits for flow-related devices.
“Of course, the best feature is not making any mesh,” said Karan. “Especially when you’re creating so many design iterations, meshing can be very complicated and it can consume a lot of your time. So the best feature was the automated meshing and Adaptive Mesh Refinement capabilities.”
Because their product contains a number of rotating components, CONVERGE’s multiple reference frame (MRF) approach also came in handy. The MRF approach simplifies simulations that include moving geometries by modeling the moving geometry as stationary. The user specifies a region of the domain as a local rotating reference frame, which moves relative to the stationary, or inertial, reference frame. This method provides accurate results at a fraction of the computational cost required for a fully moving geometry simulation.
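Conceptually, an MRF zone keeps the mesh stationary and instead adds the rotating-frame accelerations to the momentum equation; a generic sketch of those per-unit-mass source terms (a textbook formulation, not CONVERGE's specific implementation) is shown below.

```python
import numpy as np

def mrf_acceleration(u, r, omega):
    """Per-unit-mass momentum sources in a rotating reference frame:
    Coriolis term -2*omega x u and centrifugal term -omega x (omega x r).
    u: velocity in the rotating frame, r: position relative to the rotation axis,
    omega: angular velocity vector of the MRF zone."""
    coriolis = -2.0 * np.cross(omega, u)
    centrifugal = -np.cross(omega, np.cross(omega, r))
    return coriolis + centrifugal

# Example: a point 0.1 m from a vertical axis spinning at 100 rad/s (placeholder values)
a = mrf_acceleration(u=np.array([1.0, 0.0, 0.0]),
                     r=np.array([0.1, 0.0, 0.0]),
                     omega=np.array([0.0, 0.0, 100.0]))
```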
In addition, the Karban team made use of CONVERGE’s porous media modeling to simulate filtration in the air purifier. In porous media, the flow occurs through a region of fine-scale geometrical structures which are too small to be resolved directly. Porous media modeling simulates these effects by converting them to distributed momentum resistances.
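A common way to express such distributed momentum resistances is a Darcy-Forchheimer sink; below is a generic sketch with placeholder coefficients (again a textbook form, not necessarily CONVERGE's exact model).

```python
import numpy as np

def porous_momentum_sink(u, mu, rho, permeability, inertial_coeff):
    """Darcy-Forchheimer momentum source per unit volume:
    S = -(mu/K)*u - 0.5*C2*rho*|u|*u, with permeability K and inertial coefficient C2."""
    return -(mu / permeability) * u - 0.5 * inertial_coeff * rho * np.linalg.norm(u) * u

# Example: air at 2 m/s through a filter-like medium (all coefficients are placeholders)
S = porous_momentum_sink(u=np.array([2.0, 0.0, 0.0]),
                         mu=1.8e-5, rho=1.2,
                         permeability=1e-9, inertial_coeff=100.0)
```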
Using this combination of features, the Karban team conducted around 50-60 design iterations to identify the design they wanted to build as a prototype. The physical prototype demonstrated very similar results to what they observed in their CFD simulations, confirming the accuracy of their model. After the initial prototype, they conducted a few more rounds of CFD modeling to further optimize the design, resulting in their first product offering.
By integrating CFD into their design workflow, Karban was able to save a significant amount of time and money during their initial prototyping and optimization phases, and they plan to continue taking advantage of this tool in the future.
“The future includes more aerodynamic products, so CFD will be an integral part of any product that we design from here on out,” said Karan. “We plan to use CONVERGE for all of our next sets of products to get to that optimized appliance design that we’re looking for.”
Learn more about Karban on their website!
Interested in incorporating CONVERGE into your own product design process? Contact us below.
Author:
Elizabeth Favreau
Marketing Writing Team Lead
As someone who grew up in Northern Minnesota, where no one bats an eye at temperatures below 0°F (-18°C) in the winter, I’m acutely aware of how important it is to have a reliable and effective method of heating your home. At the same time, we’ve all become well aware of the need to reduce greenhouse gas (GHG) emissions, and heating and cooling buildings contributes to a significant portion of today’s GHG emissions. According to the U.S. Department of Energy (DOE), the building sector accounted for about 35% of total GHG emissions in 2021, and 8% of total GHG emissions came from on-site combustion.1 Transitioning from traditional furnaces to heat pumps is one way that we can reduce those on-site building emissions.
Heat pumps use electricity to transfer heat from outside to inside to heat your building, or vice versa to cool your building. Heat pumps are highly energy efficient because they don’t generate heat, as a furnace does; instead, they just move heat from one area to another. In addition, because they can both heat and cool a building, they can reduce the number of required HVAC systems. However, heat pumps can struggle in colder climates—like where I grew up—and they use liquid refrigerants, which can have high global warming potentials (GWPs). To make heat pumps a more widely viable and environmentally friendly solution, we need to develop heat pumps that are compact, have low power requirements, and can operate on low-GWP refrigerants in extreme climate conditions.
Current design methods for heat pumps tend to rely on simplified thermodynamic cycle analysis and 0D/1D simulations, which struggle to capture important physical phenomena such as turbulent flow through the expansion valves and phase change in the evaporators and condensers. In addition, these methods require experimental data for empirical models, which can be very expensive to generate.
Researchers in the Advanced Propulsion and Power Department at the DOE Argonne National Laboratory, together with Convergent Science, are using innovative simulation techniques to overcome these limitations. Three-dimensional computational fluid dynamics (CFD) simulations offer a predictive approach that can substantially reduce the time and costs associated with the heat pump design cycle. With accurate submodels, CFD can replicate the complex physics in heat pump components to provide deeper insight into the flow and heat transfer phenomena that cannot be captured with simplified approaches or easily studied with experimental methods. In particular, CONVERGE’s autonomous meshing and advanced physical models make it well suited to simulations of complex geometries with multi-phase flows.
In a project funded by the DOE, Argonne researchers Muhsin Ameen and Katherine Asztalos, along with Convergent Science engineers Ameya Waikar, Michael Xu, and David Rowinski, are employing multi-fidelity simulations coupled with high-performance computing (HPC) to model and optimize heat pump components, starting with microchannel condensers.
Compared to macrochannel condensers, microchannel condensers exhibit superior heat transfer due to their greater surface area-to-volume ratio, making them well suited for compact systems. They are also typically lighter weight and require a smaller refrigerant charge. Microchannel condensers are suitable for applications with very high heat flux (≥10,000 W/m2), finding uses in HVAC systems, heat pump water heaters, refrigeration systems, and electronics.
The physics of microchannel condensers differs significantly from their macrochannel counterparts; for example, condensation in microchannels is dominated by surface tension forces, as opposed to macrochannels where gravity is the dominant force. Various parameters affect the mechanism for condensation in microchannels, including heat flux, vapor quality, fluid properties, and channel geometry; CFD provides researchers with a valuable tool for examining how these parameters impact the performance of the condenser.
To investigate the performance of microchannel condensers, the team from Argonne and Convergent Science conducted multi-phase CFD simulations in CONVERGE, validating the model against experimental data available in the literature.2,3 The team used CONVERGE’s volume of fluid (VOF) modeling, with the High Resolution Interface Capturing (HRIC) scheme, in conjunction with the Lee condensation model to simulate the multi-phase flow.
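For orientation, the Lee model prescribes a temperature-driven interphase mass transfer rate with empirically tuned relaxation frequencies; a schematic version is sketched below (placeholder coefficients, not CONVERGE's exact formulation).

```python
def lee_mass_transfer(T, T_sat, alpha_l, alpha_v, rho_l, rho_v, r_evap, r_cond):
    """Lee phase-change model: volumetric mass transfer rate from liquid to vapor
    (positive = evaporation, negative = condensation). alpha and rho are the phase
    volume fractions and densities; r_evap and r_cond are empirical relaxation
    frequencies [1/s] that must be tuned for the problem."""
    if T >= T_sat:   # evaporation
        return r_evap * alpha_l * rho_l * (T - T_sat) / T_sat
    else:            # condensation
        return -r_cond * alpha_v * rho_v * (T_sat - T) / T_sat

# Example with placeholder values for a slightly subcooled vapor-liquid mixture
m_dot = lee_mass_transfer(T=298.0, T_sat=300.0, alpha_l=0.7, alpha_v=0.3,
                          rho_l=1000.0, rho_v=1.0, r_evap=0.1, r_cond=0.1)
```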
For the initial validation, the team performed simulations of FC-72—the liquid coolant used in the experimental setup—flowing along parallel square microchannels. They investigated three different mass flow rates (ṁ) at the inlet and compared the predicted liquid mass fraction at the outlet with the experimental measurements. The results, shown in Figure 2, show good agreement between the simulations and experiments.
Having validated the CFD model, the research team next wanted to investigate the effects of changing various parameters on the performance of the microchannel condenser. They started by looking at two different low-GWP refrigerants, R-1234yf (GWP < 1) and R290 (GWP = 3), and compared them to the performance of FC-72. They found that similar performance could be achieved between the low-GWP refrigerants and FC-72 by modifying the inlet operating conditions and boundary conditions. Figure 3 shows an example, where similar performance was achieved with a high mass flow rate of FC-72 and a low mass flow rate of R-1234yf. The spatial distributions of the refrigerants in the microchannels also show similar patterns under these conditions.
The next parameter the team investigated was the effect of the cross-sectional geometry on the performance of the microchannel condenser. They tested a circular cross-section and a square cross-section, using FC-72 as the refrigerant and similar operating conditions for each case. They found improved performance with a circular cross-section, as shown in Figure 4.
Finally, the research team turned their attention to the effects of adding a turbulence model to their simulation setup, comparing their results to experimental data. The previous simulations described in this blog post have been laminar, and while laminar simulations are able to capture end-state conditions, they struggle to accurately capture other parameters such as pressure drop and phase change distribution. As shown in Figure 5, the addition of the k-ω SST turbulence model enables the simulations to accurately capture the pressure drop, and the phase change distribution better reflects a pressure-driven flow.
The team from Argonne and Convergent Science were able to develop and validate a multi-phase approach for modeling microchannel condensers with CONVERGE. With this model, they were able to gain a deeper understanding of the influence of low-GWP refrigerants and geometric parameters on the performance of microchannel condensers.
In the future, the team plans to incorporate conjugate heat transfer modeling into the CONVERGE setup to more accurately replicate the real-world devices. In addition, they are working on modeling other heat pump components, with the goal of simulating the complete heat pump system down the line. They have already started work on modeling a supersonic ejector, with preliminary results shown in the video in Figure 6. Two-equation k-ε large eddy simulation turbulence modeling and Adaptive Mesh Refinement are able to capture the shock trains and other complex flow features in the mixing chamber and diffuser.
Overall, this work is paving the way to developing more efficient, more effective, and more environmentally friendly heat pumps. Enabling a more widespread adoption of heat pumps could make a significant impact in reducing on-site building GHG emissions, while still keeping us Minnesotans warm in the winter. Learn more about this work in the team’s International Refrigeration and Air Conditioning Conference paper!
[1] Department of Energy. (2024). Decarbonizing the U.S. Economy by 2050 (No. DOE/EE-2830).
[2] Kim, S.-M., Kim, J., & Mudawar, I. (2012). Flow condensation in parallel micro-channels-part 1: Experimental results and assessment of pressure drop correlations. International Journal of Heat and Mass Transfer, 55(4), 971-983.
[3] Kim, S.-M., & Mudawar, I. (2012). Flow condensation in parallel micro-channels-part 2: Heat transfer results and correlation technique. International Journal of Heat and Mass Transfer, 55(4), 984-994.
Author:
Allie Yuxin Lin
Marketing Writer
Standardization is a foundational pillar of modern civilization, shaping our world in ways we might not even notice. It can be found in our currency, our language, and our sciences. In computational fluid dynamics (CFD), standardization has an indispensable role in ensuring consistency, accuracy, and interoperability between different CFD tools. Some examples of standardization in CFD include standards on boundary conditions, mesh generation, and file formats.
A well-known file format system is the CFD General Notation System (CGNS), which is a general and extensible standard for the storage and retrieval of CFD output files. Storing such files in CGNS format allows your CFD data to be easily read and interpreted by many post-processing tools, such as ParaView, Tecplot, EnSight, Cassiopée, and more. This post-processing is a critical part of CFD, since it allows for the visualization of raw data in the form of plots, images, videos, and more. By following the CGNS standard, CFD engineers can run their simulations, export their data, and prepare it for analysis, all in one streamlined process.
However, as of May 2024, the CGNS conventions lacked documentation on particle data. Therefore, if your CFD results included Lagrangian data or particle-laden flows, you would have needed to use a different file format to export that data for post-processing. As such, several CFD solvers, CONVERGE included, exported their files in a proprietary format.
To address this limitation, Convergent Science proposed an extension to the CGNS format that would enable the export of particle data. With the acceptance of our proposal by the international CGNS steering committee, we have compiled the appropriate modifications to the various components of the CGNS: the SIDS (Standard Interface Data Structures), the MLL (Mid-Level Library), and the FMM (File Mapping Manual).
The CGNS platform now includes new nodes containing precise definitions for information related to particle data. The highest level structure in a CGNS database is CGNSBase_t, a self-contained entity with data that can be used to archive and reproduce a complete CFD computation. To this base, we have added a new node, defined as type ParticleZone_t. In any given base, there can be multiple nodes of type ParticleZone_t, where each node contains data pertaining to a specific set of particles. Different groups of particles can be differentiated using the FamilyName_t and AdditionalFamilyName_t nodes. ParticleCoordinates_t describes the physical coordinates of the particle centers and contains a list of data arrays for the individual components of the position vector. Additionally, ParticleSolutions_t describes the solution on each particle and contains a list for the data arrays of the individual solution variables. Since the framework allows multiple particle sets within a single ParticleZone_t, there can be numerous instances of both ParticleCoordinates_t and ParticleSolutions_t. These two nodes are linked to the simulation time using ParticleIterativeData_t, which is used to record pointers to particle data at different time steps.
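To make the new hierarchy easier to picture, here is a minimal sketch of how these nodes could be located in an HDF5-based CGNS file. It assumes the standard CGNS/HDF5 mapping, in which each node is stored as an HDF5 group carrying its SIDS label in a "label" attribute; the file name spray.cgns and the helper function are hypothetical and are not part of the CGNS Mid-Level Library.

# Minimal sketch: walk an HDF5-based CGNS file and list nodes of a given SIDS type.
# Assumes the standard CGNS/HDF5 mapping ("label" attribute on each group);
# "spray.cgns" is a hypothetical file name.
import h5py

def find_nodes(group, label, path=""):
    """Recursively collect paths of CGNS nodes whose SIDS label matches."""
    hits = []
    for name, item in group.items():
        if not isinstance(item, h5py.Group):
            continue
        node_label = item.attrs.get("label", b"")
        if isinstance(node_label, bytes):
            node_label = node_label.decode(errors="ignore").rstrip("\x00")
        child_path = f"{path}/{name}"
        if node_label == label:
            hits.append(child_path)
        hits.extend(find_nodes(item, label, child_path))
    return hits

with h5py.File("spray.cgns", "r") as f:
    for node in find_nodes(f, "ParticleZone_t"):
        print("Particle zone:", node)
    for node in find_nodes(f, "ParticleCoordinates_t"):
        print("  particle coordinates:", node)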
While ParticleZone_t nodes are useful for exporting Lagrangian data, Zone_t nodes export Eulerian data. These types are independent, and particles defined in a ParticleZone_t do not necessarily need to be carried by a flow defined in a Zone_t. Simulation results can be fully defined by a CGNSBase_t and a ParticleZone_t (i.e., without a Zone_t), when there is no Eulerian data to export. Consequently, our extension may be employed by codes that use smoothed-particle hydrodynamics (SPH), a meshfree Lagrangian computational method.
In order to describe the governing particle equations, we have created several different model and equation nodes which may be found in ParticleEquationSet_t. This structure, which can be defined as a child node of CGNSBase_t and/or ParticleZone_t, includes the dimensionality of the governing equations, as well as a collection of equation-set descriptions. The additional models can be used to describe particle breakup, particle collision, particle forces (including lift and drag), wall interactions, and phase changes.
If you have any questions regarding this extension to the CGNS format, please contact us on our website! We are more than happy to talk to you about standardization in CFD, the limitations of the previous CGNS standard, and how Convergent Science proposed and implemented a solution to that constraint.
Author:
Allie Yuxin Lin
Marketing Writer
In today’s fast-paced and ever-evolving world, industries face increasing pressure to deliver precise results quickly—and CFD simulations are no exception. Instead of buckling in the face of this challenge, one organization rose up and decided they were not going to settle for the typical trade-off between accuracy and speed; they wanted both, and they were determined to figure out how to get it. Researchers at Southwest Research Institute (SwRI) developed an innovative coupled approach between two common techniques in the CFD industry, and their results combined high-fidelity simulations with fast computational runtimes. In this blog, we explore their journey, from the identification of the problem to the creation of a solution, along with the appropriate testing, analysis, and general relevance.
A 3D CFD simulation for a turbocharger is typically conducted in one of two ways. The simplest approach is the multiple reference frame (MRF) strategy, also known as the frozen rotor. This technique keeps the impeller stationary and simulates movement using a rotating coordinate system; as such, the simulation accommodates the moving geometry without needing to regenerate the mesh at every time-step. However, the existing literature indicates this approach may be limited in several respects. In their CFD analysis of an automotive pulse system turbocharger, a research team in London found the MRF model could not numerically capture the hysteresis curves of mass flow rate and efficiency [1]. The MRF approach is also known to overpredict the non-uniformity of the flow field, as demonstrated by CFD studies of turbo compressors [2].
The most accurate framework is achieved through transient fluid-structure interaction (FSI) modeling, in which forces are calculated by the numerical integration of pressure and shear stress over the impeller surface. With these calculations and Newton’s Second Law, the rotational speed of the impeller can be predicted. This approach is a predictive method where the rotation of the impeller is determined by the fluid-impeller interaction; therefore, any flow field change can result in a different rotational speed. While this approach accurately predicts all necessary parameters and creates a comprehensive simulation, it is computationally time-consuming.
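As a rough illustration of that 1-DOF idea (a sketch only, not CONVERGE's implementation), the snippet below integrates Newton's Second Law for rotation with an explicit time step. The torque function and the moment of inertia are placeholder assumptions standing in for the solver's surface integral of pressure and shear.

# Minimal sketch of a constrained 1-DOF impeller model: the fluid torque obtained by
# integrating pressure and shear over the impeller surface drives d(omega)/dt.
# fluid_torque() and the inertia value are placeholders, not CONVERGE's implementation.
import math

def fluid_torque(omega):
    """Placeholder: in a real FSI run this comes from the CFD solver's surface integral."""
    return 0.05 - 1.0e-7 * omega  # toy driving torque minus a speed-dependent load [N*m]

inertia = 2.5e-5   # impeller polar moment of inertia [kg*m^2] (assumed)
omega = 5000.0     # initial rotational speed [rad/s]
dt = 1.0e-6        # time step [s]

for _ in range(100000):
    alpha = fluid_torque(omega) / inertia   # Newton's Second Law for rotation
    omega += alpha * dt                     # explicit update of the rotational speed

print(f"speed after 0.1 s: {omega * 60.0 / (2.0 * math.pi):.0f} RPM")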
“Typically, for CFD simulations of compressors and turbines, we use an FSI modeling approach. This works relatively well, since the device’s rotational speed is low, around 1,000–4,000 RPM, which means the computational expense is not so extreme,” said Zainal Abidin, Powertrain Analysis Manager at SwRI. “But for a turbocharger, where the speed is comparatively much higher, in the order of 100,000 RPM, the simulation can get very expensive, very fast. So we needed to do something differently.”
To address these limitations, the team at SwRI developed a two-way coupled MRF and FSI approach using CONVERGE CFD software. The FSI solver within CONVERGE simulates the impeller motion using the constrained 1-degree of freedom (1-DOF) model, where the motion is restricted to rotational movement about the impeller axis. A specific region is identified around the moving turbine impeller, where the equations are modeled in the local rotating reference frame. The governing equations are then modified to incorporate the velocity of the rotating region that arises due to the fluid forces on the moving surface, which in turn affects the flow field [3].
“When we were considering CFD solvers to use for this case, CONVERGE was the obvious choice,” explained Zainal. “In the years that we’ve worked with the software, we’ve found CONVERGE provides the highest accuracy for simulations with a complicated mesh, which is definitely the case for this turbocharger.”
The test platform used was a 2010 heavy-duty on-highway 15L engine with a twin-scroll compressor. To collect CFD calibration data, high-speed pressure transducers were installed on both sides of the divided turbine inlet, turbine outlet, compressor inlet, and compressor outlet, as shown in Figure 1.
The team at SwRI then created a 3D CFD model to test the new coupling method; the 3D geometry is shown in Figure 2.
CONVERGE automatically generates a cut-cell Cartesian grid at runtime, eliminating user meshing time. At each intersection surface, the software trims the cells so the intersection information, including metrics such as surface area and normal vectors, is reduced before storage. The Redlich-Kwong equation of state was employed to couple density, pressure, and temperature variables, and a modified Pressure Implicit with Splitting of Operators (PISO) algorithm assisted with pressure-velocity coupling. Due to its simplicity and low computational runtimes, the researchers chose to employ the k-ε turbulence model over more complicated options like a Reynolds Stress Model or a large eddy simulation model. The setup also leveraged a law of the wall boundary condition to bridge the under-resolved flow in the viscous sublayer between the wall and the fully turbulent region [3].
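For readers who want to see what that equation of state looks like in practice, here is a generic Redlich-Kwong sketch relating density, temperature, and pressure. The critical constants are approximate values for air and are assumptions for illustration, not the coefficients used in the study.

# Minimal sketch of the Redlich-Kwong equation of state, which couples density,
# pressure, and temperature; critical constants for air are approximate.
import math

R = 8.314462618          # universal gas constant [J/(mol*K)]
Tc, pc = 132.6, 3.77e6   # approximate critical temperature [K] and pressure [Pa] of air
M = 0.028965             # molar mass of air [kg/mol]

a = 0.42748 * R**2 * Tc**2.5 / pc
b = 0.08664 * R * Tc / pc

def rk_pressure(rho, T):
    """Pressure [Pa] from density [kg/m^3] and temperature [K] via Redlich-Kwong."""
    Vm = M / rho  # molar volume [m^3/mol]
    return R * T / (Vm - b) - a / (math.sqrt(T) * Vm * (Vm + b))

# Near-ambient air should land close to the ideal-gas value of roughly 101 kPa.
print(f"{rk_pressure(1.2, 293.15) / 1000.0:.1f} kPa")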
To compare the FSI-MRF coupling approach with its pure FSI counterpart, a pure FSI model was built and run to simulate the impeller rotation. The numerical setup used for both strategies was the same, but due to the long runtimes, the pure FSI simulation was not run for as many crank angle degrees. Results, as pictured in Figure 3, showed both approaches had very similar predictions of impeller rotational speed. Additionally, the coupled FSI-MRF process ran around 16 times faster than the pure FSI solution.
To further assess the validity of the new approach, the SwRI researchers wanted to compare the predicted values for pressure upstream of the turbine against experimental data. To do so, they introduced an energy sink (represented by a resistant torque) to the governing equations to account for the energy transfer from the turbine to the compressor. Calculated pressure values from the coupled approach matched well with test data, as shown in Figure 4.
The validated coupling approach can now be used in design optimization studies to maximize turbine efficiency. The adapter and exhaust manifold were modified to assess their influence on turbine power. The adapter connects the exhaust manifold to the turbine entrance; therefore, an improvement on turbine power is represented by an increase in impeller speed. The modified adapter resulted in slightly increased rotational speed, while the modified manifold had the opposite effect, as seen below in Figure 5.
The coupled FSI-MRF approach successfully bridges the gap between accuracy and speed, offering a powerful solution for complex simulations that require both precision and efficiency. Calculations reminiscent of a pure FSI approach were iteratively passed back to the solver to update an MRF-type system. Early testing demonstrated this approach not only aligns closely with experimental results but also achieves a 16-fold speed increase for the simulation process. As future research continues to refine this method, it has the potential to play a pivotal role in driving faster, more accurate simulations across various applications.
“We discovered this new coupling approach, but we’ve only really scratched the surface. There is a lot of room for improvement, especially to increase the efficiency of the exhaust port,” Zainal noted. “Still, this method has a lot of potential; it can be applied to any simulation that could benefit from a faster computational speed while avoiding the pitfalls of a less accurate solution.”
[1] Palfreyman, D. and Martinez-Botas, R. F., "The Pulsating Flow Field in a Mixed Flow Turbocharger Turbine: An Experimental and Computational Study," J. Turbomach. 2005; 127(1), 144–155. doi:10.1115/1.1812322.
[2] Liu, Z. and Hill, D. L., “Issues Surrounding Multiple Frames of Reference Models for Turbo Compressor Applications,” International Compressor Engineering Conference. Paper 1369. 2000.
[3] Abidin, Z., Morris, A., Miwa, J., Sadique, J., et al., "FSI – MRF Coupling Approach For Faster Turbocharger 3D Simulation," SAE Technical Paper 2019-01-0007, 2019, doi:10.4271/2019-01-0007.
Graphcore has used a range of technologies from Mentor, a Siemens business, to successfully design and verify its latest M2000 platform based on the Graphcore Colossus™ GC200 Intelligence Processing Unit (IPU) processor.
Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, Simcenter FLOEFD helps users create thermal models of electronics packages easily and quickly. Watch this short video to learn how.
Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, Simcenter FLOEFD allows users to add a component into a direct current (DC) electro-thermal calculation by the given component's electrical resistance. The corresponding Joule heat is calculated and applied to the body as a heat source. Watch this short video to learn how.
Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, the software features a new battery model extraction capability that can be used to extract the Equivalent Circuit Model (ECM) input parameters from experimental data. This enables you to get to the required input parameters faster and more easily. Watch this short video to learn how.
Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, Simcenter FLOEFD allows users to create a compact Reduced Order Model (ROM) that solves at a faster rate, while still maintaining a high level of accuracy. Watch this short video to learn how.
High semiconductor temperatures may lead to component degradation and ultimately failure. Proper semiconductor thermal management is key for design safety, reliability and mission critical applications.
A common question from Tecplot 360 users centers around the hardware that they should buy to achieve the best performance. The answer is invariably, it depends. That said, we’ll try to demystify how Tecplot 360 utilizes your hardware so you can make an informed decision in your hardware purchase.
Let’s have a look at each of the major hardware components on your machine and show some test results that illustrate the benefits of improved hardware.
Our test data is an OVERFLOW simulation of a wind turbine. The data consists of 5,863 zones totaling 263,075,016 elements, and the file size is 20.9 GB.
The test was performed using 1, 2, 4, 8, 16, and 32 CPU-cores, with the data on a local HDD (spinning hard drive) and a local SSD (solid state disk). The number of CPU cores was limited using Tecplot 360's --max-available-processors command line option.
Data was cleared from the disk cache between runs using RamMap.
Advice: Buy the fastest disk you can afford.
In order to generate any plot in Tecplot 360, you need to load data from a disk. Some plots require more data to be loaded off disk than others. Some file formats are also more efficient than others – particularly file formats that summarize the contents of the file in a single header portion at the top or bottom of the file – Tecplot’s SZPLT is a good example of a highly efficient file format.
We found that the SSD was 61% faster than the HDD when using all 32 CPU-cores for this post-processing task.
All this said – if your data are on a remote server (network drive, cloud storage, HPC, etc…), you’ll want to ensure you have a fast disk on the remote resource and a fast network connection.
With Tecplot 360 the SZPLT file format coupled with the SZL Server could help here. With FieldView you could run in client-server mode.
Advice: Buy the fastest CPU, with the most cores, that you can afford. But realize that performance is not always linear with the number of cores.
Most of Tecplot 360’s data compute algorithms are multi-threaded – meaning they’ll use all available CPU-cores during the computation. These include (but are not limited to): Calculation of new variables, slices, iso-surfaces, streamtraces, and interpolations. The performance of these algorithms improves linearly with the number of CPU-cores available.
You’ll also notice that the overall performance improvement is not linear with the number of CPU-cores. This is because loading data off disk becomes a dominant operation, and the slope is bound to asymptote to the disk read speed.
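That flattening is essentially Amdahl's law: once the serial, disk-bound portion of the job dominates, additional cores stop paying off. The sketch below uses an assumed 70/30 split between parallel compute and serial I/O purely for illustration; it is not derived from the measured timings.

# Illustrative Amdahl's-law estimate of why adding cores stops helping once the
# serial (disk-bound) portion dominates; the 70/30 split is assumed, not measured.
def speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

p = 0.70  # assume 70% of wall time is multi-threaded compute, 30% is disk-bound
for cores in (1, 2, 4, 8, 16, 32):
    print(f"{cores:2d} cores -> {speedup(p, cores):.2f}x")
# The curve flattens toward 1 / (1 - p), about 3.3x here, no matter how many cores
# are added, which is why a faster disk (a larger parallel fraction) pays off.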
You might notice that the HDD performance actually got worse beyond 8 CPU-cores. We believe this is because the HDD on this machine was just too slow to keep up with 16 and 32 concurrent threads requesting data.
It’s important to note that with data on the SSD the performance improved all the way to 32 CPU-cores. Further reinforcing the earlier advice – buy the fastest disk you can afford.
Advice: Buy as much RAM as you need, but no more.
You might be thinking: “Thanks for nothing – really, how much RAM do I need?”
Well, that’s something you’re going to have to figure out for yourself. The more data Tecplot 360 needs to load to create your plot, the more RAM you’re going to need. Computed iso-surfaces can also be a large consumer of RAM – such as the iso-surface computed in this test case.
If you have transient data, you may want enough RAM to post-process a couple time steps simultaneously – as Tecplot 360 may start loading a new timestep before unloading data from an earlier timestep.
The amount of RAM required is going to be different depending on your file format, cell types, and the post-processing activities you’re doing. For example:
When testing the amount of RAM used by Tecplot 360, make sure to set the Load On Demand strategy to Minimize Memory Use (available under Options>Performance).
This will give you an understanding of the minimum amount of RAM required to accomplish your task. When set to Auto Unload (the default), Tecplot 360 will maintain more data in RAM, which improves performance. The amount of data Tecplot 360 holds in RAM is dictated by the Memory threshold (%) field, seen in the image above. So you – the user – have control over how much RAM Tecplot 360 is allowed to consume.
Advice: Most modern graphics cards are adequate, even Intel integrated graphics provide reasonable performance. Just make sure you have up to date graphics drivers. If you have an Nvidia graphics card, favor the “Studio” drivers over the “Game Ready” drivers. The “Studio” drivers are typically more stable and offer better performance for the types of plots produced by Tecplot 360.
Many people ask specifically what type of graphics card they should purchase. This is, interestingly, the least important hardware component (at least for most of the plots our users make). Most of the post-processing pipeline is dominated by the disk and CPU, so the time spent rendering the scene is a small percentage of the total.
That said – there are some scenes that will stress your graphics card more than others. Examples are:
Note that Tecplot 360’s interactive graphics performance currently (2023) suffers on Apple Silicon (M1 & M2 chips). The Tecplot development team is actively investigating solutions.
As with most things in life, striking a balance is important. You can spend a huge amount of money on CPUs and RAM, but if you have a slow disk or slow network connection, you’re going to be limited in how fast your post-processor can load the data into memory.
So, evaluate your post-processing activities to try to understand which pieces of hardware may be your bottleneck.
For example, if you:
And again – make sure you have enough RAM for your workflow.
The post What Computer Hardware Should I Buy for Tecplot 360? appeared first on Tecplot Website.
Three years after our merger began, we can report that the combined FieldView and Tecplot team is stronger than ever. Customers continue to receive the highest quality support and new product releases and we have built a solid foundation that will allow us to continue contributing to our customers’ successes long into the future.
This month we have taken another step by merging the FieldView website into www.tecplot.com. Our social media outreach will also be combined. Stay up to date with news and announcements by subscribing and following us on social media.
Members of Tecplot 360 & FieldView teams exhibit together at AIAA SciTech 2023. From left to right: Shane Wagner, Charles Schnake, Scott Imlay, Raja Olimuthu, Jared McGarry and Yves-Marie Lefebvre. Not shown are Scott Fowler and Brandon Markham.
It’s been a pleasure seeing two groups that were once competitors come together as a team, learn from each other and really enjoy working together.
– Yves-Marie Lefebvre, Tecplot CTO & FieldView Product Manager.
Our customers have seen some of the benefits of our merger in the form of streamlined services from the common Customer Portal, simplified licensing, and license renewals. Sharing expertise and assets across teams has already led to the faster implementation of modules such as licensing and CFD data loaders. By sharing our development resources, we’ve been able to invest more in new technology, which will soon translate to increased performance and new features for all products.
Many of the improvements are internal to our organization but will have lasting benefits for our customers. Using common development tools and infrastructure will enable us to be as efficient as possible to ensure we can put more of our energy into improving the products. And with the backing of the larger organization, we have a firm foundation to look long term at what our customers will need in years to come.
We want to thank our customers and partners for their support and continued investment as we endeavor to create better tools that empower engineers and scientists to discover, analyze and understand information in complex data, and effectively communicate their results.
The post FieldView joins Tecplot.com – Merger Update appeared first on Tecplot Website.
One of the most memorable parts of my finite-elements class in graduate school was a comparison of linear elements and higher-order elements for the structural analysis of a dam. As I remember, they were able to duplicate the results obtained with 34 linear elements by using a SINGLE high-order element. This made a big impression on me, but the skills I learned at that time remained largely unused until recently.
You see, my Ph.D. research and later work used finite-volume CFD codes to solve steady-state viscous flows. For steady flows, there didn't seem to be much advantage to using higher than 2nd or 3rd order accuracy.
This has changed recently as the analysis of unsteady vortical flows has become more common. The use of higher-order (greater than second order) computational fluid dynamics (CFD) methods is increasing. Popular government and academic CFD codes such as FUN3D, KESTREL, and SU2 have released, or are planning to release, versions that include higher-order methods. This is because higher-order accurate methods offer the potential for better accuracy and stability, especially for unsteady flows. This trend is likely to continue.
Commercial visual analysis codes are not yet providing full support for higher-order solutions. The CFD 2030 vision states
“…higher-order methods will likely increase in utilization during this time frame, although currently the ability to visualize results from higher order simulations is highly inadequate. Thus, software and hardware methods to handle data input/output (I/O), memory, and storage for these simulations (including higher-order methods) on emerging HPC systems must improve. Likewise, effective CFD visualization software algorithms and innovative information presentation (e.g., virtual reality) are also lacking.”
The isosurface algorithm described in this paper is the first step toward improving higher-order element visualization in the commercial visualization code Tecplot 360.
Higher-order methods can be based on either finite-difference methods or finite-element methods. While some popular codes use higher-order finite-difference methods (OVERFLOW, for example), this paper will focus on higher-order finite-element techniques. Specifically, we will present a memory-efficient recursive subdivision algorithm for visualizing the isosurface of higher-order element solutions.
In previous papers we demonstrated this technique for quadratic tetrahedral, hexahedral, pyramid, and prism elements with Lagrangian polynomial basis functions. In this paper, Optimized Implementation of Recursive Sub-Division Technique for Higher-Order Finite-Element Isosurface and Streamline Visualization, we discuss the integration of these techniques into the engine of the commercial visualization code Tecplot 360 and discuss speed optimizations. We also discuss the extension of the recursive subdivision algorithm to cubic tetrahedral and pyramid elements, and quartic tetrahedral elements. Finally, we discuss the extension of the recursive subdivision algorithm to the computation of streamlines.
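To give a flavor of the idea (a simplified sketch, not Tecplot 360's implementation), the snippet below applies recursive subdivision along a single edge of a quadratic element: it splits the edge until the quadratic Lagrangian field is locally close to linear, then locates isosurface crossings by linear interpolation.

# Minimal sketch of recursive subdivision on one edge of a quadratic element:
# subdivide until the quadratic field is nearly linear on a sub-interval, then
# locate the isosurface crossing by linear interpolation. Not Tecplot's code.
def quad_edge_value(n0, n_mid, n1, t):
    """Quadratic Lagrangian interpolation along an edge, t in [0, 1]."""
    return (n0 * (1 - t) * (1 - 2 * t)
            + n_mid * 4 * t * (1 - t)
            + n1 * t * (2 * t - 1))

def find_crossings(n0, n_mid, n1, iso, t0=0.0, t1=1.0, tol=1e-6, depth=0):
    f0 = quad_edge_value(n0, n_mid, n1, t0)
    f1 = quad_edge_value(n0, n_mid, n1, t1)
    fm = quad_edge_value(n0, n_mid, n1, 0.5 * (t0 + t1))
    linear_mid = 0.5 * (f0 + f1)
    if abs(fm - linear_mid) < tol or depth > 20:        # locally linear enough: stop
        if (f0 - iso) * (f1 - iso) <= 0 and f0 != f1:
            return [t0 + (iso - f0) / (f1 - f0) * (t1 - t0)]
        return []
    tm = 0.5 * (t0 + t1)                                # otherwise split and recurse
    return (find_crossings(n0, n_mid, n1, iso, t0, tm, tol, depth + 1)
            + find_crossings(n0, n_mid, n1, iso, tm, t1, tol, depth + 1))

# An edge with nodal values 0.0, 1.2 (mid-edge), 0.0 crosses iso = 0.5 twice.
print(find_crossings(0.0, 1.2, 0.0, iso=0.5))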
The post Faster Visualization of Higher-Order Finite-Element Data appeared first on Tecplot Website.
In this release, we are very excited to offer “Batch-Pack” licensing for the first time. A Batch-Pack license enables a single user access to multiple concurrent batch instances of our Python API (PyTecplot) while consuming only a single license seat. This option will reduce license contention and allow for faster turnaround times by running jobs in parallel across multiple nodes of an HPC. All at a substantially lower cost than buying additional license seats.
Data courtesy of ZJ Wang, University of Kansas, visualization by Tecplot.
The post Webinar: Tecplot 360 2022 R2 appeared first on Tecplot Website.
Batch-mode is a term nearly as old as computers themselves. Despite its age, however, it is representative of a concept that is as relevant today as it ever was, perhaps even more so: headless (scripted, programmatic, automated, etc.) execution of instructions. Lots of engineering is done interactively, of course, but oftentimes the task is a known quantity and there is a ton of efficiency to be gained by automating the computational elements. That efficiency is realized ten times over when batch-mode meets parallelization, and that's why we thought it was high time we offered a batch-mode licensing model for Tecplot 360's Python API, PyTecplot. We call them "batch-packs."
Tecplot 360 batch-packs work by enabling users to run multiple concurrent instances of our Python API (PyTecplot) while consuming only a single license seat. It’s an optional upgrade that any customer can add to their license for a fee. The benefit? The fee for a batch-pack is substantially lower than buying an equivalent number of license seats – which makes it easier to justify outfitting your engineers with the software access they need to reach peak efficiency.
Here is a handy little diagram we drew to help explain it better:
Each network license allows ‘n’ seats. Traditionally, each instance of PyTecplot consumes 1 seat. Prior to the 2022 R2 release of Tecplot 360 EX, licenses only operated using the paradigm illustrated in the first two rows of the diagram above (that is, a user could check out up to ‘n’ seats, or ‘n’ users could check out a single seat). Now customers can elect to purchase batch-packs, which will enable each seat to provide a single user with access to ‘m’ instances of PyTecplot, as shown in the bottom row of the figure.
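For a concrete picture of the workflow a batch-pack is meant to enable, here is a sketch that launches several independent batch post-processing jobs in parallel from one submission script. The script name postprocess_case.py and the case files are hypothetical; license checkout is handled by PyTecplot itself, not by this harness.

# Minimal sketch of running several batch PyTecplot jobs concurrently.
# "postprocess_case.py" and the case files are hypothetical placeholders.
import subprocess
from concurrent.futures import ThreadPoolExecutor

cases = [f"case_{i:03d}.szplt" for i in range(1, 9)]   # hypothetical solution files

def run_case(case):
    # Each worker runs an independent batch PyTecplot script against one dataset.
    result = subprocess.run(["python", "postprocess_case.py", case],
                            capture_output=True, text=True)
    return case, result.returncode

with ThreadPoolExecutor(max_workers=4) as pool:        # 4 concurrent batch instances
    for case, code in pool.map(run_case, cases):
        print(f"{case}: {'ok' if code == 0 else f'failed ({code})'}")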
In addition to a cost reduction (vs. purchasing an equivalent number of network seats), batch-pack licensees will enjoy:
We’re excited to offer this new option and hope that our customers can make the most of it.
The post Introducing 360 “Batch-Packs” appeared first on Tecplot Website.
If you care about how you present your data and how people perceive your results, stop reading and watch this talk by Kristen Thyng on YouTube. Seriously, I’ll wait, I’ve got the time.
Which colormap you choose, and which data values are assigned to each color can be vitally important to how you (or your clients) interpret the data being presented. To illustrate the importance of this, consider the image below.
Figure 1. Visualization of the Southeast United States. [4]
Before I explain what a perceptually uniform colormap is, let’s start with everyone’s favorite: the rainbow colormap. We all love the rainbow colormap because it’s pretty and is recognizable. Everyone knows “ROY G BIV” so we think of this color progression as intuitive, but in reality (for scalar values) it’s anything but.
Consider the image below, which represents the “Estimated fraction of precipitation lost to evapotranspiration”. This image makes it appear that there’s a very distinct difference in the scalar value right down the center of the United States. Is there really a sudden change in the values right in the middle of the Great Plains? No – this is an artifact of the colormap, which is misleading you!
Figure 2. This plot illustrates how the rainbow colormap is misleading, giving the perception that there is a distinct difference in the middle of the US, when in fact the values are more continuous. [2]
So let’s dive a little deeper into the rainbow colormap and how it compares to perceptually uniform (or perceptually linear) colormaps.
Consider the six images below, what are we looking at? If you were to only look at the top three images, you might get the impression that the scalar value has non-linear changes – while this value (radius) is actually changing linearly. If presented with the rainbow colormap, you’d be forgiven if you didn’t guess that the object is a cone, colored by radius.
Figure 3. An example of how the rainbow colormap imparts information that does not actually exist in the data.
So why does the rainbow colormap mislead? It’s because the color values are not perceptually uniform. In this image you can see how the perceptual changes in the colormap vary from one end to the other. The gray scale and “cmocean – haline” colormaps shown here are perceptually uniform, while the rainbow colormap adds information that doesn’t actually exist.
Figure 4. Visualization of the perceptual changes of three colormaps. [5]
So which colormap should you use? Well, that depends. Tecplot 360 and FieldView are typically used to represent scalar data, so Sequential and Diverging colormaps will probably get used the most – but there are others we will discuss as well.
Sequential colormaps are ideal for scalar values in which there’s a continuous range of values. Think pressure, temperature, and velocity magnitude. Here we’re using the ‘cmocean – thermal’ colormap in Tecplot 360 to represent fluid temperature in a Barracuda Virtual Reactor simulation of a cyclone separator.
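If you want to see the perceptual difference for yourself outside of Tecplot 360, a quick sketch with matplotlib and the cmocean package (the same colormap family referenced above) makes the point; the linear ramp is synthetic data, chosen so that any apparent banding comes from the colormap rather than the field.

# Compare a rainbow colormap with a perceptually uniform one on the same smooth field.
# Requires matplotlib and the cmocean package; the data is a synthetic linear ramp.
import numpy as np
import matplotlib.pyplot as plt
import cmocean

x, y = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
field = x  # a linear ramp: any apparent banding is the colormap's doing, not the data's

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
for ax, cmap, title in [(axes[0], "jet", "rainbow (jet)"),
                        (axes[1], cmocean.cm.thermal, "cmocean thermal")]:
    im = ax.pcolormesh(x, y, field, cmap=cmap, shading="auto")
    fig.colorbar(im, ax=ax)
    ax.set_title(title)
plt.tight_layout()
plt.show()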
Diverging colormaps are a great option when you want to highlight a change in values. Think ratios, where the values span from -1 to 1, it can help to highlight the value at zero.
The diverging colormap is also useful for “delta plots” – In the plot below, the bottom frame is showing a delta between the current time step and the time average. Using a diverging colormap, it’s easy to identify where the delta changes from negative to positive.
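Here is a small sketch of the same idea outside of Tecplot 360: centering a diverging colormap on zero so the sign change in a delta field is obvious. The field is synthetic, and matplotlib's TwoSlopeNorm stands in for the equivalent contour-level settings in Tecplot 360.

# Center a diverging colormap on zero for a synthetic "delta" field.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import TwoSlopeNorm

x, y = np.meshgrid(np.linspace(0, 2 * np.pi, 200), np.linspace(0, np.pi, 100))
delta = np.sin(x) * np.cos(y)            # toy "current minus time-average" field

norm = TwoSlopeNorm(vmin=delta.min(), vcenter=0.0, vmax=delta.max())
plt.pcolormesh(x, y, delta, cmap="RdBu_r", norm=norm, shading="auto")
plt.colorbar(label="delta")
plt.title("Diverging colormap centered on zero")
plt.show()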
If you have discrete data that represent things like material properties – say "rock, sand, water, oil", these data can be represented using integer values and a qualitative colormap. This type of colormap will do a good job of supplying distinct colors for each value. An example of this, from a CONVERGE simulation, can be seen below. Instructions to create this plot can be found in our blog, Creating a Materials Legend in Tecplot 360.
Perhaps infrequently used, but still important to point out, is the "phase" colormap. This is particularly useful for values which are cyclic – such as a theta value used to represent wind direction in this FVCOM simulation result. If we were to use a simple sequential colormap (inset plot below) you would observe what appears to be a large gradient where the wind direction is 360° vs. 0°. Logically these are the same value, and using the "cmocean – phase" colormap allows you to communicate the continuous nature of the data.
There are times when you want to force a break in a continuous colormap. In the image below, the colormap is continuous from green to white but we want to ensure that values at or below zero are represented as blue – to indicate water. In Tecplot 360 this can be done using the “Override band colors” option, in which we override the first color band to be blue. This makes the plot more realistic and therefore easier to interpret.
The post Colormap in Tecplot 360 appeared first on Tecplot Website.
Ansys has announced that it will acquire Zemax, maker of high-performance optical imaging system simulation solutions. The terms of the deal were not announced, but it is expected to close in the fourth quarter of 2021.
Zemax’s OpticStudio is often mentioned when users talk about designing optical, lighting, or laser systems. Ansys says that the addition of Zemax will enable Ansys to offer a “comprehensive solution for simulating the behavior of light in complex, innovative products … from the microscale with the Ansys Lumerical photonics products, to the imaging of the physical world with Zemax, to human vision perception with Ansys Speos [acquired with Optis]”.
This feels a lot like what we’re seeing in other forms of CAE, for example, when we simulate materials from nano-scale all the way to fully-produced-sheet-of-plastic-scale. There is something to be learned at each point, and simulating them all leads, ultimately, to a more fit-for-purpose end result.
Ansys is acquiring Zemax from its current owner, EQT Private Equity. EQT’s announcement of the sale says that “[w]ith the support of EQT, Zemax expanded its management team and focused on broadening the Company’s product portfolio through substantial R&D investment focused on the fastest growing segments in the optics space. Zemax also revamped its go-to-market sales approach and successfully transitioned the business model toward recurring subscription revenue”. EQT had acquired Zemax in 2018 from Arlington Capital Partners, a private equity firm, which had acquired Zemax in 2015. Why does this matter? Because the path each company takes is different — and it’s sometimes not a straight line.
Ansys says the transaction is not expected to have a material impact on its 2021 financial results.
Last year Sandvik acquired CGTech, makers of Vericut. I, like many people, thought “well, that’s interesting” and moved on. Then in July, Sandvik announced it was snapping up the holding company for Cimatron, GibbsCAM (both acquired by Battery Ventures from 3D Systems), and SigmaTEK (acquired by Battery Ventures in 2018). Then, last week, Sandvik said it was adding Mastercam to that list … It’s clearly time to dig a little deeper into Sandvik and why it’s doing this.
First, a little background on Sandvik. Sandvik operates in three main spheres: rocks, machining, and materials. For the rocks part of the business, the company makes mining/rock extraction and rock processing (crushing, screening, and the like) solutions. Very cool stuff but not relevant to the CAM discussion.
The materials part of the business develops and sells industrial materials; Sandvik is in the process of spinning out this business. Also interesting but …
The machining part of the business is where things get more relevant to us. Sandvik Machining & Manufacturing Solutions (SMM) has been supplying cutting tools and inserts for many years, via brands like Sandvik, SECO, Miranda, Walter, and Dormer Pramet, and sees a lot of opportunity in streamlining the processes around the use of specific tools and machines. Lightweighting and sustainability efforts in end industries are driving interest in new materials and more complex components, as well as tighter integration between design and manufacturing operations. That digitalization across an enterprise's areas of business, Sandvik thinks, plays into its strengths.
According to info from the company’s 2020 Capital Markets Day, rocks and materials are steady but slow revenue growers. The company had set a modest 5% revenue growth target but had consistently been delivering closer to 3% — what to do? Like many others, the focus shifted to (1) software and (2) growth by acquisition. Buying CAM companies ticked both of those boxes, bringing repeatable, profitable growth. In an area the company already had some experience in.
Back to digitalization. If we think of a manufacturer as having (in-house or with partners) a design function, which sends the concept on to production preparation, then to machining, and, finally, to verification/quality control, Sandvik wants to expand outwards from machining to that entire world. Sandvik wants to help customers optimize the selection of tools, the machining strategy, and the verification and quality workflow.
The Manufacturing Solutions subdivision within SMM was created last year to go after this opportunity. It’s got 3 areas of focus: automating the manufacturing process, industrializing additive manufacturing, and expanding the use of metrology to real-time decision making.
The CGTech acquisition last year was the first step in realizing this vision. Vericut is prized for its ability to work with any CAM, machine tool, and cutting tool for NC code simulation, verification, optimization, and programming. CGTech is a long-time supplier of Vericut software to Sandvik’s Coromant production units, so the companies knew one another well. Vericut helps Sandvik close that digitalization/optimization loop — and, of course, gives it access to the many CAM users out there who do not use Coromant.
But verification is only one part of the overall loop, and in some senses, the last. CAM, on the other hand, is the first (after design). Sandvik saw CAM as "the most important market to enter due to attractive growth rates – and its proximity to Sandvik Manufacturing and Machining Solutions' core business." Adding Cimatron, GibbsCAM, SigmaTEK, and Mastercam gets Sandvik that much closer to offering clients a set of solutions to digitize their complete workflows.
And it makes business sense to add CAM to the bigger offering:
To head off one question: As of last week’s public statements, anyway, Sandvik has no interest in getting into CAD, preferring to leave that battlefield to others, and continue on its path of openness and neutrality.
And because some of you asked: there is some overlap in these acquisitions, but remarkably little, considering how established these companies all are. GibbsCAM is mostly used for production milling and turning; Cimatron is used in mold and die — and with a big presence in automotive, where Sandvik already has a significant interest; and SigmaNEST is for sheet metal fabrication and material requisitioning.
One interesting (to me, anyway) observation: 3D Systems sold Gibbs and Cimatron to Battery in November 2020. Why didn’t Sandvik snap it up then? Why wait until July 2021? A few possible reasons: Sandvik CEO Stefan Widing has been upfront about his company’s relative lack of efficiency in finding/closing/incorporating acquisitions; perhaps it was simply not ready to do a deal of this type and size eight months earlier. Another possible reason: One presumes 3D Systems “cleaned up” Cimatron and GibbsCAM before the sale (meaning, separating business systems and financials from the parent, figuring out HR, etc.) but perhaps there was more to be done, and Sandvik didn’t want to take that on. And, finally, maybe the real prize here for Sandvik was SigmaNEST, which Battery Ventures had acquired in 2018, and Cimatron and GibbsCAM simply became part of the deal. We may never know.
This whole thing is fascinating. A company out of left field, acquiring these premium PLMish assets. Spending major cash (although we don’t know how much because of non-disclosures between buyer and sellers) for a major market presence.
No one has ever asked me about a CAM roll-up, yet I’m constantly asked about how an acquirer could create another Ansys. Perhaps that was the wrong question, and it should have been about CAM all along. It’s possible that the window for another company to duplicate what Sandvik is doing may be closing since there are few assets left to acquire.
Sandvik's CAM acquisitions haven't closed yet, but assuming they do, there's a strong fit between CAM and Sandvik's other manufacturing-focused business areas. It's more software, with its happy margins. And, finally, it lets Sandvik address the entire workflow from just after component design to machining and on to verification. Mr. Widing says that Sandvik first innovated in hardware, then in service – and now, in software to optimize the component part manufacturing process. This is where gains will come, he says: in maximizing productivity and tool longevity. Further out, he envisions measuring every part to see how the process can be further optimized. It's a sound investment in the evolution of both Sandvik and manufacturing.
We all love a good reinvention story, and how Sandvik executes on this vision will, of course, determine if the reinvention was successful. And, of course, there’s always the potential for more news of this sort …
I missed this last month — Sandvik also acquired Cambrio, which is the combined brand for what we might know better as GibbsCAM (milling, turning), Cimatron (mold and die), and SigmaNEST (nesting, obvs). These three were spun out of 3D Systems last year, acquired by Battery Ventures — and now sold on to Sandvik.
This was announced in July, and the acquisition is expected to close in the second half of 2021 — we’ll find out on Friday if it already has.
At that time, Sandvik said its strategic aim is to "provide customers with software solutions enabling automation of the full component manufacturing value chain – from design and planning to preparation, production and verification … By acquiring Cambrio, Sandvik will establish an important position in the CAM market that includes both toolmaking and general-purpose machining. This will complement the existing customer offering in Sandvik Manufacturing Solutions".
Cambrio has around 375 employees and in 2020, had revenue of about $68 million.
If we do a bit of math, Cambrio's $68 million + CNC Software's $60 million + CGTech's (Vericut's maker) $54 million add up to $182 million in acquired CAM revenue. Not bad.
More on Friday.
CNC Software and its Mastercam have been a mainstay among CAM providers for decades, marketing its solutions as independent, focused on the workgroup and individual. That is about to change: Sandvik, which bought CGTech late last year, has announced that it will acquire CNC Software to build out its CAM offerings.
According to Sandvik’s announcement, CNC Software brings a “world-class CAM brand in the Mastercam software suite with an installed base of around 270,000 licenses/users, the largest in the industry, as well as a strong market reseller network and well-established partnerships with leading machine makers and tooling companies”.
We were taken by surprise by the CGTech deal — but shouldn’t be by the Mastercam acquisition. Stefan Widing, Sandvik’s CEO explains it this way: “[Acquiring Mastercam] is in line with our strategic focus to grow in the digital manufacturing space, with special attention on industrial software close to component manufacturing. The acquisition of CNC Software and the Mastercam portfolio, in combination with our existing offerings and extensive manufacturing capabilities, will make Sandvik a leader in the overall CAM market, measured in installed base. CAM plays a vital role in the digital manufacturing process, enabling new and innovative solutions in automated design for manufacturing.” The announcement goes on to say, “CNC Software has a strong market position in CAM, and particularly for small and medium-sized manufacturing enterprises (SME’s), something that will support Sandvik’s strategic ambitions to develop solutions to automate the manufacturing value chain for SME’s – and deliver competitive point solutions for large original equipment manufacturers (OEM’s).”
Sandvik says that CNC Software has 220 employees, with revenue of $60 million in 2020, and a “historical annual growth rate of approximately 10 percent and is expected to outperform the estimated market growth of 7 percent”.
No purchase price was disclosed, but the deal is expected to close during the fourth quarter.
Sandvik is holding a call about this on Friday — more updates then, if warranted.
Bentley continues to grow its deep expertise in various AEC disciplines — most recently, expanding its focus in underground resource mapping and analysis. This diversity serves it well; read on.
In Q2,
Unlike AspenTech, Bentley's revenue growth is speeding up (total revenue up 21% in Q2, including a wee bit from Seequent, and up 17% for the first six months of 2021). Why the difference? IMHO, because Bentley has a much broader base, selling into many more end industries as well as to road/bridge/water/wastewater infrastructure projects that keep going, Covid or not. CEO Greg Bentley told investors that some parts of the business are back to, or even better than, pre-pandemic levels, but not yet all. He said that the company continues to struggle in industrial and resources capital expenditure projects, and therefore in the geographies (the Middle East and Southeast Asia) that are the most dependent on this sector. This is balanced against continued success in new accounts and the company's reinvigorated selling to small and medium enterprises via its Virtuosity subsidiary — and in a resurgence in the overall commercial/facilities sector. In general, it appears that sales to contractors such as architects and engineers lag behind those to owners and operators of commercial facilities, which makes sense as many new projects are still on pause until pandemic-related effects settle down.
One unusual comment from Bentley’s earnings call that we’re going to listen for on others: The government of China is asking companies to explain why they are not using locally-grown software solutions; it appears to be offering preferential tax treatment for buyers of local software. As Greg Bentley told investors, “[d]uring the year to date, we have experienced a rash of unanticipated subscription cancellations within the mid-sized accounts in China that have for years subscribed to our China-specific enterprise program … Because we don’t think there are product issues, we will try to reinstate these accounts through E365 programs, where we can maintain continuous visibility as to their usage and engagement”. So, to recap: the government is using taxation to prefer one set of vendors over another, and all Bentley can do (really) is try to bring these accounts back and then monitor them constantly to keep on top of emerging issues. FWIW, in the pre-pandemic filings for Bentley’s IPO, “greater China, which we define as the Peoples’ Republic of China, Hong Kong and Taiwan … has become one of our largest (among our top five) and fastest-growing regions as measured by revenue, contributing just over 5% of our 2019 revenues”. Something to watch.
The company updated its financial outlook for 2021 to include the recent Seequent acquisition and this moderate level of economic uncertainty. Bentley might actually join the billion-dollar club on a pro forma basis — as if the acquisition of Seequent had occurred at the beginning of 2021. On a reported basis, the company sees total revenue between $945 million and $960 million, or an increase of around 18%, including Seequent. Excluding Seequent, Bentley sees organic revenue growth of 10% to 11%.
Much more here, on Bentley’s investor website.
We still have to hear from Autodesk, but there’s been a lot of AECish earnings news over the last few weeks. This post starts a modest series as we try to catch up on those results.
AspenTech reported results for its fiscal fourth quarter of 2021 last week: total revenue of $198 million in FQ4, down 2% from a year ago. License revenue was $145 million, down 3%; maintenance revenue was $46 million, basically flat when compared to a year earlier; and services and other revenue was $7 million, up 9%.
For the year, total revenue was up 19% to $709 million, license revenue was up 28%, maintenance was up 4% and services and other revenue was down 18%.
Looking ahead, CEO Antonio Pietri said that he is “optimistic about the long-term opportunity for AspenTech. The need for our customers to operate their assets safely, sustainably, reliably and profitably has never been greater … We are confident in our ability to return to double-digit annual spend growth over time as economic conditions and industry budgets normalize.” The company sees fiscal 2022 total revenue of $702 million to $737 million, which is up just $10 million from final 2021 at the midpoint.
Why the slowdown in FQ4 from earlier in the year? And why the modest guidance for fiscal 2022? One word: Covid. And the uncertainty it creates among AspenTech’s customers when it comes to spending precious cash. AspenTech expects its visibility to improve when new budgets are set in the calendar fourth quarter. By then, AspenTech hopes, its customers will have a clearer view of reopening, consumer spending, and the timing of an eventual recovery.
Lots more detail here on AspenTech’s investor website.
Next up, Bentley. Yup. Alphabetical order.
There is an interesting new trend in using Computational Fluid Dynamics (CFD). Until recently CFD simulation was focused on existing and future things, think flying cars. Now we see CFD being applied to simulate fluid flow in the distant past, think fossils.
CFD shows Ediacaran dinner party featured plenty to eat and adequate sanitation
Let's first address the elephant in the room - it's been a while since the last Caedium release. The multi-substance infrastructure for the Conjugate Heat Transfer (CHT) capability was a much larger effort than I anticipated and consumed a lot of resources. This led to the relative quiet you may have noticed on our website. However, with the new foundation laid and solid, we can look forward to a bright future.
Conjugate Heat Transfer Through a Water-Air Radiator
Simulation shows separate air and water streamline paths colored by temperature
It turns out that Computational Fluid Dynamics (CFD) has a key role to play in determining the behavior of long extinct creatures. In a previous post, we described a CFD study of Parvancorina, and now Pernille Troelsen at Liverpool John Moores University is using CFD for insights into how long-necked plesiosaurs might have swum and hunted.
CFD Water Flow Simulation over an Idealized Plesiosaur: Streamline Vectors. Illustration only, not part of the study.
Fossilized imprints of Parvancorina from over 500 million years ago have puzzled paleontologists for decades. What makes it difficult to infer their behavior is that Parvancorina have none of the familiar features we might expect of animals, e.g., limbs, mouth. In an attempt to shed some light on how Parvancorina might have interacted with their environment researchers have enlisted the help of Computational Fluid Dynamics (CFD).
CFD Water Flow Simulation over a Parvancorina: Forward direction. Illustration only, not part of the study.
One of nature's smallest aerodynamic specialists - insects - have provided a clue to more efficient and robust wind turbine design.
Dragonfly: Yellow-winged Darter. License: CC BY-SA 2.5, André Karwath.
The recent attempt to break the 2 hour marathon came very close at 2:00:24, with various aids that would be deemed illegal under current IAAF rules. The bold and obvious aerodynamic aid appeared to be a Tesla fitted with an oversized digital clock leading the runners by a few meters.
2 Hour Marathon Attempt
In this post, I’ll give a simple example of how to create curves in blockMesh. For this example, we’ll look at the following basic setup:
As you can see, we'll be simulating the flow over a bump defined by the curve y = 0.1 sin(πx) for 0 ≤ x ≤ 1.
First, let’s look at the basic blockMeshDict for this blocking layout WITHOUT any curves defined:
/*--------------------------------*- C++ -*----------------------------------*\
========= |
\\ / F ield | OpenFOAM: The Open Source CFD Toolbox
\\ / O peration | Website: https://openfoam.org
\\ / A nd | Version: 6
\\/ M anipulation |
\*---------------------------------------------------------------------------*/
FoamFile
{
version 2.0;
format ascii;
class dictionary;
object blockMeshDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
convertToMeters 1;
vertices
(
(-1 0 0) // 0
(0 0 0) // 1
(1 0 0) // 2
(2 0 0) // 3
(-1 2 0) // 4
(0 2 0) // 5
(1 2 0) // 6
(2 2 0) // 7
(-1 0 1) // 8
(0 0 1) // 9
(1 0 1) // 10
(2 0 1) // 11
(-1 2 1) // 12
(0 2 1) // 13
(1 2 1) // 14
(2 2 1) // 15
);
blocks
(
hex (0 1 5 4 8 9 13 12) (20 100 1) simpleGrading (0.1 10 1)
hex (1 2 6 5 9 10 14 13) (80 100 1) simpleGrading (1 10 1)
hex (2 3 7 6 10 11 15 14) (20 100 1) simpleGrading (10 10 1)
);
edges
(
);
boundary
(
inlet
{
type patch;
faces
(
(0 8 12 4)
);
}
outlet
{
type patch;
faces
(
(3 7 15 11)
);
}
lowerWall
{
type wall;
faces
(
(0 1 9 8)
(1 2 10 9)
(2 3 11 10)
);
}
upperWall
{
type patch;
faces
(
(4 12 13 5)
(5 13 14 6)
(6 14 15 7)
);
}
frontAndBack
{
type empty;
faces
(
(8 9 13 12)
(9 10 14 13)
(10 11 15 14)
(1 0 4 5)
(2 1 5 6)
(3 2 6 7)
);
}
);
// ************************************************************************* //
This blockMeshDict produces the following grid:
It is best practice in my opinion to first make your blockMesh without any edges. This lets you see if there are any major errors resulting from the block topology itself. From the results above, we can see we’re ready to move on!
So now we need to define the curve. In blockMesh, curves are added using the edges sub-dictionary. This is a simple sub dictionary that is just a list of interpolation points:
edges
(
polyLine 1 2
(
(0 0 0)
(0.1 0.0309016994 0)
(0.2 0.0587785252 0)
(0.3 0.0809016994 0)
(0.4 0.0951056516 0)
(0.5 0.1 0)
(0.6 0.0951056516 0)
(0.7 0.0809016994 0)
(0.8 0.0587785252 0)
(0.9 0.0309016994 0)
(1 0 0)
)
polyLine 9 10
(
(0 0 1)
(0.1 0.0309016994 1)
(0.2 0.0587785252 1)
(0.3 0.0809016994 1)
(0.4 0.0951056516 1)
(0.5 0.1 1)
(0.6 0.0951056516 1)
(0.7 0.0809016994 1)
(0.8 0.0587785252 1)
(0.9 0.0309016994 1)
(1 0 1)
)
);
The sub-dictionary above is just a list of points on the curve y = 0.1 sin(πx). The interpolation method is polyLine (straight lines between interpolation points). An alternative interpolation method could be spline. (A short script for generating these points is included at the end of this post.)
The following mesh is produced:
Hopefully this simple example will help some people looking to incorporate curved edges into their blockMeshing!
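If you would rather not type the interpolation points by hand, a short script can generate them from the bump equation. This is just a convenience sketch; the point count and the z-planes are assumptions you should match to your own blocking.

# Generate polyLine interpolation points for the bump y = 0.1*sin(pi*x).
# Adjust the point count and z-plane values to match your own blockMesh setup.
import math

def bump_points(n=11, z=0.0):
    pts = []
    for i in range(n):
        x = i / (n - 1)
        y = 0.1 * math.sin(math.pi * x)
        pts.append(f"    ({x:g} {y:.10g} {z:g})")
    return "\n".join(pts)

print("polyLine 1 2\n(")
print(bump_points(z=0.0))
print(")")
print("polyLine 9 10\n(")
print(bump_points(z=1.0))
print(")")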
Cheers.
This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.
Experimentally visualizing high-speed flow was a serious challenge for decades. Before the advent of modern laser diagnostics and velocimetry, the only real techniques for visualizing high speed flow fields were the optical techniques of Schlieren and Shadowgraph.
Today, Schlieren and Shadowgraph remain an extremely popular means to visualize high-speed flows. In particular, Schlieren and Shadowgraph allow us to visualize complex flow phenomena such as shockwaves, expansion waves, slip lines, and shear layers very effectively.
In CFD there are many reasons to recreate these types of images. First, they look awesome. Second, if you are doing a study comparing to experiments, occasionally the only full-field data you have could be experimental images in the form of Schlieren and Shadowgraph.
Without going into detail about the techniques themselves, the key point is that Schlieren and Shadowgraph are visualizations of the first and second derivatives, respectively, of the flow field's refractive index (which is directly related to density).
In Schlieren, a knife-edge is used to selectively cut off light that has been refracted. As a result you get a visualization of the first derivative of the refractive index in the direction normal to the knife edge. So for example, if an experiment used a horizontal knife edge, you would see the vertical derivative of the refractive index, and hence the density.
For Shadowgraph, no knife edge is used, and the images are a visualization of the second derivative of the refractive index. Unlike Schlieren images, a Shadowgraph has no preferred direction and shows you the Laplacian of the refractive index field (or density field).
In this post, I’ll use a simple case I did previously (https://curiosityfluids.com/2016/03/28/mach-1-5-flow-over-23-degree-wedge-rhocentralfoam/) as an example and produce some synthetic Schlieren and Shadowgraph images using the data.
As you might expect from the introduction, we do this simply by visualizing gradients of the density field.
In ParaView the necessary tool for this is:
Gradient of Unstructured DataSet:
Once you’ve selected this, we need to set the properties so that the filter operates on the density field:
To do this, simply set the “Scalar Array” to the density field (rho), and change the Result Array Name to SyntheticSchlieren. Now you should see something like this:
There are a couple of problems with the above image: (1) Schlieren images are directional, and this is a magnitude; and (2) Schlieren and Shadowgraph images are black and white. So if you really want your Schlieren images to look like the real thing, you should switch to black and white (although Cold and Hot, Black-Body Radiation, and Rainbow Desaturated all look pretty amazing).
To fix these, you should only visualize one component of the Synthetic Schlieren array at a time, and you should visualize using the X-ray color preset:
The results look pretty realistic:
The process of computing the Shadowgraph field is very similar. However, recall that Shadowgraph visualizes the Laplacian of the density field. BUT THERE IS NO LAPLACIAN CALCULATOR IN PARAVIEW!?! Haha, no big deal. Just remember the basic vector calculus identity that the divergence of a gradient is the Laplacian: ∇·(∇ρ) = ∇²ρ.
Therefore, in order for us to get the Shadowgraph image, we just need to take the Divergence of the Synthetic Schlieren vector field!
To do this, we just have to use the Gradient of Unstructured DataSet tool again:
This time, deselect “Compute Gradient”, select “Compute Divergence”, and change the Divergence array name to Shadowgraph.
Visualized in black and white, we get a very realistic looking synthetic Shadowgraph image:
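For repeat use, the same pipeline can be scripted with pvpython. The sketch below is my own minimal version of the steps described above; the filter and property names follow recent ParaView 5.x releases and may need adjusting in your version, and the reader file name is just a placeholder:

from paraview.simple import *

# Load the OpenFOAM case (placeholder file name)
case = OpenFOAMReader(FileName='case.foam')

# First derivative of rho -> synthetic Schlieren (a vector field)
schlieren = GradientOfUnstructuredDataSet(Input=case)
schlieren.ScalarArray = ['CELLS', 'rho']
schlieren.ResultArrayName = 'SyntheticSchlieren'

# Divergence of that gradient = Laplacian of rho -> synthetic Shadowgraph
shadowgraph = GradientOfUnstructuredDataSet(Input=schlieren)
shadowgraph.ScalarArray = ['CELLS', 'SyntheticSchlieren']
shadowgraph.ComputeGradient = 0
shadowgraph.ComputeDivergence = 1
shadowgraph.DivergenceArrayName = 'Shadowgraph'

# Display the divergence array with the X Ray (black and white) preset
display = Show(shadowgraph)
ColorBy(display, ('CELLS', 'Shadowgraph'))
GetColorTransferFunction('Shadowgraph').ApplyPreset('X Ray', True)
Render()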
So how physically meaningful are these images? This is an important question, but a simple one to answer, and the answer is… not much. We know exactly what the quantities are: Schlieren is the gradient of the density field in one direction, and Shadowgraph is the Laplacian of the density field. But remember that both Schlieren and Shadowgraph are qualitative images: the position of the knife edge, the brightness of the light, etc. all affect how a real experimental Schlieren or Shadowgraph image will look.
This means that, to get a synthetic Schlieren to closely match an experiment, you will often have to adjust the scaling of your synthetic images. In the end, though, you can produce extremely realistic and accurate synthetic Schlieren images.
Hopefully this post will be helpful to some of you out there. Cheers!
Sutherland’s equation is a useful model for the temperature dependence of the viscosity of gases. I give a few details about it in this post: https://curiosityfluids.com/2019/02/15/sutherlands-law/
The law is given by:
μ = μ_ref (T/T_ref)^(3/2) · (T_ref + S)/(T + S)
It is also often simplified (as it is in OpenFOAM) to the two-coefficient form:
μ = A_s T^(3/2) / (T + T_s)
In order to use these equations, obviously, you need to know the coefficients. Here, I’m going to show you how you can create your own Sutherland coefficients using least-squares fitting in Python 3.
So why would you do this? There are two main reasons. First, if you are not using air, the Sutherland coefficients can be hard to find, and even if you do find them they can be hard to reference and of unknown accuracy. Second, creating your own coefficients makes a ton of sense from an academic point of view: in your thesis or paper, you can say that you created them yourself, and you can give an exact number for the fitting error in the temperature range you are investigating.
So let’s say we are looking for a viscosity model of nitrogen (N2) and we can’t find the coefficients anywhere, or, for the second reason above, we’ve decided it’s best to create our own.
By far the simplest way to achieve this is using Python and the Scipy.optimize package.
Step 1: Get Data
The first step is to find a well-known, easily cited source for viscosity data. I usually use the NIST WebBook (https://webbook.nist.gov/), but occasionally the temperatures there aren’t high enough, so you could also pull the data out of a publication somewhere. Here I’ll use the following data from NIST:
Temperature (K) | Viscosity (Pa.s)
200 | 0.000012924
400 | 0.000022217
600 | 0.000029602
800 | 0.000035932
1000 | 0.000041597
1200 | 0.000046812
1400 | 0.000051704
1600 | 0.000056357
1800 | 0.000060829
2000 | 0.000065162
This is the dynamic viscosity of nitrogen (N2) pulled from the NIST database at 0.101 MPa. (Note that in this range the viscosity should be temperature dependent only.)
Step 2: Use python to fit the data
If you are unfamiliar with Python, this may seem a little foreign to you, but the code is extremely simple.
First, we need to load the necessary packages (here, we’ll load numpy, scipy.optimize, and matplotlib):
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
Now we define the sutherland function:
def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)
Next we input the data:
T=[200,
400,
600,
800,
1000,
1200,
1400,
1600,
1800,
2000]
mu=[0.000012924,
0.000022217,
0.000029602,
0.000035932,
0.000041597,
0.000046812,
0.000051704,
0.000056357,
0.000060829,
0.000065162]
Then we fit the data using the curve_fit function from scipy.optimize. This function uses a least squares minimization to solve for the unknown coefficients. The output variable popt is an array that contains our desired variables As and Ts.
popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]
Now we can just output our data to the screen and plot the results if we so wish:
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')
xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)
plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()
Overall the entire code looks like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)
T=[200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]
mu=[0.000012924, 0.000022217, 0.000029602, 0.000035932, 0.000041597,
    0.000046812, 0.000051704, 0.000056357, 0.000060829, 0.000065162]
popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')
xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)
plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()
And the results for nitrogen gas in this range are As=1.55902E-6, and Ts=168.766 K. Now we have our own coefficients that we can quantify the error on and use in our academic research! Wahoo!
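Since the whole point of fitting your own coefficients is that you can quantify the error, a few extra lines appended to the script above (reusing T, mu, sutherland, As, and Ts from that listing) will report it:

# Relative error of the Sutherland fit at each NIST data point
mu_fit = sutherland(np.array(T), As, Ts)
rel_err = 100.0*np.abs(mu_fit - np.array(mu))/np.array(mu)
print('Max relative error:  %.2f %%' % rel_err.max())
print('Mean relative error: %.2f %%' % rel_err.mean())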
In this post, we looked at how to take a database of viscosity-temperature data and use the Python package SciPy to solve for unknown Sutherland viscosity coefficients. The NIST database was used to grab some data, which was then loaded into Python and curve-fit using the scipy.optimize curve_fit function.
This task could also easily be accomplished using the MATLAB curve-fitting toolbox, or perhaps in Excel. However, I have not had good success using the Excel solver to solve for unknown coefficients.
The most common complaint I hear, and the most common problem I observe, with OpenFOAM is its supposed “steep learning curve”. I would argue, however, that for those who want to practice CFD effectively, the learning curve is just as steep for any other software.
There is a distinction that should be made between “user friendliness” and the learning curve required to do good CFD.
While I concede that other commercial programs have better basic user friendliness (a nice graphical interface, drop-down menus, point-and-click options, etc.), it is equally as likely (if not more likely) that you will get bad results in those programs as with OpenFOAM. In fact, to some extent, the high user friendliness of commercial software can encourage a level of ignorance that can be dangerous. Additionally, once you are comfortable operating in the OpenFOAM world, the possibilities become endless, and things like code modification and bash and Python scripting can make OpenFOAM workflows EXTREMELY efficient and powerful.
Anyway, here are a few tips to more easily tackle the OpenFOAM learning curve:
(1) Understand CFD
This may seem obvious… but it’s not to some. Troubleshooting bad simulation results or unstable simulations that crash is impossible if you don’t have at least a basic understanding of what is happening under the hood. My favorite books on CFD are:
(a) The Finite Volume Method in Computational Fluid Dynamics: An Advanced Introduction with OpenFOAM® and Matlab by F. Moukalled, L. Mangani, and M. Darwish
(b) An Introduction to Computational Fluid Dynamics: The Finite Volume Method by H. K. Versteeg and W. Malalasekera
(c) Computational Fluid Dynamics: The Basics with Applications by John D. Anderson
(2) Understand fluid dynamics
Again, this may seem obvious and not very insightful. But if you are going to assess the quality of your results, and understand and appreciate the limitations of the various assumptions you are making – you need to understand fluid dynamics. In particular, you should familiarize yourself with the fundamentals of turbulence, and turbulence modeling.
(3) Avoid building cases from scratch
Whenever I start a new case, I find the tutorial case that most closely matches what I am trying to accomplish. This greatly speeds things up. It will take you a super long time to set up any case from scratch – and you’ll probably make a bunch of mistakes, forget key variable entries etc. The OpenFOAM developers have done a lot of work setting up the tutorial cases for you, so use them!
As you continue to work in OpenFOAM on different projects, you should be compiling a library of your own templates based on previous work.
(4) Using Ubuntu makes things much easier
This is strictly my opinion, but I have found it to be true. Yes, it’s true that Ubuntu has its own learning curve, but I have found that OpenFOAM works seamlessly in Ubuntu or any Ubuntu-like Linux environment. OpenFOAM now has Windows flavors using Docker and the like, but I can’t really speak to how well they work, mostly because I’ve never bothered. Once you unlock the power of Linux, the only reason to use Windows is for Microsoft Office (I guess unless you’re a gamer, and even then more and more games are now on Linux). Not only that, but the vast majority of the forums and troubleshooting threads associated with OpenFOAM that you’ll find on the internet are from Ubuntu users.
I much prefer to use Ubuntu with a virtual Windows environment inside it. My current office setup is my primary desktop running Ubuntu, plus a Windows VirtualBox, plus a laptop running Windows that I use for traditional Windows-type stuff. Dual booting is another option, but seamlessly moving between the environments is easier.
(5) If you’re struggling, simplify
Unless you know exactly what you are doing, you probably shouldn’t dive into the most complicated version of whatever you are trying to solve/study. It is best to start simple, and layer the complexity on top. This way, when something goes wrong, it is much easier to figure out where the problem is coming from.
(6) Familiarize yourself with the cfd-online forum
If you are having trouble, the cfd-online forum is super helpful. Most likely, someone else has had the same problem you have. If not, the people there are extremely helpful, and overall the forum is an extremely positive environment for working out the kinks in your simulations.
(7) The results from checkMesh matter
If you run checkMesh and your mesh fails – fix your mesh. This is important. Especially if you are not planning on familiarizing yourself with the available numerical schemes in OpenFOAM, you should at least have a beautiful mesh. In particular, if your mesh is highly non-orthogonal, you will have serious problems. If you insist on using a bad mesh, you will probably need to manipulate the numerical schemes. A great source for how schemes should be manipulated based on mesh non-orthogonality is:
http://www.wolfdynamics.com/wiki/OFtipsandtricks.pdf
(8) CFL Number Matters
If you are running a transient case, the Courant-Friedrichs-Lewy (CFL) number matters… a lot. Not just for accuracy (if you are trying to capture a transient event) but for stability. If your time step is too large, you are going to have problems. There is a solid mathematical basis for this stability criterion for advection-diffusion problems. Additionally, the Navier-Stokes equations are very non-linear, and the complexity of the problem, the quality of your grid, etc. can make the simulation even less stable. When I have a transient simulation crash, if I know my mesh is OK, I decrease the time step by a factor of 2. More often than not, this solves the problem.
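As a quick sanity check before running, you can estimate an allowable time step directly from the convective CFL condition, Co = U Δt / Δx. Here is a tiny illustrative helper of my own (the numbers in the example are made up):

def max_time_step(u_max, dx_min, target_co=0.5):
    """Largest time step keeping the convective Courant number below target_co."""
    return target_co*dx_min/u_max

# e.g. ~100 m/s through 1 mm cells at a target Co of 0.5 -> dt = 5e-6 s
print(max_time_step(u_max=100.0, dx_min=1e-3, target_co=0.5))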
For larger time steps, you can add outer correction loops to solvers based on the PIMPLE algorithm, but you may end up losing important transient information. An excellent explanation of how to do this is given in the book by T. Holzmann:
https://holzmann-cfd.de/publications/mathematics-numerics-derivations-and-openfoam
For the record, this one falls under point (1), Understand CFD.
(9) Work through the OpenFOAM Wiki “3 Weeks” Series
If you are starting OpenFOAM for the first time, it is worth it to work through an organized program of learning. One such example (and there are others) is the “3 Weeks Series” on the OpenFOAM wiki:
https://wiki.openfoam.com/%223_weeks%22_series
If you are a graduate student, and have no job to do other than learn OpenFOAM, it will not take 3 weeks. This touches on all the necessary points you need to get started.
(10) OpenFOAM is not a second-tier software – it is top tier
I know some people who have started out with the attitude, from the get-go, that they should be using different software. They think that somehow open-source means it is not good. This is a pretty silly attitude. Many top researchers around the world are now using OpenFOAM or some other open-source package, and the number of OpenFOAM citations has grown consistently every year (https://www.linkedin.com/feed/update/urn:li:groupPost:1920608-6518408864084299776/?commentUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518932944235610112%29&replyUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518956058403172352%29).
In my opinion, the only place where mainstream commercial CFD packages will persist is in industry labs where cost is no concern and changing software is more trouble than it’s worth. OpenFOAM has been widely benchmarked and widely validated, from fundamental flows to hypersonics (see any of my 17 publications using it for this). If your results aren’t good, you are probably doing something wrong. If you have the attitude that you would rather be using something else, and are bitter that your supervisor wants you to use OpenFOAM, then when something goes wrong you will immediately think there is something wrong with the program… which is silly, and you may quit.
(11) Meshing… Ugh Meshing
For the record, meshing is an art in any software. But meshing is the only area where I will concede any limitation in OpenFOAM. HOWEVER, as I have outlined in my previous post (https://curiosityfluids.com/2019/02/14/high-level-overview-of-meshing-for-openfoam/) most things can be accomplished in OpenFOAM, and there are enough third party meshing programs out there that you should have no problem.
Basically, if you are starting out in CFD or OpenFOAM, you need to put in the time. If you are expecting to be able to just sit down and produce magnificent results, you will be disappointed. You might quit. And frankly, that’s a pretty stupid attitude. However, if you accept that CFD and fluid dynamics in general are massive fields under constant development, and are willing to get up to speed, there are few limits to what you can accomplish.
Please take the time! If you want to do CFD, learning OpenFOAM is worth it. Seriously worth it.
This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trade marks.
Here I will present something I’ve been experimenting with: a simplified workflow for meshing airfoils in OpenFOAM. If you’re like me (who knows if you are), you simulate a lot of airfoils, partly because of my involvement in various UAV projects, partly through consulting projects, and also for testing and benchmarking OpenFOAM.
Because there is so much data out there on airfoils, they are a good way to test your setups and benchmark solver accuracy. But going from an airfoil .dat coordinate file to a mesh can be a bit of a pain, especially if you are starting from scratch.
The main ways that I have meshed airfoils to date have been:
(a) Mesh it as a C- or O-grid in blockMesh (I have a few templates kicking around for this)
(b) Generate a “ribbon” geometry and mesh it with cfMesh
(c) Or, back in the day when I was a PhD student, use Pointwise – oh how I miss it.
But getting the mesh to look good was always somewhat tedious. So I attempted to come up with a Python script that takes the airfoil data file and a few minimal inputs, and outputs a blockMeshDict file that you just have to run.
The goals were as follows:
(a) Create a C-Grid domain
(b) be able to specify boundary layer growth rate
(c) be able to set the first layer wall thickness
(d) be mostly automatic (few user inputs)
(e) have good mesh quality – pass all checkMesh tests
(f) quality is consistent – meaning when I make the mesh finer, the quality stays the same or gets better
(g) be able to do both closed and open trailing edges
(h) be able to handle most airfoils (up to high cambers)
(i) automatically handle hinge and flap deflections
In Rev 1 of this script, I believe I have accomplished (a) through (f). Presently, it can only handle airfoils with closed trailing edges. Hinge and flap deflections are not possible, and highly cambered airfoils do not give very satisfactory results.
There are existing tools and scripts for automatically meshing airfoils, but personally I wasn’t happy with the results. I also thought this would be a good opportunity to illustrate one of the ways Python can be used to interface with OpenFOAM. So please view this as both a potentially useful script and something you can dissect to learn how to use Python with OpenFOAM. This first version of the script leaves a lot open for improvement, so some may take it and tailor it to their needs!
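As a flavour of what that interfacing looks like, reading a Selig-format .dat file (a name line followed by x, y coordinate pairs) takes only a couple of lines of NumPy. This is an illustrative sketch of mine, not necessarily how the script itself does it:

import numpy as np

def read_selig(filename):
    """Read a Selig-format airfoil file: first line is the name, then x y pairs."""
    with open(filename) as f:
        name = f.readline().strip()
    coords = np.loadtxt(filename, skiprows=1)   # columns: x, y
    return name, coords

# e.g. name, xy = read_selig('clarky.dat')   # hypothetical file name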
Hopefully, this is useful to some of you out there!
You can download the script here:
https://github.com/curiosityFluids/curiosityFluidsAirfoilMesher
Here you will also find a template based on the airfoil2D OpenFOAM tutorial.
(1) Copy curiosityFluidsAirfoilMesher.py to the root directory of your simulation case.
(2) Copy your airfoil coordinates in Selig .dat format into the same folder location.
(3) Modify curiosityFluidsAirfoilMesher.py to your desired values. Specifically, make sure that the string variable airfoilFile refers to the right .dat file.
(4) In the terminal run: python3 curiosityFluidsAirfoilMesher.py
(5) If no errors – run blockMesh
PS: You need to run this with Python 3, and you need to have NumPy installed.
The inputs for the script are very simple:
ChordLength: This is simply the airfoil chord length if not equal to 1. The airfoil .dat file should have a chord length of 1. This variable allows you to scale the domain to a different size.
airfoilfile: This is a string with the name of the airfoil dat file. It should be in the same folder as the python script, and both should be in the root folder of your simulation directory. The script writes a blockMeshDict to the system folder.
DomainHeight: This is the height of the domain in multiples of chords.
WakeLength: Length of the wake domain in multiples of chords
firstLayerHeight: This is the height of the first layer. To estimate the requirement for this size, you can use the curiosityFluids y+ calculator (a rough flat-plate estimate is also sketched just after this list of inputs).
growthRate: Boundary layer growth rate
MaxCellSize: This is the max cell size along the centerline from the leading edge of the airfoil. Some cells will be larger than this depending on the gradings used.
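As mentioned for firstLayerHeight above, a y+ calculator helps you pick the first cell height. A rough flat-plate estimate is easy to sketch in Python; note that this uses a common empirical skin-friction correlation (Cf ≈ 0.026 Re^(-1/7)) and illustrative air properties, so treat it as a starting point only, not as the logic inside the mesher script:

import math

def first_layer_height(y_plus, U, L, rho=1.225, mu=1.8e-5):
    """Flat-plate estimate of the first cell height for a target y+."""
    Re = rho*U*L/mu                      # chord-based Reynolds number
    cf = 0.026/Re**(1.0/7.0)             # empirical skin-friction coefficient
    tau_w = 0.5*cf*rho*U**2              # wall shear stress
    u_tau = math.sqrt(tau_w/rho)         # friction velocity
    return y_plus*mu/(rho*u_tau)

# e.g. target y+ = 1 at 30 m/s over a 1 m chord in air -> roughly 1e-5 m
print(first_layer_height(y_plus=1.0, U=30.0, L=1.0))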
The following inputs are used to improve the quality of the mesh. I have had pretty good results messing around with these to get checkMesh compliant grids.
BLHeight: This is the height of the boundary layer block off of the surfaces of the airfoil
LeadingEdgeGrading: Grading from the 1/4 chord position to the leading edge
TrailingEdgeGrading: Grading from the 1/4 chord position to the trailing edge
inletGradingFactor: This is a grading factor that modifies the grading along the inlet as a multiple of the leading edge grading and can help improve mesh uniformity
trailingBlockAngle: This is an angle in degrees that expresses the angles of the trailing edge blocks. This can reduce the aspect ratio of the boundary cells at the top and bottom of the domain, but can make other mesh parameters worse.
Inputs:
With the above inputs, the grid looks like this:
Mesh Quality:
These are some pretty good mesh statistics. We can also view them in paraView:
The Clark-Y has some camber, so I thought it would be a logical next test after the previous symmetric one. The inputs I used are basically the same as for the previous airfoil:
With these inputs, the result looks like this:
Mesh Quality:
Visualizing the mesh quality:
Here is an example of a flying-wing airfoil (tested since the trailing edge is tilted upwards).
Inputs:
Again, these are basically the same as the others. I have found that with these settings, I get pretty consistently good results. When you change MaxCellSize, firstLayerHeight, and the gradings, some modification may be required. However, if you just halve MaxCellSize and halve firstLayerHeight, you “should” get a similar grid quality, just much finer.
Grid Quality:
Visualizing the grid quality
Hopefully some of you find this tool useful! I plan to release a Rev 2 soon that will have the ability to handle highly cambered airfoils, and open trailing edges, as well as control surface hinges etc.
The long term goal will be an automatic mesher with an H-grid in the spanwise direction so that the readers of my blog can easily create semi-span wing models extremely quickly!
Comments and bug reporting encouraged!
DISCLAIMER: This script is intended as an educational and productivity tool and starting point. You may use and modify how you wish. But I make no guarantee of its accuracy, reliability, or suitability for any use. This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.
Here is a useful little tool for calculating the properties across a normal shock.
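The calculator itself is embedded in the original post; as a reference, the standard normal-shock relations it is based on (for a calorically perfect gas) can be written out in a few lines of Python. This is my own sketch, not the code behind the calculator:

import math

def normal_shock(M1, gamma=1.4):
    """Post-shock Mach number and property ratios across a normal shock."""
    M2 = math.sqrt(((gamma - 1.0)*M1**2 + 2.0)/(2.0*gamma*M1**2 - (gamma - 1.0)))
    p2_p1 = (2.0*gamma*M1**2 - (gamma - 1.0))/(gamma + 1.0)
    rho2_rho1 = ((gamma + 1.0)*M1**2)/((gamma - 1.0)*M1**2 + 2.0)
    T2_T1 = p2_p1/rho2_rho1
    p02_p01 = rho2_rho1**(gamma/(gamma - 1.0)) * (1.0/p2_p1)**(1.0/(gamma - 1.0))
    return M2, p2_p1, rho2_rho1, T2_T1, p02_p01

# e.g. Mach 2 in air: M2 ~ 0.577, p2/p1 = 4.5, rho2/rho1 ~ 2.67, T2/T1 ~ 1.69, p02/p01 ~ 0.72
print(normal_shock(2.0))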
If you found this useful, and have the need for more, visit www.stfsol.com. One of STF Solutions’ specialties is providing clients with custom software developed for their needs, ranging from custom CFD codes to simpler targeted codes, scripts, macros, and GUIs for a wide range of specific engineering purposes such as pipe sizing, pressure-loss calculations, heat-transfer calculations, 1D flow transients, optimization, and more. Visit STF Solutions at www.stfsol.com for more information!
Disclaimer: This calculator is for educational purposes and is free to use. STF Solutions and curiosityFluids make no guarantee of the accuracy of the results, or of their suitability or outcome for any given purpose.