August 22, 2018, 06:04 |
Artificial Intelligence in CFD
|
#1 |
Member
Naresh Yathuru
Join Date: Feb 2015
Posts: 66
Rep Power: 11 |
Hello Everyone,
I recently came across an article about using an Artificial Neural Network (ANN) with back propagation to predict the heat transfer of heat sinks. One of many: https://www.researchgate.net/profile...l-material.pdf The oldest article I found goes quite far back (around 1996). I see the following advantages and disadvantages:

Advantages:
1. An ANN can recognise patterns from previous work and predict new values.
2. It can be more accurate than correlations.
3. Eventually it could also help in optimization.

Disadvantages:
1. Extensive work has to be done to generate the training data.
2. Predicting turbulence from patterns is very difficult.
3. Depending on the number of parameters, the number of CFD simulations needed to generate the training data is enormous.

Having said that, the question is whether it is worth training a neural network with thousands of CFD simulations, or better to increase the computational power and simply simulate the required case. I would like to know if there is any work going on in this field and to hear your ideas on this topic. Best regards, Naresh |
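P.S. To make the idea concrete, here is a rough sketch of the kind of ANN surrogate I have in mind, trained on results from previous CFD runs and then queried for new designs. Everything here is made up for illustration (the input parameters, the fake "thermal resistance" formula standing in for real CFD data, and the scikit-learn setup); it is not taken from the linked article.

Code:
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical training data from earlier CFD runs:
# inputs = [fin height (mm), fin spacing (mm), inlet velocity (m/s)]
# target = thermal resistance (K/W), faked here with an arbitrary formula plus noise
X = rng.uniform([5.0, 1.0, 0.5], [50.0, 5.0, 5.0], size=(500, 3))
y = 1.0 / (0.02 * X[:, 0] * X[:, 2]) + 0.1 * X[:, 1] + rng.normal(0.0, 0.05, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small fully connected network as a surrogate for the CFD results
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
)
model.fit(X_train, y_train)

print("R^2 on held-out cases:", model.score(X_test, y_test))
print("Prediction for a new design:", model.predict([[25.0, 2.5, 2.0]]))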
|
August 22, 2018, 19:05 |
|
#2 |
Member
Join Date: Aug 2018
Posts: 77
Rep Power: 8 |
Dear Naresh,
you are right about the challenges and open issues in using ANNs. Here is one example that answers some of the questions you raised and IMO shows the potential of ML: https://www.researchgate.net/publication/325737916_Deep_Neural_Networks_for_Data-Driven_Turbulence_Models Best, Vesparian |
|
August 23, 2018, 13:54 |
|
#3 |
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,754
Rep Power: 66 |
I would like to point out that an ANN is not a solver like Fluent/OpenFOAM/Star-CCM. An ANN is more like a brain / decision maker. You have a history of results and you would like to run more cases; an ANN can help you decide, in a smart way, which cases to run next. Your CFD doesn't suddenly become more accurate because you are using a neural network (you should be able to run the exact same case without a neural network and get the same solution). I would never say that an ANN predicts a solution; that's what the actual CFD does.
I don't think ANNs are any godsend, but they are a computerized version of what we do as humans, so I see a lot of value in solving repetitive problems with an ANN approach in order to cut out the human behind it. Most human thinking can be digitized in an ANN framework because it is not limited to a binary decision tree (e.g. fuzzy logic works well with ANNs).
Regarding your advantages:
1. You have to train the ANN to recognize the pattern, so I don't see this as an advantage; it's a requirement of the ANN.
2. I don't see an ANN as providing any accuracy benefit, or accuracy as an advantage of ANNs. It's not a solver.
3. This is quite redundant. Of course ANNs can be applied here; it's like saying calculators will help in optimization one day.
Regarding your disadvantages:
1. Yes, the point of an ANN is that it can be trained, and training takes a long time if you don't know what you are doing. But look at AlphaZero and how straightforward it can be to make a very powerful chess engine using simple training rules. If it works, do it! If it doesn't, don't. The difficulty in CFD is that the quality of a result is not so simple to judge. And a lot of the time we are optimizing for multiple objectives, and we don't even know how we want to balance or weight these objectives (unlike chess, where it is win/draw/lose).
2. This is not a drawback specific to ANNs; humans have the same difficulty. And this tends to happen in fields where the predictive models are not very predictive. If turbulence models worked all the time, it would be a cakewalk. It's not the ANN's fault that CFD is not as predictive as it needs to be. It's also not the ANN's fault that we still don't know how to model turbulence well.
3. If you want a complex neural network, yes, it will take a lot. But this training takes place in computer time, not human time.
As for your main question about training versus more computing power: I guess it's not so obvious that we should consider economies of scale. The number of cases that you can run is exactly proportional to your processing power. The number of cases that you don't need to run because you have a trained neural network can vastly exceed that. That is, you can train your neural network on 1000 cases to avoid running 1 million simulations; maybe you find the optimal solution on the 1001st or 1002nd run, I don't know. But if you only need to run 10 cases to find your answer, then there's no point in training a neural network. Use the correct tool for the correct problem. Not discussed is what happens if you do not use a neural network but a different optimization technique, like basic gradient-based search or genetic algorithms. Well, an ANN is a supplement to these approaches: you don't gain anything by not using an ANN, but you also don't lose or break anything either. |
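To put the economies-of-scale argument in pseudo-concrete terms, here is a toy sketch of that workflow (the objective function, sample sizes and scikit-learn model are all placeholders of mine, not a recommendation): spend the CFD budget on a modest sample, train the network on it, let the network screen a huge candidate pool, and verify only the most promising designs with real CFD.

Code:
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def run_cfd(x):
    """Stand-in for one expensive CFD evaluation of a design vector x."""
    return np.sin(3.0 * x[0]) + 0.5 * x[1] ** 2 + rng.normal(0.0, 0.01)

# 1) Spend the CFD budget on ~1000 sampled cases.
X_train = rng.uniform(-1.0, 1.0, size=(1000, 2))
y_train = np.array([run_cfd(x) for x in X_train])

# 2) Train a cheap neural-network surrogate on those cases.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
surrogate.fit(X_train, y_train)

# 3) Screen a huge candidate pool with the surrogate only (no CFD calls here).
candidates = rng.uniform(-1.0, 1.0, size=(1_000_000, 2))
scores = surrogate.predict(candidates)

# 4) Run real CFD only on the few most promising (lowest objective) candidates.
best = candidates[np.argsort(scores)[:5]]
print("CFD check of surrogate-selected designs:", [run_cfd(x) for x in best])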
|
August 25, 2018, 16:16 |
|
#4 |
Member
Join Date: Aug 2018
Posts: 77
Rep Power: 8 |
I believe that the strength of ANNs in CFD can be in model deduction, not in replacing the solver itself. While that is also an intriguing thought, enforcing the governing equations indirectly through an ANN seems dicey to me.
|
|
August 26, 2018, 09:12 |
|
#5 |
Senior Member
Join Date: May 2012
Posts: 551
Rep Power: 16 |
This YouTube channel covers lots of AI papers that deal with CFD.
For example: https://www.youtube.com/watch?v=iOWamCtnwTc |
|
August 26, 2018, 11:30 |
|
#6 |
Senior Member
|
Neural Networks are just an interpolation method, whose main advantage comes in those fields where setting up the interpolation problem is a problem in itself.
Their use in optimization is a very old topic, which however has hardly met great acceptance in practical cases, because of the availability of techniques that are more efficient and mathematically sound (i.e., with known error bounds, etc.). Yet, they have their share of use cases. Unfortunately, today, the unaware reader has to filter all the craziness that has exploded around this field in the last 10 years (mostly because of the flattened risk curve due to quantitative easing, which has made every business appear viable, especially niche nerdy ones... which might also be good, but we are going off topic here). Reading an authoritative and comprehensive book on Neural Networks is the only defense for such readers; yet this is affordable in a week for engineers dealing with CFD (i.e., those already equipped with the mathematical instruments).

The use of neural networks as surrogates for turbulence models (as opposed to the previous use case, where they are used to predict global integral quantities) is yet another case which is both old and misleading to the unaware reader. A NN can only extract information that is present in the input and map it onto its training set. Using a NN as a turbulence model is never going to fill the gap between what we know (i.e., what is currently resolved on your grid) and what we don't (i.e., all the unresolved scales). This is even more so if you consider that the number of degrees of freedom of a turbulent flow increases with the Re number. If they were predictable from the few resolved ones, then turbulence would not be a problem at all.

Finally, using a NN as a full solver (as per the last link) is yet another use case, which however belongs to the entertainment industry. Yet it is similar to the previous one. Honestly, as a scientist, I won't consider any numerical simulation technique without a clear, proven method to control the resulting error and a way to eventually reduce it to 0. Funny and intriguing? Maybe. But please do not confuse science with something else. Unfortunately, these days are also plagued by a lot of automatic, real-time, whatever tools which are user-centric etc., but they all forget about science. It seems that nobody cares anymore about convergence and accuracy, only stability and speed. |
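Just to illustrate the interpolation point with a toy case (nothing to do with turbulence; the function, network size and library are arbitrary choices of mine): a small network fitted to samples of sin(x) does well inside the sampled range and falls apart outside it, exactly as any interpolant would.

Code:
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Train on samples of sin(x) inside [0, 2*pi]
x_train = rng.uniform(0.0, 2.0 * np.pi, size=(200, 1))
y_train = np.sin(x_train).ravel()

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
net.fit(x_train, y_train)

x_inside = np.array([[1.0], [3.0], [5.0]])      # within the training range
x_outside = np.array([[8.0], [10.0], [12.0]])   # extrapolation

print("interpolation error:", np.abs(net.predict(x_inside) - np.sin(x_inside).ravel()))
print("extrapolation error:", np.abs(net.predict(x_outside) - np.sin(x_outside).ravel()))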
|
August 26, 2018, 12:12 |
|
#7 |
Member
Join Date: Aug 2018
Posts: 77
Rep Power: 8 |
I have to disagree here; please see also the link provided above. Of course, NNs cannot create something from nothing, but they can be trained to approximate the mapping from, e.g., coarse-grid to DNS-grid data. If you are familiar with the deconvolution approaches to turbulence modelling, this is just another method of doing that. NNs cannot fill the gap with 100% accuracy, but they can learn a darn good approximation (a toy sketch of the kind of mapping I mean is below).
I agree that how large a net has to be to achieve some form of universal closure model is an open issue.
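To show the flavour of what I mean by learning such a mapping, here is a deliberately crude 1D toy (the "DNS" signal, the box filter and the input features are all stand-ins of mine, not the method of the paper I linked): compute the exact subgrid term from fully resolved data, then fit a small network that maps resolved quantities to it.

Code:
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

n = 4096
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
# Toy "DNS" velocity built from a few random Fourier modes
u = sum(rng.normal() * np.sin(k * x + rng.uniform(0.0, 2.0 * np.pi)) / k
        for k in range(1, 33))

def box_filter(f, width=16):
    """Simple top-hat filter standing in for the LES filter."""
    kernel = np.ones(width) / width
    return np.convolve(f, kernel, mode="same")

u_bar = box_filter(u)
tau = box_filter(u * u) - u_bar * u_bar        # exact subgrid term from the "DNS"

# Inputs: resolved (filtered) velocity and its gradient; target: exact tau
dudx_bar = np.gradient(u_bar, x)
features = np.column_stack([u_bar, dudx_bar])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(features, tau)
print("training R^2 of the learned closure:", net.score(features, tau))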
|
|
August 28, 2018, 08:02 |
|
#8 |
Senior Member
|
First of all, of course, I didn't want to be harsh (maybe just sound imperative for educational purposes), so don't take all of this on the personal level...
...unless you are the author of the paper at the link you posted. In that case, I'm sorry, but that doesn't seem like JFM material at all to me, not to mention the considerable confusion about when commutation holds in LES. However, just for the sake of the argument...
And no, turbulence modeling is not a non-linear interpolation task. It refers to the fact that, on a certain grid, if not a DNS one, you have missing information and, more importantly, a truncated dynamics. Turbulence modeling, in its most common acceptation, refers to the development of a surrogate model for the missing dynamics (i.e., for the role of the missing scales in the overall flow dynamics). More on this later.
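For readers following along, written in the standard LES notation (this is textbook material, not taken from any of the papers linked in this thread), the missing dynamics enters the resolved equations as the unclosed term:

\[
\frac{\partial \bar{u}_i}{\partial t} + \frac{\partial (\bar{u}_i \bar{u}_j)}{\partial x_j}
= -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
+ \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j}
- \frac{\partial \tau_{ij}}{\partial x_j},
\qquad
\tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j ,
\]

where \tau_{ij} depends on the scales that are not resolved on the grid; supplying a surrogate for its effect is precisely the modeling task discussed here.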
Does this qualify as old? Note the submission date, please. We could argue about the differences between the specific NNs adopted, but let's face it: people today are doing this stuff just because the money is there and the whole world is actually going there (just like for GPUs). There is nothing really new under the hood, except money, availability of software, and availability of hardware (in the Amazon sense, not real hardware, which already existed). Admittedly, I haven't followed this field a lot, and probably there are also some interesting ideas (maybe in RANS), but the bulk was already there.

Honestly, it is a shame that the work above was cited last in the work you cited. As it is a shame that, in the end, the supposedly working SGS model in the work you cited is just an eddy viscosity (really?) working worse than static Smagorinsky on HIT. It would be much more interesting if such NNs were used to analyze DNS data, which today are more common and larger than back then. But the plague today is that nobody is using CPU power to gain knowledge, just to prove what they already know. We have increased, by orders of magnitude, both the availability and the power of computing systems over the years, yet no significant discovery has changed our life with respect to 25 years ago. Even if such a NN would eventually provide a perfect turbulence model, do you expect it to provide any significant gain, by itself, to the engineering community?
In LES we have Sub-Grid Scales (SGS) and Sub-Filter Scales (SFS). The former (SGS) refer to scales that are not representable on your grid. As such, there is no trackable information associated with them that you can store somewhere in any form. In LES we talk about functional modeling in relation to the common turbulence-modeling practice (as previously explained) of providing a model for such scales. It is called functional because, considering the total lack of information related to them, you can only hope to provide a model that dynamically acts like them, that has the same functional role. Typically, for several reasons, we rely on a dissipative model (e.g., through an eddy viscosity). We are also obliged, for practical reasons, to use only the available information to formulate such models but, it goes without saying, this has no sound reasoning behind it, except that, at large, it actually works.

The latter (SFS) refer instead to scales that are actually representable on your grid but, for some reason, have been altered. The information is there, but it has been transformed. Deconvolution is one among several techniques aimed at recovering those represented scales. It is not even always applicable: it requires the LES to be explicitly filtered, so that you know what actually altered those scales (the explicit filter) and can attempt an approximate, well-conditioned recovery (filter inversion). You should also note that not all explicit filters are usable in this context. Projection filters are not, as they lead to a total loss of information which cannot be recovered (a thing that the authors of your cited paper also seem not to know). This has nothing to do with turbulence modeling, and indeed it is inspired by other fields.

Let's make an example. You have a detailed picture of a congress meeting with all the participants, with each pixel representing a square mm. Then you do two different operations on that picture. In the first one, you blur it with a filter but leave the resolution unaltered. In the second one, you just cut the pixel resolution to 50 cm. By deconvolution, you can recover the first image but not the second one. You can teach a NN to do a deconvolution, but that wouldn't change the matter. Now, you can also actually teach a NN to recover the full picture from the second, coarse one, but would it work for any congress with any combination of people? You can train the NN with all the congress pictures ever taken, yet they all look the same at 50 cm resolution.
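Before going on with the analogy, the blur-versus-coarsen distinction can be mimicked with a one-dimensional toy (my own sketch; the Gaussian filter, the signal and the van Cittert iteration count are arbitrary, and this is in no way an LES code): a field that has been smoothly filtered at the same resolution can be approximately deconvolved, while a field whose resolution has been thrown away cannot.

Code:
import numpy as np

n = 512
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
signal = sum(np.sin(k * x) / k for k in range(1, 40))

def gauss_filter(f, sigma=3.0):
    """Smooth filter that damps high wavenumbers but keeps the resolution."""
    k = np.fft.fftfreq(f.size, d=1.0 / f.size)        # integer wavenumbers
    damping = np.exp(-0.5 * (sigma * 2.0 * np.pi * k / f.size) ** 2)
    return np.fft.ifft(np.fft.fft(f) * damping).real

def van_cittert(f_bar, filt, n_iter=10):
    """Approximate deconvolution: sum_{j=0..n_iter} (I - G)^j applied to f_bar."""
    u = f_bar.copy()
    for _ in range(n_iter):
        u = u + (f_bar - filt(u))
    return u

# Case 1: blur the field but keep every grid point -> largely recoverable
blurred = gauss_filter(signal)
recovered = van_cittert(blurred, gauss_filter)
print("max error after deconvolving the blurred field:", np.max(np.abs(recovered - signal)))

# Case 2: throw away resolution (keep every 8th point) -> the discarded scales are simply gone
coarse = signal[::8]
print("grid points discarded by coarsening:", signal.size - coarse.size,
      "(no deconvolution can bring their content back)")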
Imagine you trained your NN to work with 50 cm resolution pictures of DLES congresses (order 100 participants). Now you use your NN on 50 cm resolution pictures of AIAA conferences (order 1000 participants). Does it work the same? That's how turbulence behaves as a function of the Re number: you get more and more people in the picture. You can maybe teach a NN how these people typically sit for such a picture, but not if, for a given venue, they had to arrange themselves differently because there were too many of them. Turbulence is maybe not that drastic as a function of the Re number, but the universality concept implies that you know everything about every flow in every condition... and that this information is actually already contained in the resolved scales only. It is like saying the 50 cm resolution pictures can represent all the conference pictures ever made and that ever will be made. At this point I would also ask whether 50 cm is a special number or whether it applies to other measures as well (if you know what I mean here, in turbulence terms).
And conservation has nothing to do with accuracy and convergence. Finite difference methods do not typically conserve stuff, yet they are accurate and convergent. |
|
August 28, 2018, 10:54 |
|
#9 |
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,882
Rep Power: 73 |
SGS models for LES based on NN are quite old; see for example https://www.sciencedirect.com/scienc...45793001000986 Then, let me state that the deconvolution method can NEVER reconstruct a DNS field. When you apply a deconvolution technique to a discrete field that extends up to the Nyquist frequency, you get a deconvolved field that still extends up to the Nyquist frequency! The content between the Nyquist and Kolmogorov frequencies is not reconstructed by the deconvolution. |
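In spectral terms (a standard argument, spelled out here only for clarity): on a grid of spacing h the discrete field contains wavenumbers only up to the Nyquist limit k_c = \pi / h, and any approximate inversion of the filter transfer function \hat{G}(k), for instance the van Cittert series,

\[
\hat{u}^\star(k) \;=\; \hat{\bar{u}}(k) \sum_{j=0}^{N} \bigl(1 - \hat{G}(k)\bigr)^{j}
\;\longrightarrow\; \frac{\hat{\bar{u}}(k)}{\hat{G}(k)}
\quad \text{as } N \to \infty \ \ (\text{when } |1-\hat{G}(k)|<1),
\qquad |k| \le k_c ,
\]

can only rescale modes that are already represented; the modes with k_c < |k| \le k_\eta (up to the Kolmogorov wavenumber) satisfy \hat{\bar{u}}(k) = 0 on the grid and remain zero after deconvolution.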
|
August 28, 2018, 11:10 |
|
#10 |
Member
Join Date: Aug 2018
Posts: 77
Rep Power: 8 |
Best Vesparian |
|
August 28, 2018, 11:53 |
|
#11 |
Senior Member
|
Peer review is not only flawed, it has actually crashed, but nobody has noticed yet. I am now seriously scared of how people do research in medicine. |
|
August 28, 2018, 13:14 |
|
#12 |
Super Moderator
|
I would just like to point out a recent review paper on applications of ML to turbulence modeling:
https://arxiv.org/abs/1804.00183 ML has had great and real success in other fields. Here is an article on its role in image processing: https://sinews.siam.org/Details-Page/deep-deep-trouble |
|
August 28, 2018, 13:28 |
|
#13 |
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,882
Rep Power: 73 |
Yes, ML in image processing can be very useful, but that problem is quite far from being similar to a new definition of a model closure. Generally, image reconstruction techniques are governed by parabolic PDEs and do not involve fractal-like pictures, as happens in turbulence. The real issue that has not been highlighted is that we already know from DNS solutions that extracting the unresolved fields from those data and inserting them into practical computations still produces unsatisfactory solutions. Thus, I don't think that a ML algorithm can change this framework. |
|
August 28, 2018, 13:33 |
|
#14 |
Member
Join Date: Aug 2018
Posts: 77
Rep Power: 8 |
Could you please elaborate on this? Do you mean akin to a perfect LES approach, where the exact closure terms are generated from the DNS? |
|
August 28, 2018, 13:47 |
|
#15 |
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,882
Rep Power: 73 |
Yes, some studies tried to do this in both LES and RANS formulations. Have a look at the discussion in this recent article: https://www.researchgate.net/publica...ll-Conditioned |
|
August 28, 2018, 14:19 |
|
#16 |
New Member
Argyris Apost
Join Date: Nov 2014
Posts: 7
Rep Power: 11 |
I am a postgraduate student, so I don't have any experience with AI in CFD, but I see some serious research regarding data-driven turbulence modeling from NASA, the University of Michigan, ONERA, etc.
http://turbgate.engin.umich.edu/symp.../Duraisamy.pdf http://turbgate.engin.umich.edu/symp...2/Fabbiane.pdf From what I understand, they use data from DNS and experiments to optimize the coefficients in the Spalart-Allmaras model. They also suggest a similar procedure can be implemented for Reynolds stress models, which are more difficult to calibrate so as to be applicable in a wide range of flows. I am really interested in doing some research in this area, but I have doubts about its future (will it get good funding, or will it be abandoned?). |
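As far as I understand it, the calibration step boils down to an optimization of model constants against reference data. A heavily simplified sketch of that idea follows (the log-law "model" and data below are toy stand-ins of mine; a real Spalart-Allmaras calibration would wrap an actual RANS solver in the misfit function):

Code:
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical reference data: mean-velocity samples following the log law with kappa = 0.41
y_plus = np.logspace(1.5, 3.0, 30)
u_plus_ref = (1.0 / 0.41) * np.log(y_plus) + 5.0

def model_prediction(kappa):
    """Toy 'turbulence model': the log law with a tunable von Karman constant."""
    return (1.0 / kappa) * np.log(y_plus) + 5.0

def misfit(kappa):
    """Least-squares mismatch between model output and reference data."""
    return np.sum((model_prediction(kappa) - u_plus_ref) ** 2)

result = minimize_scalar(misfit, bounds=(0.2, 0.6), method="bounded")
print("calibrated coefficient:", result.x)   # recovers ~0.41 for this toy data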
|
August 28, 2018, 14:23 |
|
#17 |
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,882
Rep Power: 73 |
Who can tell you the answer? AI and ML are just fashionable new nomenclature, and it seems that, at present, this is sufficient to get funds (and publications). |
|
August 28, 2018, 14:23 |
|
#18 |
Member
Join Date: Aug 2018
Posts: 77
Rep Power: 8 |
https://journals.aps.org/pre/abstrac...RevE.75.046303 |
|
August 28, 2018, 14:27 |
|
#19 |
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,882
Rep Power: 73 |
For example, you can read about this approach for LES:
https://www.researchgate.net/publica...ddy_simulation https://www.researchgate.net/publica...ddy_simulation |
|
August 28, 2018, 14:33 |
|
#20 |
New Member
Argyris Apost
Join Date: Nov 2014
Posts: 7
Rep Power: 11 |
Yeah, that's exactly what bothers me. Are AI and ML being used just because they are fashionable, or can they actually lead to improvements in the field of turbulence modeling, which has been "stagnant" for quite some time now? (I am not looking for answers, just opinions.)
|