2016 Xeon processors, Intel Omni-Path interconnect and InfiniBand direct connect |
December 8, 2015, 12:09 |
2016 Xeon processors, Intel Omni-Path interconnect and InfiniBand direct connect
|
#1 |
Member
|
Hi
1/ Does anyone know when the 2016 Xeon processors for 2U servers will become available?
2/ Does anyone know when the Intel Omni-Path fabric will become available?
3/ I read an article a while ago in which Mellanox said they had a clear technological lead over Intel. I'm wondering whether it is technically possible for them to have an advantage over Intel, given that Intel already does the interconnect between processors. Is there such a huge gap between the technology required to get 8 processors working together on the same motherboard and the technology required to get 5000 working together in a cluster?
4/ Is it still possible with the latest InfiniBand technology to connect two InfiniBand cards directly, i.e. with no switch?
5/ Is direct connect possible with Omni-Path?
Thanks
Guillaume with trampoCFD |
|
December 22, 2015, 08:52 |
|
#2 |
Member
Kim Bindesbøll Andersen
Join Date: Oct 2010
Location: Aalborg, Denmark
Posts: 39
Rep Power: 16 |
My cluster is 2 nodes of dual Xeon E5-2667 v2, connected directly using InfiniBand with no switch. I don't know whether any changes to InfiniBand have occurred during the last two years. Best regards Kim Bindesbøll |
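For anyone trying the same back-to-back setup, a quick way to check that both ports came up is to read the port state from sysfs. This is only a minimal sketch assuming the usual Linux sysfs layout for the InfiniBand stack (device names and paths may differ on your system); also, if a port stays in INIT rather than ACTIVE, a subnet manager such as opensm normally has to run on one of the two nodes, since there is no switch to provide one.

Code:
# List every InfiniBand port with its state and link rate.
# Assumes the standard /sys/class/infiniband layout; adjust if needed.
import glob
import os

def read(port, name):
    try:
        with open(os.path.join(port, name)) as f:
            return f.read().strip()
    except IOError:
        return "n/a"

for port in sorted(glob.glob("/sys/class/infiniband/*/ports/*")):
    print(port, "state:", read(port, "state"), "rate:", read(port, "rate"))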
|
December 23, 2015, 06:31 |
|
#3 |
New Member
Join Date: Feb 2010
Posts: 17
Rep Power: 16 |
And does your cluster scale well on small simulations, around the 2M cell mark?
I noticed at my previous company that direct connect was probably nowhere near as good as it should have been. I wonder if direct connect somehow increases the latency. |
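A simple way to check that would be an MPI ping-pong between the two nodes. Below is a minimal sketch using mpi4py (assuming mpi4py and an MPI stack are installed; the host names and script name in the launch line are placeholders, e.g. mpirun -np 2 -host node1,node2 python pingpong.py):

Code:
# MPI ping-pong between two nodes: rank 0 reports the average
# one-way latency of a 1-byte message.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

reps = 1000
buf = np.zeros(1, dtype="b")  # 1-byte message to expose pure latency

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=0)
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
t1 = MPI.Wtime()

if rank == 0:
    # Each repetition is a full round trip, so halve it for one-way latency.
    print("one-way latency: %.2f us" % ((t1 - t0) / reps / 2.0 * 1e6))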
|
January 4, 2016, 03:17 |
|
#4 |
Member
Kim Bindesbøll Andersen
Join Date: Oct 2010
Location: Aalborg, Denmark
Posts: 39
Rep Power: 16 |
See this post:
http://www.cfd-online.com/Forums/har...l#post452000#8
For later scaling tests I have done, see the attached picture. |
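In case it helps anyone compare against their own runs, here is a minimal sketch (not taken from these tests) for turning wall-clock times at different core counts into speedup and parallel efficiency; the timings in the example call are placeholders, not measured data:

Code:
# Speedup and parallel efficiency relative to the smallest core count.
def report_scaling(times_by_cores):
    base_cores = min(times_by_cores)
    base_time = times_by_cores[base_cores]
    for cores in sorted(times_by_cores):
        speedup = base_time / times_by_cores[cores]
        efficiency = speedup / (cores / float(base_cores))
        print("%4d cores: speedup %5.2f, efficiency %4.0f%%"
              % (cores, speedup, 100.0 * efficiency))

report_scaling({8: 100.0, 16: 52.0, 32: 29.0})  # placeholder numbers only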
|
January 8, 2016, 09:18 |
|
#5 |
New Member
Join Date: Feb 2010
Posts: 17
Rep Power: 16 |
Hi Kim, thanks for that. So the InfiniBand models were run with direct connect: just a card in each node, with an InfiniBand cable and no switch, correct?
|
|
January 11, 2016, 02:29 |
|
#6 |
Member
Kim Bindesbøll Andersen
Join Date: Oct 2010
Location: Aalborg, Denmark
Posts: 39
Rep Power: 16 |
Yes, correct: only 2 nodes, an InfiniBand card in each, and no switch. Of course you can only do this with 2 nodes.
|
|
Tags |
infiniband, intel, mellanox, omni-path, xeon |