Viability of Sun T5120 (UltraSPARC T2) for CFD |
February 8, 2017, 18:18 |
Viability of Sun T5120 (UltraSPARC T2) for CFD
#1 |
New Member
Join Date: Feb 2017
Posts: 1
Rep Power: 0 |
Hi everyone! I recently noticed that some old UltraSPARC servers can now be found online for under $200, so I decided to investigate whether one of these machines is a viable option for CFD calculations in terms of performance per dollar. The Sun T5120 seemed like the obvious choice, both because the machines go for little money and because the T2 has already been given a preliminary benchmark on numerically intensive tasks by these guys:
https://doc.itc.rwth-aachen.de/displ...multiplication
The results shown there look somewhat promising, I think. So I bought a Sun T5120 for around $200 and installed Debian 9 testing (sparc64 2016-11-25 ISO). The system has the following specs: Code:
$ uname -a
Linux antares 4.5.0-2-sparc64-smp #1 SMP Debian 4.5.2-1 (2016-04-28) sparc64 GNU/Linux
Code:
$ lscpu
Architecture:          sparc64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Big Endian
CPU(s):                64
On-line CPU(s) list:   0-63
Thread(s) per core:    4
Core(s) per socket:    16
Socket(s):             1
Flags:                 sun4v
Code:
$ cat /proc/meminfo
MemTotal:        33053792 kB
MemFree:         26013512 kB
MemAvailable:    32269728 kB
Buffers:           401384 kB
Cached:           5766488 kB
SwapCached:          2400 kB
Active:           4353928 kB
Inactive:         2068008 kB
Active(anon):      397096 kB
Inactive(anon):    155960 kB
Active(file):     3956832 kB
Inactive(file):   1912048 kB
Unevictable:            0 kB
Mlocked:                0 kB
SwapTotal:       16410384 kB
SwapFree:        16401288 kB
Dirty:               3840 kB
Writeback:              0 kB
AnonPages:         255800 kB
Mapped:            100000 kB
Shmem:             295176 kB
Slab:              532072 kB
SReclaimable:      479112 kB
SUnreclaim:         52960 kB
KernelStack:        10576 kB
PageTables:          6744 kB
NFS_Unstable:           0 kB
Bounce:                 0 kB
WritebackTmp:           0 kB
CommitLimit:     32937280 kB
Committed_AS:     1271808 kB
VmallocTotal:    103075020800 kB
VmallocUsed:            0 kB
VmallocChunk:           0 kB
AnonHugePages:          0 kB
HugePages_Total:        0
HugePages_Free:         0
HugePages_Rsvd:         0
HugePages_Surp:         0
Hugepagesize:        8192 kB
Code:
$ lspci
02:00.0 PCI bridge: PLX Technology, Inc. PEX 8533 32-lane, 6-port PCI Express Switch (rev aa)
03:01.0 PCI bridge: PLX Technology, Inc. PEX 8533 32-lane, 6-port PCI Express Switch (rev aa)
03:02.0 PCI bridge: PLX Technology, Inc. PEX 8533 32-lane, 6-port PCI Express Switch (rev aa)
03:08.0 PCI bridge: PLX Technology, Inc. PEX 8533 32-lane, 6-port PCI Express Switch (rev aa)
03:09.0 PCI bridge: PLX Technology, Inc. PEX 8533 32-lane, 6-port PCI Express Switch (rev aa)
04:00.0 PCI bridge: PLX Technology, Inc. PEX 8517 16-lane, 5-port PCI Express Switch (rev ac)
05:01.0 PCI bridge: PLX Technology, Inc. PEX 8517 16-lane, 5-port PCI Express Switch (rev ac)
05:02.0 PCI bridge: PLX Technology, Inc. PEX 8517 16-lane, 5-port PCI Express Switch (rev ac)
05:03.0 PCI bridge: PLX Technology, Inc. PEX 8517 16-lane, 5-port PCI Express Switch (rev aa)
06:00.0 PCI bridge: PLX Technology, Inc. PEX8112 x1 Lane PCI Express-to-PCI Bridge (rev aa)
07:00.0 USB controller: NEC Corporation OHCI USB Controller (rev 43)
07:00.1 USB controller: NEC Corporation OHCI USB Controller (rev 43)
07:00.2 USB controller: NEC Corporation uPD72010x USB 2.0 Controller (rev 04)
08:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
08:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
09:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
09:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
0a:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1068E PCI-Express Fusion-MPT SAS (rev 04)
0b:00.0 PCI bridge: PLX Technology, Inc. PEX 8533 32-lane, 6-port PCI Express Switch (rev aa)
0c:01.0 PCI bridge: PLX Technology, Inc. PEX 8533 32-lane, 6-port PCI Express Switch (rev aa)
0c:02.0 PCI bridge: PLX Technology, Inc. PEX 8533 32-lane, 6-port PCI Express Switch (rev aa)
0c:08.0 PCI bridge: PLX Technology, Inc. PEX 8533 32-lane, 6-port PCI Express Switch (rev aa)
0c:09.0 PCI bridge: PLX Technology, Inc. PEX 8533 32-lane, 6-port PCI Express Switch (rev aa)
0c:0a.0 PCI bridge: PLX Technology, Inc. PEX 8533 32-lane, 6-port PCI Express Switch (rev aa)
Code:
$ gcc --version
gcc (Debian 6.3.0-5) 6.3.0 20170124
Copyright (C) 2016 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Compiler flags: Code:
-O3 -m64 -mcpu=niagara2 -mvis2 -ftree-vectorize
I put together this test quickly, using only duct tape, glue, and paper clips. If this for-fun project gets enough interest from some of y'all, I may come up with a better test case. The test geometry is a pipe with a diameter d = 10 mm and a length L = 500 mm. The mesh was generated with gmsh and has 139944 cells. Here is part of the gmshToFoam command output: Code:
Cells:
    total:139944
    hex  :0
    prism:0
    pyr  :0
    tet  :139944

CellZones:
Zone    Size
0       139944

Skipping tag at line 207727
Patch 0 gets name patch0
Patch 1 gets name patch1
Patch 2 gets name patch2
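The exact gmsh invocation isn't shown above, but the conversion step boils down to something like this (pipe.geo is a placeholder name; the real geometry file and meshing options may differ): Code:
# mesh the geometry in 3D with gmsh (placeholder file name)
gmsh -3 pipe.geo -o pipe.msh

# convert the gmsh mesh into the OpenFOAM case (run from the case directory);
# this prints output like the block above, and the resulting patch0/1/2
# then need boundary conditions in 0/U and 0/p
gmshToFoam pipe.msh
Next, the controlDict: Code: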
application       icoFoam;
startFrom         startTime;
startTime         0;
stopAt            endTime;
endTime           150;
deltaT            0.1;
writeControl      timeStep;
writeFrequency    1;
purgeWrite        0;
writeFormat       ascii;
writePrecision    6;
writeCompression  off;
timeFormat        general;
timePrecision     6;
runTimeModifiable true;
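One thing to keep in mind when reading the timings below: with writeControl timeStep and a write every single step, the run dumps roughly 1500 ASCII snapshots (endTime 150 / deltaT 0.1), so part of the wall-clock time is disk I/O rather than number crunching. For a pure solver benchmark, something along these lines would write only the final time (just a sketch; writeInterval is the keyword name I am used to, adjust to your version): Code:
writeControl    runTime;
writeInterval   150;      // write only at t = 150, i.e. the end of the run
writeFormat     binary;   // binary output is also much cheaper than ascii
Next, the decomposeParDict: Code: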
numberOfSubdomains 64;

method          simple;

simpleCoeffs
{
    n           (64 1 1);
    delta       0.001;
}
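A side note on the decomposition: n (64 1 1) slices the pipe into 64 slabs along one direction, which for roughly 140k cells means only about 2200 cells per subdomain and a lot of processor patches. If someone wants to experiment, the scotch method needs no direction counts at all; a sketch (not what I used for the timings below): Code:
numberOfSubdomains 64;

method          scotch;   // graph-based decomposition, no n (x y z) required
And the fvSolution solver settings: Code: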
solvers
{
    p
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-06;
        relTol          0.05;
    }

    pFinal
    {
        $p;
        relTol          0;
    }

    U
    {
        solver          smoothSolver;
        smoother        symGaussSeidel;
        tolerance       1e-05;
        relTol          0;
    }
}

PISO
{
    nCorrectors              2;
    nNonOrthogonalCorrectors 2;
    pRefCell                 0;
    pRefValue                0;
}
Those are the solver settings I used for this problem.

RESULTS

This is how I ran the case on the T5120: Code:
mpirun -np 64 icoFoam -parallel
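For completeness, the full sequence looks roughly like this (decomposePar is implied by the 64 subdomains but was not shown above; the --bind-to core option is just a suggestion for a chip with 4 hardware threads per core, not something I have verified helps): Code:
# split the case into 64 processor directories as set in decomposeParDict
decomposePar

# run the solver in parallel and keep a log; binding ranks to cores is
# optional and worth experimenting with on the T2
mpirun --bind-to core -np 64 icoFoam -parallel > log.icoFoam 2>&1

# stitch the processor directories back together for post-processing
reconstructPar
Run times: Code: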
403 seconds
Code:
473 seconds
Code:
743 seconds
For comparison, the run time on my laptop (Intel T7300) was Code:
1006 seconds
Hopefully someone (a grad student with little funding) finds this information useful and can make a more educated decision when considering buying a now-cheap T5120 for CFD.
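To put the numbers in perspective: taking the fastest T5120 run above, 1006 / 403 ≈ 2.5, so the $200 box finished this case roughly 2.5 times faster than the T7300 laptop, and even the slowest run still beat it (1006 / 743 ≈ 1.35).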
February 9, 2017, 06:36 |
#2 |
Senior Member
Hrvoje Jasak
Join Date: Mar 2009
Location: London, England
Posts: 1,907
Rep Power: 33 |
Hi,
It actually looks much better than I thought. I would still buy low-core-count, high-clock Intel chips, but that does not account for the budget constraint. In any case, it is really cool to see FOAM on a Sun again, since a lot of the early-1990s work was done on Sun (and SGI) workstations. Oh, the good old times... Good luck, Hrv
__________________
Hrvoje Jasak Providing commercial FOAM/OpenFOAM and CFD Consulting: http://wikki.co.uk |
February 10, 2017, 13:42 |
#3 |
Senior Member
Paulo Vatavuk
Join Date: Mar 2009
Location: Campinas, Brasil
Posts: 200
Rep Power: 18 |
Hi Dr Jasak,
Can you explain why a low core count is good? Isn't a Core i7 better than an i5? Best Regards, Paulo
Tags |
cfd, performance, ultrasparc |