src: https://gist.github.com/Brainiarc7/6d6c3f23ea057775b72c52817759b25c
Building Tensorflow from source on Ubuntu 16.04LTS for maximum performance.
That's the claim, but is it really true..
2080ti cuda benchmark by using NVIDIA sample
ref url: https://www.pugetsystems.com/labs/hpc/TitanXp-vs-GTX1080Ti-for-Machine-Learning-937/
nbody -benchmark -numbodies=256000
2080Ti
--------------------------------------
> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
MapSMtoCores for SM 7.5 is undefined. Default to use 64 Cores/SM
GPU Device 0: "GeForce RTX 2080 Ti" with compute capability 7.5
> Compute 7.5 CUDA device: [GeForce RTX 2080 Ti]
number of bodies = 256000
256000 bodies, total time for 10 iterations: 1225.979 ms
= 534.561 billion interactions per second
= 10691.212 single-precision GFLOP/s at 20 flops per interaction
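The figures above can be sanity-checked by hand. Assuming the CUDA nbody sample counts N² pairwise interactions per iteration and charges 20 flops per interaction (the factor it prints itself), a quick sketch:

```python
# Sanity-check the reported nbody numbers from the 2080 Ti run above.
# Assumption: the sample does N^2 pairwise interactions per iteration,
# at 20 flops per interaction (the factor printed in its own output).
n_bodies = 256_000
iterations = 10
total_time_s = 1.225979          # 1225.979 ms reported above

interactions = n_bodies ** 2 * iterations
interactions_per_sec = interactions / total_time_s
gflops = interactions_per_sec * 20 / 1e9   # 20 flops per interaction

print(f"{interactions_per_sec / 1e9:.1f} billion interactions/s")
print(f"{gflops:.1f} GFLOP/s")
```

This lands within rounding of the reported 534.561 billion interactions/s and 10691.212 GFLOP/s, so the tool's accounting is consistent with its own timing.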
Nvidia driver: 410.73
cuda: 9.0
g/c: 2080ti
cpu: 8700k
m/b: ga-z370-hd3
ram: dominator 3466MHz (XMP applied)
My previous GPU
- Titan Xp
--------------------------------------
> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
gpuDeviceInit() CUDA Device [0]: "Graphics Device"
> Compute 6.1 CUDA device: [Graphics Device]
number of bodies = 256000
256000 bodies, total time for 10 iterations: 1611.591 ms
= 406.654 billion interactions per second
= 8133.082 single-precision GFLOP/s at 20 flops per interaction
(The range of GFLOP/s was from 8066.468 to 8133.082)
Nvidia driver: 375.26
cuda: 8.0
g/c: titan xp
cpu: 8700k
m/b: ga-z370-hd3
ram: dominator 3466MHz (XMP applied)
* GTX 680 showed "1606.946 single-precision GFLOP/s at 20 flops per interaction"
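For a rough comparison across the three cards, dividing the reported GFLOP/s figures gives the relative speedups (using the best Titan Xp number from its observed range):

```python
# Relative nbody throughput, from the GFLOP/s figures reported above.
rtx_2080ti = 10691.212
titan_xp = 8133.082    # best of the observed 8066.468..8133.082 range
gtx_680 = 1606.946

speedup_vs_xp = rtx_2080ti / titan_xp
speedup_vs_680 = rtx_2080ti / gtx_680

print(f"2080 Ti vs Titan Xp: {speedup_vs_xp:.2f}x")   # -> 1.31x
print(f"2080 Ti vs GTX 680:  {speedup_vs_680:.2f}x")  # -> 6.65x
```

Note the driver and CUDA versions differ between the runs (410.73/CUDA 9.0 vs 375.26/CUDA 8.0), so this is only a rough like-for-like comparison.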
At least one of the things you only come to know once you become a parent, it seems,
is that when you think of your child,
only the happy memories and the memories you feel sorry about
keep coming to mind, taking turns.
source backup