Parallel numerical integration (summation) in OpenMP
I recently started learning about parallel programming. I am still at the beginning, so I wanted to try something very simple. Since I'm interested in parallel numerical integration, I started with a simple Fortran summation code:
program par_hello_world
  use omp_lib
  implicit none
  integer, parameter :: bign = 1000000000
  integer :: i
  double precision :: start, finish, start1, finish1, a

  a = 0
  call cpu_time(start)
  !$OMP PARALLEL num_threads(8)
  !$OMP DO REDUCTION(+:a)
  do i = 1, bign
    a = a + sqrt(1.0**5)
  end do
  !$OMP END DO
  !$OMP END PARALLEL
  call cpu_time(finish)
  print *, 'parallel result:'
  print *, a
  print *, (finish - start)

  a = 0
  call cpu_time(start1)
  do i = 1, bign
    a = a + sqrt(1.0**5)
  end do
  call cpu_time(finish1)
  print *, 'sequential result:'
  print *, a
  print *, (finish1 - start1)
end program
The code basically simulates a summation. I used the odd expression sqrt(1.0**5) to make the work per iteration large enough to measure; if I just added 1, the computation time was so small that I could not compare the serial code against the parallel one. I tried to avoid a race condition by using the REDUCTION clause.
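If I understand correctly, REDUCTION(+:a) gives each thread its own private partial sum and combines them at the end, roughly like this manual sketch I wrote for myself (a_local is a name I made up for the per-thread partial sum):

```fortran
! Sketch of what I understand REDUCTION(+:a) to do under the hood.
! a_local is a hypothetical per-thread private accumulator.
double precision :: a_local
a = 0
!$OMP PARALLEL PRIVATE(a_local)
a_local = 0
!$OMP DO
do i = 1, bign
  a_local = a_local + sqrt(1.0**5)  ! each thread updates only its own copy
end do
!$OMP END DO
!$OMP CRITICAL
a = a + a_local                     ! partial sums combined one thread at a time
!$OMP END CRITICAL
!$OMP END PARALLEL
```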
However, I am getting very strange timing results:
- If I increase the number of threads from 2 to 16, the computational time does not go down; it even increases.
- Incredibly, the timing of the sequential code also seems to be affected by the choice of thread count (I really don't understand why!); in particular, it goes up as I raise the number of threads.
- I am getting the correct result for the variable a.
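One thing I was unsure about is whether cpu_time reports wall-clock time or total CPU time summed over all threads. As a cross-check, a variant of the parallel timing using omp_get_wtime (which, as far as I know, returns elapsed wall-clock time) would look like this sketch:

```fortran
! Sketch: timing the same loop with wall-clock time instead of cpu_time.
double precision :: t0, t1
a = 0
t0 = omp_get_wtime()
!$OMP PARALLEL DO REDUCTION(+:a) num_threads(8)
do i = 1, bign
  a = a + sqrt(1.0**5)
end do
!$OMP END PARALLEL DO
t1 = omp_get_wtime()
print *, 'wall-clock time:', t1 - t0
```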
I think I am doing something very wrong, but I can't figure out what.