Deliberately slow MATLAB function?

I want to write a really, really slow program in MATLAB. Something like O(2^n), or worse. It should finish, and it should be deterministically slow, so no "if rand() == 123123, exit!" tricks. It sounds crazy, but it really is for a distributed systems test. I need to create a .m file, compile it (with MCC), and then run it on my distributed system to do some debugging.

The program has to be doing work the whole time, so sleep() is not a valid option.

I tried making a large random matrix and finding its inverse, but it was too fast. Any ideas?

+2




11 replies


This naive discrete Fourier transform implementation takes ~9 seconds for a 2048-element input vector x on my 1.86 GHz single-core machine. Going to 4096 inputs increases the time to ~35 seconds, close to the 4x I would expect for O(N^2). I don't have the patience to try longer inputs :)

function y = SlowDFT(x)
% Naive O(N^2) DFT: each output bin is a direct sum over all input samples.
t = cputime;
y = zeros(size(x));
for c1=1:length(x)
    for c2=1:length(x)
        y(c1) = y(c1) + x(c2)*(cos((c1-1)*(c2-1)*2*pi/length(x)) - ...
                            1j*sin((c1-1)*(c2-1)*2*pi/length(x)));
    end
end
disp(cputime-t);

      

EDIT: Or, if you want to hit memory harder than the CPU:



function y = SlowDFT_MemLookup(x)
% Same naive O(N^2) DFT, but the twiddle factors come from a precomputed
% cosine table; sine values are read from the same table with a 3N/4 offset,
% since cos(t + 3*pi/2) = sin(t).
t = cputime;
y = zeros(size(x));
cosbuf = cos((0:1:(length(x)-1))*2*pi/length(x));
for c1=1:length(x)
    cosctr = 1;
    sinctr = round(3*length(x)/4)+1;
    for c2=1:length(x)
         y(c1) = y(c1) + x(c2)*(cosbuf(cosctr) ...
                            -1j*cosbuf(sinctr));
         cosctr = cosctr + (c1-1);
         if cosctr > length(x), cosctr = cosctr - length(x); end
         sinctr = sinctr + (c1-1);
         if sinctr > length(x), sinctr = sinctr - length(x); end
    end
end
disp(cputime-t);

      

This is faster than computing sin and cos on every iteration. A 2048-element input took ~3 seconds, and a 16384-element input took ~180 seconds.

+4




Count to 2^n. Optionally, make a slow function call on each iteration.
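A minimal sketch of that idea in MATLAB (untested; n is just a knob you would tune):

n = 30;                       % roughly 2^n iterations; raise n to slow it down
count = 0;
while count < 2^n
    count = count + 1;        % trivially cheap; swap in a slower call here if you like
end
disp(count)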



+4




If you want real work that is easy to tune and stresses the CPU more than memory:

  • Large dense matrix inversion (you said it was not slow enough, so scale it up; see the sketch below).
  • Factor an RSA number.
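For example, the matrix-inversion option can be scaled up like this (a rough sketch; N and reps are arbitrary knobs):

N = 4000;                    % matrix size: the inversion cost grows roughly as O(N^3)
reps = 20;                   % repetitions: a linear knob on total runtime
tic
for k = 1:reps
    A = rand(N) + N*eye(N);  % diagonally dominant, so the inverse is well behaved
    B = inv(A);              % the inversion itself is the work
end
toc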
+4




How about using inv? It has been reported to be rather slow.

+3




I don't speak MATLAB, but something equivalent to the following might work.

loops = 0
counter = 0
while (loops < MAX_INT) {
    counter = counter + 1;
    if (counter == MAX_INT) {
        loops = loops + 1;
        counter = 0;
    }
}

      

This will iterate MAX_INT * MAX_INT times. You can put something computationally heavy inside the loop to make it take even longer if that's not enough.
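In MATLAB that might look something like this (untested sketch; MAX_COUNT is a made-up constant to tune):

MAX_COUNT = 1e6;             % total work is MAX_COUNT^2 iterations, so this grows very fast
loops = 0;
counter = 0;
while loops < MAX_COUNT
    counter = counter + 1;
    if counter == MAX_COUNT
        loops = loops + 1;
        counter = 0;
    end
end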

+2




Do some looping work. You can adjust the time it takes to complete using the number of loop iterations.
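For example (a sketch only; iters is the knob):

iters = 5e8;             % increase to make the run longer
s = 0;
for k = 1:iters
    s = s + sin(k);      % a bit of floating-point work on every pass
end
disp(s)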

+2




Easy! Go back to your Turing roots and think about processes that are O(2^n) or worse.

Here's a pretty simple one (warning: untested, but you get the point):

N = 12; radix = 10;
odometer = zeros(N, 1);            % N digits, each in 0..radix-1
done = false;
while (~done)                      % runs radix^N times: one step per digit combination
    done = true;
    for i = 1:N
        odometer(i) = odometer(i) + 1;
        if (odometer(i) >= radix)
            odometer(i) = 0;       % carry into the next digit
        else
            done = false;
            break;
        end
    end
end

      

Better yet, how about computing Fibonacci numbers recursively? The runtime is O(2^N), since fib(N) has to make two calls, fib(N-1) and fib(N-2), but the stack depth is only O(N), since only one of those calls is active at a time.

function y = fib(n)
   if (n <= 1)
      y = 1;
   else
      y = fib(n-1) + fib(n-2);
   end
end

      

+2




You could call factor(X) for a large enough X.
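For example (a sketch; at this size it is still quick, and how large an X factor accepts depends on your MATLAB version):

X = 2^32 - 5;            % push X as large as your version of factor allows
tic
f = factor(X);           % time the factorization
toc
disp(f)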

+1




You can also check whether a given input is prime by simply dividing it by all the smaller numbers. Do that for every number up to n and you get O(n^2).
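A naive version of that check might look like this (sketch; isPrimeNaive is a made-up name):

function p = isPrimeNaive(n)
% Trial division: try every smaller number as a divisor.
p = (n > 1);
for d = 2:n-1
    if mod(n, d) == 0
        p = false;
        break;
    end
end
end

Calling it for every number from 2 up to some bound gives the quadratic behaviour described above.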

+1




Try the following:

tic
isprime( primes(99999999) );
toc

      


EDIT

For a broader set of tests, run the following (possibly for multiple repetitions):

disp(repmat('-',1,85))
disp(['MATLAB Version ' version])
disp(['Operating System: ' system_dependent('getos')])
disp(['Java VM Version: ' version('-java')]);
disp(['Date: ' date])
disp(repmat('-',1,85))

N = 3000;   % matrix size
A = rand(N,N);  
A = A*A;

tic;  A*A; t=toc;
fprintf('A*A \t\t\t%f sec\n', t)

tic; [L,U,P] = lu(A); t=toc; clear L U P
fprintf('LU(A)\t\t\t%f sec\n', t)

tic; inv(A); t=toc;
fprintf('INV(A)\t\t\t%f sec\n', t)

tic; [U,S,V] = svd(A); t=toc; clear U S V
fprintf('SVD(A)\t\t\t%f sec\n', t)

tic; [Q,R,P] = qr(A); t=toc; clear Q R P
fprintf('QR(A)\t\t\t%f sec\n', t)

tic; [V,D] = eig(A); t=toc; clear V D
fprintf('EIG(A)\t\t\t%f sec\n', t)

tic; det(A); t=toc;
fprintf('DET(A)\t\t\t%f sec\n', t)

tic; rank(A); t=toc;
fprintf('RANK(A)\t\t\t%f sec\n', t)

tic; cond(A); t=toc;
fprintf('COND(A)\t\t\t%f sec\n', t)

tic; sqrtm(A); t=toc;
fprintf('SQRTM(A)\t\t%f sec\n', t)

tic; fft(A(:)); t=toc;
fprintf('FFT\t\t\t%f sec\n', t)

tic; isprime(primes(10^7)); t=toc;
fprintf('Primes\t\t\t%f sec\n', t)

      

Below are the results on my machine using N = 1000 and only one iteration (note that primes used 10^7 as the upper bound, NOT 10^8, which takes much longer!):

A*A             0.178329 sec
LU(A)           0.118864 sec
INV(A)          0.319275 sec
SVD(A)          15.236875 sec
QR(A)           0.841982 sec
EIG(A)          3.967812 sec
DET(A)          0.121882 sec
RANK(A)         1.813042 sec
COND(A)         1.809365 sec
SQRTM(A)        22.750331 sec
FFT             0.113233 sec
Primes          27.080918 sec

      

+1




This will run the CPU at 100% for WANTED_TIME seconds:

WANTED_TIME = 2^n; % seconds
t0=cputime; 
t=cputime; 
while (t-t0 < WANTED_TIME)
    t=cputime; 
end;

      

+1








