Optimizing mathematics on arrays of floats in Ada 95 with GNAT

Posted by mat_geek on Stack Overflow. Published on 2010-03-26.

Consider the code below. It is supposed to process data at a fixed rate, in one-second batches. It is part of an overall system and can't take up too much time.

When run over 100 lots of one second's worth of data, the program takes 35 seconds, i.e. 35% of real time. How do I improve the code to get the processing time down to a minimum?

The code will be running on an Intel Pentium-M, which is essentially a P3 with SSE2.
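For reference, a build command that targets this chip with GNAT/GCC would look roughly like the following (main.adb is just a placeholder name; the switches after -cargs go to the GCC back end and assume it supports SSE code generation):

gnatmake -O2 -gnatp main.adb -cargs -march=pentium-m -msse2 -mfpmath=sse

Here -gnatp suppresses the Ada run-time checks in the hot loop, and -msse2 with -mfpmath=sse makes GCC use scalar SSE2 rather than the x87 stack for Float arithmetic.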

with Ada.Numerics.Generic_Elementary_Functions;

package FF is new Ada.Numerics.Generic_Elementary_Functions (Float);

N : constant Integer := 820;

type A  is array (1 .. N) of Float;
type A3 is array (1 .. 3) of A;

--  MAX and MIN clamp the argument passed to Log; their declarations are
--  elsewhere in the program and are not shown here.

procedure F (state  : in out A3;
             result :    out A3;
             l      : in     A;
             r      : in     A)
is
   s : Float;
   t : Float;
begin
   for i in 1 .. N loop
      --  Average the two inputs and update the three running states.
      t := (l (i) + r (i)) / 2.0;
      state (1)(i) := t;
      state (2)(i) := t * 0.25 + state (2)(i) * 0.75;
      state (3)(i) := t * 1.0 / 64.0 + state (2)(i) * 63.0 / 64.0;

      --  Map each state value through x**6 + 14, clamp, then take log base 2.
      for j in 1 .. 3 loop
         s := state (j)(i);
         t := FF."**" (s, 6.0) + 14.0;
         if t > MAX then
            t := MAX;
         elsif t < MIN then
            t := MIN;
         end if;
         result (j)(i) := FF.Log (t, 2.0);
      end loop;
   end loop;
end F;
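One direction I have been considering (just a sketch, nothing I have measured yet): the two elementary-function calls per element look like the expensive part. FF."**"(s, 6.0) with a Float exponent is normally evaluated through Exp and Log, whereas the predefined integer exponent s ** 6 is only a handful of multiplications; for an even exponent the value is the same, the only difference being that the elementary-functions "**" raises Argument_Error for a negative base. Similarly, FF.Log(t, 2.0) is Log(t) / Log(2.0), so the 1.0 / Log(2.0) factor can be hoisted out of the loops (Inv_Log_2 below is a constant I would add for that). The inner loop would then become something like:

   Inv_Log_2 : constant Float := 1.0 / FF.Log (2.0);  --  hoisted out of both loops

   --  ... and, inside the per-element loop:
   for j in 1 .. 3 loop
      s := state (j)(i);
      --  Predefined "**" with an Integer exponent: a few multiplications
      --  instead of the Exp/Log pair behind FF."**" (s, 6.0).
      t := s ** 6 + 14.0;
      if t > MAX then
         t := MAX;
      elsif t < MIN then
         t := MIN;
      end if;
      --  Log (X, Base) = Log (X) / Log (Base): multiply by the
      --  precomputed reciprocal instead of calling the two-argument Log.
      result (j)(i) := FF.Log (t) * Inv_Log_2;
   end loop;

Even with that change there is still a Log call and a clamp per element, so suggestions on the overall structure are very welcome.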

© Stack Overflow or respective owner
