Search Results

Search found 950 results on 38 pages for 'floating'.

  • Floating point precision in Visual C++

    - by Luigi Giaccari
    Hi, I am trying to use the robust predicates for computational geometry from Jonathan Richard Shewchuk. I am not a programmer, so I am not even sure of what I am saying; I may be making some basic mistake. The point is that the predicates should allow for precise arithmetic with adaptive floating point precision. On my computer, an Asus pro31/S (Core Duo Centrino processor), they do not work. The problem may lie in the fact that my computer uses some improvement in floating point precision that conflicts with the one used by Shewchuk. The author says: /* On some machines, the exact arithmetic routines might be defeated by the use of internal extended precision floating-point registers. Sometimes this problem can be fixed by defining certain values to be volatile, thus forcing them to be stored to memory and rounded off. This isn't a great solution, though, as it slows the arithmetic down. */ Now what I would like to know is whether there is a way, maybe some compiler option, to turn off the internal extended precision floating-point registers. I really appreciate your help.
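
    One commonly suggested direction (a sketch only, not Shewchuk's own recommendation) is to force the FPU to round intermediates to plain double precision so values held in registers behave like values stored to memory. On MSVC that can be done with the /fp:strict compiler option, or at runtime with _controlfp_s from <float.h>:

        #include <float.h>   // _controlfp_s, _PC_53, _MCW_PC (MSVC-specific)

        int main() {
            unsigned int control;
            // Limit x87 intermediate precision to 53 bits (plain double) so results
            // round the same way in registers as they do when stored to memory.
            _controlfp_s(&control, _PC_53, _MCW_PC);
            // ... initialize and call the predicates after this point ...
            return 0;
        }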

    Read the article

  • ray collision with rectangle and floating point accuracy

    - by phq
    I'm trying to solve a problem with a ray bouncing on a box. Actually it is a sphere, but for simplicity the box dimensions are expanded by the sphere radius when doing the collision test, reducing the sphere to a single ray. The test is done by projecting the ray onto all faces of the box and picking the one that is closest. However, because I'm using floating point variables, I fear that the point projected onto the surface might be interpreted as being below it in the next iteration; I will also later allow the sphere to move, which might make that scenario more likely. Also, the bounce coefficient might be as low as zero, making the sphere continue along the surface. So my naive solution is to project not only forwards but also backwards to catch those cases. That is where I got into the problem shown in the figure: in the first iteration the first black arrow is calculated and we end up at a point on the surface of the box. In the second iteration the "back projection" hits the other surface, making the second black arrow bounce on the wrong surface. If there are several boxes close to each other this has further consequences, making the sphere fall through them all. So my main question is how to handle possible floating point inaccuracy when placing the sphere on the box surface so that it does not fall through. In writing this question I got the idea of having a threshold, accepting back projections only up to a certain amount that is much smaller than the box but larger than the possible accuracy limitation; this would only cause the "false" back projection when the sphere hits the box on an edge, which would appear natural. To clarify my original approach: the arrows shown in the image are not only the path the sphere travels but also represent a single time step in the simulation. In reality the time step is much smaller, about 0.05 of the box size. The path traveled is projected onto possible sides to avoid traveling past a thinner object at higher speeds. In normal situations the floating point accuracy is not an issue, but there are two situations where I have the concern: when the new position at the end of the time step is located very close to the surface (very unlikely, though), and when using a bounce factor of 0, where it happens every time the sphere hits a box. Adding to the loss of accuracy, and the motivation for my concern, is that the sphere and box are in different coordinate systems and thus the sphere location is transformed for every test. This last point is why I'm not willing to rely on luck that one floating point value lying on top of the box will always be interpreted the same way. I did not know Voronoi regions by name, but looking at them I'm not sure how they would be used in a projection scenario like the one I'm using here.
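
    A minimal sketch of that thresholded back-projection idea, with an illustrative tolerance; the helper name and the scale factor are assumptions, not part of the original code:

        #include <cmath>

        // Sketch: accept a candidate hit along the motion direction only if it lies at most
        // a tiny distance behind the start point, so genuine forward hits and small numerical
        // overshoots are kept while real backward hits on other faces are rejected.
        bool acceptHit(double tAlongStep, double boxSize) {
            const double eps = 1e-6 * boxSize;   // well below the ~0.05 * boxSize time step
            return tAlongStep > -eps;
        }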

    Read the article

  • Floating point equality and tolerances

    - by doron
    Comparing two floating point numbers with something like a_float == b_float is looking for trouble, since a_float / 3.0 * 3.0 might not be equal to a_float due to round-off error. What one normally does instead is something like fabs(a_float - b_float) < tol. How does one calculate tol? Ideally the tolerance should be just larger than the value of the last one or two least significant figures. So if single precision floating point numbers are used, tol = 10E-6 should be about right. However, this does not work well for the general case where a_float might be very small or very large. How does one calculate tol correctly for all general cases? I am interested in C or C++ cases specifically.
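
    A common pattern (a sketch, not the only answer) is to scale the tolerance by the magnitude of the operands, with a small absolute floor for values near zero:

        #include <cmath>
        #include <algorithm>

        // Sketch: true when a and b agree to within relTol of their magnitude,
        // or within absTol when both values are close to zero.
        bool nearlyEqual(double a, double b,
                         double relTol = 1e-12, double absTol = 1e-15) {
            double diff  = std::fabs(a - b);
            double scale = std::max(std::fabs(a), std::fabs(b));
            return diff <= std::max(absTol, relTol * scale);
        }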

    Read the article

  • Floating point inaccuracy examples

    - by David Rutten
    How do you explain floating point inaccuracy to fresh programmers and laymen who still think computers are infinitely wise and accurate? Do you have a favourite example or anecdote which seems to get the idea across much better than a precise, but dry, explanation? How is this taught in Computer Science classes?
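
    One stock demonstration (a sketch in C++, though it looks much the same in most languages) is that 0.1 has no exact binary representation, so even a trivial sum is off in the last digits:

        #include <cstdio>

        int main() {
            double sum = 0.1 + 0.2;
            std::printf("%.17g\n", sum);       // prints 0.30000000000000004
            std::printf("%d\n", sum == 0.3);   // prints 0: the sum is not exactly 0.3
            return 0;
        }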

    Read the article

  • Manipulating and comparing floating points in java

    - by Praneeth
    In Java, floating point values are not represented precisely. For example, the following snippet of code: float a = 1.2f; float b = 3.0f; float c = a * b; if (c == 3.6) { System.out.println("c is 3.6"); } else { System.out.println("c is not 3.6"); } actually prints "c is not 3.6". I'm not interested in precision beyond 3 decimals (#.###). How can I deal with this problem to multiply floats and compare them reliably? Thanks much
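
    If only three decimal places matter, one workaround is to round both sides to that precision before comparing. A sketch of the idea (written in C++ here to match the other examples; in Java the same thing can be done with Math.round):

        #include <cmath>

        // Sketch: treat a and b as equal when they round to the same value
        // at three decimal places.
        bool equalTo3Decimals(float a, float b) {
            return std::lround(a * 1000.0f) == std::lround(b * 1000.0f);
        }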

    Read the article

  • Why are floating point values so prolific?

    - by Kibbee
    So, the title says it all. Why are floating point values so prolific in computer programming? Due to problems like rounding errors, and not being able to accurately represent even numbers such as 0.1, I really can't see how they got as far as they did. I understand that computation is faster with floating point numbers; however, I can think of only a few cases where they are actually the right data type to be using. If you sit back and think about every time you used a floating point value, how many times did you say, well, some error would be OK, as long as the result was a few microseconds faster? It really makes me think because Jeff was talking about NP completeness, and how heuristics give an answer that is kind of right. And well, computers shouldn't do that. They should give you the answer that is correct. Yet we see floating point values used in many applications where they are completely not valid. What really bugs me isn't that floating point exists, but that in many languages there isn't even a viable non-floating-point decimal alternative. A lot of programmers doing financial applications have to fall back to storing the number of cents in an integer field, which brings with it all kinds of other problems. Why do floats continue to be so prolific, even though they can't represent the real answer, and we expect computers to be accurate? [EDIT] Just to clarify, I was talking about base 2 floating point, not base 10 floating point. .NET offers the Decimal data type, which is a base 10 floating point value that offers a much better representation of the numbers we deal with on a daily basis in most computer programs. I find it hard to believe that even modern languages like Java don't support base 10 floating point values, unless you want to move into the realm of things like BigDecimal, which isn't really the right answer either in a lot of situations.

    Read the article

  • C++ floating point precision

    - by Davinel
    double a = 0.3; std::cout.precision(20); std::cout << a << std::endl; result: 0.2999999999999999889 double a, b; a = 0.3; b = 0; for (char i = 1; i <= 50; i++) { b = b + a; } std::cout.precision(20); std::cout << b << std::endl; result: 15.000000000000014211 So 'a' is smaller than it should be, but if we add 'a' fifty times the result is bigger than it should be. Why is this? And how can I get the correct result in this case?

    Read the article

  • a floating toolbar in WTL

    - by freefallr
    I've created a multimedia app that uses DirectShow to display multiple media streams simultaneously. The app is a WTL MDI application. For video windows, I use a CWindowImpl-derived class, one per CChildFrame. I'd like to add controls to the video windows (volume controls etc.). I'd initially thought about adding a slider (volume) control and a couple of buttons to a context menu, but later thought that this might not be the best approach. I was looking at MS Word 2007, which has a floating toolbar that allows you to change options on highlighted text. I'd like to implement a similar floating toolbar for the video controls. I googled around a bit and found an old post about floating toolbars in WTL. The response was: for a floating toolbar, create a popup window and make its parent the main window. I think that this sounds like a reasonable approach. My questions: Is this a good approach, or is there a more standard approach for a floating toolbar now in WTL? Should I make the toolbar a child of the video window or of the CChildFrame that contains the video window, in order to ensure that it always remains on top of the video? How can I implement transparency in the floating toolbar, as in the floating toolbar in MS Word?
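
    For the transparency part specifically, a minimal sketch (assuming the toolbar is created as a popup window, as in the old post mentioned above) is to mark it as a layered window and give it an alpha value:

        #include <atlbase.h>
        #include <atlwin.h>

        // Sketch: make an already-created popup toolbar window semi-transparent.
        void MakeToolbarTranslucent(CWindow toolbar, BYTE alpha /* 0..255 */) {
            toolbar.ModifyStyleEx(0, WS_EX_LAYERED);
            ::SetLayeredWindowAttributes(toolbar, 0, alpha, LWA_ALPHA);
        }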

    Read the article

  • Comparing floating point values

    - by Sauron
    I just read a statement about floating point value comparison: "Floating point values shall not be compared using either the == or != operators. Most floating point values have no exact binary representation and have a limited precision." If so, what is the best method for comparing two floating point values?
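
    Besides the usual fabs(a - b) < tol check, another sketch of an approach is to count how many representable floats lie between the two values (a ULP comparison). The bit manipulation below assumes IEEE 754 single precision:

        #include <cstdint>
        #include <cstdlib>
        #include <cstring>
        #include <cmath>

        // Map a float's bit pattern onto an integer scale where adjacent
        // representable floats differ by exactly 1 and ordering is preserved.
        int64_t orderedKey(float f) {
            uint32_t u;
            std::memcpy(&u, &f, sizeof u);
            return (u & 0x80000000u) ? -static_cast<int64_t>(u & 0x7FFFFFFFu)
                                     : static_cast<int64_t>(u);
        }

        // Sketch: equal if a and b are at most maxUlps representable values apart.
        bool almostEqualUlps(float a, float b, int64_t maxUlps = 4) {
            if (std::isnan(a) || std::isnan(b)) return false;
            return std::llabs(orderedKey(a) - orderedKey(b)) <= maxUlps;
        }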

    Read the article

  • Is there a floating point value of x, for which x-x == 0 is false?

    - by Andrew Walker
    In most cases, I understand that a floating point comparison test should be implemented using a tolerance (abs(x - y) < epsilon), but does self-subtraction imply that the result will be zero? // can the assertion be triggered? float x = /* ? */; assert(x - x == 0); My guess is that NaN/inf might be special cases, but I'm more interested in what happens for simple values.
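
    A small sketch of the two families of cases: for any finite x, IEEE 754 arithmetic gives x - x == 0 exactly, while infinities and NaNs turn the subtraction into NaN, which compares unequal to everything, so the assertion would fire:

        #include <cassert>
        #include <cmath>
        #include <limits>

        int main() {
            float finite = 1.25e-30f;
            assert(finite - finite == 0.0f);                  // holds for every finite float

            float inf = std::numeric_limits<float>::infinity();
            float nan = std::numeric_limits<float>::quiet_NaN();
            assert(std::isnan(inf - inf));                    // inf - inf is NaN
            assert(!(nan - nan == 0.0f));                     // NaN compares unequal to 0
            return 0;
        }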

    Read the article

  • sIFR 3 no text wrap around floating images

    - by Jeremy Schultz
    Is sIFR supposed to wrap around floating images? I have some headings next to a large image with float:left, and the image bumps the headline down below it. Headings that aren't beside floating images do wrap properly, so the functionality is there; my question is whether or not sIFR 3 text wraps beside floating images. Jeremy

    Read the article

  • Does "epsilon" really guarantees anything in floating-point computations?!

    - by Michal Czardybon
    To keep the problem short, let's say I want to compute the expression a / (b - c) on floats. To make sure the result is meaningful, I can check whether 'b' and 'c' are unequal: float eps = std::numeric_limits<float>::epsilon(); if ((b - c) > eps || (c - b) > eps) { return a / (b - c); } But my tests show this is not enough to guarantee either meaningful results or not failing to provide a result when one is possible. Case 1: a = 1.0f; b = 0.00000003f; c = 0.00000002f; Result: the if condition is NOT met, but the expression would produce a correct result, 100000008 (to within float precision). Case 2: a = 1e33f; b = 0.000003f; c = 0.000002f; Result: the if condition is met, but the expression produces a meaningless result, +1.#INF00. I found it much more reliable to check the result, not the arguments: const float INF = std::numeric_limits<float>::infinity(); float x = a / (b - c); if (-INF < x && x < INF) { return x; } But what is epsilon for, then, and why does everyone say epsilon is good to use?
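
    Machine epsilon is the relative spacing of floats at 1.0, so a fixed comparison against it says little about operands far from 1. One sketch of a guard that scales epsilon by the operands' magnitude and still checks the result for overflow:

        #include <cmath>
        #include <limits>
        #include <algorithm>

        // Sketch: divide only when b - c is large relative to b and c themselves,
        // i.e. the subtraction has not cancelled away essentially all significant bits.
        bool safeDivide(float a, float b, float c, float& out) {
            const float eps  = std::numeric_limits<float>::epsilon();
            const float diff = b - c;
            if (std::fabs(diff) <= eps * std::max(std::fabs(b), std::fabs(c)))
                return false;              // b and c are equal to within rounding error
            out = a / diff;
            return std::isfinite(out);     // still reject overflow to +/-inf
        }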

    Read the article

  • C - floating point rounding

    - by hatorade
    I'm trying to understand how floating point numbers work. I think I'd like to test out what I know / need to learn by evaluating the following: I would like to find the smallest x such that x + 1 = x, where x is a floating point number. As I understand it, this would happen in the case where x is large enough that x + 1 is closer to x than to the next number higher than x representable in floating point. So intuitively it seems it would be the case where I don't have enough digits in the significand. Would this number x then be the number where the significand is all 1's? But then I can't seem to figure out what the exponent would have to be. Obviously it would have to be big (relative to 10^0, anyway).
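
    A quick experimental sketch: once x reaches 2 raised to the number of significand bits (2^24 for float, 2^53 for double), the spacing between consecutive representable values exceeds 1 and x + 1 rounds back to x:

        #include <cstdio>

        int main() {
            float  xf = 16777216.0f;           // 2^24: float carries a 24-bit significand
            double xd = 9007199254740992.0;    // 2^53: double carries a 53-bit significand
            std::printf("%d\n", xf + 1.0f == xf);                   // prints 1
            std::printf("%d\n", xd + 1.0 == xd);                    // prints 1
            std::printf("%d\n", (xf - 1.0f) + 1.0f == xf - 1.0f);   // prints 0: below 2^24 the +1 still shows
            return 0;
        }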

    Read the article

  • MySQL floating point comparison issues

    - by Sharief
    I ran into an issue after introducing floating point columns into the MySQL database schema: comparisons on floating point values don't always return the correct results. 1 - 50.12 2 - 34.57 3 - 12.75 4 - ...(rest all less than 12.00) SELECT COUNT(*) FROM `users` WHERE `points` > "12.75" This returns "3". I have read that comparing floating point values in MySQL is a bad idea and that the DECIMAL type is the better option. Do I have any hope of moving ahead with the FLOAT type and getting the comparisons to work correctly?

    Read the article

  • floating point hex octal binary

    - by workinprogress
    Hi, I am working on a calculator that allows you to perform calculations past the decimal point in octal, hexadecimal, binary, and of course decimal. I am having trouble though finding a way to convert floating point decimal numbers to floating point hexadecimal, octal, binary and vice versa. The plan is to do all the math in decimal and then convert the result into the appropriate number system. Any help, ideas or examples would be appreciated. Thanks!
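
    The usual algorithm for the fractional part is to repeatedly multiply by the target base and peel off the integer digit that crosses the point. A minimal sketch (the function name and digit limit are illustrative):

        #include <string>
        #include <cmath>

        // Sketch: expand the fractional part of value in the given base (2, 8 or 16),
        // producing at most maxDigits digits after the point.
        std::string fractionToBase(double value, int base, int maxDigits) {
            static const char digits[] = "0123456789ABCDEF";
            double frac = value - std::floor(value);
            std::string out;
            for (int i = 0; i < maxDigits && frac != 0.0; ++i) {
                frac *= base;
                int d = static_cast<int>(frac);   // the digit that just crossed the point
                out += digits[d];
                frac -= d;
            }
            return out;   // e.g. fractionToBase(0.625, 2, 8) == "101"
        }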

    Read the article

  • C99 and floating point environment

    - by yCalleecharan
    Hi, I was looking at the new features of C99 and saw the floating point environment header: #include <fenv.h> My question is simple. If I'm performing floating point computations, do I have to include the above preprocessor directive in my code? If not, what does this directive do and when does it become important to include it? Thanks a lot...
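
    It isn't needed for ordinary arithmetic; it matters when you want to inspect or change the floating point environment itself: rounding direction, exception flags, and so on. A small sketch, using C++'s <cfenv>, which mirrors C99's <fenv.h>:

        #include <cfenv>
        #include <cstdio>

        // #pragma STDC FENV_ACCESS ON  // formally required when manipulating the FP environment

        int main() {
            volatile double one = 1.0, three = 3.0, zero = 0.0;

            std::fesetround(FE_UPWARD);              // change the rounding direction
            std::printf("%.20f\n", one / three);     // now rounded up instead of to nearest

            std::feclearexcept(FE_ALL_EXCEPT);
            volatile double r = one / zero;          // sets the FE_DIVBYZERO flag
            (void)r;
            if (std::fetestexcept(FE_DIVBYZERO))
                std::printf("division by zero was flagged\n");
            return 0;
        }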

    Read the article

  • Ruby BigDecimal sanity check (floating point newb)

    - by Andy
    Hello, hoping to get some feedback from someone more experienced here. I haven't dealt with the dreaded floating-point calculation before... Is my understanding correct that Ruby BigDecimal types (even with varying precision and scale) should calculate accurately, or should I anticipate floating point shenanigans? All my values within a Rails application are of BigDecimal type and I'm seeing some errors (they do have different decimal lengths); I'm hoping it's just my methods and not my object types... Thanks!

    Read the article

  • floating exception using icc compiler

    - by Hristo
    I'm compiling my code via the following command: icc -ltbb test.cxx -o test Then when I run the program: time ./mp6 100 > output.modified Floating exception 4.871u 0.405s 0:05.28 99.8% 0+0k 0+0io 0pf+0w I get a "Floating exception". The following is the C++ code I had before the exception appeared and after: /* before */ if (j < E[i]) { temp += foo(0, trr[i], ex[i+j*N]); } /* after */ temp += (j < E[i])*foo(0, trr[i], ex[i+j*N]); This is boolean algebra... (j < E[i]) is either going to be a 0 or a 1, so the multiplication would result in either 0 or the foo() result. I don't see why this would cause a floating exception. This is what foo() does: int foo(int s, int t, int e) { switch(s % 4) { case 0: return abs(t - e)/e; case 1: return (t == e) ? 0 : 1; case 2: return (t < e) ? 5 : (t - e)/t; case 3: return abs(t - e)/t; } return 0; } foo() isn't a function I wrote, so I'm not too sure what it does... but I don't think the problem is with foo(). Is there something about boolean algebra that I don't understand, or something that works differently in C++ than I know of? Any ideas why this causes an exception? Thanks, Hristo

    Read the article

  • What is a Bias Value of Floating Point Numbers?

    - by mudge
    In learning how floating point numbers are represented in computers, I have come across the term "bias value" that I do not quite understand. The bias value in floating point numbers has to do with whether the exponent part of a floating point number is negative or positive. The bias value of a single-precision floating point number is 127, which means that 127 is always added to the exponent part of the number before it is stored. How does doing this help determine whether the exponent is negative or positive?
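
    A small sketch that makes the bias concrete by pulling the stored exponent field out of a float (assuming IEEE 754 single precision): the stored field is always non-negative, and subtracting 127 recovers the true exponent, so negative exponents never need a sign of their own:

        #include <cstdint>
        #include <cstring>
        #include <cstdio>

        // Print the stored (biased) exponent field and the true exponent of a float.
        void showExponent(float f) {
            uint32_t bits;
            std::memcpy(&bits, &f, sizeof bits);
            int stored = (bits >> 23) & 0xFF;          // the 8-bit exponent field
            std::printf("%g: stored exponent %d, true exponent %d\n",
                        f, stored, stored - 127);      // subtract the bias of 127
        }

        int main() {
            showExponent(1.0f);    // stored 127, true  0
            showExponent(0.25f);   // stored 125, true -2
            showExponent(8.0f);    // stored 130, true  3
            return 0;
        }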

    Read the article

  • Floating point conversion from Fixed point algorithm

    - by Viks
    Hi, I have an application which uses 24-bit fixed point calculation. I am porting it to hardware which does support floating point, so for speed optimization I need to convert all fixed point based calculation to floating point based calculation. For this code snippet, it is calculating the mantissa: for (i = 0; i < 8207; i++) { /* Do the n^8/7 calculation and store it in mantissa and exponent, scaled to fixed point precision. */ } So this calculation converts an integer to a mantissa and exponent scaled to fixed point precision (23 bit). When I tried converting it to float, by dividing the mantissa part by the precision bits and subtracting the precision bits from the exponent part, it really doesn't work. Please help by suggesting a better way of doing it.
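
    A sketch of one way to rebuild a float from a mantissa/exponent pair that carries 23 fractional bits (the parameter names and the interpretation of the format are assumptions about the fixed point representation described above):

        #include <cmath>
        #include <cstdint>

        // Sketch: assume `mant` is the significand scaled by 2^23 and `exp` is the
        // power of two it should be shifted by; ldexp(x, e) computes x * 2^e exactly.
        float fixedToFloat(int32_t mant, int exp) {
            return std::ldexp(static_cast<float>(mant), exp - 23);
        }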

    Read the article

  • Floating point arithmetic is too reliable.

    - by mcoolbeth
    I understand that floating point arithmetic as performed in modern computer systems is not always consistent with real arithmetic. I am trying to contrive a small C# program to demonstrate this, e.g.: static void Main(string[] args) { double x = 0, y = 0; x += 20013.8; x += 20012.7; y += 10016.4; y += 30010.1; Console.WriteLine("Result: " + x + " " + y + " " + (x == y)); Console.Write("Press any key to continue . . . "); Console.ReadKey(true); } However, in this case, x and y are equal in the end. Is it possible for me to demonstrate the inconsistency of floating point arithmetic using a program of similar complexity, and without using any really crazy numbers? I would like, if possible, to avoid mathematically correct values that go more than a few places beyond the decimal point.
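
    One sketch of a demonstration with tame, one-decimal-place values (written in C++ here to match the other examples, but double in C# behaves the same way): addition is not associative, so merely regrouping the same three numbers changes the result:

        #include <cstdio>

        int main() {
            double left  = (0.1 + 0.2) + 0.3;   // 0.60000000000000009
            double right = 0.1 + (0.2 + 0.3);   // 0.59999999999999998
            std::printf("%.17g\n%.17g\n%d\n", left, right, left == right);   // last value: 0
            return 0;
        }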

    Read the article
