Search Results

Search found 680 results on 28 pages for 'precision'.

Page 6/28 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • Formatting currency within a specific precision range

    - by Alex Prose
    I am trying to format a currency value that will always contain 2 decimal digits, but will display up to five when there are extra digits of accuracy. As an example: for value = 5.0, display $5.00; for value = 5.023, display $5.023; for value = 5.333333333333333, display $5.33333. I have been playing with .ToString() formatting, but I can't seem to find the right match of options. Clarification: I want to show from 2-5 decimals, truncating zeros after the second digit. For value = 5.000000000000000, display $5.00; for value = 5.333333333333333, display $5.33333.
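
    One likely approach (a sketch, assuming a custom numeric format string is acceptable): in a custom format, "0" forces a digit while "#" prints one only when significant, so "$0.00###" always shows two decimals and up to three more, dropping trailing zeros. Note that ToString rounds at the last "#" rather than truncating:

        using System;

        class CurrencyFormat
        {
            // "0.00" = always two decimals; "###" = up to three more,
            // with trailing zeros trimmed. The "$" is a literal character.
            static string Format(decimal value)
            {
                return value.ToString("$0.00###");
            }

            static void Main()
            {
                Console.WriteLine(Format(5.0M));               // $5.00
                Console.WriteLine(Format(5.023M));             // $5.023
                Console.WriteLine(Format(5.333333333333333M)); // $5.33333 (rounded)
            }
        }

    If strict truncation is required, one option is to truncate first, e.g. decimal.Truncate(value * 100000M) / 100000M, and then format.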

    Read the article

  • Function that creates a timestamp in c#

    - by Konstantinos
    Hi there, I was wondering: is there a way to create a timestamp in C# from a DateTime? I need a millisecond-precision value that also works in the Compact Framework (I mention that because DateTime.ToBinary() does not exist in CF). My problem is that I want to store this value in a database-agnostic way so I can sort by it later, find out which value is greater than another, etc.
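
    One common approach (a sketch; it relies only on DateTime.Ticks and plain long arithmetic, both available in the Compact Framework): store milliseconds since a fixed epoch as a 64-bit integer, which sorts and compares correctly in any database:

        using System;

        class Timestamp
        {
            static readonly DateTime Epoch =
                new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);

            // Ticks are 100 ns, so dividing by TimeSpan.TicksPerMillisecond
            // (10,000) yields milliseconds since the Unix epoch.
            static long ToTimestamp(DateTime utc)
            {
                return (utc.Ticks - Epoch.Ticks) / TimeSpan.TicksPerMillisecond;
            }

            static DateTime FromTimestamp(long ms)
            {
                return new DateTime(Epoch.Ticks + ms * TimeSpan.TicksPerMillisecond,
                                    DateTimeKind.Utc);
            }

            static void Main()
            {
                long stamp = ToTimestamp(DateTime.UtcNow);
                Console.WriteLine(stamp);                // one sortable 64-bit value
                Console.WriteLine(FromTimestamp(stamp)); // round-trips at ms precision
            }
        }

    Stored in a BIGINT column, such values order chronologically under plain integer comparison.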

    Read the article

  • OpenSCAD keeps crashing when I raise my precision past $fn=29?

    - by Jeremy Quick
    Out of the blue, OpenSCAD decided to stop working for me on designs that previously rendered just fine. After playing around, I found that any time I use a precision greater than $fn=29, the program crashes with the message "openscad.exe has stopped working". I can use any precision of 29 and below, but the second I adjust it to even 30, the program crashes. This is a major problem, as before I was using a precision of 100, and without it the design's moving parts do not work when printed. I have uninstalled and reinstalled the program several times and haven't been able to fix the problem. I have watched the CPU during the calculations and it never reaches more than 35%. I have not changed computers or even the code. Does anyone have any suggestions as to how to fix this problem? Thanks! I am using the 32-bit Windows build, version 2013.06, the default version from the website.

    Read the article

  • calculate intersection between two segments in a symmetric way

    - by Elazar Leibovich
    When using the usual formulas to calculate the intersection of two 2D segments, i.e. here, if you round the result to an integer you get non-symmetric results. That is, sometimes, due to rounding errors, I get that intersection(A,B) != intersection(B,A). The best solution is to keep working with floats and compare the results up to a certain precision. However, I must round the results to integers after calculating the intersection; I cannot keep working with floats. My best solution so far is to use some total order on the segments in the plane, and have intersection() always compare the smaller segment to the larger one. Is there a better method? Am I missing something?
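
    The ordering idea can be packaged so callers never see it (a sketch; Segment, Core and the lexicographic order below are illustrative, not taken from the question): canonicalize the argument order with any total order before computing, so that intersection(A,B) and intersection(B,A) execute bit-identical float operations:

        using System;

        struct Segment
        {
            public double X1, Y1, X2, Y2;

            // Any consistent total order works; lexicographic is simplest.
            public static int Compare(Segment a, Segment b)
            {
                int c;
                if ((c = a.X1.CompareTo(b.X1)) != 0) return c;
                if ((c = a.Y1.CompareTo(b.Y1)) != 0) return c;
                if ((c = a.X2.CompareTo(b.X2)) != 0) return c;
                return a.Y2.CompareTo(b.Y2);
            }
        }

        static class Intersect
        {
            // Placeholder for whatever intersection formula is already in use.
            static void Core(Segment a, Segment b, out double x, out double y)
            {
                x = 0; y = 0; // ... the usual determinant-based formula goes here ...
            }

            // Symmetric wrapper: the smaller segment always goes first, so the
            // rounding is identical regardless of the caller's argument order.
            public static void Intersection(Segment a, Segment b,
                                            out double x, out double y)
            {
                if (Segment.Compare(a, b) <= 0) Core(a, b, out x, out y);
                else Core(b, a, out x, out y);
            }
        }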

    Read the article

  • Rejigging a floating point equation ...

    - by Jamie
    I'd like to know if there is a way to improve the accuracy of calculating a slope. (This came up a few months back here.) It seems that changing: float get_slope(float dXa, float dXb, float dYa, float dYb) { return (dXa - dXb)/(dYa - dYb); } to float get_slope(float dXa, float dXb, float dYa, float dYb) { return dXa/(dYa - dYb) - dXb/(dYa - dYb); } might be an improvement. Suggestions? Edit: It's precision I'm after, not efficiency.
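
    A hedged observation (a numerical judgment, not a tested claim about this code): the rewrite still computes the same subtraction dYa - dYb and merely adds a second division, so it mostly adds rounding steps. When the operands are close, float subtraction is already exact (Sterbenz's lemma); the limiting factor is the ~7 significant digits of the float inputs themselves. The usual improvement is to carry the coordinates in double end-to-end, as in this sketch:

        using System;

        class Slope
        {
            // Original formula in float: the subtractions are exact for nearby
            // operands, so rearranging terms gains little.
            static float GetSlopeFloat(float dXa, float dXb, float dYa, float dYb)
            {
                return (dXa - dXb) / (dYa - dYb);
            }

            // Same formula in double: ~15-16 significant digits end-to-end,
            // provided the coordinates are captured as double to begin with.
            static double GetSlope(double dXa, double dXb, double dYa, double dYb)
            {
                return (dXa - dXb) / (dYa - dYb);
            }

            static void Main()
            {
                // Same mathematical inputs; the double version carries more digits.
                Console.WriteLine(GetSlopeFloat(5.2f, 5.1f, 7.4f, 7.2f));
                Console.WriteLine(GetSlope(5.2, 5.1, 7.4, 7.2));
            }
        }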

    Read the article

  • How many double numbers are there between 0.0 and 1.0?

    - by polygenelubricants
    This is something that's been on my mind for years, but I never took the time to ask before. Many (pseudo) random number generators generate a random number between 0.0 and 1.0. Mathematically there are infinitely many numbers in this range, but double is a floating point type and therefore has finite precision. So the questions are: Just how many double numbers are there between 0.0 and 1.0? Are there just as many between 1 and 2? Between 100 and 101? Between 10^100 and 10^100+1? Note: if it makes a difference, I'm interested in Java's definition of double in particular.
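
    The count can be checked directly (a sketch in C#, whose double is the same IEEE 754 binary64 format as Java's): for non-negative doubles, the bit pattern read as an integer increases monotonically with the value, so counting the representable values in a range is just a difference of bit patterns:

        using System;

        class DoubleCount
        {
            static long Bits(double d)
            {
                return BitConverter.DoubleToInt64Bits(d);
            }

            static void Main()
            {
                // [0.0, 1.0]: every bit pattern from 0 to that of 1.0 inclusive.
                Console.WriteLine(Bits(1.0) - Bits(0.0) + 1);     // 4607182418800017409, just under 2^62

                // [1.0, 2.0] is a single binade: 2^52 + 1 values.
                Console.WriteLine(Bits(2.0) - Bits(1.0) + 1);     // 4503599627370497

                // The spacing doubles each binade, so [100, 101] holds only 2^46 + 1.
                Console.WriteLine(Bits(101.0) - Bits(100.0) + 1); // 70368744177665

                // Near 10^100 the gap between neighbors is ~1.9e84, so
                // [10^100, 10^100 + 1] contains no other double at all:
                Console.WriteLine(1e100 + 1.0 == 1e100);          // True
            }
        }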

    Read the article

  • What are the most known arbitrary precision arithmetic implementation approaches?

    - by keykeeper
    I'm going to write a class library for .NET which provides an implementation of arbitrary-precision arithmetic for integer, rational and maybe complex numbers. Which well-known approaches should I become familiar with? I tried to start with Knuth's TAOCP Vol. 2 (Seminumerical Algorithms, Chapter 4 – Arithmetic) but it's too complicated; at least, I couldn't get the ideas in a relatively short period of time.
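
    For orientation before diving into TAOCP (a sketch of the standard starting point, not a library design): represent a number as an array of fixed-width "limbs" (digits in base 2^32) and implement the schoolbook algorithms over them. Schoolbook multiplication, for example, is only a few lines:

        using System;

        class BigMul
        {
            // Schoolbook multiplication of little-endian base-2^32 limb arrays.
            // O(n*m); Karatsuba, Toom-Cook and FFT methods win for large inputs.
            static uint[] Multiply(uint[] a, uint[] b)
            {
                var r = new uint[a.Length + b.Length];
                for (int i = 0; i < a.Length; i++)
                {
                    ulong carry = 0;
                    for (int j = 0; j < b.Length; j++)
                    {
                        // 32x32 -> 64-bit product, plus the digit already there,
                        // plus the carry; the total never overflows 64 bits.
                        ulong t = (ulong)a[i] * b[j] + r[i + j] + carry;
                        r[i + j] = (uint)t;  // low 32 bits stay in place
                        carry = t >> 32;     // high 32 bits propagate
                    }
                    r[i + b.Length] += (uint)carry;
                }
                return r;
            }

            static void Main()
            {
                // (2^32 + 1)^2 = 2^64 + 2^33 + 1, i.e. limbs [1, 2, 1, 0].
                uint[] x = { 1, 1 };
                Console.WriteLine(string.Join(",", Multiply(x, x))); // 1,2,1,0
            }
        }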

    Read the article

  • fopen / fopen_s and writing to files

    - by yCalleecharan
    Hi, I'm using fopen in C to write output to a text file. The function definition is (where ARRAY_SIZE has been defined earlier): void create_out_file(char file_name[],long double *z1){ FILE *out; int i; if((out = fopen(file_name, "w+")) == NULL){ fprintf(stderr, "* Open error on output file %s", file_name); exit(-1); } for(i = 0; i < ARRAY_SIZE; i++) fprintf(out, "%.16Le\n", z1[i]); fclose(out); } My questions: On compilation with MVS2008 I get the warning: warning C4996: 'fopen': This function or variable may be unsafe. Consider using fopen_s instead. I haven't seen much information on fopen_s that would let me change my code. Any suggestions? Can one instruct fprintf to write at a desired precision? If I'm using long double then I assume that my answers are good to 15 digits after the decimal point. Am I right? Thanks a lot...

    Read the article

  • ArithmeticException thrown during BigDecimal.divide

    - by polygenelubricants
    I thought java.math.BigDecimal was supposed to be The Answer™ to the need for performing infinite-precision arithmetic with decimal numbers. Consider the following snippet: import java.math.BigDecimal; //... final BigDecimal one = BigDecimal.ONE; final BigDecimal three = BigDecimal.valueOf(3); final BigDecimal third = one.divide(three); assert third.multiply(three).equals(one); // this should pass, right? I expect the assert to pass, but in fact the execution doesn't even get there: one.divide(three) causes an ArithmeticException to be thrown! Exception in thread "main" java.lang.ArithmeticException: Non-terminating decimal expansion; no exact representable decimal result. at java.math.BigDecimal.divide It turns out that this behavior is explicitly documented in the API: In the case of divide, the exact quotient could have an infinitely long decimal expansion; for example, 1 divided by 3. If the quotient has a non-terminating decimal expansion and the operation is specified to return an exact result, an ArithmeticException is thrown. Otherwise, the exact result of the division is returned, as done for other operations. Browsing the API further, one finds that there are in fact various overloads of divide that perform inexact division, i.e.: final BigDecimal third = one.divide(three, 33, RoundingMode.DOWN); System.out.println(three.multiply(third)); // prints "0.999999999999999999999999999999999" Of course, the obvious question now is "What's the point???". I thought BigDecimal was the solution when we need exact arithmetic, e.g. for financial calculations. If we can't even divide exactly, then how useful can this be? Does it actually serve a general purpose, or is it only useful in a very niche application where you fortunately just don't need to divide at all? If this is not the right answer, what CAN we use for exact division in financial calculations? (I mean, I don't have a finance major, but they still use division, right???)
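
    For comparison (a sketch, not part of the question): C#'s decimal, the closest .NET analogue, makes the opposite trade-off and silently rounds a non-terminating quotient to its 28-29 significant digits instead of throwing:

        using System;

        class DecimalDivide
        {
            static void Main()
            {
                decimal third = 1m / 3m;             // no exception: quietly rounded
                Console.WriteLine(third);            // 0.3333333333333333333333333333
                Console.WriteLine(third * 3m);       // 0.9999999999999999999999999999
                Console.WriteLine(third * 3m == 1m); // False, same underlying reason
            }
        }

    In either language the usual financial answer is the same: choose a scale and rounding mode explicitly (divide(divisor, scale, roundingMode) in Java, decimal.Round in C#) and account for the remainder deliberately, for instance when splitting an amount three ways.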

    Read the article

  • Google reveals a few secrets about voice search: the system's precision is extremely dependent on the quantity of data

    Google reveals a few secrets about voice search: the system's precision is extremely dependent on the quantity of data. Google Research, Google's research division, has published a paper describing a little of how its voice search technology works. The mechanisms developed within its speech recognition applications rely essentially on data. Indeed, the researchers found that having enormous quantities of data leads to fewer errors when predicting the next word from the words that precede it. According to the paper published by Google, its implementation of voice search uses pr...

    Read the article

  • postgresql error - ERROR: input is out of range

    - by CaffeineIV
    The function below keeps returning this error message. I thought that maybe the double precision field type was what was causing this, and I tried to use CAST, but either that's not it, or I didn't do it right... Help? Here's the error: ERROR: input is out of range CONTEXT: PL/pgSQL function "calculate_distance" line 7 at RETURN ********** Error ********** ERROR: input is out of range SQL state: 22003 Context: PL/pgSQL function "calculate_distance" line 7 at RETURN And here's the function: CREATE OR REPLACE FUNCTION calculate_distance(character varying, double precision, double precision, double precision, double precision) RETURNS double precision AS $BODY$ DECLARE earth_radius double precision; BEGIN earth_radius := 3959.0; RETURN earth_radius * acos(sin($2 / 57.2958) * sin($4 / 57.2958) + cos($2/ 57.2958) * cos($4 / 57.2958) * cos(($5 / 57.2958) - ($3 / 57.2958))); END; $BODY$ LANGUAGE 'plpgsql' VOLATILE COST 100; ALTER FUNCTION calculate_distance(character varying, double precision, double precision, double precision, double precision) OWNER TO postgres; // I tried changing (unsuccessfully) that RETURN line to: RETURN CAST( (earth_radius * acos(sin($2 / 57.2958) * sin($4 / 57.2958) + cos($2/ 57.2958) * cos($4 / 57.2958) * cos(($5 / 57.2958) - ($3 / 57.2958))) ) AS text);
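
    One common cause (an assumption; the excerpt does not confirm it): rounding can push the argument of acos() slightly outside [-1, 1], for example when the two points coincide, and PostgreSQL then raises "input is out of range". Clamping the argument fixes it; in PL/pgSQL that can be written with LEAST(1, GREATEST(-1, ...)) around the acos argument. A sketch of the same formula with the clamp, written in C# for illustration:

        using System;

        class GreatCircle
        {
            // Spherical law of cosines, with the acos argument clamped to [-1, 1]
            // to guard against floating-point rounding leaving the domain.
            static double DistanceMiles(double lat1, double lon1,
                                        double lat2, double lon2)
            {
                const double earthRadius = 3959.0;        // miles
                const double degToRad = Math.PI / 180.0;  // the SQL uses 1/57.2958
                double x = Math.Sin(lat1 * degToRad) * Math.Sin(lat2 * degToRad)
                         + Math.Cos(lat1 * degToRad) * Math.Cos(lat2 * degToRad)
                         * Math.Cos((lon2 - lon1) * degToRad);
                x = Math.Max(-1.0, Math.Min(1.0, x)); // the clamp is the fix
                return earthRadius * Math.Acos(x);
            }

            static void Main()
            {
                // Identical points: x can come out as 1.0000000000000002, where
                // .NET's Math.Acos returns NaN and PostgreSQL raises the error.
                Console.WriteLine(DistanceMiles(45.0, -122.0, 45.0, -122.0)); // 0
            }
        }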

    Read the article

  • Emulating Dynamic Dispatch in C++ based on Template Parameters

    - by Jon Purdy
    This is heavily simplified for the sake of the question. Say I have a hierarchy: struct Base { virtual int precision() const = 0; }; template<int Precision> struct Derived : public Base { typedef typename Traits<Precision>::Type Type; Derived(Type data) : value(data) {} virtual int precision() const { return Precision; } Type value; }; I want a function like: Base* function(const Base& a, const Base& b); Where the specific type of the result of the function is the same type as whichever of a and b has the greater Precision; something like the following pseudocode: template<class T> T* operation(const T& a, const T& b) { return new T(a.value + b.value); } Base* function(const Base& a, const Base& b) { if (a.precision() > b.precision()) return operation((A&)a, A(b.value)); else if (a.precision() < b.precision()) return operation(B(a.value), (B&)b); else return operation((A&)a, (A&)b); } Where A and B are the specific types of a and b, respectively. I want function to operate independently of how many instantiations of Derived there are. I'd like to avoid a massive table of typeid() comparisons, though RTTI is fine in answers. Any ideas?

    Read the article

  • PHP bitwise left shifting 32 spaces problem and bad results with large numbers arithmetic operations

    - by Victor Stanciu
    Hello, I have the following problems. First: I am trying to do a 32-space bitwise left shift on a large number, and for some reason the number is always returned as-is. For example: echo(516103988<<32); // echoes 516103988 Because shifting the bits one space to the left is equivalent to multiplying by 2, I tried multiplying the number by 2^32, and that works: it returns 2216649749795176448. Second: I have to add 9379 to the number from the above point: printf('%0.0f', 2216649749795176448 + 9379); // prints 2216649749795185920 It should print: 2216649749795185827
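
    What is likely happening (an assumption from the symptoms, not verified against this PHP build): on a 32-bit PHP build, << operates on 32-bit integers (a shift by 32 leaves the value unchanged on x86), and integer overflow silently switches to float, i.e. an IEEE double whose 53-bit mantissa cannot hold 2216649749795185827 exactly. The rounding can be reproduced in C# as a sketch of the mechanism:

        using System;

        class FloatRounding
        {
            static void Main()
            {
                // Exact 64-bit integer arithmetic gives the expected value:
                long exact = 2216649749795176448L + 9379L;
                Console.WriteLine(exact);         // 2216649749795185827

                // In double, the spacing near 2.2e18 is 256, so adding 9379 in
                // effect rounds it to the nearest multiple of 256 (9472):
                double approx = 2216649749795176448.0 + 9379.0;
                Console.WriteLine((long)approx);  // 2216649749795185920
            }
        }

    The PHP-side fix is to keep the arithmetic in integers, e.g. with a 64-bit build or an arbitrary-precision extension such as GMP or BCMath.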

    Read the article

  • Floating point arithmetic is too reliable.

    - by mcoolbeth
    I understand that floating point arithmetic as performed in modern computer systems is not always consistent with real arithmetic. I am trying to contrive a small C# program to demonstrate this, e.g.: static void Main(string[] args) { double x = 0, y = 0; x += 20013.8; x += 20012.7; y += 10016.4; y += 30010.1; Console.WriteLine("Result: "+ x + " " + y + " " + (x==y)); Console.Write("Press any key to continue . . . "); Console.ReadKey(true); } However, in this case, x and y are equal in the end. Is it possible to demonstrate the inconsistency of floating point arithmetic using a program of similar complexity, without using any really crazy numbers? I would like, if possible, to avoid mathematically correct values that go more than a few places beyond the decimal point.
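
    A couple of minimal cases (a sketch with plain doubles, nothing exotic) that do show the effect: regrouping a sum changes the result, and a large term can absorb a small one entirely:

        using System;

        class FloatDemo
        {
            static void Main()
            {
                // 0.1, 0.2 and 0.3 have no exact binary form, so grouping matters:
                double x = (0.1 + 0.2) + 0.3;
                double y = 0.1 + (0.2 + 0.3);
                Console.WriteLine(x == y); // False
                Console.WriteLine(x - y);  // ~1.1E-16

                // Absorption: near 1e16 the neighbor spacing is 2, so adding 1 is lost.
                double big = 1e16;
                Console.WriteLine(big + 1.0 == big); // True
            }
        }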

    Read the article

  • curious ill conditioned numerical problem

    - by aaa
    Hello. Somebody showed me today this curious ill-conditioned problem (apparently pretty famous), which looks relatively simple: f = (333.75 - a^2)b^6 + a^2 (11a^2 b^2 - 121b^4 - 2) + 5.5b^8 + a/(2b) where a = 77617 and b = 33096. Can you determine the correct answer?
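
    This is Rump's classic example. Evaluated naively in IEEE double, cancellation destroys every significant digit; the true value is -54767/66192 ≈ -0.827396059946821. A sketch that verifies it with exact integer arithmetic (since 333.75 = 1335/4 and 5.5 = 11/2, multiplying f by 4b clears all denominators):

        using System;
        using System.Numerics;

        class Rump
        {
            static void Main()
            {
                // Naive double evaluation: wildly wrong, all digits are noise.
                double a = 77617.0, b = 33096.0;
                double f = (333.75 - a * a) * Math.Pow(b, 6)
                         + a * a * (11 * a * a * b * b - 121 * Math.Pow(b, 4) - 2)
                         + 5.5 * Math.Pow(b, 8)
                         + a / (2 * b);
                Console.WriteLine(f);

                // Exact check: 4b*f is an integer expression, so compute it exactly.
                BigInteger A = 77617, B = 33096;
                BigInteger num = (1335 - 4 * A * A) * BigInteger.Pow(B, 7)
                               + 4 * A * A * B * (11 * A * A * B * B
                                                  - 121 * BigInteger.Pow(B, 4) - 2)
                               + 22 * BigInteger.Pow(B, 9)
                               + 2 * A;
                Console.WriteLine((double)num / (double)(4 * B)); // -0.8273960599468214
            }
        }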

    Read the article

  • Floating Point Arithmetic - Modulo Operator on Double Type

    - by CrimsonX
    So I'm trying to figure out why the modulo operator is returning such a large, unusual value. If I have the code: double result = 1.0d % 0.1d; it will give a result of 0.09999999999999995, where I would expect a value of 0. Note this problem doesn't exist with the division operator: double result = 1.0d / 0.1d; gives a result of 10.0, meaning that the remainder should be 0. Let me be clear: I'm not surprised that an error exists; I'm surprised that the error is so darn large compared to the numbers at play. 0.0999... is approximately 0.1, which is on the same order of magnitude as the divisor 0.1d and only one order of magnitude away from 1.0d. It's not like you can compare it to a double epsilon, or say "it's equal if the difference is < 0.00001". I've read up on this topic on StackOverflow, in the following posts: one, two, three, amongst others. Can anyone explain why this error is so large? Any suggestions for avoiding such problems in the future? (I know I could use decimal instead, but I'm concerned about its performance.)
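
    A sketch of what is going on (assuming IEEE 754 doubles): the stored value of the literal 0.1d is slightly greater than one tenth, so the exact quotient 1.0 / stored(0.1) is 9.999..., not 10. The % operator truncates that to 9 and returns 1.0 - 9 * stored(0.1), which is just shy of a full divisor:

        using System;

        class ModuloDemo
        {
            static void Main()
            {
                // The double nearest to "0.1" is slightly MORE than 1/10:
                Console.WriteLine(0.1d.ToString("G17")); // 0.10000000000000001

                // The truncated quotient 9 leaves almost a whole divisor behind:
                Console.WriteLine(1.0d % 0.1d);          // 0.09999999999999995

                // decimal represents 0.1 exactly, so the remainder is 0:
                Console.WriteLine(1.0m % 0.1m);          // 0.0
            }
        }

    Division looks fine only because 1.0 / 0.1 rounds the true quotient 9.999... back up to exactly 10.0; the error is the same size, it just lands on the representable answer, which is why the remainder error looks disproportionately large.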

    Read the article

  • Why does a C# System.Decimal remember trailing zeros?

    - by Rob Davey
    Is there a reason that a C# System.Decimal remembers the number of trailing zeros it was entered with? See the following example: public void DoSomething() { decimal dec1 = 0.5M; decimal dec2 = 0.50M; Console.WriteLine(dec1); //Output: 0.5 Console.WriteLine(dec2); //Output: 0.50 Console.WriteLine(dec1 == dec2); //Output: True } The decimals are classed as equal, yet dec2 remembers that it was entered with an additional zero. What is the reason/purpose for this?
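
    The short answer (a sketch of the representation, per the documented decimal.GetBits layout): System.Decimal stores a 96-bit integer plus a power-of-ten scale, so 0.5M (5 with scale 1) and 0.50M (50 with scale 2) are different representations of the same value. Equality compares the value; ToString prints the representation:

        using System;

        class DecimalScale
        {
            // The scale lives in bits 16-23 of the fourth GetBits element.
            static int Scale(decimal d)
            {
                return (decimal.GetBits(d)[3] >> 16) & 0xFF;
            }

            static void Main()
            {
                Console.WriteLine(Scale(0.5M));   // 1 -> mantissa 5,  prints "0.5"
                Console.WriteLine(Scale(0.50M));  // 2 -> mantissa 50, prints "0.50"
                Console.WriteLine(0.5M == 0.50M); // True: comparison uses the value

                // Arithmetic keeps the larger scale of its operands:
                Console.WriteLine(0.5M + 0.50M);  // 1.00
            }
        }

    Preserving the scale keeps significant digits intact through calculations, which matters in contexts like accounting where 0.50 and 0.5 communicate different precision.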

    Read the article

  • How to correctly and standardly compare floats?

    - by DIMEDROLL
    Every time I start a new project, when I need to compare float or double variables I write code like this: if (fabs(prev.min[i] - cur->min[i]) < 0.000001 && fabs(prev.max[i] - cur->max[i]) < 0.000001) { continue; } Then I want to get rid of these magic constants 0.000001 (and 0.00000000001 for double) and fabs, so I write an inline function and some defines: #define FLOAT_TOL 0.000001 So I wonder: is there any standard way of doing this? Perhaps some standard header file? It would also be nice to have float and double limits (min and max values).
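
    There is no single standard answer, but the usual advice is to scale the tolerance by the operands' magnitudes rather than fixing one absolute constant. A sketch of a combined absolute/relative test (in C# for illustration; the constants are arbitrary choices, not a standard):

        using System;

        static class FloatCompare
        {
            // Near zero a relative test breaks down, so fall back to an
            // absolute epsilon there; elsewhere scale by the larger operand.
            public static bool NearlyEqual(double a, double b,
                                           double relTol = 1e-9,
                                           double absTol = 1e-12)
            {
                double diff = Math.Abs(a - b);
                if (diff <= absTol) return true; // handles a ~ b ~ 0
                double scale = Math.Max(Math.Abs(a), Math.Abs(b));
                return diff <= relTol * scale;   // grows with magnitude
            }
        }

        class Demo
        {
            static void Main()
            {
                Console.WriteLine(FloatCompare.NearlyEqual(0.1 + 0.2, 0.3));   // True
                Console.WriteLine(FloatCompare.NearlyEqual(1e10 + 1.0, 1e10)); // True (relative)
                Console.WriteLine(FloatCompare.NearlyEqual(1e-15, 2e-15));     // True (absolute)
            }
        }

    In C++ the building blocks are std::numeric_limits<float>::epsilon(), min() and max() from <limits>, which also answers the limits question.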

    Read the article

  • Dividing a double with integer

    - by hardcoder
    I am facing an issue while dividing a double by an int. The code snippet is: double db = 10; int fac = 100; double res = db / fac; The value of res is 0.10000000000000001 instead of 0.10. Does anyone know the reason for this? I am using cc to compile the code.

    Read the article

  • MySQL Join/Comparison on a DATETIME column (<5.6.4 and > 5.6.4)

    - by Simon
    Suppose I have two tables like so: Events ID (PK int autoinc), Time (datetime), Caption (varchar) Position ID (PK int autoinc), Time (datetime), Easting (float), Northing (float) Is it safe, for example, to list all the events and their position if I am using the Time field as my joining criterion? I.e.: SELECT E.*,P.* FROM Events E JOIN Position P ON E.Time = P.Time Or even just comparing a datetime value (taking into consideration that the parameterized value may contain a fractional-seconds part, which MySQL has always accepted), e.g. SELECT E.* FROM Events E WHERE E.Time = @Time I understand MySQL (before version 5.6.4) stores datetime fields WITHOUT milliseconds, so I would assume this query would work. However, as of version 5.6.4, I have read that MySQL can now store milliseconds with the datetime field. Assuming datetime values are inserted using functions such as NOW(), the milliseconds are truncated (<5.6.4), which I would assume allows the above query to work; with version 5.6.4 and later, though, this could potentially NOT work. I am, and only ever will be, interested in second accuracy. If anyone could answer the following questions, it would be greatly appreciated: In general, how does MySQL compare datetime fields against one another (consider the above query)? Is the above query fine, and does it make use of indexes on the time fields? (MySQL < 5.6.4) Is there any way to exclude milliseconds, i.e. when inserting and in conditional joins/selects etc.? (MySQL 5.6.4) Will the join query above work? (MySQL 5.6.4) EDIT: I know I can cast the datetimes (thanks to those who answered), but I'm trying to tackle the root of the problem here (the fact that the storage type/definition has been changed), and I DO NOT want to use functions in my queries. That negates all my work of optimizing queries, applying indexes, etc., not to mention having to rewrite all my queries. EDIT 2: Can anyone out there suggest a reason NOT to join on a DATETIME field using second accuracy?

    Read the article

  • C: 8x8 -> 16 bit multiply precision guaranteed by integer promotions?

    - by craig-blome
    I'm trying to figure out if the C Standard (C90, though I'm working off Derek Jones' annotated C99 book) guarantees that I will not lose precision multiplying two unsigned 8-bit values and storing to a 16-bit result. An example statement is as follows: unsigned char foo; unsigned int foo_u16 = foo * 10; Our Keil 8051 compiler (v7.50 at present) will generate a MUL AB instruction, which stores the MSB in the B register and the LSB in the accumulator. If I cast foo to an unsigned int first: unsigned int foo_u16 = (unsigned int)foo * 10; then the compiler correctly decides I want an unsigned int there and generates an expensive call to a 16x16 bit integer multiply routine. I would like to argue beyond reasonable doubt that this defensive measure is not necessary. As I read the integer promotions described in 6.3.1.1, the effect of the first line shall be as if foo and 10 were promoted to unsigned int, the multiplication performed, and the result stored as unsigned int in foo_u16. If the compiler knows an instruction that does 8x8->16 bit multiplication without loss of precision, so much the better; but the precision is guaranteed. Am I reading this correctly? Best regards, Craig Blome

    Read the article

  • How to specify numeric width and precision when creating a dBase database?

    - by Stevo3000
    We need to be able to create a dBase database (.dbf file) containing numeric columns with a specific width and precision. I seem to be able to set the precision but not the width. The following code shows my connection string and my command text. using (OleDbConnection oConnection = new OleDbConnection(String.Format("Provider=Microsoft.Jet.OLEDB.4.0;Data Source = {0};Extended Properties=dBase 5.0", msPath))) { .... oCommand.CommandText = "CREATE TABLE [Field] ([Id] Numeric (15, 3))"; oCommand.ExecuteNonQuery(); } This gives me a column Id,20,3 in the file. There must be a way to set the field width without resorting to editing the .dbf file manually. Has nobody else come across this before when creating shapefiles?

    Read the article

  • How to set up the precision attribute used by the @Column annotation?

    - by Arthur Ronald F D Garcia
    I often use java.lang.Integer as a primary key. Here you can see a piece of code: @Entity private class Person { private Integer id; @Id @Column(precision=8, nullable=false) public Integer getId() { } } I need to set its precision attribute to 8. But when exporting the schema (Oracle), it does not work as expected. AnnotationConfiguration configuration = new AnnotationConfiguration(); configuration .addAnnotatedClass(Person.class) .setProperty(Environment.DIALECT, "org.hibernate.dialect.OracleDialect") .setProperty(Environment.DRIVER, "oracle.jdbc.driver.OracleDriver"); SchemaExport schema = new SchemaExport(configuration); schema.setOutputFile("schema.sql"); schema.create(true, false); schema.sql outputs: create table Person (id number(10,0) not null) I always get 10. Is there some workaround to get 8 instead of 10?

    Read the article