Search Results

Search found 1348 results on 54 pages for 'floating accuracy'.

  • SQL SERVER – DATEDIFF – Accuracy of Various Dateparts

    - by pinaldave
    I recently received the following question through email and found it very interesting, so I want to share it with you. "Hi Pinal, In the SQL statement below the time difference between the two given dates is 3 sec, but when checked in terms of minutes it says 1 min (whereas the actual value is 0.05 min):

        SELECT DATEDIFF(MI,'2011-10-14 02:18:58' , '2011-10-14 02:19:01') AS MIN_DIFF

    Is this a BUG in SQL Server?" The answer is NO. It is not a bug; it is a feature that works like that. Let us understand it in a bit more detail. When you instruct SQL Server to find the time difference in minutes, it looks at the minute section only and completely ignores the hour, second, millisecond, etc. So in terms of difference in minutes, it is indeed 1. The following also clarifies how DATEDIFF works:

        SELECT DATEDIFF(YEAR,'2011-12-31 23:59:59' , '2012-01-01 00:00:00') AS YEAR_DIFF

    The difference between the above dates is just 1 second, but in terms of year difference it shows 1. If you want accuracy in seconds, you need a different approach. In the first example, the accurate method is to find the number of seconds first and then divide it by 60 to convert it to minutes:

        SELECT DATEDIFF(second,'2011-10-14 02:18:58' , '2011-10-14 02:19:01')/60.0 AS MIN_DIFF

    Even though the concept is very simple, it is always a good idea to refresh it. Please share your related experience with me through your comments. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • Oracle Enterprise Data Quality Adds Global Address Verification Capabilities for Greater Accuracy and Broader Location Coverage

    - by Mala Narasimharajan
    Data quality has many flavors. Product, customer – name the data domain and there's data quality associated with it. Address verification is a little different, in that there is a tremendous amount of variation as well as nuance attached to it. Specifically, what makes address verification challenging is that, more often than not, addresses are incomplete, riddled with misspellings, assigned incorrect postal codes, or cluttered with non-address items. Almost all data has locations, and accurate locations power a wealth of business processes: customer relationship management, data quality, delivery of materials, goods or services, fraud detection, insurance risk assessment, data analytics, store and territory planning, and much more. Oracle Address Verification Server provides location-based services as well as deeper parsing and analysis capabilities for Oracle Enterprise Data Quality. Pre-integrated with the EDQ platform, Oracle Address Verification Server provides robust parsing and validation, as well as specialized location information, for over 240 countries – all populated countries on Earth. Oracle Enterprise Data Quality (EDQ) is a data quality platform dedicated to addressing the distinct challenges of customer and product data quality. It performs advanced data profiling to identify and measure poor-quality data and identify rule requirements, as well as semantic and pattern-based recognition to accurately parse and standardize data that is poorly structured. EDQ is integrated with Oracle Master Data Management, including Oracle Customer Hub and Oracle Product Hub, as well as Oracle Data Integrator Enterprise Edition and Oracle CRM. Address Verification Server provides key address verification services for Oracle CRM and Oracle Customer Hub. In addition, Address Verification Server provides greater accuracy when handling address data due to its expanded sources and extensible knowledge repository, solid parsing across locales and countries, and adept handling of extraneous data in address fields. For more information on Oracle Address Verification Server visit: http://bit.ly/GMUE4H and http://bit.ly/GWf7U6

  • CPU Architecture and floating-point math

    - by Jo-Herman Haugholt
    I'm trying to wrap my head around some details of how floating-point math is performed on the CPU, to better understand what data types to use, etc. I think I have a fairly good understanding of how integer math is performed. If I've understood correctly, and disregarding SIMD, a 32-bit CPU will generally perform integer math with at least 32-bit precision. Is it correct that floating-point math depends on the presence of an FPU? And that the FPU on x86 is 80-bit, so floating-point math is performed at this precision unless SIMD is used? What about ARM?
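    For reference, the storage format of the platform's native double can be inspected portably from Python; a minimal sketch, assuming CPython, where a float is a C double:

        import sys

        # float_info exposes the format the hardware/compiler provides
        # (IEEE-754 binary64 on mainstream x86 and ARM systems).
        print(sys.float_info.mant_dig)   # 53 significand bits for binary64
        print(sys.float_info.max_exp)    # 1024
        # Note: this reports the 64-bit storage format only; whether
        # intermediates run at 80-bit x87 precision is a compiler/FPU-mode
        # detail that this does not expose.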

  • About floating point precision and why do we still use it

    - by system_is_b0rken
    Floating point has always been troublesome for precision on large worlds. This article explains the behind-the-scenes details and offers the obvious alternative – fixed-point numbers. Some facts are really impressive, like: "Well 64 bits of precision gets you to the furthest distance of Pluto from the Sun (7.4 billion km) with sub-micrometer precision." Sub-micrometer precision is more than any FPS needs (for positions and even velocities), and it would enable you to build really big worlds. My question is, why do we still use floating point if fixed point has such advantages? Most rendering APIs and physics libraries use floating point (and suffer its disadvantages, so developers need to work around them). Are they so much slower? Additionally, how do you think scalable planetary engines like Outerra or Infinity handle the large scale? Do they use fixed point for positions, or do they have some space-dividing algorithm?
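    The quoted Pluto figure checks out arithmetically; a quick Python sketch of the unit math:

        # 7.4 billion km expressed in micrometers fits in a signed 64-bit
        # integer, so 1 unit = 1 um gives sub-micrometer resolution:
        distance_um = 7.4e9 * 1000 * 1e6   # km -> m -> um = 7.4e18
        print(distance_um < 2**63)         # True (2**63 is about 9.22e18)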

  • How to get the equivalent of the accuracy in Google Map Geocoder V3

    - by Scorpi0
    Hi, I want to get geocodes from Google, and I used to do it with V2 of the API. In the JSON, Google sent a pretty useful piece of information, the accuracy; reference here: http://code.google.com/intl/fr-FR/apis/maps/documentation/javascript/v2/reference.html#GGeoAddressAccuracy In V3, Google doesn't seem to send exactly the same information. There is the array "address_components", which seems bigger when the accuracy is better, but not exactly. For example, one of my requests is accurate to the street number, and the array is of size 8. Another query is only accurate to the route, so less accurate, but the array is still of size 8, because it contains a 'sublocality' row that does not appear in the first case. Google does send a 'types' field for each result, which holds the 'best' accuracy. The types are listed here: http://code.google.com/intl/fr-FR/apis/maps/documentation/geocoding/#Types But there is no real ordering, so if I want only results better than postal_code, I have no clue how to do that. So how can I get the equivalent of the V2 accuracy without some dumb and horrible code?

  • understanding floating point variables

    - by Syom
    There is some problem I can't understand. Please look at this code:

        <script type="text/javascript">
        function math(x) {
            var y;
            y = x * 10;
            alert(y);
        }
        </script>
        <input type="button" onclick="math(0.011)">

    What should be alerted after I click the button? I expect 0.11, but no, it alerts 0.10999999999999999. Please explain this behavior. Thanks in advance.
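    The same binary-rounding effect appears in any IEEE-754 double arithmetic, not just JavaScript; a short Python sketch, for illustration, showing the value actually stored:

        from decimal import Decimal

        # Decimal(float) prints the exact binary value the double holds;
        # 0.011 cannot be represented exactly in base 2.
        print(Decimal(0.011))  # a long expansion near, but not equal to, 0.011
        print(0.011 * 10)      # 0.10999999999999999 -- the product rounds to a
                               # double slightly below 0.11, matching alert()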

  • how IEEE-754 floating point numbers work

    - by hatorade
    Let's say I have this:

        float i = 1.5;

    In binary, this float is represented as:

        0 01111111 10000000000000000000000

    I broke up the binary to show the 'sign', 'exponent' and 'fraction' chunks. What I don't understand is how this represents 1.5. The exponent is 0 once you subtract the bias (127 - 127), and the fraction part with the implicit leading one is 1.1. How does 1.1 scaled by nothing = 1.5???
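    A Python sketch, offered as an illustration, that decodes the bit pattern; the key point is that 1.1 here is binary, and 1.1 in binary is 1 + 1/2 = 1.5 in decimal:

        import struct

        # Pack 1.5 as a 32-bit IEEE-754 float and print its bit pattern.
        bits = struct.unpack('>I', struct.pack('>f', 1.5))[0]
        print(f'{bits:032b}')  # 00111111110000000000000000000000
                               # = 0 | 01111111 | 10000000000000000000000

        sign = bits >> 31                # 0
        exponent = (bits >> 23) & 0xFF   # 127, so the scale is 2**(127-127) = 1
        fraction = bits & 0x7FFFFF       # 100... -> 0.5, plus the implicit leading 1
        print((-1)**sign * (1 + fraction / 2**23) * 2.0**(exponent - 127))  # 1.5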

  • Find max integer size that a floating point type can handle without loss of precision

    - by Checkers
    Double has a greater range than a 64-bit integer but less precision, due to its representation (since a double is 64 bits as well, it can't fit more actual values). So, when representing larger integers, you start to lose precision in the integer part.

        #include <boost/cstdint.hpp>
        #include <iostream>
        #include <limits>

        template<typename T, typename TFloat>
        void maxint_to_double()
        {
            T i = std::numeric_limits<T>::max();
            TFloat d = i;
            std::cout << std::fixed << i << std::endl << d << std::endl;
        }

        int main()
        {
            maxint_to_double<int, double>();
            maxint_to_double<boost::intmax_t, double>();
            maxint_to_double<int, float>();
            return 0;
        }

    This prints:

        2147483647
        2147483647.000000
        9223372036854775807
        9223372036854775800.000000
        2147483647
        2147483648.000000

    Note how max int fits into a double without loss of precision while boost::intmax_t (64-bit in this case) does not. float can't even hold an int. Now, the question: is there a way in C++ to check whether the entire range of a given integer type can fit into a floating-point type without loss of precision? Preferably, it would be a compile-time check usable in a static assertion, and it would not involve enumerating constants the compiler should know or can compute.
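    In standard C++ the check reduces to comparing the floating type's significand width against the integer's value bits, e.g. static_assert(std::numeric_limits<T>::digits <= std::numeric_limits<TFloat>::digits, "T does not fit"), since digits counts significand bits for floating types and value bits (sign excluded) for integer types. The boundary itself is easy to demonstrate; a small Python sketch:

        import sys

        # A double represents every integer up to 2**mant_dig exactly, so an
        # integer type fits losslessly iff its value bits fit the significand.
        mant = sys.float_info.mant_dig          # 53 for IEEE-754 doubles
        print(float(2**53) == 2**53)            # True  -- still exact
        print(float(2**53 + 1) == 2**53 + 1)    # False -- first integer lost
        # 32-bit int has 31 value bits (fits); 64-bit int has 63 (does not).
        print(31 <= mant, 63 <= mant)           # True False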

  • floating point precision in ruby on rails model validations

    - by Chris Allison
    Hello, I am trying to validate a dollar amount using a regex: ^[0-9]+\.[0-9]{2}$ This works fine, but whenever a user submits the form and the dollar amount ends in 0 (zero), Ruby (or Rails?) chops the 0 off. So 500.00 turns into 500.0, thus failing the regex validation. Is there any way to make Ruby/Rails keep the format entered by the user, regardless of trailing zeros?

  • IE7 div floating bug

    - by Michael Frey
    I have the following:

        <div id="border" style="width:100%; border:8px solid #FFF">
            <div id="menu" style="width:250px; float:left;">
                Some menu
            </div>
            <div id="content" style="padding-left:270px; width:520px;">
                Main page content
            </div>
        </div>

    This gives me a left-aligned menu and the content to the right of it, all surrounded by a border. On all browsers, including IE8, it displays correctly. But on IE7 the content only starts below the menu, leaving a big open space to the right of the menu. I have searched for all kinds of solutions and tried all kinds of combinations of right, left, and none for float, and clearing left, right, and both. It always displays differently across browsers. Any help is appreciated. Michael

  • Menu floating to the right on IE and to the left in FF

    - by the_drow
    I am working on a website that has a menu which behaves correctly in FF but not in IE (as usual). In IE it floats to the right while it should float to the left; however, if float is set to none it behaves almost correctly, attaching itself to the top of the container. Here's the CSS:

        #navigation_wrap {
            background: url(../images/ltr/nav_bg.png);
            height: 34px;
            width: 954px;
        }
        .btn_login {
            float: right;
            margin: 4px 4px 0 0;
        }
        .navigation {
            float: left;
        }
        .navigation ul {
            list-style: none;
            margin: 8px 0 0 15px;
        }
        .navigation ul li {
            border-right: 1px solid white;
            float: left;
            padding: 0 12px 0 12px;
        }
        .navigation ul li.last {
            border: none;
        }
        .navigation ul li a {
            color: white;
            font-size: 14px;
            text-decoration: none;
        }
        .navigation ul li a:hover {
            text-decoration: underline;
        }
        .navigation ul li a.active {
            font-weight: bold;
        }

    And here's the HTML:

        <div id="navigation_wrap">
            <div class="navigation">
                <ul>
                    <li><a class="active" href="default.asp">Home Page</a></li>
                    <li><a class="" href="faq.asp">FAQ</a></li>
                    <li><a class="" href="articles.asp">Articles</a></li>
                    <li><a class="" href="products.asp">Packages &amp; Pricing</a></li>
                    <li><a class="" href="gp.asp?gpid=15">test1</a></li>
                    <li><a class=" last" href="gp.asp?gpid=17">test asher</a></li>
                </ul>
            </div>
            <div class="btn_login">
                ...
            </div>
        </div>

    I hope someone has an idea. Thanks, Omer.

  • How to make gcc on SUN calculate floating points the same way as in Linux

    - by Marina
    I have a project where I have to perform some mathematical calculations with double variables. The problem is that I get different results on SUN Solaris 9 and on Linux. There are a lot of ways (explained here and on other forums) to make Linux behave like SUN, but not the other way around. I cannot touch the Linux code, so SUN is all I can change. Is there any way to make SUN behave like Linux? The code I run (compiled with gcc on both systems):

        int hash_func(char *long_id)
        {
            double product, lnum, gold;

            lnum = 0.0;  /* accumulator for the parsed digits */
            while (*long_id)
                lnum = lnum * 10.0 + (*long_id++ - '0');

            printf("lnum => %20.20f\n", lnum);
            lnum = lnum * 10.0E-8;
            printf("lnum => %20.20f\n", lnum);

            gold = 0.6125423371582974;
            product = lnum * gold;
            printf("product => %20.20f\n", product);
            ...
        }

    If the input is 339886769243483, the output on Linux is:

        lnum => 339886769243**483**.00000000000000000000
        lnum => 33988676.9243**4829473495483398**
        product => 20819503.600158**59827399253845**

    while on SUN it is:

        lnum => 339886769243483.00000000000000000000
        lnum => 33988676.92434830218553543091
        product => 20819503.600158**60199928283691**

    (The asterisks mark the digits that differ.) Note: the result is not always different; most of the time it is the same. Just 10 of the 60,000 15-digit numbers have this problem. Please help!!!

  • how floating point numbers work in C

    - by hatorade
    Let's say I have this:

        float i = 1.5;

    In binary, this float is represented as:

        0 01111111 10000000000000000000000

    I broke up the binary to show the 'sign', 'exponent' and 'fraction' chunks. What I don't understand is how this represents 1.5. The exponent is 0 once you subtract the bias (127 - 127), and the fraction part with the implicit leading one is 1.1. How does 1.1 scaled by nothing = 1.5???

  • Rounding floating-point numbers to 4 decimal points

    - by Himadri
    I have two decimal numbers. I want those numbers to be the same up to 4 decimal places without rounding. If the numbers differ, I want the 2nd number to be replaced by the 1st. What if condition should I write? E.g.:

        1. num1 = 0.94618976
           num2 = 0.94620239

    If we round these numbers to 4 decimal places we get the same value, 0.9462, but I don't want to round them.

        2. num1 = 0.94620239
           num2 = 0.94639125

    One way I found is to take the absolute difference of the two numbers, say diff, and then check its value. My problem is choosing the range to check diff against. Thank you.
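    One way to express "the same up to 4 decimal places, without rounding" is to truncate both values before comparing; a Python sketch, where the helper name same_to_4_places is illustrative:

        import math

        def same_to_4_places(a, b):
            # Truncate (not round) to 4 decimal places, then compare.
            return math.trunc(a * 10000) == math.trunc(b * 10000)

        num1, num2 = 0.94618976, 0.94620239
        if not same_to_4_places(num1, num2):   # 9461 != 9462 -> they differ
            num2 = num1                        # replace the 2nd number with the 1st
        print(num2)                            # 0.94618976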

  • Floating point precision and physics calculations

    - by Vee
    The gravity Vector2 in my physics world is (0; 0.1). The number 0.1 is known to be problematic, since "it cannot be represented exactly, but is approximately 1.10011001100110011001101 × 2^-4". Having this value for the gravity gives me problems with collisions and creates some quite nasty bugs. Changing the value to 0.11 solves them. Is there a more elegant solution that doesn't require changing the value at all?
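    The representability claim quoted above is easy to confirm, and values built from powers of two (0.5, 0.25, 0.125, ...) are exact in binary; a short Python sketch, for illustration:

        from decimal import Decimal

        # Decimal(float) prints the exact binary value the double stores.
        print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625
        print(Decimal(0.125))  # 0.125 -- exact, since 0.125 = 2**-3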

  • Floating point comparison in STL, BOOST

    - by Paul
    Is there in the STL or in Boost a set of generic simple comparison functions? The ones I found always require template parameters and/or instantiation of a struct template. I'm looking for something with a syntax like:

        if ( is_greater(x,y) ) { ... }

    which could be implemented as:

        template <typename T>
        bool is_greater(const T& x, const T& y)
        {
            return x > y + Precision<T>::eps;
        }
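    For comparison, Python's standard library ships a ready-made tolerant comparison, math.isclose, on which such a helper can be built; a sketch, where the is_greater helper and its tolerances are illustrative:

        import math

        def is_greater(x, y, rel_tol=1e-9, abs_tol=0.0):
            # Greater, and not merely equal within the tolerance.
            return x > y and not math.isclose(x, y, rel_tol=rel_tol, abs_tol=abs_tol)

        print(is_greater(0.1 + 0.2, 0.3))  # False -- 0.30000000000000004 counts as equal
        print(is_greater(0.31, 0.3))       # True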

  • Time difference in seconds (as a floating point)

    - by pocoa
        >>> from datetime import datetime
        >>> t1 = datetime.now()
        >>> t2 = datetime.now()
        >>> delta = t2 - t1
        >>> delta.seconds
        7
        >>> delta.microseconds
        631000

    Is there any way to get that as 7.631000? I can use the time module, but I also need the t1 and t2 variables as datetime objects. So if there is a way to do it with datetime, that would be great.
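    Since Python 2.7 / 3.2, timedelta has a total_seconds() method that returns exactly this as a float; on older interpreters the fields can be combined by hand:

        >>> delta.total_seconds()        # timedelta.total_seconds(), Python 2.7+/3.2+
        7.631
        >>> delta.days * 86400 + delta.seconds + delta.microseconds / 1e6
        7.631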

  • CLLocationManager ensure best accuracy for iphone

    - by Zap
    Does anyone have any suggestions on how to obtain the best horizontal accuracy on the location manager? I have set the desired accuracy to NearestTenMeters, but given that the accuracy can always change depending on coverage area, how can I write some code in the locationManager to stop updating only after I get the best horizontal accuracy?

  • How to correctly pass a float from C# to C++ (dll)

    - by RavelT
    I'm getting huge differences when I pass a float from C# to C++. I'm passing a dynamic float which changes over time. With a debugger I get this:

        c++ lonVel -0.036019072 float
        c#  lonVel -0.029392920 float

    I set my MSVC++ 2010 floating-point model to /fp:fast, which should be the standard in .NET if I'm not mistaken, but this didn't help. I can't give out the whole code, but I can show a fraction of it. The C# side looks like this:

        namespace Example
        {
            public class Wheel
            {
                public bool loging = true;

                #region Members
                public IntPtr nativeWheelObject;
                #endregion Members

                public Wheel()
                {
                    this.nativeWheelObject = Sim.Dll_Wheel_Add();
                    return;
                }

                #region Wrapper methods
                public void SetVelocity(float lonRoadVelocity, float latRoadVelocity)
                {
                    Sim.Dll_Wheel_SetVelocity(this.nativeWheelObject, lonRoadVelocity, latRoadVelocity);
                }
                #endregion Wrapper methods
            }

            internal class Sim
            {
                #region PInvokes
                [DllImport(pluginName, CallingConvention = CallingConvention.Cdecl)]
                public static extern void Dll_Wheel_SetVelocity(IntPtr wheel, float lonRoadVelocity, float latRoadVelocity);
                #endregion PInvokes
            }
        }

    And the C++ side, in exportFunctions.cpp:

        EXPORT_API void Dll_Wheel_SetVelocity(CarWheel* wheel, float lonRoadVelocity, float latRoadVelocity)
        {
            wheel->SetVelocity(lonRoadVelocity, latRoadVelocity);
        }

    So, any suggestions on what I should do in order to get 1:1 results, or at least 99% correct results?

  • What Determines the Default Setting of the x87 FPU Control Word?

    - by Rick Regan
    What determines the default setting of the x87 FPU control word -- specifically, the precision control field? Does the compiler set it based on the target processor? Is there a compiler option to change it? Using Microsoft Visual C++ 2008 Express Edition on an Intel Core Duo processor, the default setting of the precision control field is "01b", meaning double (53-bit) precision. I'm wondering: why is the default not "11b", i.e. extended (64-bit) precision? (I know I can change it using _controlfp.)

  • Why is my number being rounded incorrectly?

    - by izb
    This feels like the kind of code that only fails in situ, but I will attempt to adapt it into a snippet that represents what I'm seeing:

        float f = myFloat * myConstInt;  /* where myFloat == 13.45 and myConstInt == 20 */
        int i = (int)f;
        int i2 = (int)(myFloat * myConstInt);

    After stepping through the code, i == 269 and i2 == 268. What's going on here to account for the difference?
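    What is likely happening is that the product is computed at a higher precision and then rounded when stored into f; the effect can be mimicked by explicitly rounding through 32-bit floats. A Python sketch, for illustration only, since the exact C behavior depends on compiler and FPU settings:

        import struct

        def f32(x):
            # Round a Python double to the nearest 32-bit float and back.
            return struct.unpack('f', struct.pack('f', x))[0]

        my_float = f32(13.45)    # 13.449999809265137, the nearest float to 13.45
        product = my_float * 20  # 268.99999618530273 at double precision

        print(int(product))      # 268 -- truncation keeps the tiny deficit
        print(int(f32(product))) # 269 -- storing into a 32-bit float first
                                 # rounds the product to exactly 269.0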
