Search Results

Search found 1806 results on 73 pages for 'numeric precision'.


  • J2ME BlackBerry Numeric Input

    - by Paul
    Hello, I am developing a BlackBerry application using J2ME and LWUIT (the BlackBerry port). Everything works great except for the TextField in numeric mode. Basically, when the TextField has focus you must first switch into "NUMERIC" mode (by pressing alt + aA) before you can type, which is not user friendly and a problem. The proposed workaround is to use a TextArea instead, which lets you open a NATIVE-type input box; the problem there is that the user needs to focus the field and then press the fire button, which is again unfriendly. Does anyone know of any simple solutions? The few solutions I have in mind (but am not sure how to implement) are: 1) capture any keypress on the TextArea and go into NATIVE mode, instead of just the fire key (see the sketch below); 2) put the BlackBerry input mode into numeric using code for the whole form. Any advice will be appreciated. Many Thanks, Paul
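    A minimal sketch of option 1, using the standard LWUIT API (the class and method names below come from stock LWUIT, not from the BlackBerry port specifically; verify they survive unchanged there):

        import com.sun.lwuit.Display;
        import com.sun.lwuit.TextArea;

        // Hypothetical subclass: jumps into the native editor on any key press,
        // not just the fire key.
        public class NumericTextArea extends TextArea {
            public NumericTextArea() {
                super(1, 10); // single row
                setConstraint(TextArea.NUMERIC);
            }

            public void keyPressed(int keyCode) {
                // Open the native input box immediately instead of waiting for fire.
                Display.getInstance().editString(this, getMaxSize(), getConstraint(), getText());
            }
        }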

    Read the article

  • RegEx - Take all numeric characters following a text character

    - by Simon
    Given a string in the format XXX999999v99 (where X is any alpha character, 9 is any numeric character and v is a literal v character), how can I get a regex to match the numeric characters following the v? So far I've got 'v\d\d', which includes the v, but ideally I'd like just the numeric part (see the note below). As an aside, does anyone know of a tool in which you can specify a string to match and have the regex generated? Modifying an existing regex is one thing, but I find starting from scratch painful! Edit: Re-reading this question I realise it reads like a homework assignment! However I can assure you it's not; the strings I'm trying to match represent product versions appended to product codes. The current code uses all sorts of substring expressions to retrieve the version part.
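    Two standard ways to keep the v out of the result, assuming a regex flavour with capture groups (and, for the second form, lookbehind support):

        v(\d{2})      capture group: match 'v99' but extract only the two digits from group 1
        (?<=v)\d{2}   lookbehind: the match itself is just the two digits

    The capture-group form works in essentially every flavour; the lookbehind gives a cleaner match but is not supported everywhere (JavaScript engines of this era, for example, lack it).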

    Read the article

  • ORA-01438: value larger than specified precision allows for this column

    - by bobir
    We sometimes get the following error from our partner's database: ORA-01438: value larger than specified precision allows for this column. The full response looks like this:

        ORA-01438: value larger than specified precision allows for this column
        ORA-06512: at "UMAIN.PAY_NET_V1_PKG", line 176
        ORA-06512: at line 1
        5592988

    What could be causing this error? Thank you in advance.
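    For context, ORA-01438 fires when a value's integer part needs more digits than the column's declared precision minus its scale allows. A minimal illustration (the table and column here are hypothetical, not taken from the package above):

        CREATE TABLE t (amount NUMBER(5,2));
        INSERT INTO t (amount) VALUES (999.99);   -- fits: 3 integer digits, 2 decimal digits
        INSERT INTO t (amount) VALUES (1000.00);  -- ORA-01438: needs 4 integer digits, only 3 allowed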

    Read the article

  • How to bind a double precision using psycopg2

    - by user337636
    I'm trying to bind a float to a PostgreSQL double precision column using psycopg2:

        ele = 1.0/3.0
        dic = {'name': 'test', 'ele': ele}
        sql = '''insert into waypoints (name, elevation) values (%(name)s, %(ele)s)'''
        cur = db.cursor()
        cur.execute(sql, dic)
        db.commit()
        sql = """select elevation from waypoints where name = 'test'"""
        cur.execute(sql)
        ele_out = cur.fetchone()[0]

    Comparing the two values:

        ele_out  ->  0.33333333333300003
        ele      ->  0.33333333333333331

    Obviously I don't need the precision, but I would like to be able to simply compare the values. I could use the struct module and save the value as a string, but I thought there should be a better way. Thanks
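    If bit-exact round-tripping isn't actually required, comparing within a tolerance sidesteps the storage question entirely; a minimal sketch (plain Python, values taken from the session above):

        ele = 1.0 / 3.0
        ele_out = 0.33333333333300003  # value read back from the column

        # compare within a tolerance instead of expecting exact equality
        print(abs(ele - ele_out) < 1e-9)  # True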

    Read the article

  • Sub-millisecond precision timing in C or C++

    - by andand
    What techniques / methods exist for getting sub-millisecond precision timing data in C or C++, and what precision and accuracy do they provide? I'm looking for methods that don't require additional hardware. The application involves waiting for approximately 50 microseconds +/- 1 microsecond while some external hardware collects data. EDIT: OS is Windows, probably with VS2010. If I can get drivers and SDKs for the hardware on Linux, I can go there. Thanks.
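    On Windows the usual starting point is the high-resolution performance counter; a minimal sketch (this measures elapsed time to well under a microsecond on typical hardware, but it does not by itself make a Sleep-style wait more precise; busy-waiting on the counter is the common workaround at the 50 microsecond scale):

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            LARGE_INTEGER freq, start, end;
            QueryPerformanceFrequency(&freq);   /* counter ticks per second */

            QueryPerformanceCounter(&start);
            /* ... wait for the external hardware ... */
            QueryPerformanceCounter(&end);

            double us = (end.QuadPart - start.QuadPart) * 1e6 / (double)freq.QuadPart;
            printf("elapsed: %.3f microseconds\n", us);
            return 0;
        }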

    Read the article

  • How to maintain precision using DateTime.Now.Ticks in C#

    - by nmr
    I know that when I use DateTime.Now.Ticks in C# it returns a long value, but I need to store it in an int variable and I am confused as to whether or not I can maintain that precision. As of right now I just have a cast:

        int timeStampValue = (int)DateTime.Now.Ticks;

    Any suggestions or advice on how to maintain the precision, if possible, would be much appreciated.
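    For context: Ticks is a 64-bit count of 100-nanosecond intervals since 0001-01-01, so the full value cannot fit in 32 bits; the cast silently keeps only the low 32 bits, which wrap roughly every 7 minutes. A small illustration of what survives:

        long ticks = DateTime.Now.Ticks;     // 64-bit, e.g. 634012938473829183
        int low = unchecked((int)ticks);     // low 32 bits only; wraps every ~429 seconds

        // If a coarser unit is acceptable, dividing first trades resolution for range:
        int seconds = (int)((ticks / TimeSpan.TicksPerSecond) & 0x7FFFFFFF);  // wraps every ~68 years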

    Read the article

  • Configuring NVIDIA Quadro with Dell Precision M4600

    - by vsecades
    After a frustrating couple of weeks with my recently bought Dell Precision laptop, I managed to fix an issue where Ubuntu (yes, I was NOT using Windows, get serious) would not recognize the video card and would cause all sorts of problems all over the place. I ended up one Saturday morning nearly throwing this thing away, when I managed to find a post about NVIDIA Optimus technology ( http://www.pcmag.com/article2/0,2817,2358963,00.asp ). Now, I am a huge advocate of disruptive new stuff, as long as we keep the broader audience in mind. Anyhow, disabling this (which, as the BIOS settings state, only works on Windows 7 or later) effectively allows the NVIDIA-based Ubuntu driver to kick in full force. No need for a trash can anymore, thankfully. As I saw multiple posts all over the place about this: check your BIOS, disable Optimus, and try the video again to see if this corrects your issues. Best of luck!

    Read the article

  • How to clean this Dell Precision M6400

    - by Daniel Pratt
    I have (well, OK, my employer has and I use) a Dell Precision M6400 notebook. It's a decent piece of hardware, but I have at least one major gripe: it's a dust and... uh... crumb (I repent! I repent!) magnet! And I cannot seem to exorcise the dust and crumbs from it! There is a strip of metal above the keyboard that is punched full of tiny holes. Well, maybe it's better to describe them as 'pits'. If a sufficiently small particle finds its way into one of those pits, there is only about a 50% chance that I will manage to get it out. Consequently, there is now a chorus of tiny little particles silently chiding me about eating cookies and crackers whilst I browse the intarwebs. Does anyone have any suggestions about how I could remove these particles from this machine... while still preserving the function of the machine?

    Read the article

  • jquery: validate that text field is numeric

    - by George Johnston
    I have a simple issue -- I would like to check a field to see if it's numeric if it is not blank. I'm not using any additional plugins, just jQuery. My code is as follows:

        if($('#Field').val() != "") {
            if($('#Field').val().match('^(0|[1-9][0-9]*)$')) {
                errors += "Field must be numeric.<br/>";
                success = false;
            }
        }

    ...It doesn't seem to work. Where am I going wrong? Thanks, George
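    The likely culprit, assuming the intent is to flag non-numeric input: the inner test is inverted, so the error fires precisely when the value does match the numeric pattern. Negating the test gives the intended behaviour:

        if ($('#Field').val() != "") {
            // add the error only when the value does NOT look like a number
            if (!$('#Field').val().match('^(0|[1-9][0-9]*)$')) {
                errors += "Field must be numeric.<br/>";
                success = false;
            }
        }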

    Read the article

  • iPad numeric keyboard

    - by Tiago
    I've been trying to get a numeric keypad on the iPad, but when I set a number pad on a TextField, I get a normal keyboard with numbers and punctuation. I have found several bugs in the simulator, so I don't really know if this is another one. Is there a standard numeric keypad on the iPad?

    Read the article

  • Solution for Numeric Text Field in GWT

    - by Ashwin Prabhu
    I need a text field very similar in behavior to GXT's NumberField. Unfortunately I am not using GXT in my application, and GWT 2.0 does not have a numeric text field implementation as yet. So that currently leaves me with the option of simulating a NumberField by filtering out non-numeric keystrokes using a keyboard handler. Is this the best way to approach the problem? Does anyone here have a better solution/approach in mind? Thanks in advance :)
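    A minimal sketch of the keystroke-filtering approach with the stock GWT 2.0 handler API (TextBox, KeyPressHandler and cancelKey are from com.google.gwt.user.client.ui and com.google.gwt.event.dom.client; a production version would also need to let navigation/editing keys through and validate pasted text, which never raises key events):

        final TextBox box = new TextBox();
        box.addKeyPressHandler(new KeyPressHandler() {
            public void onKeyPress(KeyPressEvent event) {
                // swallow any keystroke that is not a decimal digit
                if (!Character.isDigit(event.getCharCode())) {
                    box.cancelKey();
                }
            }
        });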

    Read the article

  • Why don't computers store decimal numbers as a second whole number?

    - by SomeKittens
    Computers have trouble storing fractional numbers whose denominator is not a power of 2. This is because the first bit after the binary point is worth 1/2, the second 1/4 (that is, 1/(2^1) and 1/(2^2)), and so on. Why deal with all sorts of rounding errors when the computer could have just stored the decimal part of the number as another whole number (which is therefore accurate)? The only problem I can think of is dealing with repeating decimals (in base 10), but there could have been an edge-case solution to that (like we currently have with infinity).
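    What the question describes is essentially scaled (fixed-point) decimal, and it does exist: Java's BigDecimal, for instance, stores an unscaled integer plus a scale. The catch is that the hardware no longer does the bookkeeping; arithmetic has to align scales and manage carries in software. A tiny sketch of that bookkeeping (a hypothetical two-field representation, value = unscaled / 10^scale):

        // 3.14 -> unscaled = 314, scale = 2
        long unscaled = 314;
        int scale = 2;

        // adding 0.9 (unscaled = 9, scale = 1) first requires rescaling to a common scale:
        long other = 9 * 10;           // 0.9 at scale 2 is 90
        long sum = unscaled + other;   // 404 at scale 2, i.e. 4.04

    Multiplication adds scales, division often doesn't terminate (1/3 again), and every result can need a wider integer, which is why hardware floating point trades exactness for fixed-size speed.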

    Read the article

  • Precision of cos(atan2(y,x)) versus using complex<double>, C++

    - by Ivan
    I'm writing some coordinate transformations (more specifically the Joukowsky transform, Wikipedia: Joukowsky Transform), and I'm interested in performance, but of course also precision. I'm trying to do the coordinate transformations in two ways:

    1) Calculating the real and complex parts separately, using double precision, as below:

        double r2 = chi.x*chi.x + chi.y*chi.y;
        //double sq = pow(r2,-0.5*n) + pow(r2,0.5*n); //slow!!!
        double sq = sqrt(r2); //way faster!
        double co = cos(atan2(chi.y,chi.x));
        double si = sin(atan2(chi.y,chi.x));
        Z.x = 0.5*(co*sq + co/sq);
        Z.y = 0.5*si*sq;

    where chi and Z are simple structures with double x and y as members.

    2) Using complex<double>:

        Z = 0.5 * (chi + (1.0 / chi));

    where Z and chi are complex<double>. The interesting part is that case 1) is indeed faster (by about 20%), but the precision is bad, giving an error in the third decimal place after the inverse transform, while the complex<double> version gives back the exact number. So, is the problem in the cos(atan2) and sin(atan2)? But if it is, how does complex<double> handle that? Thanks!
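    One likely source of both the extra cost and the error is the trig round-trip itself: algebraically, cos(atan2(y, x)) = x / sqrt(x*x + y*y) and sin(atan2(y, x)) = y / sqrt(x*x + y*y). A hedged rewrite of variant 1 that drops the trig calls entirely:

        double r2 = chi.x*chi.x + chi.y*chi.y;
        double sq = sqrt(r2);
        double co = chi.x / sq;   // == cos(atan2(chi.y, chi.x)), no trig round-trip
        double si = chi.y / sq;   // == sin(atan2(chi.y, chi.x))
        Z.x = 0.5*(co*sq + co/sq);
        Z.y = 0.5*si*sq;

    Note also that with these substitutions Z.y reduces to 0.5*chi.y, while the imaginary part of 0.5*(chi + 1.0/chi) is 0.5*(chi.y - chi.y/r2); the two variants as quoted do not compute the same imaginary part, which is worth ruling out before blaming cos/atan2.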

    Read the article

  • Handling extremely large numbers in a language which can't?

    - by Mallow
    I'm trying to think about how I would go about doing calculations on extremely large numbers (to infinity, so to speak -- integers, not floats) when the language construct is incapable of handling numbers larger than a certain value. I am sure I am not the first nor the last to ask this question, but the search terms I am using aren't giving me an algorithm to handle those situations. Rather, most suggestions offer a language change or variable change, or talk about things that seem irrelevant to my search. So I need a little guidance. I would sketch out an algorithm like this (see the carry sketch below):

    1. Determine the maximum length of the integer variable for the language.
    2. If a number is more than half the maximum length of the variable, split it into an array (to give a little play room). Array order: [0] = the digits furthest to the right, ..., [n] = the digits furthest to the left. Example: Num: 29392023 -> array[0]: 23, array[1]: 20, array[2]: 39, array[3]: 29.

    Since I established half the length of the variable as the cut-off point, I can then calculate the ones, tens, hundreds, etc. places via the halfway mark, so that if a variable's maximum length was 10 digits (0 to 9999999999), halving that to five digits gives me some play room. So if I add or multiply, I can have a checker function that sees that the sixth digit (from the right) of array[0] carries into the first digit (from the right) of array[1]. Dividing and subtracting have their own issues which I haven't thought about yet. I would like to know about the best implementations of supporting larger numbers than the program can.
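    A minimal sketch of the add-with-carry step for exactly this layout -- limbs stored least-significant first, each limb holding half the digits the native type could, so a limb sum plus carry can never overflow (C here, but the idea is language-neutral):

        #include <stdio.h>

        #define BASE 100000L  /* 5 digits per limb: half of a 10-digit variable */

        /* out = a + b; all arrays are little-endian limbs of length n; returns the final carry */
        long add_limbs(const long *a, const long *b, long *out, int n)
        {
            long carry = 0;
            for (int i = 0; i < n; i++) {
                long t = a[i] + b[i] + carry;
                out[i] = t % BASE;   /* digit part stays in this limb */
                carry = t / BASE;    /* overflow spills into the next limb */
            }
            return carry;
        }

        int main(void)
        {
            long a[2] = {92023, 293};   /* 29392023 */
            long b[2] = {99999, 999};   /* 99999999 */
            long out[2];
            long carry = add_limbs(a, b, out, 2);
            printf("%ld%05ld (carry %ld)\n", out[1], out[0], carry);  /* 129392022 (carry 0) */
            return 0;
        }

    Multiplication works the same way limb by limb, multiply then propagate carries, which is why half-width limbs matter: the product of two half-width limbs still fits in the native type (though accumulated partial products need extra care).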

    Read the article

  • How to flash Dell Precision 390 from linux (debian)

    - by malat
    I am trying to update my BIOS:

        $ sudo dmidecode -s bios-version
        2.1.2

    with a newer one: 2.6.0. I went to this page: Dell Precision System BIOS, 2.6.0. After downloading the file WS390-020600.BIN, here is what it states:

        $ ./WS390-020600.BIN --help
        Usage: WS390-020600.BIN [options]
        Options:
            --help      Print this text.
            --version   Print package versions.
        If no options, update the BIOS.

    and:

        $ ./WS390-020600.BIN --version
        Dell BIOS Update Installer 1.2
        Copyright 2006 Dell Inc. All Rights Reserved.
        ./WS390-020600.BIN: 60: ./WS390-020600.BIN: ./flash: not found

    Does anyone know where this flash command can be found? Update: it looks like this is a self-extracting archive (it needs bash, per a comment in the header):

        $ head -30 WS390-020600.BIN
        [...]
        Extract() {
            tail -n +`awk '/^__ARC__/ { print NR + 1; exit 0; }' $0` $0 | gzip -cd >$_PRG

    So the flash command should have been auto-generated; however, the above command does not appear to be running as the original author intended. I do not see anything wrong with the command, though.

    Read the article

  • The best cross platform (portable) arbitrary precision math library

    - by Siu Ching Pong - Asuka Kenji
    Dear ninjas / hackers / wizards, I'm looking for a good arbitrary-precision math library in C or C++. Could you please give me some advice / suggestions? The primary requirements:

    1. It MUST handle arbitrarily big integers (my primary interest is integers). In case you don't know what arbitrarily big means, imagine something like 100000! (the factorial of 100000).
    2. The precision MUST NOT need to be specified during library initialization / object creation. The precision should ONLY be constrained by the available resources of the system.
    3. It SHOULD utilize the full power of the platform and handle "small" numbers natively. That means on a 64-bit platform, calculating 2^33 + 2^32 should use the available 64-bit CPU instructions. The library SHOULD NOT calculate this the same way it does 2^66 + 2^65 on the same platform.
    4. It MUST handle addition (+), subtraction (-), multiplication (*), integer division (/), remainder (%), power (**), increment (++), decrement (--), gcd(), factorial(), and other common integer arithmetic calculations efficiently. The ability to handle functions like sqrt() (square root) and log() (logarithm) that do not produce integer results is a plus. The ability to handle symbolic computations is even better.

    Here is what I have found so far:

    - Java's BigInteger and BigDecimal classes: I have been using these so far. I have read the source code, but I don't understand the math underneath. It may be based on theories / algorithms that I have never learnt.
    - The built-in integer types or core libraries of bc / Python / Ruby / Haskell / Lisp / Erlang / OCaml / PHP / some other languages: I have used some of these, but I have no idea which library they are using, or which kind of implementation they use.

    What I already know:

    - Using a char as a decimal digit, and a char* as a decimal string, and doing calculations on the digits using a for-loop.
    - Using an int (or a long int, or a long long) as a basic "unit" and an array of them as an arbitrarily long integer, and doing calculations on the elements using a for-loop.
    - Booth's multiplication algorithm.

    What I don't know:

    - Printing the binary array mentioned above in decimal without using naive methods. (An example of a naive method: (1) add the bits from the lowest to the highest: 1, 2, 4, 8, 16, 32, ...; (2) use a char* string as mentioned above to store the intermediate decimal results.)

    What I would appreciate:

    - Good comparisons of GMP, MPFR, decNumber (or other libraries that are good in your opinion).
    - Good suggestions on books / articles that I should read. For example, an illustration with figures of how a non-naive arbitrarily-long binary-to-decimal conversion algorithm works would be good.
    - Any help.

    Please DO NOT answer this question if you think using a double (or a long double, or a long long double) can solve this problem easily -- if you do think so, you don't understand the issue under discussion -- or if you have no experience with arbitrary-precision mathematics. Thank you in advance! Asuka Kenji
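    Against that checklist, GMP is the usual benchmark: it keeps small values in a single native limb and switches algorithms by operand size, which speaks to requirements 2 and 3. A minimal sketch of requirement 1 using GMP's documented mpz interface (compile with -lgmp):

        #include <gmp.h>
        #include <stdio.h>

        int main(void)
        {
            mpz_t f;
            mpz_init(f);
            mpz_fac_ui(f, 100000);  /* 100000!, computed exactly */

            /* mpz_sizeinbase may overestimate by 1 for base 10 */
            printf("100000! has about %lu decimal digits\n",
                   (unsigned long) mpz_sizeinbase(f, 10));

            mpz_clear(f);
            return 0;
        }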

    Read the article

  • Numeric UIDs/GIDs in ACLs on OS X server (10.6)

    - by Oliver Humpage
    Hi, On one (old OS X 10.4) server I'm tarring up some files which have ACLs. I'm then using ``tar -xp'' to untar the archive onto a new 10.6 server, which doesn't have any users/groups configured on it yet except the default admin (UID 501) (there's a reason for that, don't ask!). Obviously this means an "ls -lne" will list files and ACLs with numeric UIDs and GIDs. Now for the normal file permissions it makes sense: you get UIDs like "1037". And for some ACLs, it also makes sense: you get things like "AAAABBBB-CCCC-DDDD-EEEE-FFFF00000402" for groups (0x402 = GID 1026) and "FFFFEEEE-DDDD-CCCC-BBBB-AAAA000001F5" for users (0x1F5 = UID 501). However, some ACLs have a UIDs like "E51DA674-AE70-41BC-8340-9B06C243A262" or GIDs like "0A3FCD24-0012-46FA-B085-88519E55EF29" and I have absolutely no idea how to translate these IDs back into something that could be matched back to the original IDs (UID 1072 and GID 1047 respectively in this example). Can anyone help me translate these weird long hex strings? (Basically we're moving from local users to an Active Directory setup, so I want to move all files to the new server with permissions intact, then chmod, chgrp and set ACLs such that we translate old IDs to the new AD IDs. Hence needing some way to map between the sets. I don't believe there's an easier way to do this?) Many thanks, Oliver.

    Read the article

  • SQL Server insert with XML parameter - empty string not converting to null for numeric

    - by Mayo
    I have a stored procedure that takes an XML parameter and inserts the "Entity" nodes as records into a table. This works fine unless one of the numeric fields has a value of empty string in the XML. Then it throws an "error converting data type nvarchar to numeric" error. Is there a way for me to tell SQL to convert empty string to null for those numeric fields in the code below?

        -- @importData XML <- stored procedure param
        DECLARE @l_index INT
        EXECUTE sp_xml_preparedocument @l_index OUTPUT, @importData

        INSERT INTO dbo.myTable (
            [field1]
           ,[field2]
           ,[field3]
        )
        SELECT [field1]
              ,[field2]
              ,[field3]
        FROM OPENXML(@l_index, 'Entities/Entity', 1)
        WITH (
            field1 int 'field1'
           ,field2 varchar(40) 'field2'
           ,field3 decimal(15, 2) 'field3'
        )

        EXECUTE sp_xml_removedocument @l_index

    EDIT: And if it helps, sample XML. The error is thrown unless I comment out field3 in the code above or provide a value for field3 below.

        <?xml version="1.0" encoding="utf-16"?>
        <Entities>
          <Entity>
            <field1>2435</field1>
            <field2>843257-3242</field2>
            <field3 />
          </Entity>
        </Entities>
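    One way to get the empty-string-to-NULL conversion while keeping the OPENXML shape: read field3 as text in the WITH clause, blank it to NULL with NULLIF, then cast (a sketch against the column names above):

        SELECT [field1]
              ,[field2]
              ,CAST(NULLIF([field3], '') AS decimal(15, 2)) AS [field3]
        FROM OPENXML(@l_index, 'Entities/Entity', 1)
        WITH (
            field1 int 'field1'
           ,field2 varchar(40) 'field2'
           ,field3 nvarchar(40) 'field3'  -- read as text; cast after NULLIF
        )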

    Read the article

  • Hibernate: Found: float, expected: double precision

    - by Frederic Morin
    I have a problem with the mapping of Oracle's FLOAT (double precision) datatype to the Java Double datatype. The Hibernate schema validator seems to fail when the Java Double datatype is used:

        org.hibernate.HibernateException: Wrong column type in DB.TABLE for column amount. Found: float, expected: double precision

    The only way to avoid this is to disable schema validation and hope the schema is in sync with the app about to run. I must fix this before it goes out to production. App's environment: Grails 1.2.1, Hibernate-core 3.3.1.GA, Oracle 10g.
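    One workaround to try, assuming GORM's mapping DSL (available in Grails 1.2): declare the column's SQL type explicitly so the validator compares like with like. The domain class below is a hypothetical stand-in for DB.TABLE:

        class Payment {
            Double amount
            static mapping = {
                amount sqlType: 'float'   // match the FLOAT column Oracle actually reports
            }
        }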

    Read the article

  • Arbitrary-precision random numbers in C: generation for Monte Carlo simulation without atmospheric noise

    - by Yktula
    I know that there are other questions similar to this one; however, the following question pertains to arbitrary-precision random number generation in C for use in Monte Carlo simulation. How can we generate good-quality arbitrary-precision random numbers in C, when atmospheric noise isn't always available, without relying on disk I/O or network access that would create bottlenecks? libgmp is capable of generating random numbers, but, like other implementations of pseudo-random number generators, it requires a seed. As the manual mentions, "the system time is quite easy to guess, so if unpredictability is required then it should definitely not be the only source for the seed value." Is there a portable/ported library for generating random numbers, or seeds for random numbers? The libgmp manual also mentions that "On some systems there's a special device /dev/random which provides random data better suited for use as a seed." However, /dev/random and /dev/urandom can only be used on *nix systems.
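    A sketch of the libgmp route with the seed drawn from /dev/urandom instead of the clock (POSIX-only, per the caveat above; error handling trimmed for brevity; a portable build would need a per-platform seed source, e.g. CryptGenRandom on Windows):

        #include <gmp.h>
        #include <stdio.h>

        int main(void)
        {
            unsigned long seed = 0;
            FILE *f = fopen("/dev/urandom", "rb");
            if (f != NULL) {
                fread(&seed, sizeof seed, 1, f);
                fclose(f);
            }

            gmp_randstate_t state;
            gmp_randinit_mt(state);        /* Mersenne Twister generator */
            gmp_randseed_ui(state, seed);

            mpz_t r;
            mpz_init(r);
            mpz_urandomb(r, state, 1024);  /* uniform 1024-bit integer */
            gmp_printf("%Zd\n", r);

            mpz_clear(r);
            gmp_randclear(state);
            return 0;
        }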

    Read the article

  • Phantom updates due to decimal precision on calculated properties

    - by Jamie Ide
    This article describes my problem. I have several properties that are calculated. These are typed as decimal(9,2) in SQL Server and decimal in my C# classes. An example of the problem:

    1. An object is loaded with a property value of 14.9.
    2. A calculation is performed and the property value becomes 14.90393.
    3. When the session is flushed, NHibernate issues an UPDATE because the property is dirty.
    4. Since the database field is decimal(9,2), the stored value doesn't change.

    Basically, a phantom update is issued every time this object is loaded. I don't want to truncate the calculations in my business objects because that tightly couples them to the database, and I don't want to lose the precision in other calculations. I tried setting scale and precision or CustomType("Decimal(9,2)") in the mapping file, but this appears to only affect schema generation. My only reasonable option appears to be creating an IUserType implementation to handle this. Is there a better solution?
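    The fix hinges on comparing at database scale: the dirty check should treat two values as equal when they round to the same decimal(9,2). A standalone illustration of that comparison (this is the equality an IUserType for this column would implement in its Equals member; the rest of the interface is standard boilerplate):

        using System;

        class ScaleCompareDemo
        {
            // equal when identical after rounding to 2 decimal places
            static bool EqualAtDbScale(decimal x, decimal y)
            {
                return Math.Round(x, 2) == Math.Round(y, 2);
            }

            static void Main()
            {
                Console.WriteLine(EqualAtDbScale(14.9m, 14.90393m));  // True  -> not dirty
                Console.WriteLine(EqualAtDbScale(14.9m, 14.91m));     // False -> a real update
            }
        }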

    Read the article
