Search Results

Search found 1848 results on 74 pages for 'significant'.


  • Python - Number of Significant Digits in results of division

    - by russ
    Newbie here. I have the following code:

        myADC = 128
        maxVoltage = 5.0
        maxADC = 255.0
        VoltsPerADC = maxVoltage/maxADC
        myVolts = myADC * VoltsPerADC
        print "myADC = {0: >3}".format(myADC)
        print "VoltsPerADC = {0: >7}".format(VoltsPerADC)
        print VoltsPerADC
        print "myVolts = {0: >7}".format(myVolts)
        print myVolts

    This outputs the following:

        myADC = 128
        VoltsPerADC = 0.0196078
        0.0196078431373
        myVolts = 2.5098
        2.50980392157

    I have been searching for an explanation of how the number of significant digits is determined by default, but have had trouble locating an explanation that makes sense to me. One link I found suggests that by default the "print" statement prints numbers to 10 significant figures, but that does not seem to be the case in my results. How is the number of significant digits/precision determined? Can someone shed some light on this for me? Thanks in advance for your time and patience.
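
    For reference, here is a minimal sketch (plain Python, mirroring the question's values) showing that str(), repr(), and an explicit format specification each pick their own precision; the exact defaults depend on the Python version, so the digits in the comments are illustrative rather than guaranteed:

        VoltsPerADC = 5.0 / 255.0

        print(str(VoltsPerADC))                  # short default precision, e.g. 0.0196078431373
        print(repr(VoltsPerADC))                 # enough digits to round-trip the double exactly
        print("{0:.6g}".format(VoltsPerADC))     # explicit: 6 significant figures -> 0.0196078
        print("{0:.10f}".format(VoltsPerADC))    # explicit: 10 digits after the point -> 0.0196078431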

    Read the article

  • Formatting numbers with significant figures in C#

    - by Chris Farmer
    I have some decimal data that I am pushing into a SharePoint list where it is to be viewed. I'd like to restrict the number of significant figures displayed in the result data based on my knowledge of the specific calculation. Sometimes it'll be 3, so 12345 will become 12300 and 0.012345 will become 0.0123. Occasionally it will be 4 or 5. Is there any convenient way to handle this?
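
    Not C#, but the usual trick is language-agnostic: derive the rounding position from the value's base-10 magnitude. A minimal sketch in Python (the helper name round_to_sigfig is made up for illustration):

        import math

        def round_to_sigfig(value, sigfig):
            # Round to `sigfig` significant figures by choosing the decimal position
            # from the value's base-10 magnitude (e.g. 12345 -> 12300 for 3 figures).
            if value == 0:
                return 0.0
            digits = sigfig - int(math.floor(math.log10(abs(value)))) - 1
            return round(value, digits)

        print(round_to_sigfig(12345.0, 3))    # 12300.0
        print(round_to_sigfig(0.012367, 3))   # 0.0124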

    Read the article

  • Advice on approaching a significant rearrangement/refactoring?

    - by Prog
    I'm working on an application (hobby project, solo programmer, small-medium size), and I have recently redesigned a significant part of it. The program already works in its current state, but I decided to reimplement things to improve the OO design. I'm about to implement this new design by refactoring a big part of the application. Thing is, I'm not sure where to start. Obviously, by the nature of a rearrangement, the moment you change one part of the program several other parts (at least temporarily) break. So it's a little 'scary' to rearrange something in a piece of software that already works. I'm asking for advice or some general guidelines: how should I approach a significant refactoring? When you approach rearranging large parts of your application, where do you start? Note that I'm interested only in re-arranging the high-level structure of the app. I have no intention of rewriting local algorithms.

    Read the article

  • How many significant digits should I use for double literals in Java?

    - by M. Dudley
    How many significant digits should I use when defining a double literal in Java? This is assuming that I am trying to represent a number with more significant figures than a double can hold. In Math.java I see 20 and 21:

        public static final double E = 2.7182818284590452354;
        public static final double PI = 3.14159265358979323846;

    This is more than the 15-17 significant digits provided by IEEE 754. So what's the general rule-of-thumb?
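
    For what it's worth, digits beyond what a 64-bit double can hold are harmless: the literal simply parses to the nearest representable value. A quick check (in Python, whose floats are the same IEEE 754 doubles):

        import math

        # Both literals round to the same 64-bit double, so the surplus digits change nothing.
        print(math.pi == 3.14159265358979323846)             # True
        print(3.14159265358979323846 == 3.141592653589793)   # True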

    Read the article

  • JD Edwards Delivers Once Again with Significant Announcements

    Listen to Lenley Hensarling, JD Edwards Group Vice President, talk about the significant JD Edwards announcements made during Oracle OpenWorld 2008. Lenley will highlight how JD Edwards’ customers can benefit from the latest product releases from EnterpriseOne and World, discuss the wave of companies who are upgrading to the most recent JD Edwards releases to take advantage of an array of industry-specific enhancements, and elaborate on JD Edwards’ strategy for integrating with other Oracle solutions, bringing continuous value to customers.

    Read the article

  • Significant number of non-HTTP requests hitting my site

    - by Mark Westling
    I'm seeing a significant number of non-HTTP requests hitting a site I just launched. They show up in the server (nginx) logs as non-ASCII and get rejected (correctly) with a 400 status. Here are some lines from the log:

        95.132.198.189 - - [09/Jan/2011:13:53:30 -0500] "œ$A\x10õœ²É9J" 400 173 "-" "-"
        79.100.145.126 - - [09/Jan/2011:13:57:42 -0500] "#§i²¸oYi á¹„\x13VJ—x·—œ\x04N \x1DÔvbÛè½\x10§¬\x1E0œ_^¼+\x09ÜÅ\x08DÌÃiJeT€¿æ]œr\x1EëîyIÐ/ßýúê5Ǹ" 400 173 "-" "-"
        79.100.145.126 - - [09/Jan/2011:13:58:33 -0500] "¯Ú%ø=Œ›D@\x12¼\x1C†ÄÀe\x015mˆàd˜Û%pÛÿ" 400 173 "-" "-"

    What should I make of this? Is this some sort of scripted attack? Or could these be correct requests that have somehow been garbled? They're not affecting the performance of the site and I'm not seeing any other signs of attacks (e.g., no strange POSTs), so at this point I'm more curious than afraid.

    Read the article

  • Ted Kummert to make significant announcement related to SQL Server

    - by jamiet
    Microsoft have announced a conference call tomorrow with the head of all things SQL Server, Ted Kummert: Normally I wouldn’t take any notice of such things but the mysterious pre-conference-call-announcement (not something that the SQL Server team do regularly as I recall) has me intrigued. Logic says that it will have something to do with SQL Server R2, we shall see! @Jamiet

    Read the article

  • Someone using my website for Email and significant increase in spam

    - by Joy
    Let me give you the background in context so that you know the full story. Last summer my web guy (he put my website together) got into a fight with someone who attempted to register on my site using the name of my company as part of his user name. I was not aware of this at all until it had escalated dramatically. I don't know why my web guy was so unprofessional in his response to this person. I really don't know him - I met him via SCORE and we have never met in person. He is a vendor. Anyway, this guy who got into it with my web guy then threatened to do all he could to hurt my business and said he was internet savvy, etc.

    Nothing seemed to happen for a while; then, not long ago, this guy attempted to send me a friend request on LinkedIn. After his behavior I declined it. Shortly afterwards I began seeing a dramatic increase in spammers posting comments on the blog part of my site. Just lately I have been receiving emails from a variety of names, but all with the "@___" domain that I own for my business. I had additional security added, so now they have to register in order to comment on my blog, and I am seeing a lot of registration attempts from the same (and similar) IP addresses with bogus names and weird email addresses being blocked. So it is not creating more work, as it is all automatic. The email addresses are of more concern.

    Is there a way to identify a person through an IP address, or a place to report the behavior or the email usage? This guy lives in South Carolina so he is not overseas. Any help/advice you can provide will be greatly appreciated. Thanks, Joy

    Read the article

  • Any significant performance cost to using BlendState.Premultiplied?

    - by Donutz
    Normally I guess you'd use BlendState.AlphaBlend because normally when you load your textures through the pipeline they're already premultiplied. However, if you're loading textures at runtime from PNGs or some such, you have to loop through the pixels and premultiply them, which can take a long time if you've got a lot of textures to load. So it looks (haven't tried it) like using BlendState.Premultiplied instead of BlendState.AlphaBlend should handle non-premultiplied textures and produce the same visual result, without all the startup costs. I have to wonder if there's a non-obvious cost to doing this, like a huge drop in performance or something. Anyone know?
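
    For context, the per-pixel work being avoided at load time is just scaling each colour channel by its alpha. A rough sketch of that conversion (plain Python rather than XNA; the function name is made up):

        def premultiply(pixels):
            # pixels: sequence of (r, g, b, a) byte tuples with straight (non-premultiplied) alpha.
            # Returns the premultiplied equivalent: each colour channel scaled by a/255.
            return [(r * a // 255, g * a // 255, b * a // 255, a) for r, g, b, a in pixels]

        print(premultiply([(255, 128, 0, 128)]))   # [(128, 64, 0, 128)]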

    Read the article

  • Broken Splash boot screen with significant performance and freezing issues

    - by Ghost_ITMachine
    Okay, here's the story, and yes, this is important because it's the last thing I did before this madness: I watched a YouTube video (which I've now flagged and reported), and he basically instructed us to su virtualbox into the system. I don't know what I did; all I did was follow directions. Now I'm plagued with no Ubuntu purple boot screen; it goes straight to a black screen filled with letters, and I'm experiencing more lag and freezing than I ever have before. This is not from an upgrade; I've had 14.04 for about a month before this happened. I'm at a loss for what my routes are. It was all I could do to remove the superblocks just to get back into my desktop. Please help.

    Read the article

  • Significant figures in the decimal module

    - by Jason Baker
    So I've decided to try to solve my physics homework by writing some Python scripts to solve problems for me. One problem that I'm running into is that significant figures don't always seem to come out properly. For example, this handles significant figures properly:

        >>> from decimal import Decimal
        >>> Decimal('1.0') + Decimal('2.0')
        Decimal("3.0")

    But this doesn't:

        >>> Decimal('1.00') / Decimal('3.00')
        Decimal("0.3333333333333333333333333333")

    So two questions: Am I right that this isn't the expected amount of significant digits, or do I need to brush up on significant digit math? Is there any way to do this without having to set the decimal precision manually? Granted, I'm sure I can use numpy to do this, but I just want to know if there's a way to do this with the decimal module out of curiosity.
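
    As a point of reference (not a full answer): exact results such as that addition keep the operands' decimal places, but inexact operations such as division fill out the context precision, which defaults to 28 significant digits; the precision can be changed locally. A small sketch:

        from decimal import Decimal, localcontext

        # Division is inexact, so it fills the context precision (28 digits by default)
        # rather than following the significant figures of the operands.
        with localcontext() as ctx:
            ctx.prec = 3   # use 3 significant digits inside this block only
            print(Decimal('1.00') / Decimal('3.00'))   # 0.333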

    Read the article

  • Are there significant performance differences between chipsets?

    - by Let_Me_Be
    I wanted to build a single PC to fit all my needs, but since hardware virtualization support (VT-d specifically) is a huge problem, I decided to build multiple single-use oriented computers. In this scenario I want these computers to be as minimal as possible. So the core of my question is: "Are there significant performance differences between chipsets?" I'm considering a Sandy Bridge i7 or i5 for my "game console" computer. And since I will use only one graphics card, one or two HDDs, 4-8 GB RAM and nothing else, I would be fine with a micro-ATX board with a Q67 (or some other low-end chipset).

    Read the article

  • Significant OS X Finder lag when listing directories/files

    - by Jack Sleight
    I'm experiencing some significant OS X Finder lag that seems to be purely an issue with Finder itself, and not the HD or any other part of OS X (I'll explain below). The lag only appears when listing directories/files, where I'm seeing up to twelve or so seconds of lag (the folder opens with a blank list and the spinner going in the bottom right). This happens with both the local SSD and network drives (connected via ethernet or wifi). Browsing both local and network drives in the terminal and listing directories is instant. I can actually browse files on my NAS from my phone over a 3G connection from the other side of the country faster than Finder can when connected to the local network (madness!). Can anyone help? Thanks.

    Read the article

  • Significant events in Computer Science

    - by Brabster
    What were the most significant events or milestones in the history of computer science? I haven't been able to find a potted history, so I thought I'd see what views the SO community had on the question. I'm studying for a Masters in CS at the moment, so I'm hoping for some stuff to go take a look at that I've not come across before. Related: "Computer science advances in past 5 years", "Significant new inventions in computing since 1980"

    Read the article

  • Find most significant bit (left-most) that is set in a bit array

    - by Claudiu
    I have a bit array implementation where the 0th index is the MSB of the first byte in an array, the 8th index is the MSB of the second byte, etc... What's a fast way to find the first bit that is set in this bit array? All the related solutions I have looked up find the first least significant bit, but I need the first most significant one. So, given 0x00A1, I want 9.
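
    A minimal sketch of one way to scan from the most significant end, in Python for brevity (assuming the 0-based numbering described above, where bit 0 is the MSB of byte 0; shift the result by one if your example counts from 1):

        def first_set_bit(data):
            # Scan bytes from the front; within the first non-zero byte,
            # bit_length() locates its most significant set bit.
            for i, byte in enumerate(bytearray(data)):
                if byte:
                    return i * 8 + (8 - byte.bit_length())
            return -1   # no bit set

        print(first_set_bit(b'\x00\xa1'))   # 8 under this numbering (MSB of the second byte)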

    Read the article

  • Does a servlet-based stack have significant overheads?

    - by John
    I don't know if it's simply because page-loads take a little time, or the way servlets have an abstraction framework above the 'bare metal' of HTTP, or just because of the "Enterprise" in Java EE, but in my head I have the notion that a servlet-based app inherently adds overhead compared to a Java app which simply deals with sockets directly. Forget web-pages, imagine instead a Java server app where you send it a question over an HTTP request and it looks up an answer from memory and returns the answer in the response. You can easily write a Java socket-based app which does this, you can also do a servlet approach and get away from the "bare metal" of sockets. Is there any measurable performance impact to be expected implementing the same approach using servlets rather than a custom socket-based HTTP listening app? And yes, I am hazy on the exact data sent in HTTP requests and I know it's a vague question. It's really about whether servlet implementations have lots of layers of indirection or anything else that would add up to a significant overhead per call, where by significant I mean maybe an additional 0.1s or more.

    Read the article

  • Show a number with specified number of significant digits

    - by dreeves
    I use the following function to convert a number to a string for display purposes (don't use scientific notation, don't use a trailing dot, round as specified):

        (* Show Number. Convert to string w/ no trailing dot. Round to the nearest r. *)
        Unprotect[Round]; Round[x_,0] := x; Protect[Round];
        shn[x_, r_:0] := StringReplace[
          ToString@NumberForm[Round[N@x,r], ExponentFunction->(Null&)],
          re@"\\.$"->""]

    (Note that re is an alias for RegularExpression.) That's been serving me well for years. But sometimes I don't want to specify the number of digits to round to, rather I want to specify a number of significant figures. For example, 123.456 should display as 123.5 but 0.00123456 should display as 0.001235. To get really fancy, I might want to specify significant digits both before and after the decimal point. For example, I might want .789 to display as 0.8 but 789.0 to display as 789 rather than 800. Do you have a handy utility function for this sort of thing, or suggestions for generalizing my function above? Related: Suppressing a trailing "." in numerical output from Mathematica

    Read the article

  • doing arithmetic upto two significant figures in Python?

    - by user248237
    I have two floats in Python that I'd like to subtract, i.e.

        v1 = float(value1)
        v2 = float(value2)
        diff = v1 - v2

    I want "diff" to be computed up to two significant figures, that is, compute it using %.2f of v1 and %.2f of v2. How can I do this? I know how to print v1 and v2 up to two decimals, but not how to do arithmetic like that. Thanks.
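
    A minimal sketch of one interpretation (round each operand to two decimal places before subtracting, so the arithmetic happens on the %.2f values; the sample numbers are made up):

        v1 = float("2.0196")
        v2 = float("0.5104")

        # Round the operands first, then subtract.
        diff = round(v1, 2) - round(v2, 2)
        print("%.2f" % diff)   # 1.51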

    Read the article

  • Position of least significant bit that is set

    - by peterchen
    I am looking for an efficient way to determine the position of the least significant bit that is set in an integer, e.g. for 0x0FF0 it would be 4. A trivial implementation is this:

        unsigned GetLowestBitPos(unsigned value)
        {
            assert(value != 0); // handled separately
            unsigned pos = 0;
            while (!(value & 1)) {
                value >>= 1;
                ++pos;
            }
            return pos;
        }

    Any ideas how to squeeze some cycles out of it? (Note: this question is for people that enjoy such things, not for people to tell me xyzoptimization is evil.) [edit] Thanks everyone for the ideas! I've learnt a few other things, too. Cool!
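
    One common loop-free idea is the two's-complement trick: value & -value isolates the lowest set bit, and its position follows from the bit length (compilers also expose this directly, e.g. GCC's __builtin_ctz). A quick sketch of the trick in Python rather than C:

        def lowest_bit_pos(value):
            # value & -value keeps only the lowest set bit; bit_length() - 1 is its position.
            assert value != 0
            return (value & -value).bit_length() - 1

        print(lowest_bit_pos(0x0FF0))   # 4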

    Read the article

  • Development process for an embedded project with significant hardware changes

    - by pierr
    I have a good idea about the Agile development process, but it seems it does not fit well with an embedded project with significant hardware changes. I will describe below what we are currently doing (ad-hoc, no defined process yet). The changes are divided into three categories and different processes are used for each of them:

    Complete hardware change (example: use a different video codec IP)
        a) Study the new IP
        b) RTL/FPGA simulation
        c) Implement the legacy interface - go to b)
        d) Wait until hardware (tape out) is ready
        f) Test on the real hardware

    Hardware improvement (example: enhance the image display quality by improving the underlying algorithm)
        a) RTL/FPGA simulation
        b) Wait until hardware and test on the hardware

    Minor change (example: only change hardware register mapping)
        a) Wait until hardware and test on the hardware

    The worry is that we don't seem to have much control over, or confidence in, software maturity for the hardware changes, as the bring-up schedule is always very tight and the customer desires a seamless change when updating to a new version of hardware. How did you manage this kind of hardware change? Did you solve that with a Hardware Abstraction Layer (HAL)? Did you have automatic tests for the HAL layer? How did you test when the hardware platform was not even ready? Do you have well-documented processes for this kind of change?

    Read the article

  • Nicely representing a floating-point number in python

    - by dln385
    I want to represent a floating-point number as a string rounded to some number of significant digits, and never using the exponential format. Essentially, I want to display any floating-point number and make sure it “looks nice”. There are several parts to this problem: I need to be able to specify the number of significant digits. The number of significant digits needs to be variable, which can't be done with the string formatting operator. I need it to be rounded the way a person would expect, not something like 1.999999999999. I've figured out one way of doing this, though it looks like a workaround and it's not quite perfect. (The maximum precision is 15 significant digits.)

        >>> def f(number, sigfig):
                return ("%.15f" % (round(number, int(-1 * floor(log10(number)) + (sigfig - 1))))).rstrip("0").rstrip(".")
        >>> print f(0.1, 1)
        0.1
        >>> print f(0.0000000000368568, 2)
        0.000000000037
        >>> print f(756867, 3)
        757000

    Is there a better way to do this? Why doesn't Python have a built-in function for this?
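
    One possible alternative (a sketch, not necessarily the best answer; the helper name to_sigfig is made up, and it assumes a Python with shortest-repr floats, i.e. 2.7+ or 3.x): let the %g formatter do the significant-figure rounding, then use Decimal purely to re-render the result in fixed-point notation without an exponent:

        from decimal import Decimal

        def to_sigfig(number, sigfig):
            # Round to `sigfig` significant figures with %g, then print the result
            # in plain fixed-point notation (no exponent, no trailing zeros).
            rounded = float("%.*g" % (sigfig, number))
            return format(Decimal(repr(rounded)).normalize(), "f")

        print(to_sigfig(0.1, 1))                  # 0.1
        print(to_sigfig(0.0000000000368568, 2))   # 0.000000000037
        print(to_sigfig(756867, 3))               # 757000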

    Read the article

  • How significant is the bazaar performance factor?

    - by memodda
    I hear all this stuff about bazaar being slower than git. I haven't used too much distributed version control yet, but in Bazaar vs. Git on the bazaar site, they say that most complaints about performance aren't true anymore. Have you found this to be true? Is performance pretty much on par now? I've heard that speed can affect workflow (people are more likely to do good thing X if X is fast). What specific cases does performance currently affect workflow in bazaar vs other systems (especially git), and how? I'm just trying to get at why performance is of particular importance. Usually when I check something in or update it, I expect it to take a little while, but it doesn't matter. I commit/update when I have a second, so it doesn't interfere with my productivity. But then I haven't used DVCS yet, so maybe that has something to do with it?

    Read the article

  • How to extract common / significant phrases from a series of text entries

    - by arronsky
    I have a series of text items - raw HTML from a MySQL database. I want to find the most common phrases in these entries (not the single most common phrase, and ideally, not enforcing word-for-word matching). My example is any review on Yelp.com, which shows 3 snippets from hundreds of reviews of a given restaurant, in the format: "Try the hamburger" (in 44 reviews) e.g., the "Review Highlights" section of this page: http://www.yelp.com/biz/sushi-gen-los-angeles/ I have NLTK installed and I've played around with it a bit, but am honestly overwhelmed by the options. This seems like a rather common problem and I haven't been able to find a straightforward solution by searching here. Thanks in advance for any help.
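
    If it helps as a starting point, NLTK's collocation finders do roughly this for two-word phrases. This is a rough sketch, not a complete solution: the sample text is a placeholder for your cleaned (de-HTML'd) reviews, and nltk.word_tokenize needs the punkt tokenizer data installed:

        import nltk
        from nltk.collocations import BigramCollocationFinder
        from nltk.metrics import BigramAssocMeasures

        text = "try the hamburger ... try the hamburger ..."   # placeholder for concatenated review text
        words = nltk.word_tokenize(text.lower())

        finder = BigramCollocationFinder.from_words(words)
        finder.apply_freq_filter(3)   # drop pairs that occur fewer than 3 times
        print(finder.nbest(BigramAssocMeasures.likelihood_ratio, 10))   # 10 strongest two-word phrases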

    Read the article
