Search Results

Search found 2214 results on 89 pages for 'significant figures'.


  • Comparing Apples and Pairs

    - by Tony Davis
    A recent study, High Costs and Negative Value of Pair Programming, by Capers Jones, pulls no punches in its assessment of the cost-to-benefit ratio of pair programming, two programmers working together, at a single computer, rather than separately. He implies that pair programming is a method rushed into production on a wave of enthusiasm for Agile or Extreme Programming, without any real regard for its effectiveness. Despite admitting that his data represented a far from complete study of the economics of pair programming, his conclusions were stark: it was 2.5 times more expensive, resulted in a 15% drop in productivity, and offered no significant quality benefits.
    The author provides a more scientific analysis than Jon Evans’ Pair Programming Considered Harmful, but the theme is the same. In terms of upfront coding costs, pair programming is surely more expensive. The claim of productivity loss is dubious and contested by other studies. The third claim, though, did surprise me. The author’s data suggests that if both the pair and the individual programmers employ static code analysis and testing, then there is no measurable difference in the resulting code quality, in terms of defects per function point. In other words, pair programming incurs a massive extra cost for no tangible return on investment.
    There were, inevitably, many criticisms of his data and his conclusions, a few of which are persuasive. Firstly, that the driver/observer model of pair programming, on which the study bases its findings, is far from the most effective. For example, many find Ping-Pong pairing, based on use of test-driven development, far more productive. Secondly, that it doesn’t distinguish between “expert” and “novice” pair programmers – that is, independently of other programming skills, how skilled was an individual at pair programming. Thirdly, that his measure of quality is too narrow. This point rings true, certainly at Red Gate, where developers don’t pair program all the time, but use the method in short bursts, while tackling a tricky problem and needing a fresh perspective on the best approach, or more in-depth knowledge in a particular domain. All of them argue that pair programming, and collective code ownership, offers significant rewards, if not in terms of immediate “bug reduction”, then in removing the likelihood of single points of failure, and improving the overall quality and longer-term adaptability/maintainability of the design. There is also a massive learning benefit for both participants. One developer told me how he once worked in the same team over consecutive summers, the first time with no pair programming and the second time pair-programming two-thirds of the time, and described the increased rate of learning the second time as “phenomenal”.
    There are a great many theories on how we should develop software (Scrum, XP, Lean, etc.), but woefully little scientific research into their effectiveness. For a group that spends so much time crunching other people’s data, I wonder if developers spend enough time crunching data about themselves. Capers Jones’ data may be incomplete, but it should give pause for thought, especially to any large IT departments, supporting commerce and industry, that are considering pair programming. It certainly shouldn’t discourage teams from exploring new ways of developing software, as long as they also think about how to gather hard data to gauge their effectiveness.

    Read the article

  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses, aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers, in a cluster with 500 members, that means that each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also the consumption of CPU required to send (or receive) a message (due to things like copying the packet bytes, processing an interrupt, etc.). Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many:
      • Announcing the arrival or departure of a member
      • Updating partition assignment maps across the cluster
      • Creating or destroying a NamedCache
      • Invalidating a cache entry from a large number of client-side near caches
      • Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors)
      • Invoking clear() on a NamedCache
    The first few of these are operations that are primarily routed through a single senior member, and also occur infrequently, so they usually are not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation. In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation, it could become critical. For the non-operational concerns (near caches, queries, etc.), the application itself will determine how much load is placed on the cluster.
    Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly, since even with a constant request rate, the underlying workload will increase at roughly the same rate as the underlying resources are added. Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.
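    The fan-out arithmetic above can be made concrete with a quick back-of-the-envelope calculation; a minimal sketch (the ~70,000 packets/second figure is the rough estimate from the text, not a measured value):

        #include <cstdio>

        int main() {
            const double packetsPerSecond = 70000.0; // rough per-NIC send budget from the text
            const int clusterMembers      = 500;     // cluster size
            const int membersPerMachine   = 10;      // JVMs sharing one physical NIC

            // Without multicast, one cluster-wide message costs (N - 1) unicast packets.
            double perMemberMsgs  = packetsPerSecond / (clusterMembers - 1);
            // Members on the same machine share the NIC, so the budget is divided again.
            double perMachineMsgs = perMemberMsgs / membersPerMachine;

            std::printf("max cluster-wide msgs/sec per member: ~%.0f\n", perMemberMsgs);  // ~140
            std::printf("max cluster-wide msgs/sec per JVM:    ~%.0f\n", perMachineMsgs); // ~14
            return 0;
        }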

    Read the article

  • Challenges in Corporate Reporting - New Independent Research

    - by ndwyouell
    Earlier this year, Oracle and Accenture sponsored a global study on trends in financial close and reporting. We surveyed 1,123 finance professionals in large organizations in 12 countries around the world during February and March. Financial Consolidation and Reporting is the most mature aspect of Enterprise Performance Management, with mainstream solutions having been around for over 30 years. But of course over this time there have been many changes and very significant increases in regulation. So just what is the current state of Financial Consolidation and Reporting in our major corporations across the world? We commissioned this independent research to find out. Highlights of the results are:
      • Seeking change: Businesses recognize they need to invest in financial reporting to address the challenges they currently face. 47 percent of companies have made substantial investments over the last year to the financial close, filing, and reporting processes.
      • Ineffective investments: Despite these investments, spreadsheets (72 percent) and e-mails (68 percent) are still being used daily to track and manage reporting, suggesting that new investments are falling short of expectations.
      • Increased costs and uncertainty: The situation is so opaque that managers across the finance function are unable to fully understand the financial impact or cost implications of reporting, with 60 percent of respondents admitting they did not know the total cost of managing and publicizing their financial results.
      • Persistent challenges: 68 percent of respondents admitted that they have inadequate visibility into reporting processes, while 84 percent of finance managers surveyed said they find it difficult to control the quality of financial data across the entire reporting process.
      • Decreased effectiveness: 71 percent of finance managers feel their effectiveness is limited in some way by data-analysis–related issues, while 39 percent of C-level or VP-level respondents say their effectiveness is impaired by limited visibility.
      • Missed deadlines: Due to late changes to the chart of accounts, 15 percent of global businesses have missed statutory filings, putting their companies at risk of financial penalties and potentially impacting share value.
    The report makes it clear that investments made to date by these large organizations around the world have been uneven across the close, reporting, and filing processes, which has led to the challenges these organizations currently face in the overall process. Regardless of whether companies are using a variety of solutions or a single solution, the report shows they continue to witness increased costs, ineffectual data management, and missed reporting, which—in extreme circumstances—can impact a company’s corporate image and share value. The good news is that businesses realize that these problems persist, and 86 percent of companies are likely to make a significant investment during the next five years to address these issues.
    While they should invest, it is critical that they direct investments correctly to address the key issues this research identified:
      • Improving data integrity
      • Optimizing processes
      • Integrating the extended financial close process
    By addressing these issues and with clear guidance on how to implement the correct business processes, infrastructure, and software solutions, finance teams will find that their reporting processes are much more effective, cost-efficient, and aligned with their performance expectations.
    To get a copy of the full report: http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=11747758&src=7300117&Act=92
    To replay a webcast discussing the findings: http://www.cfo.com/webcast.cfm?webcast=14639438&pcode=ORA061912_ORA

    Read the article

  • Qt vs WPF/.NET

    - by aaronc
    My company is trying to make the decision between using Qt/C++ for our GUI framework or migrating to .NET and using WPF. We have up to this point been using MFC. It seems that .NET/WPF is technically the most advanced and feature-rich platform. I do, however, have several concerns. These include:
      • Platform support
      • Framework longevity (i.e. future-proofing)
      • Performance and overhead
    For this application we are willing to sacrifice support for Windows 2000, Macs, and Linux. But the issue is more related to Microsoft's commitment to the framework and their extant platforms. It seems like Microsoft has a bad habit of coming up with something new, hyping it for a few years, and then relegating it to the waste-bin, essentially abandoning the developers who chose it. First it was MFC and VB6, then Windows Forms, and now there's WPF. Also with .NET, versions of Windows were progressively nicked off the support list. It looks like WPF could be here to stay for a while, but since it's not open source it's really in Microsoft's hands. I'm also concerned about the overhead and performance of WPF, since some of our applications involve processing large amounts of information and doing real-time data capture. Qt seems like a really good option, but it doesn't have all the features of WPF/.NET, and we couldn't use languages like C#. Basically, what does the community think about Microsoft's commitment to WPF compared with previous frameworks? Are the performance considerations significant enough to avoid using it for a realtime app? And, how significant are the benefits of WPF/.NET in terms of productivity and features compared to Qt?

    Read the article

  • Will fixed-point arithmetic be worth my trouble?

    - by Thomas
    I'm working on a fluid dynamics Navier-Stokes solver that should run in real time. Hence, performance is important. Right now, I'm looking at a number of tight loops that each account for a significant fraction of the execution time: there is no single bottleneck. Most of these loops do some floating-point arithmetic, but there's a lot of branching in between. The floating-point operations are mostly limited to additions, subtractions, multiplications, divisions and comparisons. All this is done using 32-bit floats. My target platform is x86 with at least SSE1 instructions. (I've verified in the assembler output that the compiler indeed generates SSE instructions.) Most of the floating-point values that I'm working with have a reasonably small upper bound, and precision for near-zero values isn't very important. So the thought occurred to me: maybe switching to fixed-point arithmetic could speed things up? I know the only way to be really sure is to measure it, but that might take days, so I'd like to know the odds of success beforehand. Fixed-point was all the rage back in the days of Doom, but I'm not sure where it stands anno 2010. Considering how much silicon is nowadays pumped into floating-point performance, is there a chance that fixed-point arithmetic will still give me a significant speed boost? Does anyone have any real-world experience that may apply to my situation?
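    For reference, here is a minimal sketch of what the fixed-point alternative would look like, assuming a Q16.16 format (16 integer bits, 16 fractional bits); the format choice and helper names are illustrative assumptions, and only measurement can show whether it beats SSE floats:

        #include <cstdint>

        using fixed = std::int32_t;           // Q16.16: 16 integer bits, 16 fractional bits
        constexpr int FRAC_BITS = 16;

        constexpr fixed to_fixed(float f) { return static_cast<fixed>(f * (1 << FRAC_BITS)); }
        constexpr float to_float(fixed x) { return static_cast<float>(x) / (1 << FRAC_BITS); }

        inline fixed fx_add(fixed a, fixed b) { return a + b; }   // plain integer add
        inline fixed fx_mul(fixed a, fixed b) {
            // widen to 64 bits so the intermediate product cannot overflow
            return static_cast<fixed>((static_cast<std::int64_t>(a) * b) >> FRAC_BITS);
        }
        inline fixed fx_div(fixed a, fixed b) {
            return static_cast<fixed>((static_cast<std::int64_t>(a) << FRAC_BITS) / b);
        }

        // Example: to_float(fx_mul(to_fixed(1.5f), to_fixed(2.25f))) == 3.375f

    Additions and comparisons cost the same as their integer counterparts, but every multiply and divide picks up an extra shift and a 64-bit intermediate, which is one reason fixed point often struggles to beat hardware floating point on modern x86.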

    Read the article

  • Pros and cons of making database IDs consistent and "readable"

    - by gmale
    Question: Is it a good rule of thumb for database IDs to be "meaningless?" Conversely, are there significant benefits from having IDs structured in a way where they can be recognized at a glance? What are the pros and cons?
    Background: I just had a debate with my coworkers about the consistency of the IDs in our database. We have a data-driven application that leverages Spring so that we rarely ever have to change code. That means, if there's a problem, a data change is usually the solution. My argument was that by making IDs consistent and readable, we save ourselves significant time and headaches, long term. Once the IDs are set, they don't have to change often and if done right, future changes won't be difficult. My coworkers' position was that IDs should never matter. Encoding information into the ID violates DB design policies and keeping them orderly requires extra work that, "we don't have time for." I can't find anything online to support either position. So I'm turning to all the gurus here at SA!
    Example: Imagine this simplified list of database records representing food in a grocery store; the first set represents data that has meaning encoded in the IDs, while the second does not:
    IDs with meaning:
      Type
        1 Fruit
        2 Veggie
      Product
        101 Apple
        102 Banana
        103 Orange
        201 Lettuce
        202 Onion
        203 Carrot
      Location
        41 Aisle four top shelf
        42 Aisle four bottom shelf
        51 Aisle five top shelf
        52 Aisle five bottom shelf
      ProductLocation
        10141 Apple on aisle four top shelf
        10241 Banana on aisle four top shelf
        // just by reading the IDs, it's easy to recognize that these are both Fruit on Aisle 4
    IDs without meaning:
      Type
        1 Fruit
        2 Veggie
      Product
        1 Apple
        2 Banana
        3 Orange
        4 Lettuce
        5 Onion
        6 Carrot
      Location
        1 Aisle four top shelf
        2 Aisle four bottom shelf
        3 Aisle five top shelf
        4 Aisle five bottom shelf
      ProductLocation
        1 Apple on aisle four top shelf
        2 Banana on aisle four top shelf
        // given the IDs, it's harder to see that these are both fruit on aisle 4
    Summary: What are the pros and cons of keeping IDs readable and consistent? Which approach do you generally prefer and why? Is there an accepted industry best-practice?
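    For what it's worth, here is a tiny sketch of what the "meaningful" composite IDs imply in code, assuming the convention in the example above (ProductLocation ID = product ID * 100 + location ID); the helper names are made up for illustration:

        #include <cstdio>

        struct ProductLocation { int productId; int locationId; };

        // Assumes the example's encoding, e.g. 10141 -> product 101, location 41.
        ProductLocation decode(int productLocationId) {
            return { productLocationId / 100, productLocationId % 100 };
        }

        int encode(int productId, int locationId) {
            return productId * 100 + locationId;
        }

        int main() {
            ProductLocation pl = decode(10141);
            std::printf("product=%d location=%d\n", pl.productId, pl.locationId); // product=101 location=41
            std::printf("encoded=%d\n", encode(102, 41));                         // 10241
            return 0;
        }

    The flip side is visible in the same arithmetic: the scheme silently breaks once there are more than 99 locations, which is one concrete form of the extra maintenance the "meaningless IDs" camp is worried about.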

    Read the article

  • Linq to find pair of points with longest length?

    - by Chris
    I have the following code:
    foreach (Tuple<Point, Point> pair in pointsCollection)
    {
        var points = new List<Point>() { pair.Value1, pair.Value2 };
    }
    Within this foreach, I would like to be able to determine which pair of points has the most significant length between the coordinates for each point within the pair. So, let's say that points are made up of the following pairs:
    (1) var points = new List<Point>() { new Point(0,100), new Point(100,100) };
    (2) var points = new List<Point>() { new Point(150,100), new Point(200,100) };
    So I have two sets of pairs, mentioned above. They both will plot a horizontal line. I am interested in knowing what the best approach would be to find the pair of points that has the greatest distance between them, whether it is vertically or horizontally. In the two examples above, the first pair of points has a difference of 100 between the X coordinates, so that would be the pair with the most significant difference. But if I have a collection of pairs of points, where some points will plot a vertical line and some points will plot a horizontal line, what would be the best approach for retrieving the pair from the set of points whose difference, again vertically or horizontally, is the greatest among all of the points in the collection? Thanks! Chris
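    The selection itself is just "maximize the distance between the two points of each pair". As a sketch of that logic (shown here in C++ rather than LINQ, so treat it as pseudocode for the approach; comparing squared distances avoids the square root):

        #include <algorithm>
        #include <utility>
        #include <vector>

        struct Point { double x, y; };
        using PointPair = std::pair<Point, Point>;

        // Squared distance between the two points of a pair; enough for comparisons.
        double squaredLength(const PointPair& p) {
            double dx = p.second.x - p.first.x;
            double dy = p.second.y - p.first.y;
            return dx * dx + dy * dy;
        }

        // Returns the pair with the greatest distance (assumes the collection is non-empty).
        PointPair longestPair(const std::vector<PointPair>& pairs) {
            return *std::max_element(pairs.begin(), pairs.end(),
                                     [](const PointPair& a, const PointPair& b) {
                                         return squaredLength(a) < squaredLength(b);
                                     });
        }

    The LINQ shape is the same idea: project each pair to its squared length and keep the pair with the maximum projection (for example with OrderByDescending(...).First(), or Aggregate to avoid the sort).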

    Read the article

  • CFHost DNS Resolution - When is it OK to use synchronous API?

    - by Jasarien
    I went to the iPhone Developer Tech Talk a few months ago and asked one of the gurus there about the lack of NSHost on the iPhone. Some code I was porting to the iPhone made significant use of NSHost throughout its networking code. I was told that NSHost is on the iPhone, but it's private. I was also told that NSHost is a synchronous API and that I shouldn't use it anyway. (If anyone could elaborate on why it shouldn't be used, as a bonus, that'd be great.) I can see the caveats of using synchronous APIs on the main thread in that they'll block until complete - and that's never a good thing with network code because there are so many factors that could cause the API to block the thread for a significant amount of time. My solution was to write a wrapper around CFHost's asynchronous resolution functions. The result works quite well, and I'm considering releasing it into the public domain. But my question is this: Say my app only resolves a hostname once per run, during the connecting phase, and then caches it for the rest of the session. During the time it is resolving, a modal screen is shown telling the user "Connecting" with a nice spinner. Does it really matter whether or not the resolution is asynchronous? The user has to wait to connect anyway, and the resolution is only done on the first connection. Subsequent connections use the cached result of the resolution. When is it OK to be synchronous and when should things be asynchronous?

    Read the article

  • What is best practice (and implications) for packaging projects into JAR's?

    - by user245510
    What is considered best practice when deciding how to define the set of JARs for a project (for example a Swing GUI)? There are many possible groupings:
      • JAR per layer (presentation, business, data)
      • JAR per (significant?) GUI panel. For a significant system, this results in a large number of JARs, but the JARs are (should be) more re-usable - fine-grained granularity
      • JAR per "project" (in the sense of an IDE project); "common.jar", "resources.jar", "gui.jar", etc.
    I am an experienced developer; I know the mechanics of creating JARs, I'm just looking for wisdom on best practice. Personally, I like the idea of a JAR per component (e.g. a panel), as I am mad-keen on encapsulation, and the holy grail of re-use across projects. I am concerned, however, that on a practical, performance level, the JVM would struggle class loading over dozens, maybe hundreds of small JARs. Each JAR would contain: the GUI panel code, and the necessary resources (i.e. not centralised) so each panel can stand alone. Does anyone have wisdom to share?

    Read the article

  • What is faster- Java or C# (Or good old C)?

    - by Rexsung
    I'm currently deciding on a platform to build a scientific computational product on, and am deciding between C#, Java, or plain C with Intel's compiler on Core2 Quad CPUs. It's mostly integer arithmetic. My benchmarks so far show Java and C are about on par with each other, and dotNET/C# trails by about 5% - however, a number of my coworkers are claiming that dotNET with the right optimizations will beat both of these given enough time for the JIT to do its work. I always assume that the JIT would have done its job within a few minutes of the app starting (probably a few seconds in my case, as it's mostly tight loops), so I'm not sure whether to believe them. Can anyone shed any light on the situation? Would dotNET beat Java? (Or am I best just sticking with C at this point?) The code is highly multithreaded and the data sets are several terabytes in size. Haskell/Erlang etc. are not options in this case, as there is a significant quantity of existing legacy C code that will be ported to the new system, and porting C to Java/C# is a lot simpler than to Haskell or Erlang. (Unless of course these provide a significant speedup.) Edit: We are considering moving to C# or Java because they may, in theory, be faster. Every percent we can shave off our processing time saves us tens of thousands of dollars per year. At this point we are just trying to evaluate whether C, Java, or C# would be faster.

    Read the article

  • 4.0/WCF: Best approach for bi-idirectional message bus?

    - by TomTom
    Just a technology update, now that .NET 4.0 is out. I am writing an application that communicates with the server through what is basically a message bus (instead of method calls). This is based on the internal architecture of the application (which is multi-threaded, passing the messages around). There are a limited number of messages that go from the client to the server, and quite a lot more from the server to the client. Most of those can be handled via a separate specialized mechanism, but in the end we are talking about possibly 10-100 small messages per second going from the server to the client. The client is supposed to operate under "internet conditions". This means possibly home end users behind standard NAT devices (i.e. typical DSL routers) - a firewalled, secure and thus "open" network can not be assumed. I want as little latency and as little overhead for the communication as possible. What is the technologically best way to handle the message bus callback? I have no problem regularly calling the server for message delivery if something needs to be sent... ...but what are my options for handling the messages from the server to the client? How does WsDualHttp work, especially under a NAT scenario? Just as a note: polling is most likely out - the main problem here is that I would have either a significant overhead OR a significant delay, and neither is really wanted. Technically I would love some sort of streaming approach, where the server can write messages to a stream as it generates them and they get sent to the client as they come. Not sure this is doable with WCF, though (if not, I may actually decide to handle the whole message part outside of WCF and just do control / login / setup / destruction via WCF).

    Read the article

  • Pseudo code for instruction description

    - by Claus
    Hi, I am just trying to fiddle around with what is the best and shortest way to describe two simple instructions in C-like pseudo code. The extract instruction is defined as follows:
    extract rd, rs, imm
    This instruction extracts the appropriate byte from the 32-bit source register rs and right-justifies it in the destination register. The byte is specified by imm and thus can take the values 0 (for the least-significant byte) to 3 (for the most-significant byte).
    rd = 0x0;                                  // zero-extend the result, i.e. make sure bits 31 to 8 are set to zero
    rd = (rs & (0xff << (8*imm))) >> (8*imm);  // extract the appropriate byte and store it, right-justified, in rd
    The insert instruction can be regarded as the inverse operation: it takes a right-justified byte from the source register rs and deposits it in the appropriate byte of the destination register rd; again, this byte is determined by the value of imm.
    tmp = (rs & 0xff) << (8*imm);              // shift the byte to the position selected by imm
    rd  = rd & ~(0xff << (8*imm));             // set that byte to zero in rd
    rd  = rd XOR tmp;                          // XOR the byte into the destination register
    This all looks a bit horrible, so I wonder if there is a slightly more elegant way to describe this behaviour in C-like style ;) Many thanks, Claus
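    For what it's worth, a slightly more compact form is to shift the register first so the masks become constants; a minimal sketch, written as real C++ helpers rather than pseudo code:

        #include <cstdint>

        // extract rd, rs, imm : byte imm of rs, right-justified and zero-extended
        inline std::uint32_t extract(std::uint32_t rs, unsigned imm) {
            return (rs >> (8 * imm)) & 0xffu;
        }

        // insert rd, rs, imm : deposit the low byte of rs into byte imm of rd
        inline std::uint32_t insert(std::uint32_t rd, std::uint32_t rs, unsigned imm) {
            return (rd & ~(0xffu << (8 * imm))) | ((rs & 0xffu) << (8 * imm));
        }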

    Read the article

  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSs. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents but are also very attractive for their ability to scale out and query a cluster in parallel. Document-oriented databases also hold a couple of advantages over RDBMSs for persisting data models in object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side-by-side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change. On the other hand, the performance benefits of document-oriented databases mainly appear to come about when storing deeper documents. In object-oriented terms, classes which are composed of other classes, for example, a blog post and its comments. In most of the examples of this I can come up with though, such as the blog one, the gain in read access would appear to be offset by the penalty in having to write the whole blog post "document" every time a new comment is added. It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but this means knowing the use cases up front. In the real world, we often don't know until we actually have a live implementation we can profile. So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular if anyone has built any significant applications on a document-oriented database.

    Read the article

  • Doing without partial commits the "Mercurial way"

    - by David Moles
    Subversion shop considering switching to Mercurial, trying to figure out in advance what all the complaints from developers are going to be. There's one fairly common use case here that I can't see how to handle. I'm working on some largish feature, and I have a significant part of the code -- or possibly several significant parts of the code -- in pieces all over the garage floor, totally unsuitable for checkin, maybe not even compiling. An urgent bugfix request comes in. The fix is nice and local and doesn't touch any of the code I've been working on. I make the fix in my working copy. Now what? I've looked at "Mercurial cherry picking changes for commit" and "best practices in mercurial: branch vs. clone, and partial merges?" and all the suggestions seem to be extensions of varying complexity, from Record and Shelve to Queues. The fact that there apparently isn't any core functionality for this makes me suspect that in some sense this working style is Doing It Wrong. What would a Mercurial-like solution to this use case look like?

    Read the article

  • How do I add code automatically to a derived function in C++

    - by Ian
    I have code that's meant to manage operations on both a networked client and a server, since there is significant overlap between the two. However, there are a few functions here and there that are meant to be exclusively called by the client or server, and accidentally calling a client function on the server (or vice versa) is a significant source of bugs. To reduce these sorts of programming errors, I'm trying to tag functions so that they'll raise a ruckus if they're misused. My current solution is a simple macro at the start of each function that calls an assert if the client or server accesses members they shouldn't. However, this runs into problems when there are multiple derived instances of classes, in that I have to tag the implementation as client or server side in EVERY child class. What I'd like to be able to do is put a tag in the virtual member's signature in the base class, so that I only have to tag it once and not run into errors by forgetting to do it repeatedly. I've considered putting a check in a base class implementation and then referring to it with something like base::functionName, but that runs into the same issue as far as needing to manually add the function call to every implementation. Ideally, I'd be able to have parent versions of the function called automatically like default constructors do. Does anybody know how to achieve something like this in C++? Is there an alternate approach I should be considering? Thanks!
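    One common way to get the "tag once, in the base class" behaviour is the non-virtual interface (NVI) idiom: the base class exposes a public non-virtual function that performs the client/server check and then forwards to a protected virtual that derived classes override. A minimal sketch, with the role tracking reduced to a hypothetical g_currentRole variable standing in for however the real code knows which side it is on:

        #include <cassert>

        enum class Role { Client, Server };
        Role g_currentRole = Role::Client;  // stand-in for the real client/server flag

        class NetworkedObject {
        public:
            // Public, non-virtual entry point: the client-only tag lives here, once.
            void clientOnlyOperation() {
                assert(g_currentRole == Role::Client && "client-only function called on the server");
                doClientOnlyOperation();  // forward to the overridable implementation
            }
            virtual ~NetworkedObject() = default;

        protected:
            // Derived classes override this and never need to repeat the check.
            virtual void doClientOnlyOperation() = 0;
        };

        class Player : public NetworkedObject {
        protected:
            void doClientOnlyOperation() override {
                // client-side behaviour goes here
            }
        };

    Because external callers can only reach the virtual through the checked wrapper, a new subclass cannot forget the tag, which is essentially the "parent version called automatically" behaviour asked about above.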

    Read the article

  • How to convert Big Endian and how to flip the highest bit?

    - by Robert Frank
    I am using a TStream to read binary data (thanks to this post: http://stackoverflow.com/questions/2878180/how-to-use-a-tfilestream-to-read-2d-matrices-into-dynamic-array). My next problem is that the data is Big Endian. From my reading, the Swap() method is seemingly deprecated. How would I swap the types below?
      • 16-bit two's complement binary integer
      • 32-bit two's complement binary integer
      • 64-bit two's complement binary integer
      • IEEE single precision floating-point - is IEEE affected by Big Endian?
    And, finally, since the data is unsigned, the creators of this dataset have stored the unsigned values as signed integers (excluding the IEEE). They instruct that one need only add an offset (2^15, 2^31, and 2^63) to recover the unsigned data. But they note that flipping the most significant bit is the fastest way to do that. How does one efficiently flip the most significant bit of a 16-, 32-, or 64-bit integer? So, if the data on disk (16-bit) is "85 FB", the desired result after reading the data, swapping and bit flipping would be 1531. Is there a way to accomplish the swapping and bit flipping with generics so it fits into the generic answer at the link above? Yes, kids, THIS is how scientific astronomical data is stored by NASA, ESO, and all professional astronomers. This FITS standard is considered by some to be one of the most successful standards ever created in its proliferation and flexibility!
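    The question is about Delphi, but the byte-level logic is language-neutral, so here is a minimal sketch of the 16-bit case in C++ (flipping the most significant bit is the same as adding the 2^15 offset modulo 2^16, which is why the dataset's authors recommend it):

        #include <cstdint>
        #include <cstdio>

        // Swap the two bytes of a 16-bit value read from a big-endian source.
        std::uint16_t swap16(std::uint16_t v) {
            return static_cast<std::uint16_t>((v >> 8) | (v << 8));
        }

        int main() {
            // Bytes on disk: 85 FB (big-endian). A raw read on a little-endian
            // machine yields 0xFB85, so swap first, then flip bit 15.
            std::uint16_t raw    = 0xFB85;
            std::uint16_t value  = swap16(raw);      // 0x85FB
            std::uint16_t result = value ^ 0x8000;   // flip the most significant bit -> 0x05FB
            std::printf("%u\n", result);             // prints 1531
            return 0;
        }

    The 32- and 64-bit cases follow the same pattern: reverse the byte order, then XOR with 0x80000000 or 0x8000000000000000 respectively.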

    Read the article

  • Excel - Best Way to Connect With Access Data

    - by gamerzfuse
    Hello there, here is the situation we have:
    a) I have an Access database / application that records a significant amount of data. Significant fields would be hours, # of sales, # of unreturned calls, etc.
    b) I have an Excel document that connects to the Access database and pulls data in to visualize it.
    As it stands now, the Excel file has a Refresh button that loads new data. The data is loaded into a large PivotTable. The main 'visual form' then uses VLOOKUP to get the results from the form, based on the related hours. This operation is slow (~10 seconds) and seems to be redundant and inefficient. Is there a better way to do this? I am willing to go just about any route - just need directions. Thanks in advance!
    Update: I have confirmed (due to helpful comments/responses) that the problem is with the data loading itself. Removing all the VLOOKUPs only took a second or two out of the load time. So the question stands: how can I rapidly and reliably get the data without so much time involvement (it loads around 3000 records into the PivotTables)?

    Read the article

  • Creating Binary Block from struct

    - by MOnsDaR
    I hope the title is describing the problem; I'll change it if anyone has a better idea. I'm storing information in a struct like this:
    struct AnyStruct
    {
        AnyStruct() :
            testInt(20),
            testDouble(100.01),
            testBool1(true),
            testBool2(false),
            testBool3(true),
            testChar('x')
        {}

        int testInt;
        double testDouble;
        bool testBool1;
        bool testBool2;
        bool testBool3;
        char testChar;

        std::vector<char> getBinaryBlock()
        {
            // how to build that?
        }
    };
    The struct should be sent via network in a binary byte-buffer with the following structure:
    Bit 00-31:  testInt
    Bit 32-61:  testDouble most significant portion
    Bit 62-93:  testDouble least significant portion
    Bit 94:     testBool1
    Bit 95:     testBool2
    Bit 96:     testBool3
    Bit 97-104: testChar
    According to this definition the resulting std::vector should have a size of 13 bytes (char == byte). My question now is how I can form such a packet out of the different datatypes I've got. I've already read through a lot of pages and found datatypes like std::bitset or boost::dynamic_bitset, but neither seems to solve my problem. I think it is easy to see that the above code is just an example; the original standard is far more complex and contains more different datatypes. Solving the above example should solve my problems with the complex structures too, I think. One last point: the problem should be solved just by using standard, portable language-features of C++ like STL or Boost (
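    One way to build such a block by hand is a small MSB-first bit writer that appends each field's bits to a std::vector<char>. The BitWriter below is a made-up helper for illustration (not a standard or Boost class), and it packs the fields at their natural widths (32 + 64 + 3 + 8 bits) rather than the exact bit split of the double listed above - a minimal sketch of the mechanism, not the final wire format:

        #include <cstdint>
        #include <cstring>
        #include <vector>

        // Hypothetical helper: appends bits to a byte buffer, most significant bit first.
        class BitWriter {
        public:
            void append(std::uint64_t value, unsigned bits) {
                for (int i = static_cast<int>(bits) - 1; i >= 0; --i)
                    appendBit(static_cast<unsigned>((value >> i) & 1u));
            }
            std::vector<char> take() { return buffer_; }

        private:
            void appendBit(unsigned bit) {
                if (bitPos_ == 0) buffer_.push_back(0);
                if (bit) buffer_.back() |= static_cast<char>(0x80u >> bitPos_);
                bitPos_ = (bitPos_ + 1) % 8;
            }
            std::vector<char> buffer_;
            unsigned bitPos_ = 0;
        };

        // Packs an AnyStruct (as defined in the question above) field by field.
        std::vector<char> packAnyStruct(const AnyStruct& s) {
            BitWriter w;
            w.append(static_cast<std::uint32_t>(s.testInt), 32);
            std::uint64_t rawDouble;
            std::memcpy(&rawDouble, &s.testDouble, sizeof rawDouble);  // raw IEEE-754 bits
            w.append(rawDouble, 64);
            w.append(s.testBool1 ? 1u : 0u, 1);
            w.append(s.testBool2 ? 1u : 0u, 1);
            w.append(s.testBool3 ? 1u : 0u, 1);
            w.append(static_cast<unsigned char>(s.testChar), 8);
            return w.take();
        }

    From here, honouring an exact bit layout is just a matter of choosing which bits of each field to append and in what order.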

    Read the article

  • Animate screen while loading textures

    - by Omega
    My RPG-like game has random battles. When the player enters a random battle, it is necessary for my game to load the textures used within that battle (animated monsters, animations, etc). There are quite a lot of textures, and they are rather big (the battles are very graphics-intensive). Loading them consumes significant time, and while it is loading, the whole screen freezes. The game's map freezes, and the wait time is significant - I personally find it annoying. I can't afford to preload the textures because, after doing some math, I realized: if I preload all the textures at the beginning of the game, the application will definitely crash; if I preload the textures that are used in a specific map when the player enters the map, the application is very likely to crash as well. I can only afford to load the textures when I need them, and dispose of them as soon as the battle ends. I'd prefer not to use a "loading screen" image because it affects my game's design and concept. I want to avoid this approach. If I could do some kind of animation while loading the textures, it would be great, which leads to my question: is that possible? What kind of animation, you ask? Well, how about... you remember when Final Fantasy used to distort the screen while apparently loading the textures? Something like that. But distorting is quite a time-consuming process as well, so maybe just a cool frame-by-frame animation or something. While writing this, I realized that I could make small pauses between textures (there are multiple textures), and during such pauses, update the screen to reflect the animation's state. However, this is very unlikely to work, because each texture is 2048x2048, so the animation would be refreshed at a rather laggy (and annoying) rate. I'd prefer to avoid this as well.

    Read the article

  • Reading XML or objects from a Web service

    - by Shawn
    This is my first time working with web services and I am a bit lost. I successfully called the functions, but I can only get one value from the service. I read that the easiest way is to read XML or create objects and then call their values. Currently I use functions that return the desired value, but I need to call them 3 times to get all the data, which is a waste of time and resources. I tried to call the service via its URL and use it like a website, or to get the service to work the same way without importing it into the program. The thing is that I can't find a way to pass the values into the URL; because of that I get only blank pages. What is the fastest way to get my data from the service? I need the city name, the temperature and a flag indicating whether the city is valid. I need to pass the ZIP code to the service. Thank you.
    My current code:
    wetther.Weather wether = new wetther.Weather();
    string farenhait = wether.GetCityWeatherByZIP(zip).Temperature;
    string city = wether.GetCityWeatherByZIP(zip).City;
    bool correct = wether.GetCityWeatherByZIP(zip).Success;
    I tried it that way:
    // Retrieve XML document
    XmlTextReader reader = new XmlTextReader("http://xml.weather.yahoo.com/forecastrss?p=94704");
    // Skip non-significant whitespace
    reader.WhitespaceHandling = WhitespaceHandling.Significant;
    // Read nodes one at a time
    while (reader.Read())
    {
        // Print out info on node
        Console.WriteLine("{0}: {1}", reader.NodeType.ToString(), reader.Name);
    }
    This one works for the Yahoo page but not for mine. I need to use this web service - http://wsf.cdyne.com/WeatherWS/Weather.asmx

    Read the article

  • Is there any way to configure what reCAPTCHA is actually displaying?

    - by trejder
    Is there any way to control what kind of image is displayed to the user in reCAPTCHA, or what kind of puzzle he/she is required to solve? I have noticed at least two significant changes to what reCAPTCHA is serving (and I must admit that I don't much like these changes):
    For years reCAPTCHA was serving two words from scanned books and the user was required to solve one of them. They were clearly readable (even those "second" ones, that could be omitted) and there was nearly no problem in solving them as a human.
    For the past few months, I have noticed a significant change at all of my sites that are using reCAPTCHA. They started to show a combination of a computer-generated long number string and something that looks to me like a street/house number photographed by Google Street View. They're even easier to solve, but what is most important - it started to happen more and more often that the user is required to solve both of them.
    Now, I have noticed another change/regression. Some of my sites remain at so-called "level 2" (like above) and some of them have started to serve two words again ("level 1"?). And again, there are more and more situations where solving both words is required. But, what is most important, on this "level" the words are nearly impossible to solve (on my old mobile devices with a 3.5'' display I need 5-6 attempts to get through!). They're cluttered, written in some strange font, mostly in italics, with a lot of black and white stains or drops on the letters etc. Plus: reCAPTCHA has stopped being consistent - some of my pages are still serving "level 2" while some of them are "killing" end users with the need to solve "level 3". Is there any way I can control this - force it to use only "level 2", and on all my pages? (Of course, I'm using exactly the same piece of code to serve reCAPTCHA on all my pages.) Note that I'm not asking for something like in this question. I don't want to change what reCAPTCHA shows (to disable words in favor of only numbers, for example). I only want to control which "version" of puzzle (among those described above) reCAPTCHA shows, and I want to make it consistent across all my sites.

    Read the article

  • A question of long-running and disruptive branches

    - by Matt Enright
    We are about to begin prototyping a new application that will share some existing infrastructure assemblies with an existing application, and also involve a significant subset of the existing domain model. Parts of the domain model will likely undergo some serious changes for this new application, and the endgame for all of this, once the new application has been fully specified and is launch-ready, is that we would like to re-unify the models of the two applications (as well as share a database, link functionality, etc.). But for the duration of development, prototyping, etc., we will be using a separate database so that we can change things without worrying about impact to development or use of the existing application. Since it is a prototype, there will be a pretty long window during which serious changes or rearchitecting can occur as product management experiments with different workflows, different customer bases are surveyed, and we try to keep up. We have already made a Subversion branch, so as to not impact concurrent development on the mature application, and are toying with 2 potential ways of moving forward with this:
    1. Use the svn branch as the sole mechanism of separation. Make our changes to the existing domain models, and evaluate their impact on the existing application (and make requisite changes to ProjectA) when we have established that our long-running side branch is stable enough for re-entry to trunk.
    2. "Fork" the shared code (temporarily): Copy ProjectA.Entities to NewProject.Entities, and treat all of the NewProject code as self-contained. When all of the perturbations around the model have died down and we feel satisfied, manually re-integrate the changes (as granular or sweeping as warranted) back into ProjectA.Entities, updating ProjectA to use the improved models at each step (this can take place either before or after the subversion merge has occurred). The subversion merge will then not handle recombination of any of the heavy changes here.
    Note: the "fork" method only applies to the code we see significant changes in store for, and whose modification will break ProjectA - shared infrastructure stuff, for example, we would just modify in place (on our branch) and let the merge sort out. Development is hard, go shopping. Naturally, after not coming to an agreement, we're turning it over to the oracle of power that is SO. Any experience with any of these methods, pain points to watch out for, something new entirely?

    Read the article

  • what is acceptable datastore latency on VMware ESXi host?

    - by BeowulfNode42
    Looking at the performance figures on our existing VMware ESXi 4.1 host, under Datastore/Real-time performance data:
    Write Latency: Avg 14 ms, Max 41 ms
    Read Latency: Avg 4.5 ms, Max 12 ms
    People don't seem to be complaining too much about it being slow with those numbers. But how much higher could they get before people found it to be a problem? We are reviewing our head office systems due to running low on storage space, and are tossing up between buying a 2nd VM host with DAS or buying some sort of NAS for SMB file shares in the near term, and maybe running VMs from it in the longer term. Currently we have just under 40 staff at head office with 9 smaller branches spread across the country. Head office is running in an MS RDS session-based environment with Linux ERP and mail systems. In total there are 22 VMs on a single host with DAS made from a RAID 10 of 6x 15k SAS disks.

    Read the article

  • Set certain WSUS updates to auto-install

    - by Nicolas
    We're running a WSUS server for the simple purpose of caching updates. Since we are a very small network of all "power users", we've got the domain group policy for WSUS updates on the clients set to prompt for download/install, i.e. we don't want updates to install without our knowledge. But there are a few cases where it would be nice to be able to set a certain update to auto-install, e.g. Windows Defender updates, the Malicious Software Removal Tool, the Outlook Junk Email Filter, etc. Basically all the silly little updates that you would always install anyway and that don't require a restart. Is there a way to set the general policy to prompt for download/install, but auto-install certain regular updates? P.S. WSUS itself does have the facility to auto-approve certain updates. That part works.
    Facts & figures:
      • SBS 2003 domain
      • Windows 7 Pro clients
      • Windows XP Pro clients

    Read the article

  • Excel 2007: how to work out percentages of groups (top 10% of...)

    - by Mike
    I've recently read the following paragraph and wondered how you would organise the data (possibly Column A = country, Column B = salary, Column C = tax paid), and what formulas/calculations you would use to work out these types of % figures:
    "In country Y the top 0.5% of taxpayers pay 17% of total income tax. In country X the top 0.1% of taxpayers pay 8% of total income tax and in country Z, the top 1% pay about 40% of total federal income tax."
    I've gone through the help files and searched within Excel websites but I'm struggling to find an answer. %'s interest and trouble me... Any pointers or examples very welcome. Thanks Mike

    Read the article
