Search Results

Search found 2565 results on 103 pages for 'reduce'.

Page 59/103 | < Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >

  • Problem with stackless python, cannot write to a dict

    - by ANON
    I have a simple map-reduce-type algorithm which I want to implement in Python, making use of multiple cores. I read somewhere that threads created with the native thread module in 2.6 don't make use of multiple cores. Is that true? I even implemented it using Stackless Python, however I am getting weird errors. [Update: a quick search showed that Stackless does not allow the use of multiple cores either, so are there any other alternatives?]

        def Propagate(start, end):
            print "running Thread with range: ", start, end

            def maxVote(nLabels):
                count = {}
                maxList = []
                maxCount = 0
                for nLabel in nLabels:
                    if nLabel in count:
                        count[nLabel] += 1
                    else:
                        count[nLabel] = 1
                    # Check if the count is max
                    if count[nLabel] > maxCount:
                        maxCount = count[nLabel]
                        maxList = [nLabel, ]
                    elif count[nLabel] == maxCount:
                        maxList.append(nLabel)
                return random.choice(maxList)

            for num in range(start, end):
                node = MapList[num]
                nLabels = [Label[k] for k in Adj[node]]
                if nLabels != []:
                    Label[node] = maxVote(nLabels)
                else:
                    Label[node] = node

    However, in the above code the values assigned to Label (that is, the changes to the dictionary) are lost. The Propagate function above is used as the callable for microthreads (i.e. tasklets).
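    A minimal sketch of one alternative, assuming the goal is simply to spread the per-range work over several cores: the multiprocessing module (available since 2.6) runs separate processes, so each worker has to return its updates instead of mutating a shared Label dict (changes made in a child process are not visible to the parent). The MapList, Adj and Label names below are stand-ins for the module-level data in the question.

        import random
        from multiprocessing import Pool

        def max_vote(n_labels):
            # Count occurrences and pick randomly among the most frequent labels.
            counts = {}
            for label in n_labels:
                counts[label] = counts.get(label, 0) + 1
            top = max(counts.values())
            return random.choice([l for l, c in counts.items() if c == top])

        def propagate(chunk):
            # Return the new labels for this chunk rather than writing to Label.
            updates = {}
            for node in chunk:
                n_labels = [Label[k] for k in Adj[node]]
                updates[node] = max_vote(n_labels) if n_labels else node
            return updates

        if __name__ == "__main__":
            chunks = [MapList[i:i + 1000] for i in range(0, len(MapList), 1000)]
            pool = Pool()
            partial_results = pool.map(propagate, chunks)   # runs across all cores
            pool.close()
            pool.join()
            for updates in partial_results:
                Label.update(updates)                       # merge in the parent

    Whether this pays off depends on how expensive each chunk is relative to the cost of shipping the results back between processes.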

    Read the article

  • Normalize or Denormalize in high traffic websites

    - by Inam Jameel
    What is the best practice for database design for high-traffic websites like Stack Overflow? Should one use a normalized database for record keeping, a denormalized one, or a combination of both? Is it sensible to design a normalized database as the main database for record keeping, to reduce redundancy, and at the same time maintain a denormalized form of the database for fast searching? Or should the main database be denormalized, with normalized views built at the application level for fast database operations? Or is there an approach besides the ones mentioned above? What is the best practice for designing high-traffic websites?

    Read the article

  • VB Script - Unmount USB drive

    - by user736887
    I have a tedious project coming up. I need to insert a USB flash drive into a computer, copy three files over to that drive, unmount it, and repeat 3000 times (literally). I was hoping to come up with some VBScript that can reduce my input to just (1) inserting the USB drive and (2) double-clicking the .vbs file, then removing the flash drive. I figure it isn't too difficult to come up with the copy-and-paste part of the code as long as I am inserting the drive into the same port every time. Is this assumption correct? However, the real problem is unmounting/ejecting the USB drive. Is there any simple VBScript code that can accomplish this? Thank you, -Corey

    Read the article

  • Core Data store corruption

    - by sehugg
    A handful of customers for my iPhone app are experiencing Core Data store corruption (I assume so, since the error is "Failed to save to data store: Operation could not be completed. (Cocoa error 259.)"). Has anyone else experienced this kind of store corruption? I am worried since I aim to push an update soon which performs a schema migration, and I fear that this will expose even more problems. I had assumed that the Core Data/SQLite APIs use atomic operations and are immune to corruption except when the underlying filesystem experiences corruption. Is there a way to reduce/prevent corruption, or at least a good way to reproduce it (I have been unsuccessful thus far)?

    Read the article

  • Using non primitive types in ServiceOperation for WCF Data Service (3.5SP1)

    - by Nix
    Is there any way at all to create a "mock" entity type for use in a WCF service operation? We have some queries we need to optimize by exposing them as a ServiceOperation. The problem is that doing so would result in a very long list of primitive-typed parameters, e.g.

        SomeoneHelpMe(int time, string name, string address, string i, string purple,
                      string foo, int stillGoing, int tooMany, etc...)

    and we really need to reduce this to

        SomeoneHelpedMe(CustomEntityNotMappedToAnything e)

    This would also help us when it comes time to write some complex queries, since there is a 3-parameter limitation. I saw that this will be possible in 4.0 using "complex types", but I am still in the 3.5 SP1 world. Let me know if anyone needs more information.

    Read the article

  • Reducing piracy of iPhone applications

    - by Alex Reynolds
    What are accepted methods to reduce iPhone application piracy, which do not violate Apple's evaluation process? If my application "phones home" to provide the unique device ID on which it runs, what other information would I need to collect (e.g., the Apple ID used to purchase the application) to create a valid registration token that authorizes use of the application? Likewise, what code would I use to access that extra data? What seem to be the best available technical approaches to this problem, at the present time? (Please refrain from non-programming answers about how piracy is inevitable, etc.)

    Read the article

  • Single specific string replace method Objective C

    - by Sam
    Hi guys, I wanted to know if there's a single method or approach that will help me replace strings with specific abbreviations, like MALE - M, FEMALE - F, CHILD - P. The longer way out is this:

        [str stringByReplacingOccurrencesOfString:@"MALE" withString:@"M"];
        [str stringByReplacingOccurrencesOfString:@"FEMALE" withString:@"F"];
        [str stringByReplacingOccurrencesOfString:@"CHILD" withString:@"P"];

    I was wondering if there's another way in which I can reduce the lines of code here, especially when there are lots of things to replace. Thanks. This is for iPhone OS.
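    A concept sketch of the usual fix, written in Python rather than Objective-C: keep the substitutions in a table and loop over it, so adding a new replacement means adding one entry rather than one more line of code. The same idea maps to an NSDictionary plus a loop; note that the order matters here because "MALE" is a substring of "FEMALE".

        # Hypothetical table of replacements; order matters because
        # "MALE" is a substring of "FEMALE".
        replacements = [("FEMALE", "F"), ("MALE", "M"), ("CHILD", "P")]

        def abbreviate(text):
            for old, new in replacements:
                text = text.replace(old, new)
            return text

        print(abbreviate("FEMALE CHILD"))  # -> "F P"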

    Read the article

  • Is generic Money<TAmount> a good implementation idea?

    - by jdk
    I have a Money type that allows math operations and is sensitive to exchange rates, so it will reduce one currency to another if a rate is available to calculate in a given currency, and it rounds by various methods. It has other features that are sensitive to money, but I need to ask whether the basic data type used to hold an amount should be made generic. I've realized that this basic type may differ between financial situations, for example: retail money might be expressed entirely in cents using int or long, where fractions of cents do not matter; decimal is commonly used for its fixed-point behaviour; sometimes double seems to be used for big finance and large values; and sometimes a special BigInteger or third-party type is used. I want to know whether it would be considered good form to turn Money into Money<TAmount> so it can be used in any one of the above scenarios.

    Read the article

  • rdlc - phantom page break, what to check?

    - by Antonio Nakic Alfirevic
    I have an RDLC report which has some controls on the first page; they are inside a rectangle and display OK. Beneath the rectangle I have a matrix, which spans more than one page both in width and in height. I want the matrix to start rendering on the second page. If I enable "insert break before" on the matrix, there is an extra blank page before the matrix (in print layout), which is my problem. If I reduce the amount of data so the matrix does not span more than one page in width, there is no blank page and all is well. I checked the Page and Body sizes; they are OK. Any tips? This has been driving me crazy all day. What can I check? Thx

    Read the article

  • WCF - Network Cost

    - by Mubashar Ahmad
    Dear devs, I have a WCF service deployed on IIS with basicHttpBinding and aspNetCompatibilityEnabled=true. I have a test client as well, which invokes multiple service functions simultaneously. To check the performance of service calls on the client and on the server, I calculated the average time it takes to complete a service request on the client (in the proxy code) and on the server as well. After a test of 8 hours (server and client were on the same machine) I found that the average response time on the client is around 34 ms, whereas the average execution time on the server is around 3 ms, so the difference is 31 ms. I would like to know why every call is taking an extra 31 ms. Is that justified, and how can I reduce it?

    Read the article

  • Eclipse refresh taking too long

    - by Nash0
    I am doing TDD on a large Java project in Eclipse and am finding it frustrating, because every time I run a test I have to wait 30+ seconds for Eclipse to compile and refresh. I estimate that 80%+ of that time is spent refreshing. Is there a way I can drastically reduce the amount of refreshing it is doing? I have looked at several other similar questions but could not see anything that helps. One way I reduced the compile/refresh time was to split the unit tests and code into separate projects. There are 4,700 classes in the src project and 300 in the tests. I am running Eclipse 3.5.1 on Java 1.6.0_17-b04 (eclipse.vm). My computer is running Windows XP with 3.1 GB of usable RAM. The only plugin I have installed is Subclipse.

    Read the article

  • How to measure the time HTTP requests spend sitting in the accept-queue?

    - by David Jones
    I am using Apache2 on Ubuntu 9.10, and I am trying to tune my configuration for a web application to reduce latency of responses to HTTP requests. During a moderately heavy load on my small server, there are 24 apache2 processes handling requests. Additional requests get queued. Using "netstat", I see 24 connections are ESTABLISHED and 125 connections are TIME_WAIT. I am trying to figure out if that is considered a reasonable backlog. Most requests get serviced in a fraction of a second, so I am assuming requests move through the accept-queue fairly quickly, probably within 1 or 2 seconds, but I would like to be more certain. Can anyone recommend an easy way to measure the time an HTTP request sits in the accept-queue? The suggestions I have come across so far seem to start the clock after the apache2 worker accepts the connection. I'm trying to quantify the accept-queue delay before that. thanks in advance, David Jones

    Read the article

  • In Clojure - How do I access keys in a vector of structs

    - by Nick
    I have the following vector of structs:

        (defstruct
          #^{:doc "Basic structure for book information."}
          book :title :authors :price)

        (def #^{:doc "The top ten Amazon best sellers on 16 Mar 2010."}
          best-sellers
          [(struct book "The Big Short" ["Michael Lewis"] 15.09)
           (struct book "The Help" ["Kathryn Stockett"] 9.50)
           (struct book "Change Your Prain, Change Your Body" ["Daniel G. Amen M.D."] 14.29)
           (struct book "Food Rules" ["Michael Pollan"] 5.00)
           (struct book "Courage and Consequence" ["Karl Rove"] 16.50)
           (struct book "A Patriot's History of the United States" ["Larry Schweikart","Michael Allen"] 12.00)
           (struct book "The 48 Laws of Power" ["Robert Greene"] 11.00)
           (struct book "The Five Thousand Year Leap" ["W. Cleon Skousen","James Michael Pratt","Carlos L Packard","Evan Frederickson"] 10.97)
           (struct book "Chelsea Chelsea Bang Bang" ["Chelsea Handler"] 14.03)
           (struct book "The Kind Diet" ["Alicia Silverstone","Neal D. Barnard M.D."] 16.00)])

    I would like to sum the prices of all the books in the vector. What I have is the following:

        (defn get-price
          "Same as print-book but handling multiple authors on a single book"
          [{:keys [title authors price]}]
          price)

    Then I:

        (reduce + (map get-price best-sellers))

    Is there a way of doing this without mapping the "get-price" function over the vector? Or is there an idiomatic way of approaching this problem?

    Read the article

  • appending to cursor in oracle

    - by Omnipresent
    I asked a question yesterday which got answers, but they didn't address the main point. I wanted to reduce the amount of time it took to do a MINUS operation. Now I'm thinking about doing the MINUS operation in blocks of 5000, appending each iteration's results to the cursor and finally returning the cursor. I have the following:

        V_CNT      NUMBER := 0;
        V_INTERVAL NUMBER := 5000;
        BEGIN
          SELECT COUNT(1) INTO V_CNT FROM TABLE_1;
          WHILE (V_CNT > 0) LOOP
            OPEN cv_1 FOR
              SELECT A.HEAD, A.EFFECTIVE_DATE
                FROM TABLE_1 A
               WHERE A.TYPE_OF_ACTION = '6'
                 AND A.EFFECTIVE_DATE >= ADD_MONTHS(SYSDATE, -15)
                 AND ROWNUM <= V_INTERVAL
              MINUS
              SELECT B.HEAD, B.EFFECTIVE_DATE
                FROM TABLE_2 B
               WHERE ROWNUM <= V_INTERVAL;
            V_CNT := V_CNT - V_INTERVAL;
          END LOOP;
        END;

    However, as you see, in each iteration the cursor is overwritten. How can I change the code so that in each iteration it appends to the cv_1 cursor rather than overwriting it?

    Read the article

  • WCF without NET 3.0

    - by Murat
    Hello there. Can anyone tell me if it's possible to use WCF without .NET 3.0? Our company develops a 3-tier client-server end-user solution based on .NET Remoting. One of the limitations of our project is using .NET 2.0. Unfortunately, the .NET 3.0 framework is too large to be included in our installation package, and I don't know if the MS license allows this. But WCF might help us to drastically reduce our efforts in some tasks. Has anyone had a chance to use WCF from Mono? Thanks in advance -- Murat

    Read the article

  • Approaches for generic, compile-time safe lazy-load methods

    - by Aaronaught
    Suppose I have created a wrapper class like the following:

        public class Foo : IFoo
        {
            private readonly IFoo innerFoo;

            public Foo(IFoo innerFoo)
            {
                this.innerFoo = innerFoo;
            }

            public int? Bar { get; set; }
            public int? Baz { get; set; }
        }

    The idea here is that the innerFoo might wrap data-access methods or something similarly expensive, and I only want its GetBar and GetBaz methods to be invoked once. So I want to create another wrapper around it, which will save the values obtained on the first run. It's simple enough to do this, of course:

        int IFoo.GetBar()
        {
            if ((Bar == null) && (innerFoo != null))
                Bar = innerFoo.GetBar();
            return Bar ?? 0;
        }

        int IFoo.GetBaz()
        {
            if ((Baz == null) && (innerFoo != null))
                Baz = innerFoo.GetBaz();
            return Baz ?? 0;
        }

    But it gets pretty repetitive if I'm doing this with 10 different properties and 30 different wrappers. So I figured, hey, let's make this generic:

        T LazyLoad<T>(ref T prop, Func<IFoo, T> loader)
        {
            if ((prop == null) && (innerFoo != null))
                prop = loader(innerFoo);
            return prop;
        }

    Which almost gets me where I want, but not quite, because you can't ref an auto-property (or any property at all). In other words, I can't write this:

        int IFoo.GetBar()
        {
            return LazyLoad(ref Bar, f => f.GetBar()); // <--- Won't compile
        }

    Instead, I'd have to change Bar to have an explicit backing field and write explicit getters and setters. Which is fine, except for the fact that I end up writing even more redundant code than I was writing in the first place. Then I considered the possibility of using expression trees:

        T LazyLoad<T>(Expression<Func<T>> propExpr, Func<IFoo, T> loader)
        {
            var memberExpression = propExpr.Body as MemberExpression;
            if (memberExpression != null)
            {
                // Use Reflection to inspect/set the property
            }
        }

    This plays nice with refactoring - it'll work great if I do this:

        return LazyLoad(f => f.Bar, f => f.GetBar());

    But it's not actually safe, because someone less clever (i.e. myself in 3 days from now when I inevitably forget how this is implemented internally) could decide to write this instead:

        return LazyLoad(f => 3, f => f.GetBar());

    Which is either going to crash or result in unexpected/undefined behaviour, depending on how defensively I write the LazyLoad method. So I don't really like this approach either, because it leads to the possibility of runtime errors which would have been prevented in the first attempt. It also relies on Reflection, which feels a little dirty here, even though this code is admittedly not performance-sensitive. Now I could also decide to go all-out and use DynamicProxy to do method interception and not have to write any code, and in fact I already do this in some applications. But this code is residing in a core library which many other assemblies depend on, and it seems horribly wrong to be introducing this kind of complexity at such a low level. Separating the interceptor-based implementation from the IFoo interface by putting it into its own assembly doesn't really help; the fact is that this very class is still going to be used all over the place, must be used, so this isn't one of those problems that could be trivially solved with a little DI magic. The last option I've already thought of would be to have a method like:

        T LazyLoad<T>(Func<T> getter, Action<T> setter, Func<IFoo, T> loader) { ... }

    This option is very "meh" as well - it avoids Reflection but is still error-prone, and it doesn't really reduce the repetition that much. It's almost as bad as having to write explicit getters and setters for each property. Maybe I'm just being incredibly nit-picky, but this application is still in its early stages, and it's going to grow substantially over time, and I really want to keep the code squeaky-clean. Bottom line: I'm at an impasse, looking for other ideas. Question: Is there any way to clean up the lazy-loading code at the top, such that the implementation will (a) guarantee compile-time safety, like the ref version; (b) actually reduce the amount of code repetition, like the Expression version; and (c) not take on any significant additional dependencies? In other words, is there a way to do this just using regular C# language features and possibly a few small helper classes? Or am I just going to have to accept that there's a trade-off here and strike one of the above requirements from the list?
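    For comparison only, here is a minimal sketch of the same cache-on-first-access idea in Python, where a small descriptor removes the per-property boilerplate entirely. It obviously sidesteps the C# compile-time-safety discussion above, and the inner/get_bar names are placeholders mirroring the question.

        class LazyLoaded:
            """Caches the result of a loader the first time the attribute is read."""

            def __init__(self, loader):
                self.loader = loader

            def __set_name__(self, owner, name):
                self.name = name

            def __get__(self, obj, objtype=None):
                if obj is None:
                    return self
                if self.name not in obj.__dict__:
                    # First access: call the expensive loader once and cache it.
                    obj.__dict__[self.name] = self.loader(obj.inner)
                return obj.__dict__[self.name]

        class Foo:
            def __init__(self, inner):
                self.inner = inner

            bar = LazyLoaded(lambda inner: inner.get_bar())
            baz = LazyLoaded(lambda inner: inner.get_baz())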

    Read the article

  • smaller date regex

    - by Jeremy
    I have a regex used to validate dates:

        ^(((0[1-9]|[12]\d|3[01])\/(0[13578]|1[02])\/((19|[2-9]\d)\d{2}))|((0[1-9]|[12]\d|30)\/(0[13456789]|1[012])\/((19|[2-9]\d)\d{2}))|((0[1-9]|1\d|2[0-8])\/02\/((19|[2-9]\d)\d{2}))|(29\/02\/((1[6-9]|[2-9]\d)(0[48]|[2468][048]|[13579][26])|((16|[2468][048]|[3579][26])00))))$

    It works really well, but I am using it all over the place with ASP.NET regex validators and I want to minimize it so I can reduce page size. It works with the dd/mm/yyyy format and handles leap years. I'm looking for a more concise regex statement.
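    If no shorter regex emerges, one alternative worth weighing is validating with a real date parser on the server instead of (or alongside) the client-side pattern. A rough Python sketch of that idea, assuming dd/mm/yyyy input; note that strptime is lenient about zero-padding and accepts years outside the regex's 1900-and-up range.

        from datetime import datetime

        def is_valid_date(text):
            """Return True if text parses as a dd/mm/yyyy date (leap years included)."""
            try:
                datetime.strptime(text, "%d/%m/%Y")
                return True
            except ValueError:
                return False

        print(is_valid_date("29/02/2008"))  # True  (leap year)
        print(is_valid_date("29/02/2009"))  # False
        print(is_valid_date("31/04/2009"))  # False (April has 30 days)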

    Read the article

  • Approaches for memcached sessions

    - by Industrial
    Hi everybody. I was thinking about using memcached to store sessions instead of MySQL, which seemed like a good idea at first. When it comes to the failover part of utilizing memcached servers, it's a bit worrying that my sessions would stop working if a memcached server went offline. It would certainly affect my users. There are a few techniques we already utilize to reduce the impact of failover, including having a pool of servers available to compensate in the event of downtime, utilizing sharding/consistent hashing across the server pool, and so on. We would also do some sort of graceful degradation that tells users that something has gone wrong and that they are welcome to log in again, in the event they are kicked out due to a memcached server failure. So how do people generally deal with these issues when storing sessions on memcached servers?

    Read the article

  • Bitmapdata heavy usage - memory disaster (spark/FB4)

    - by keyle
    I've got a Flex component which works pretty well but unfortunately turns into a disaster once used in a DataGroup item renderer of about 40-50 items. Essentially it uses BitmapData to take a screenshot of a fully rendered web page in mx:HTML (this version of WebKit rocks, by the way; miles better than Flex 3). The code is pretty self-explanatory, I think: http://noben.org/show/PageGrabber.mxml I've optimized it all I could, browsed and searched for answers, and already trimmed it down a lot. I'm desperate to reduce the memory usage (about 600 MB after 100 draws). The garbage collector has little effect. Thanks! Nic

    Read the article

  • Is Stopwatch really broken?

    - by Jakub Šturc
    At the MSDN page for the Stopwatch class I discovered a link to an interesting article which makes the following statement about Stopwatch:

        However there are some serious issues: This can be unreliable on a PC with multiple processors. Due to a bug in the BIOS, Start() and Stop() must be executed on the same processor to get a correct result. This is unreliable on processors that do not have a constant clock speed (most processors can reduce the clock speed to conserve energy). This is explained in detail here.

    I am a little confused. I've seen tons of examples of using Stopwatch and nobody mentions these drawbacks. How serious are they? Should I avoid using Stopwatch?

    Read the article

  • Most efficient way to solve system of equations involving the digamma function?

    - by Neil G
    What is the most efficient way to solve a system of equations involving the digamma function? I have a vector v and I want to solve for a vector w such that for all i: digamma(sum(w)) - digamma(w_i) = v_i and w_i > 0. I found the GSL function gsl_sf_psi, which is the digamma function. Is there an identity I can use to reduce the equations? Is my best bet to use a solver? I am using C++0x; which solver is easiest to use and fast?
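    For comparison, a rough sketch of the generic-solver route in Python/SciPy rather than C++/GSL: solving for log(w) keeps every w_i positive, and a multidimensional root finder handles the coupled equations directly. The example values below are made up purely to exercise the function.

        import numpy as np
        from scipy.special import digamma
        from scipy.optimize import fsolve

        def solve_w(v, w0=None):
            """Solve digamma(sum(w)) - digamma(w_i) = v_i with w_i > 0."""
            v = np.asarray(v, dtype=float)
            if w0 is None:
                w0 = np.ones_like(v)

            def residual(log_w):
                w = np.exp(log_w)          # positivity enforced by construction
                return digamma(w.sum()) - digamma(w) - v

            return np.exp(fsolve(residual, np.log(w0)))

        # Round-trip check: build v from a known w and try to recover it.
        w_true = np.array([0.5, 1.5, 3.0])
        v = digamma(w_true.sum()) - digamma(w_true)
        print(solve_w(v))   # should come back close to w_true

    Convergence depends on the starting point; in C++ the analogous tool would be one of GSL's multidimensional root finders over the same log-reparameterized residual.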

    Read the article

  • Accumulate 2D Array by Index

    - by Tegan Snyder
    I have an array that looks like this:

        Array
        (
            [0] => Array ( [amount] => 60.00 [store_id] => 1 )
            [1] => Array ( [amount] => 40.00 [store_id] => 1 )
            [2] => Array ( [amount] => 10.00 [store_id] => 2 )
        )

    What would be a good method to reduce the array to a similar array that totals the 'amount' related to a store_id? For instance, I'd like to get this:

        Array
        (
            [0] => Array ( [amount] => 100.00 [store_id] => 1 )
            [2] => Array ( [amount] => 10.00 [store_id] => 2 )
        )
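    A concept sketch of one way to do the accumulation, written in Python rather than PHP: index the running totals by store_id so each row either starts or extends a bucket. In PHP the same shape works with an associative array keyed by store_id and a foreach loop.

        rows = [
            {"amount": 60.00, "store_id": 1},
            {"amount": 40.00, "store_id": 1},
            {"amount": 10.00, "store_id": 2},
        ]

        totals = {}
        for row in rows:
            # Accumulate per store_id; missing keys start at 0.0.
            totals[row["store_id"]] = totals.get(row["store_id"], 0.0) + row["amount"]

        result = [{"amount": amount, "store_id": store_id}
                  for store_id, amount in totals.items()]
        print(result)
        # [{'amount': 100.0, 'store_id': 1}, {'amount': 10.0, 'store_id': 2}]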

    Read the article

  • Stripping variable names from VB.Net assemblies

    - by CFP
    Hello everyone :) I'm trying to reduce my VB.Net assembly size as much as I can, and I just figured out that all variable names are kept unchanged in the actual assembly. Since I tend to use pretty long variable names, it adds up, and by running Dotfuscator on my assembly I could shrink it by as much as 10%. Thus I wonder: is there any way to tell Visual Studio to use shorter variable names in the generated assembly? Are there any downsides to using Dotfuscator (I'd rather avoid it, though, since it'd need to be called after every compilation, therefore forcing me to update my build scripts...)? Thanks a lot, CFP.

    Read the article

  • How to format a dos path to a unix path on cygwin command line

    - by Jennette
    When using Cygwin, I frequently copy a Windows path and manually edit all of the slashes to Unix format. For example, if I need to change directory I enter:

        cd C:\windows\path

    then edit this to

        cd C:/windows/path

    (Typically, the path is much longer than that.) Is there a way to use sed, or something else, to do this automatically? For example, I tried:

        echo C:\windows\path|sed 's|\|g'

    but got the following error:

        sed: -e expression #1, char 7: unterminated `s' command

    My goal is to reduce the typing, so maybe I could write a program which I could call. Ideally I would type:

        conversionScript cd C:/windows/path

    and this would be equivalent to typing:

        cd C:\windows\path

    Read the article

  • Improving the speed of php

    - by cast01
    I'm currently working on a website in PHP, and I'm wondering what the best practices/methods are to reduce the time requests take. I've built the site in a modular way, so a page consists of a number of modules, and each of these needs to request information. For example, I have a cart module that (if a cart is set) will fetch the cart with the id (stored in a session variable) from the database and return its contents. I have another module that lists categories, and this needs to fetch the categories from the database. My system is built with models, and each model might also make a request; for example, a category model will make a request to get the products in that category.

    Read the article
