Search Results

Search found 8897 results on 356 pages for 'msbuild task'.

  • How to increase the performance of a loop which runs every 'n' minutes.

    - by GustlyWind
    Hi. Some background on my requirement and what I have accomplished so far: there are 18 scheduler tasks that run at regular intervals (the shortest interval being 30 minutes). Each takes roughly 5,000 eligible employees as input, iterates over them in a static method, generates mail content for each employee, and sends it. An average task takes about 9 minutes; multiplied by 18 that is roughly 162 minutes, and meanwhile the next tasks will already be queuing up (I assume). So my plan is built around a loop like this:

        try {
            // Handle ArrayList of alert-eligible employees
            Iterator employee = empIDs.iterator();
            while (employee.hasNext()) {
                ScheduledImplementation.getInstance().sendAlertInfoToEmpForGivenAlertType(
                        (Long) employee.next(), configType, schedType);
            }
        } catch (Exception vEx) {
            _log.error("Exception Caught During sending " + configType + " messages:" + configType, vEx);
        }

    Since I know how many employees come into my method, I want to divide the while loop and perform simultaneous operations on two or three employees at a time. Is this possible? Or are there other ways I can improve the performance? Some of the things I have implemented so far: (1) wherever possible, made methods and variables static; (2) didn't bother to catch exceptions and send them back, because these are background tasks (and I assume this improves performance); (3) get the DB values in one query instead of multiple hits. If I am successful in optimizing the while loop, I think I can save a couple of minutes. Thanks
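
    A hedged sketch of one way to fan that loop out over a small pool with java.util.concurrent: the pool size, the 30-minute wait cap, and the wrapper class name are my own choices rather than anything from the question, and this assumes sendAlertInfoToEmpForGivenAlertType (with String parameters) is safe to call from several threads at once:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        class AlertBatchRunner {
            // Fans the per-employee sends out over a small pool and waits
            // for the whole batch to drain before returning.
            static void sendAll(Iterable<Long> empIDs, final String configType,
                                final String schedType) throws InterruptedException {
                ExecutorService pool = Executors.newFixedThreadPool(4); // tune to CPU/IO capacity
                for (final Long empId : empIDs) {
                    pool.execute(new Runnable() {
                        public void run() {
                            try {
                                ScheduledImplementation.getInstance()
                                    .sendAlertInfoToEmpForGivenAlertType(empId, configType, schedType);
                            } catch (Exception ex) {
                                // log and continue; one failure should not abort the batch
                            }
                        }
                    });
                }
                pool.shutdown();                             // no new tasks accepted
                pool.awaitTermination(30, TimeUnit.MINUTES); // cap on how long to wait
            }
        }

    Because the work is mostly mail/IO-bound, a pool a bit larger than the core count often helps; the right size is something to measure rather than guess.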

  • Why is my GUI unresponsive while a SwingWorker thread runs?

    - by Starchy
    Hello, I have a SwingWorker thread with an IO-bound task which is totally locking up the interface while it runs. Swapping the normal workload out for a counter loop has the same result. The SwingWorker looks basically like this:

        public class BackupWorker extends SwingWorker<String, String> {
            private static String uname = null;
            private static String pass = null;
            private static String filename = null;
            static String status = null;

            BackupWorker(String uname, String pass, String filename) {
                this.uname = uname;
                this.pass = pass;
                this.filename = filename;
            }

            @Override
            protected String doInBackground() throws Exception {
                BackupObject bak = new BackupObject(uname, pass, filename);
                return "Done!";
            }
        }

    The code that kicks it off lives in a class that extends JFrame:

        public void actionPerformed(ActionEvent event) {
            String cmd = event.getActionCommand();
            if (BACKUP.equals(cmd)) {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        final StatusFrame statusFrame = new StatusFrame();
                        statusFrame.setVisible(true);
                        SwingUtilities.invokeLater(new Runnable() {
                            public void run() {
                                statusFrame.beginBackup(uname, pass, filename);
                            }
                        });
                    }
                });
            }
        }

    Here's the interesting part of StatusFrame:

        public void beginBackup(final String uname, final String pass, final String filename) {
            worker = new BackupWorker(uname, pass, filename);
            worker.execute();
            try {
                System.out.println(worker.get());
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        }

    So far as I can see, everything "long-running" is handled by the worker, and everything that touches the GUI runs on the EDT. Have I tangled things up somewhere, or am I expecting too much of SwingWorker?
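
    The likely culprit is worker.get(): it blocks the calling thread until doInBackground() finishes, and beginBackup calls it on the EDT immediately after execute(), so the GUI freezes for the whole backup. A minimal sketch of the usual fix, moving the result handling into done(), which Swing invokes on the EDT only after the background work completes (the static fields become instance fields here so each worker owns its own copy):

        public class BackupWorker extends SwingWorker<String, String> {
            private final String uname, pass, filename;

            BackupWorker(String uname, String pass, String filename) {
                this.uname = uname;
                this.pass = pass;
                this.filename = filename;
            }

            @Override
            protected String doInBackground() throws Exception {
                BackupObject bak = new BackupObject(uname, pass, filename);
                return "Done!";
            }

            @Override
            protected void done() {
                // Runs on the EDT after doInBackground() returns,
                // so get() no longer blocks anything here.
                try {
                    System.out.println(get());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

    With that change, beginBackup only needs to call worker.execute() and return.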

  • Project euler problem 45

    - by Peter
    Hi, I'm not yet a skilled programmer, but I thought this was an interesting problem and I'd give it a go. The task description: triangle, pentagonal, and hexagonal numbers are generated by the following formulae:

        Triangle    T_n = n(n+1)/2     1, 3, 6, 10, 15, ...
        Pentagonal  P_n = n(3n-1)/2    1, 5, 12, 22, 35, ...
        Hexagonal   H_n = n(2n-1)      1, 6, 15, 28, 45, ...

    It can be verified that T_285 = P_165 = H_143 = 40755. Find the next triangle number that is also pentagonal and hexagonal.

    I know that hexagonal numbers are a subset of triangle numbers, which means you only have to find a number where H_n = P_i. But I can't seem to get my code to work. I only know Java, which is why I'm having trouble finding a solution on the net somewhere. Anyway, I hope someone can help. Here's my code:

        public class NextNumber {
            public NextNumber() {
                next();
            }

            public void next() {
                int n = 144;
                int i = 165;
                int p = i * (3 * i - 1) / 2;
                int h = n * (2 * n - 1);
                while (p != h) {
                    n++;
                    h = n * (2 * n - 1);
                    if (h == p) {
                        System.out.println("the next triangular number is" + h);
                    } else {
                        while (h > p) {
                            i++;
                            p = i * (3 * i - 1) / 2;
                        }
                        if (h == p) {
                            System.out.println("the next triangular number is" + h);
                            break;
                        } else if (p > h) {
                            System.out.println("bummer");
                        }
                    }
                }
            }
        }

    I realize it's probably very slow and inefficient code, but that doesn't concern me much at this point. I only care about finding the next number, even if it would take my computer years :). Peter
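
    Since every hexagonal number is triangular, it is enough to walk the hexagonal numbers and test each for pentagonality. Inverting P_n = n(3n-1)/2 gives 24x + 1 = (6n-1)^2, so x is pentagonal exactly when sqrt(24x + 1) is an integer congruent to 5 mod 6. A rough sketch of that approach (not the asker's code):

        public class Euler45 {
            // x is pentagonal iff sqrt(24x + 1) is an integer of the form 6n - 1,
            // i.e. congruent to 5 mod 6 (from inverting P_n = n(3n - 1) / 2).
            static boolean isPentagonal(long x) {
                long s = (long) Math.sqrt(24 * x + 1);
                return s * s == 24 * x + 1 && s % 6 == 5;
            }

            public static void main(String[] args) {
                // H_143 = 40755 is the known triple, so start just above it.
                for (long n = 144; ; n++) {
                    long h = n * (2 * n - 1); // every hexagonal number is triangular
                    if (isPentagonal(h)) {
                        System.out.println(h); // should print 1533776805
                        break;
                    }
                }
            }
        }

    Using long avoids the int overflow that bites many first attempts at this problem.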

  • Practical refactoring using unit tests

    - by awhite
    Having just read the first four chapters of Refactoring: Improving the Design of Existing Code, I embarked on my first refactoring and almost immediately hit a roadblock. It stems from the requirement that before you begin refactoring, you should put unit tests around the legacy code. That lets you be sure your refactoring didn't change what the original code did (only how it did it).

    So my first question is this: how do I unit-test a method in legacy code? How can I put a unit test around a 500-line (if I'm lucky) method that doesn't do just one task? It seems to me that I would have to refactor my legacy code just to make it unit-testable. Does anyone have experience refactoring using unit tests? And, if so, do you have any practical examples you can share?

    My second question is somewhat hard to explain. Here's an example: I want to refactor a legacy method that populates an object from a database record. Wouldn't I have to write a unit test that compares an object retrieved using the old method with an object retrieved using my refactored method? Otherwise, how would I know that my refactored method produces the same results as the old one? If that is true, how long do I leave the old deprecated method in the source code? Do I just whack it after testing a few different records? Or do I need to keep it around for a while in case I encounter a bug in my refactored code?

    Lastly, since a couple of people have asked: the legacy code was originally written in VB6 and then ported to VB.NET with minimal architecture changes.
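
    For the first question, the standard technique (from Michael Feathers' Working Effectively with Legacy Code) is the characterization test: rather than testing what the method should do, you pin down what it does today and let the tests fail loudly if refactoring changes it. A hedged JUnit-style sketch; the class, method, and expected values are all hypothetical:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class LoadCustomerCharacterizationTest {
            // Pins down what the legacy method does *today* for known records;
            // the expected values are copied from its current output, not a spec.
            @Test
            public void record42LoadsAsItAlwaysHas() {
                Customer c = LegacyLoader.loadCustomer(42); // hypothetical legacy call
                assertEquals("ACME Corp", c.getName());
                assertEquals("NL", c.getCountryCode());
            }
        }

    A handful of these over representative records also answers the second question: the old and new code paths are compared against the same pinned expectations, so the deprecated method can go as soon as the characterization suite passes against the refactored one.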

  • How to check the backtrace of a "USER process" in the Linux Kernel Crash Dump

    - by Biswajit
    I was trying to debug a USER process in a Linux crash dump. The normal steps to get into the crash dump are:

    1. Go to the path where the dump is located.
    2. Use the command crash kernel_link dump.201104181135, where kernel_link is a soft link I have created for the vmlinux image.
    3. Now you will be at the CRASH prompt, where you can run the command foreach <PID of the process> bt, e.g.:

        crash> foreach 6920 bt
        PID: 6920  TASK: ffff88013caaa800  CPU: 1  COMMAND: "climmon"
         #0 [ffff88012d2cd9c8] schedule at ffffffff8130b76a
         #1 [ffff88012d2cdab0] schedule_timeout at ffffffff8130bbe7
         #2 [ffff88012d2cdb50] schedule_timeout_uninterruptible at ffffffff8130bc2a
         #3 [ffff88012d2cdb60] __alloc_pages_nodemask at ffffffff810b9e45
         #4 [ffff88012d2cdc60] alloc_pages_current at ffffffff810e1c8c
         #5 [ffff88012d2cdc90] __page_cache_alloc at ffffffff810b395a
         #6 [ffff88012d2cdcb0] __do_page_cache_readahead at ffffffff810bb592
         #7 [ffff88012d2cdd30] ra_submit at ffffffff810bb6ba
         #8 [ffff88012d2cdd40] filemap_fault at ffffffff810b3e4e
         #9 [ffff88012d2cdda0] __do_fault at ffffffff810caa5f
        #10 [ffff88012d2cde50] handle_mm_fault at ffffffff810cce69
        #11 [ffff88012d2cdf00] do_page_fault at ffffffff8130f560
        #12 [ffff88012d2cdf50] page_fault at ffffffff8130d3f5
            RIP: 00007fd02b7e9071  RSP: 0000000040e86ea0  RFLAGS: 00010202
            RAX: 0000000000000000  RBX: 0000000000000000  RCX: 00007fd02b7e9071
            RDX: 0000000000000000  RSI: 0000000000000000  RDI: 0000000040e86ec0
            RBP: 0000000040e87140  R8:  0000000000000800  R9:  0000000000000000
            R10: 0000000000000000  R11: 0000000000000202  R12: 00007fff16ec43d0
            R13: 00007fd02bcadf00  R14: 0000000040e87950  R15: 0000000000001000
            ORIG_RAX: ffffffffffffffff  CS: 0033  SS: 002b

    The backtrace above shows the kernel functions used for scheduling and handling the page fault, but not the functions that were executing in the USER process (here, climmon). So I am not able to debug this process, since I cannot see the functions it executed. Can anyone help me with this case?

  • Adding XOR function to bigint library

    - by Jason Gooner
    Hi, I'm using this Big Integer library for JavaScript: http://www.leemon.com/crypto/BigInt.js. I need to be able to XOR two bigInts together, and sadly the library doesn't include such a function. The library is relatively simple, so I don't think it's a huge task, just confusing. I've been trying to hack one together without much luck and would be very grateful if someone could lend me a hand. This is what I've attempted (it might be wrong), but I'm guessing the structure is going to be quite similar to some of the other functions in there:

        function xor(x, y) {
            var c, k, i;
            var result = new Array(0); // big int for result
            k = x.length > y.length ? x.length : y.length; // array length of the larger num
            // Make sure result is the correct array size? maybe:
            result = expand(result, k); // ?
            for (c = 0, i = 0; i < k; i++) {
                // Do some xor here
            }
            // return the bigint xor result
            return result;
        }

    What confuses me is that I don't really understand how the bigInt stores numbers in the array blocks. I don't think it's simply a case of bigintC[i] = bigintA[i] ^ bigintB[i], and most of the other functions have some masking operation at the end that I don't understand. I would really appreciate any help getting this working. Thanks
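
    For orientation: libraries of this kind typically store a big integer little-endian as an array of fixed-width "limbs", keeping only the low bpe bits of each element, which is what the masking you noticed is for. If BigInt.js follows that layout (worth checking against its source), XOR really is element-wise, because XOR never carries between bit positions. A sketch of the idea, written in Java only to match the other sketches on this page; the constants are illustrative:

        final class LimbXor {
            static final int BPE = 15;               // bits per element (illustrative)
            static final int MASK = (1 << BPE) - 1;  // keeps each limb at BPE bits

            // XOR two little-endian limb arrays; the shorter operand acts as
            // zero-padded, because x ^ 0 == x. Per-limb XOR suffices since XOR
            // never carries between bit positions.
            static int[] xor(int[] x, int[] y) {
                int[] big = x.length >= y.length ? x : y;
                int min = Math.min(x.length, y.length);
                int[] result = new int[big.length];
                for (int i = 0; i < min; i++) {
                    result[i] = (x[i] ^ y[i]) & MASK;
                }
                for (int i = min; i < big.length; i++) {
                    result[i] = big[i] & MASK; // leftover high limbs pass through
                }
                return result;
            }
        }

    So the loop body you were missing is essentially result[i] = (x[i] ^ y[i]) & mask, plus copying the tail of the longer array.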

  • Dataflow Pipeline holding on to memory

    - by Jesse Carter
    I've created a Dataflow pipeline consisting of 4 blocks (including one optional block). It is responsible for receiving a query object from my application across HTTP, retrieving information from a database, doing an optional transform on that data, and then writing the information back in the HTTP response. In some testing I've been pulling down a significant amount of data from the database (570 thousand rows), stored in a List object that is passed between the different blocks, and it seems that even after the final block has completed, the memory isn't released. RAM usage in Task Manager will spike to over 2 GB, and I can observe several large spikes as the List hits each block. The signatures for my blocks look like this:

        private TransformBlock<HttpListenerContext, Tuple<HttpListenerContext, QueryObject>> m_ParseHttpRequest;
        private TransformBlock<Tuple<HttpListenerContext, QueryObject>,
            Tuple<HttpListenerContext, QueryObject, List<string>>> m_RetrieveDatabaseResults;
        private TransformBlock<Tuple<HttpListenerContext, QueryObject, List<string>>,
            Tuple<HttpListenerContext, QueryObject, List<string>>> m_ConvertResults;
        private ActionBlock<Tuple<HttpListenerContext, QueryObject, List<string>>> m_ReturnHttpResponse;

    They are linked as follows:

        m_ParseHttpRequest.LinkTo(m_RetrieveDatabaseResults);
        m_RetrieveDatabaseResults.LinkTo(m_ConvertResults, tuple => tuple.Item2 is QueryObjectA);
        m_RetrieveDatabaseResults.LinkTo(m_ReturnHttpResponse, tuple => tuple.Item2 is QueryObjectB);
        m_ConvertResults.LinkTo(m_ReturnHttpResponse);

    Is it possible to set up the pipeline so that each block releases the List once it is done with it, and so that the memory is released once the entire pipeline has completed?

  • Overloading operator>> to a char buffer in C++ - can I tell the stream length?

    - by exscape
    I'm on a custom C++ crash course. I've known the basics for many years, but I'm currently trying to refresh my memory and learn more. To that end, as my second task (after writing a stack class based on linked lists), I'm writing my own string class. It's gone pretty smoothly until now: I want to overload operator>> so that I can do stuff like cin >> my_string;. The problem is that I don't know how to read the istream properly (or perhaps the problem is that I don't know streams...). I tried a while (!stream.eof()) loop that read()s 128 bytes at a time, but as one might expect, it stops only on EOF. I want it to read up to a newline, like you get with cin into a std::string. My string class has an alloc(size_t new_size) function that (re)allocates memory and an append(const char *) function that does that part, but I obviously need to know the amount of memory to allocate before I can write to the buffer. Any advice on how to implement this? I tried getting the istream length with seekg() and tellg(), to no avail (it returns -1), and, as I said, looping until EOF reading one chunk at a time (which doesn't stop at a newline).

  • Trying to convert existing production database table columns from enum to VARCHAR (Rails)

    - by dchua
    Hi everyone, I have a problem that requires me to convert my existing live production table column types from enums to strings (I've duplicated the schema on my local development box, don't worry :)).

    Background: a previous developer left my codebase in absolute shit. Migration versions are extremely out of date, and apparently he stopped using them past a certain point of development. Now that I'm tasked with migrating a Rails 1.2.6 app to 2.3.5, I can't get the tests to run properly on 2.3.5, because my table columns have ENUM column types, which get dumped as :string, :limit => 0 in my schema.rb. That creates the problem of an invalid default value when doing a rake db:test:prepare, as in this case:

        Mysql::Error: Invalid default value for 'own_vehicle':
        CREATE TABLE `lifestyles` (
          `id` int(11) DEFAULT NULL auto_increment PRIMARY KEY,
          `member_id` int(11) DEFAULT 0 NOT NULL,
          `own_vehicle` varchar(0) DEFAULT 'Y' NOT NULL,
          `hobbies` text, `sports` text, `AStar_activities` text,
          `how_know_IRC` varchar(100), `IRC_referral` varchar(200),
          `IRC_others` varchar(100), `IRC_rdrive` varchar(30)
        ) ENGINE=InnoDB

    I'm thinking of writing a migration task that looks through all the database tables for columns with ENUM types and replaces them with VARCHAR, and I'm wondering whether this is the right way to approach the problem. I'm also not very sure how to write it so that it loops through my database tables and replaces all ENUM column types with VARCHAR.

    References:
    [1] https://rails.lighthouseapp.com/projects/8994/tickets/997-dbschemadump-saves-enum-columns-as-varchar0-on-mysql
    [2] http://dev.rubyonrails.org/ticket/2832

  • Png image processing in .NET.

    - by Oybek
    I have the following task: take a base image and overlay another one on it. The base image is an 8-bit PNG, as is the overlay. Here are the base (left) and overlay (right) images, and here are the result and how it should look: the "correct" picture is a screenshot of one image positioned on top of the other (HTML and positioning), and the other is the result of programmatic merging. As you can see in the screenshot comparison, the borders of the text come out darker in the merged result. Also, here are the sizes of the images:

        Base image     14.9 KB
        Overlay image   6.87 KB
        Result image   34.8 KB

    The size of the resulting image is also huge. Here is the code I use to merge the pictures:

        /*...*/
        public Stream Concatinate(Stream baseStream, params Stream[] overlayStreams) {
            var @base = Image.FromStream(baseStream);
            var canvas = new Bitmap(@base.Width, @base.Height);
            using (var g = canvas.ToGraphics()) {
                g.DrawImage(@base, 0, 0);
                foreach (var item in overlayStreams) {
                    using (var overlayImage = Image.FromStream(item)) {
                        try {
                            Overlay(@base, overlayImage, g);
                        } catch { }
                    }
                }
            }
            var ms = new MemoryStream();
            canvas.Save(ms, ImageFormat.Png);
            canvas.Dispose();
            @base.Dispose();
            return ms;
        }
        /*...*/

        /* ToGraphics extension */
        public static Graphics ToGraphics(this Image image,
                CompositingQuality compositingQuality = CompositingQuality.HighQuality,
                SmoothingMode smoothingMode = SmoothingMode.HighQuality,
                InterpolationMode interpolationMode = InterpolationMode.HighQualityBicubic) {
            var g = Graphics.FromImage(image);
            g.CompositingQuality = compositingQuality;
            g.SmoothingMode = smoothingMode;
            g.InterpolationMode = interpolationMode;
            return g;
        }

    My questions are: What should I do to merge the images so the result matches the screenshot? How can I lower the size of the result image? Is System.Drawing a suitable tool for this, or is there a better tool for working with PNG in .NET?

  • 500 Worker Threads, what kind of thread pool?

    - by Submerged
    I am wondering if this is the best way to do this. I have about 500 threads that run indefinitely, but Thread.sleep for one minute after completing a cycle of processing.

        ExecutorService es = Executors.newFixedThreadPool(list.size() + 1);
        for (int i = 0; i < list.size(); i++) {
            es.execute(coreAppVector.elementAt(i)); // coreAppVector is a Vector of objects extending Thread
        }

    The code that is executing is really simple and basically just this:

        class aThread extends Thread {
            public void run() {
                while (true) {
                    Thread.sleep(ONE_MINUTE);
                    // Lots of computation every minute
                }
            }
        }

    I do need a separate thread for each running task, so changing the architecture isn't an option. I tried making my thread pool size equal to Runtime.getRuntime().availableProcessors(), which attempted to run all 500 threads but only let 8 (4 cores with hyperthreading) of them execute. The other threads wouldn't surrender and let the others have their turn. I tried putting in a wait() and notify(), but still no luck. If anyone has a simple example or some tips, I would be grateful!

    Well, the design is arguably flawed. The threads implement genetic programming (GP), a type of learning algorithm. Each thread analyzes advanced trends and makes predictions. If a thread ever completes, its learning is lost. That said, I was hoping that sleep() would allow me to share some of the resources while a thread isn't "learning".
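
    If the learning state can live in the Runnable object rather than in a parked thread, a hedged alternative is to let a ScheduledExecutorService multiplex the 500 once-a-minute cycles over a small pool, so no task ever sleeps while holding a pool slot; the class and method names here are invented:

        import java.util.List;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        class PeriodicRunner {
            // Each worker keeps its learning state as fields of the object, not
            // the thread; the scheduler re-runs every worker once a minute on a
            // small shared pool, so 500 tasks need far fewer than 500 threads.
            static ScheduledExecutorService start(List<Runnable> workers, int poolSize) {
                ScheduledExecutorService ses = Executors.newScheduledThreadPool(poolSize);
                for (Runnable w : workers) {
                    ses.scheduleAtFixedRate(w, 0, 1, TimeUnit.MINUTES);
                }
                return ses;
            }
        }

    This also explains the symptom above: a fixed pool of 8 running tasks whose run() never returns can never rotate in the other 492, because pool threads are only freed when a task finishes.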

  • How to remove cursor from text field on click of button?

    - by Miraaj
    Hi all, I am trying to do a simple task. I have an editable text field and two buttons (titled "make editable" and "make un-editable") on a window. The idea is: when the user clicks "make editable", the text field should become editable, and when he/she clicks "make un-editable", it should become un-editable. In the action for "make un-editable" I am doing this:

        [myTextField setSelectable:NO];
        [myTextField setEditable:NO];

    and in the action for "make editable" I am doing this:

        [myTextField setSelectable:YES];
        [myTextField setEditable:YES];

    The problem is: it works fine when myTextField does not have the cursor in it when the user clicks "make un-editable", i.e. myTextField becomes un-editable. But when it has the cursor and the user clicks "make un-editable", he/she can still edit myTextField. To solve this I tried to remove the cursor from myTextField as soon as the user clicks "make un-editable", by adding these lines before the selectable and editable statements:

        [someOtherTextField selectText:self];
        [[NSRunLoop currentRunLoop] performSelector:@selector(selectText:)
                                             target:someOtherTextField
                                           argument:self
                                              order:9999
                                              modes:[NSArray arrayWithObject:NSDefaultRunLoopMode]];
        [someOtherTextField becomeFirstResponder];

    but none of it is working for me :( Can anyone suggest a solution? Thanks, Miraaj

  • How to make active services highly available?

    - by Jader Dias
    I know that with Network Load Balancing and Failover Clustering we can make passive services highly available. But what about active apps? Example: one of my apps retrieves some content from an external resource at a fixed interval. I have imagined the following scenarios:

    1. Run it on a single machine. Problem: if this instance falls over, the content won't be retrieved.
    2. Run it on each machine of the cluster. Problem: the content will be retrieved multiple times.
    3. Have it on each machine of the cluster, but run it on only one of them. Each instance has to check some sort of common resource to decide whether it is its turn to do the task or not.

    While thinking about solution #3 I wondered what the common resource should be. I have thought of creating a table in the database that we could use to take a global lock. Is this the best solution? How do people usually do this? By the way, it's a C# .NET WCF app running on Windows Server 2008.
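
    The lock-table idea is a common pattern: each node tries to acquire or renew a lease row inside a transaction, and only the current holder runs the job that interval. A rough sketch of the SQL shape of it, written with JDBC and SQL Server date functions here only to match the other sketches on this page (the table, column, and job names are invented, and the same statement translates directly to ADO.NET):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        final class LeaseElection {
            // Returns true if this node acquired (or renewed) the lease for this
            // interval. Only one contender can match the WHERE clause, because
            // the UPDATE re-evaluates it under a row lock.
            static boolean tryAcquire(Connection con, String node, int ttlSeconds)
                    throws SQLException {
                con.setAutoCommit(false);
                try (PreparedStatement ps = con.prepareStatement(
                        "UPDATE job_lease SET owner = ?, expires_at = DATEADD(second, ?, GETDATE()) "
                      + "WHERE job = 'fetch-content' AND (owner = ? OR expires_at < GETDATE())")) {
                    ps.setString(1, node);
                    ps.setInt(2, ttlSeconds);
                    ps.setString(3, node);
                    boolean won = ps.executeUpdate() == 1; // 0 rows: another node holds it
                    con.commit();
                    return won;
                }
            }
        }

    The expiry column is what makes this failover-safe: if the holding node dies, the lease times out and another node wins the next round.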

  • OO vs Simplicity when it comes to user interaction

    - by Oetzi
    Firstly, sorry if this question is rather vague, but it's something I'd really like an answer to. As a project over summer, while I have some downtime from uni, I am going to build a Monopoly game. This question is more about the general idea of the problem, however, than the specific task I'm trying to carry out. I decided to build this with a bottom-up approach, creating just movement around a forty-space board and then moving on to interaction with spaces. I realised that I was quite unsure of the best way of proceeding, and I am torn between two design ideas:

    1. Give every space its own object, all subclasses of a Space object, so the interaction can be defined by the space object itself. I could do this by implementing a different land() method for each type of space.
    2. Only give the Properties and Utilities objects (as each property has unique features) and create methods for dealing with buying/renting etc. in the main class of the program (or Board, as I'm calling it). Spaces like Go and Super Tax could be implemented by a small set of conditionals checking whether the player is on a special space.

    Option 1 is obviously the OO (and, I feel, the correct) way of doing things, but I'd like to handle user interaction only from the program's main class. In other words, I don't want the space objects interacting with the player. Why? Errr. A lot of the coding I've done thus far has had this simplicity, but I'm not sure if this is a pipe dream or not for larger projects. Should I really be handling user interaction in an entirely separate class? As you can see, I am quite confused about this situation. Is there some way round this? And does anyone have any advice on practical OO design that could help in general?
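
    For a concrete picture of option 1 that still keeps all user interaction in the main class, here is a minimal Java sketch (all names invented): each space owns its rule but only returns a description of what happened, so the Board/UI layer decides how to talk to the player:

        class Player {
            int cash = 1500;
            void credit(int amount) { cash += amount; }
            void pay(Player to, int amount) { cash -= amount; to.cash += amount; }
        }

        abstract class Space {
            // Each space defines its own rule, but returns a plain result
            // instead of doing any I/O, so user interaction stays in Board.
            abstract String land(Player p);
        }

        class GoSpace extends Space {
            String land(Player p) { p.credit(200); return "Collected 200 salary"; }
        }

        class PropertySpace extends Space {
            private final int rent;
            private Player owner; // null until bought
            PropertySpace(int rent) { this.rent = rent; }

            String land(Player p) {
                if (owner == null) return "Unowned: Board may offer a purchase";
                p.pay(owner, rent);
                return "Paid " + rent + " rent";
            }
        }

    Returning a result object (or even a plain string, as above) instead of calling the UI is what lets polymorphic spaces coexist with centralized user interaction.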

  • Seed has_many relation using mass assignment

    - by rnd
    Here are my two models:

        class Article < ActiveRecord::Base
          attr_accessible :content
          has_many :comments
        end

        class Comment < ActiveRecord::Base
          attr_accessible :content
          belongs_to :article
        end

    And I'm trying to seed the database in seed.rb using this code:

        Article.create(
          [{
            content: "Hi! This is my first article!",
            comments: [{ content: "It sucks" }, { content: "Best article ever!" }]
          }],
          without_protection: true)

    However, rake db:seed gives me the following error message:

        rake aborted!
        Comment(#28467560) expected, got Hash(#13868840)
        Tasks: TOP => db:seed
        (See full trace by running task with --trace)

    Is it possible to seed the database like this? If yes, a follow-up question: I've searched some, and it seems that to do this kind of (nested?) mass assignment I need to add accepts_nested_attributes_for for the attributes I want to assign (possibly something like accepts_nested_attributes_for :article for the Comment model). Is there a way to allow this similar to without_protection: true? Because I only want to accept this kind of mass assignment when seeding the database.

  • When to choose which machine learning classifier?

    - by LM
    Suppose I'm working on some classification problem. (Fraud detection and comment spam are two problems I'm working on right now, but I'm curious about any classification task in general.) How do I know which classifier I should use (decision tree, SVM, Bayesian, logistic regression, etc.)? In which cases is one of them the "natural" first choice, and what are the principles for choosing that one?

    Examples of the type of answers I'm looking for (from Manning et al.'s Introduction to Information Retrieval, http://nlp.stanford.edu/IR-book/html/htmledition/choosing-what-kind-of-classifier-to-use-1.html):

    a. If your data is labeled, but you only have a limited amount, you should use a classifier with high bias (for example, Naive Bayes). [I'm guessing this is because a higher-bias classifier will have lower variance, which is good given the small amount of data.]

    b. If you have a ton of data, then the classifier doesn't really matter so much, so you should probably just choose a classifier with good scalability.

    What are other guidelines? Even answers like "if you'll have to explain your model to some upper-management person, then maybe you should use a decision tree, since the decision rules are fairly transparent" are good. I care less about implementation/library issues, though. Also, for a somewhat separate question: besides standard Bayesian classifiers, are there "standard state-of-the-art" methods for comment spam detection (as opposed to email spam)? [Not sure if Stack Overflow is the best place to ask this, since it's more machine learning than actual programming. If not, any suggestions for where else?]

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

  • Why does the BigFraction class in the Apache-Commons-Math library return incorrect division results?

    - by Timothy Lee Russell
    In the spirit of using existing, tested, and stable libraries, I started using the Apache Commons Math library and its BigFraction class to perform rational calculations for an Android app I'm writing called RationalCalc. It works great for every task I have thrown at it, except for one nagging problem: when dividing certain BigFraction values, I get incorrect results. If I create a BigFraction with the inverse of the divisor and multiply instead, I get the same incorrect answer, but perhaps that is what the library is doing internally anyway. Does anyone know what I am doing wrong? The division works correctly with a BigFraction of 2.5, but not with 2.51, 2.49, etc.

        // *** incorrect! ***
        BigFraction one = new BigFraction(1.524); // one: 1715871458028159 / 1125899906842624
        BigFraction two = new BigFraction(2.51);  // two: 1413004383087493 / 562949953421312
        BigFraction three = one.divide(two);      // three: 0
        Log.i("solve", three.toString());         // should be 0.607171315 ?? returns 0

        // *** correct! ***
        BigFraction four = new BigFraction(1.524); // four: 1715871458028159 / 1125899906842624
        BigFraction five = new BigFraction(2.5);   // five: 5 / 2
        BigFraction six = four.divide(five);       // six: 1715871458028159 / 2814749767106560
        Log.i("solve", six.toString());            // should be 0.6096 ?? returns 0.6096
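
    One thing worth separating from the possible library defect: new BigFraction(2.51) converts the exact binary value of the double, which is why the numerator and denominator blow up into 16-digit numbers. If the inputs are really decimals, constructing from integers keeps the fractions exact; a hedged sketch assuming Commons Math 2.x package names (and if divide still misbehaves on exact integer inputs, that points at a bug in the library version in use, so upgrading is worth a try):

        import org.apache.commons.math.fraction.BigFraction;

        public class ExactDivide {
            public static void main(String[] args) {
                // 1524/1000 and 251/100 are the decimals the doubles only approximate.
                BigFraction one = new BigFraction(1524, 1000);
                BigFraction two = new BigFraction(251, 100);
                BigFraction three = one.divide(two);      // exact value of 1524/2510
                System.out.println(three);                // the exact rational result
                System.out.println(three.doubleValue());  // ~0.6071713147410359
            }
        }

    1524/2510 is about 0.60717131, which matches the expected answer in the question, so the integer-based construction at least removes the binary-double noise from the picture.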

  • How to handle large dataset with JPA (or at least with Hibernate)?

    - by Roman
    I need to make my web app work with really huge datasets. At the moment I get either an OutOfMemoryException or output that takes 1-2 minutes to generate. Let's put it simply and suppose we have two tables in the DB: Worker and WorkLog, with about 1,000 rows in the first and 10,000,000 rows in the second. The latter table has several fields, including 'workerId' and 'hoursWorked'. What we need is to:

    1. count the total hours worked by each user;
    2. list the work periods for each user.

    The most straightforward approach (IMO) for each task in plain SQL is:

        1) select Worker.name, sum(hoursWorked) from Worker, WorkLog
           where Worker.id = WorkLog.workerId
           group by Worker.name;
           -- results of this query should be transformed to Multimap<Worker, Long>

        2) select Worker.name, WorkLog.start, WorkLog.hoursWorked from Worker, WorkLog
           where Worker.id = WorkLog.workerId;
           -- results of this query should be transformed to Multimap<Worker, Period>
           -- if this were JDBC, it would be vital to call
           -- resultSet.setFetchSize(someSmallNumber), ~100

    So, I have two questions: how do I implement each of these approaches with JPA (or at least with Hibernate), and how would you handle this problem (with JPA or Hibernate, of course)?
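
    For the first query, a hedged JPA sketch of the aggregate approach (entity and field names are assumed from the description): the summation happens in the database, so only about 1,000 (name, total) pairs ever reach the JVM instead of 10,000,000 WorkLog entities:

        import java.util.List;
        import javax.persistence.EntityManager;

        class HoursReport {
            // Aggregates in the database; only (name, total) pairs are fetched.
            @SuppressWarnings("unchecked")
            static void printTotals(EntityManager em) {
                List<Object[]> rows = em.createQuery(
                        "select w.name, sum(l.hoursWorked) from Worker w, WorkLog l "
                      + "where w.id = l.workerId group by w.name")
                    .getResultList();
                for (Object[] row : rows) {
                    System.out.println(row[0] + ": " + row[1]);
                }
            }
        }

    The second query is the memory-dangerous one: listing every work period means streaming, not loading, so the Hibernate-specific scrollable-results facility (or paging with setFirstResult/setMaxResults) is the usual route there rather than a plain getResultList.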

  • How to control the system volume using javascript

    - by Geetha
    Hi, I am using the media player to play audio and video, and I am creating my own buttons to increase and decrease the player's volume. That is working fine, too.

    Problem: even after the volume reaches 0%, the sound is still audible. What I want is for the system volume to increase when the player volume is increased. Is that possible, and how can I achieve this task?

    Control:

        <object id="mediaPlayer" classid="clsid:22D6F312-B0F6-11D0-94AB-0080C74C7E95"
                codebase="http://activex.microsoft.com/activex/controls/mplayer/en/nsmp2inf.cab#Version=5,1,52,701"
                height="1" standby="Loading Microsoft Windows Media Player components..."
                type="application/x-oleobject" width="1">
            <param name="fileName" value="" />
            <param name="animationatStart" value="true" />
            <param name="transparentatStart" value="true" />
            <param name="autoStart" value="true" />
            <param name="showControls" value="true" />
            <param name="volume" value="70" />
        </object>

    Code:

        function decAudio() {
            if (document.mediaPlayer.Volume >= -1000) {
                var newVolume = document.mediaPlayer.Volume - 100;
                if (newVolume >= -1000) {
                    document.mediaPlayer.Volume = document.mediaPlayer.Volume - 100;
                } else {
                    document.mediaPlayer.Volume = -1000;
                }
            }
        }

  • Serialize Forms into memory

    - by serhio
    Background: in a desktop MDI application we have a lot of forms. Task: save the contents of the controls of closed forms (textbox texts, checkbox checks, etc.). Limitations: a saved form per (DB/Windows) user? Per (DB/Windows) group of users? Both variants should be possible.

    Questions: a) What is the best way? b) If I don't want to use files, how do I serialize a form into a MemoryStream and then recover it, when the form being opened has already been opened and serialized once?

    Starting point: I implemented a form that implements ISerializable, deserializing the form on opening and serializing it on closing:

        Public Sub GetObjectData(ByVal info As System.Runtime.Serialization.SerializationInfo, _
                                 ByVal context As System.Runtime.Serialization.StreamingContext) _
                Implements System.Runtime.Serialization.ISerializable.GetObjectData
            info.AddValue("tbxExportFolder", tbxExportFolder.Text, GetType(String))
            info.AddValue("cbxCheckAliasUnicity", cbxCheckAliasUnicity.Checked, GetType(Boolean))
        End Sub

        Public Sub New(ByVal info As SerializationInfo, ByVal context As StreamingContext)
            Me.New()
            Me.tbxExportFolder.Text = info.GetString("tbxExportFolder")
            Me.cbxCheckAliasUnicity.Checked = info.GetBoolean("cbxCheckAliasUnicity")
        End Sub

        Private Sub SerializeMe()
            Dim binFormatter As New Formatters.Binary.BinaryFormatter
            Dim fileStream As New FileStream(SerializedFilename, FileMode.Create)
            Try
                binFormatter.Serialize(fileStream, Me)
            Catch
                Throw
            Finally
                fileStream.Close()
            End Try
        End Sub

        Protected Overrides Sub OnClosing(ByVal e As System.ComponentModel.CancelEventArgs)
            SerializeMe()
            MyBase.OnClosing(e)
        End Sub

  • Changing the indexing on existing table in SQL Server 2000

    - by Raj
    Guys, here is the scenario: SQL Server 2000 (8.0.2055). The table currently has 478 million rows of data. The primary key column is an INT with IDENTITY. There is a unique constraint imposed on two other columns with a non-clustered index. This is a vendor application, and we are only responsible for maintaining the DB. Now the vendor has recommended doing the following "to improve performance":

    1. Drop the PK and clustered index.
    2. Drop the non-clustered index on the two columns with the UNIQUE CONSTRAINT.
    3. Recreate the PK with a NON-CLUSTERED index.
    4. Create a CLUSTERED index on the two columns with the UNIQUE CONSTRAINT.

    I am not convinced that this is the right thing to do, and I have a number of concerns. By dropping the PK and indexes, you will be creating a heap with 478 million rows of data; then creating a CLUSTERED index on two columns would be a really mammoth task. Would creating another table with the same structure and the new indexing scheme, copying the data over, dropping the old table, and renaming the new one be a better approach? I am also not sure how the stored procs will react: will they continue using the cached execution plan, considering that they are not being explicitly recompiled? I am simply not able to understand what kind of "performance improvement" this change will provide; I think it will actually have the reverse effect. All thoughts welcome. Thanks in advance, Raj

  • How to keep your unit test Arrange step simple and still guarantee DDD invariants ?

    - by ian31
    DDD recommends that domain objects should be in a valid state at all times. Aggregate roots are responsible for guaranteeing the invariants, and Factories for assembling objects with all the required parts, so that they are initialized in a valid state. However, this seems to complicate the task of creating simple, isolated unit tests a lot.

    Let's assume we have a BookRepository that contains Books. A Book has: an Author, a Category, and a list of bookstores you can find the book in. These are required attributes: a book has to have an author, a category, and at least one bookstore you can buy it from. There is likely to be a BookFactory, since a Book is quite a complex object, and the Factory will initialize the Book with at least all the mentioned attributes.

    Now we want to unit-test a method of the BookRepository that returns all the Books. To test whether the method returns the books, we have to set up a test context (the Arrange step in AAA terms) where some Books are already in the Repository. If the only tool at our disposal for creating Book objects is the Factory, the unit test now also uses, and is dependent on, the Factory, and indirectly on Category, Author, and Store, since we need those objects to build up a Book and place it in the test context.

    Would you consider this a dependency in the same way that, in a Service unit test, we would be dependent on, say, a Repository that the Service calls? How would you solve the problem of having to re-create a whole cluster of objects in order to test a simple thing? How would you break that dependency and get rid of all the attributes we don't need in our test? By using mocks or stubs? If you mock up the things a Repository contains, what kind of mocks/stubs would you use, as opposed to when you mock up something the object under test talks to or consumes?
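
    One widely used way out is the Test Data Builder pattern: a small test-only helper with valid defaults that delegates to the real factory, so each test states only the attributes it cares about and stays immune to newly added required fields. A hedged Java sketch; every name here is invented from the question's description:

        import java.util.Collections;

        class BookBuilder {
            // Valid defaults keep the aggregate's invariants satisfied;
            // tests override only what they actually assert on.
            private Author author = new Author("any author");
            private Category category = new Category("any category");
            private Store store = new Store("any store");

            BookBuilder by(Author a) { this.author = a; return this; }
            BookBuilder in(Category c) { this.category = c; return this; }

            Book build() {
                // Delegate to the production factory so the invariants
                // are enforced exactly as they are in real code.
                return BookFactory.create(author, category, Collections.singletonList(store));
            }
        }

        // The Arrange step shrinks to: repository.add(new BookBuilder().build());

    This keeps the Factory dependency, but confines the object-cluster noise to one reusable place instead of every test.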

  • Get Function Pointer to function in a shared library I didn't directly load

    - by bdk
    My Linux application (A) links against a third-party shared library (B) to which I don't have source code. This library makes use of another third-party shared library (C) that I don't have source code for either. I believe that (B) uses dlopen to access (C) instead of linking directly; my reasoning is that ldd on (B) does not show (C), while objdump -x on (B) shows references to dlopen/dlclose/dlsym.

    My requirement is that in my code for (A) I need to get a function pointer to a function foo() located in (C). Normally I'd use dlsym for this, but I would need to pass it the handle returned from dlopen, which I don't have, since (B) does not expose it.

    For the larger context: I need to modify the function in (C) such that every time it calls its helper function bar() (also located in (C)), it also calls a function with the same signature located in (A), with the same parameters. Basically, I am injecting my code into the code path of (C)'s foo() -> bar(). I believe I've found a way to accomplish this using gdb, but in porting my gdb command list I'm stuck on the step of getting the function pointer.

    Edit: after writing this I realized I can probably just do another dlopen on the file in my own code, and the symbols returned via dlsym on that handle should be the same as those obtained via the original dlopen, if I'm reading the dlopen man page correctly. However, I'm still interested in advice or assistance with the larger context; I'm also open to alternatives that accomplish the same task, rather than just the exact problem as stated above.

  • Existing parsers in c# (BSD license or similar)

    - by Sylverdrag
    I am looking for parsers (in C#) for a bunch of formats (PHP, ASP, some XML-based formats, HTML... pretty much anything I can get my hands on). The purpose is to separate the text from the code and do some edits without messing up the code. I had a look at ANTLR, but while it seems like the "right tool", there is just too much prior knowledge assumed: I have an easier time writing a parser from scratch than understanding how to "easily" generate parsers with ANTLR. (I wrote a small parser for a specific type of RTF files within a couple of days, so the task is probably within my reach, but as I have no formal knowledge of parsing/lexing, I am at a loss with ANTLR.)

    Then it occurred to me that parsers must already exist for many formats, so before I start writing yet another brand-new and potentially buggy version of the wheel, I figured I would check what parsers already exist and can be reused in a commercial product. I could use parsers for just about every format in existence, so this question would be a good place to make a list of all existing free parsers written in C# (BSD license or similar), if there are any. Thanks in advance for your suggestions.
