Search Results

Search found 4580 results on 184 pages for 'faster'.


  • Idiomatic way to do list/dict in Cython?

    - by ramanujan
    My problem: I've found that processing large data sets with raw C++ using the STL map and vector can often be considerably faster (and have a lower memory footprint) than using Cython. I figure that part of this speed penalty is due to using Python lists and dicts, and that there might be some tricks to use less encumbered data structures in Cython. For example, this page (http://wiki.cython.org/tutorials/numpy) shows how to make numpy arrays very fast in Cython by predefining the size and types of the ND array. Question: is there any way to do something similar with lists/dicts, e.g. by stating roughly how many elements or (key, value) pairs you expect to have in them? That is, is there an idiomatic way to convert lists/dicts to (fast) data structures in Cython? If not, I guess I'll just have to write it in C++ and wrap it in a Cython import.

    Read the article
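
    A minimal sketch of one option, assuming the extension is built in Cython's C++ mode: the libcpp wrappers expose std::vector and std::map directly, so the hot loop never touches Python lists or dicts, and vector.reserve() is the closest analogue to "stating roughly how many elements you expect". The function name and the assumption of integer data are illustrative only.

        # distutils: language = c++
        # Sketch: typed C++ containers instead of Python list/dict in the hot loop.
        from libcpp.vector cimport vector
        from libcpp.map cimport map as cpp_map

        def count_values(data):          # 'data': any sized Python iterable of ints (assumed)
            cdef vector[int] seen
            cdef cpp_map[int, int] counts
            cdef int x
            seen.reserve(len(data))      # pre-size, per the "expected size" question
            for x in data:
                seen.push_back(x)        # plain std::vector::push_back, no boxing
                counts[x] += 1           # std::map::operator[] default-constructs to 0
            return counts                # Cython coerces the map back to a Python dict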

  • SAXException: Unexpected end of file after null

    - by itsadok
    I'm getting the error in the title occasionally from a process that parses lots of XML files. The files themselves seem OK, and running the process again on the same files that generated the error works just fine. The exception occurs on a call to XMLReader.parse(InputStream is). Could this be a bug in the parser (I use Piccolo)? Or is it something about how I open the file stream? No multithreading is involved. Piccolo seemed like a good idea at the time, but I don't really have a good excuse for using it. I will try to switch to the default SAX parser and see if that helps. Update: It didn't help, and I found that Piccolo is considerably faster for some of the workloads, so I went back.

    Read the article
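
    For reference, a minimal sketch of the "switch to the default SAX parser" experiment mentioned above; the class name, the file argument, and the empty DefaultHandler are placeholders for the real handler logic.

        // Sketch: obtain the JDK's default SAX parser instead of Piccolo.
        import java.io.FileInputStream;
        import javax.xml.parsers.SAXParserFactory;
        import org.xml.sax.InputSource;
        import org.xml.sax.XMLReader;
        import org.xml.sax.helpers.DefaultHandler;

        public class DefaultSaxCheck {
            public static void main(String[] args) throws Exception {
                XMLReader reader = SAXParserFactory.newInstance()
                        .newSAXParser().getXMLReader();
                reader.setContentHandler(new DefaultHandler()); // swap in the real handler
                FileInputStream in = new FileInputStream(args[0]);
                try {
                    reader.parse(new InputSource(in));          // same call that failed
                } finally {
                    in.close();
                }
            }
        }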

  • Automatically find compiler options for fastest exe on given machine?

    - by dehmann
    Is there a method to automatically find the best compiler options (on a given machine), which result in the fastest possible executable? Naturally, I use g++ -O3, but there are additional flags that may make the code run faster, e.g. -ffast-math and others, some of which are hardware-dependent. Does anyone know some code I can put in my configure.ac file (GNU autotools), so that the flags will be added to the Makefile automatically by the ./configure command? In addition to automatically determining the best flags, I would be interested in some useful compiler flags that are good to use as a default for most optimized executables.

    Read the article
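
    One hedged approach for configure.ac, assuming the autoconf-archive macro AX_CHECK_COMPILE_FLAG is installed: it only verifies that the compiler accepts each flag, not that the flag actually makes this machine's executable faster, so benchmarking is still on you.

        # configure.ac sketch: probe optional optimization flags one by one.
        AX_CHECK_COMPILE_FLAG([-ffast-math],
          [OPT_CXXFLAGS="$OPT_CXXFLAGS -ffast-math"], [])
        AX_CHECK_COMPILE_FLAG([-march=native],
          [OPT_CXXFLAGS="$OPT_CXXFLAGS -march=native"], [])
        AX_CHECK_COMPILE_FLAG([-funroll-loops],
          [OPT_CXXFLAGS="$OPT_CXXFLAGS -funroll-loops"], [])
        AC_SUBST([OPT_CXXFLAGS])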

  • SSD and programming

    - by Simon Johnson
    I'm trying to put together a business case for getting every developer in our company an Intel SSD drive. The main codebase contains roughly 400,000 lines of code. My theory is that since the code is scattered across maybe 1500 files, an SSD drive would be substantially faster for compiles. The logic being that many small reads really punish the seek-time bottleneck of a traditional hard drive. Am I right? Is an SSD worth the money in productivity gains by reducing the edit/compile cycle time?

    Read the article

  • Decoding subsampled bitmaps in Android

    - by hgpc
    I decode bitmaps from the SD card using BitmapFactory.decodeFile. Sometimes the bitmaps are bigger than what the application needs or than the heap allows, so I use BitmapFactory.Options.inSampleSize to request a subsampled (smaller) bitmap. The problem is that the platform does not enforce the exact value of inSampleSize, and I sometimes end up with a bitmap either too small, or still too big for the available memory. From http://developer.android.com/reference/android/graphics/BitmapFactory.Options.html#inSampleSize: "Note: the decoder will try to fulfill this request, but the resulting bitmap may have different dimensions than precisely what has been requested. Also, powers of 2 are often faster/easier for the decoder to honor." How should I decode bitmaps from the SD card to get a bitmap of the exact size I need while consuming as little memory as possible to decode it?

    Read the article
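
    A common two-pass sketch (path, targetW, and targetH are assumed inputs): decode bounds only, pick the largest power-of-two inSampleSize that still over-shoots the target, then scale the result down to the exact size.

        // Pass 1: read dimensions without allocating pixel memory.
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inJustDecodeBounds = true;
        BitmapFactory.decodeFile(path, opts);

        // Largest power of 2 that keeps the decoded bitmap >= the target size.
        int sample = 1;
        while (opts.outWidth / (sample * 2) >= targetW
                && opts.outHeight / (sample * 2) >= targetH) {
            sample *= 2;
        }

        // Pass 2: subsampled decode, then an exact-size scale.
        opts.inJustDecodeBounds = false;
        opts.inSampleSize = sample;
        Bitmap rough = BitmapFactory.decodeFile(path, opts);
        Bitmap exact = Bitmap.createScaledBitmap(rough, targetW, targetH, true);
        if (exact != rough) {
            rough.recycle();   // free the oversized intermediate immediately
        }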

  • How to run a script from the command line

    - by Eric
    I want to write a script that counts the types of objects there are in the ZODB, when they were created, how many users have joined since a given point in time, etc. I am wondering how to accomplish this; specifically, is there a way to pass a script to bin/instance to be executed? I've already created this as a Python script, but it takes a VERY long time to finish, which is why I would like to do this from the command line, in the hopes of it running faster. Thanks, Eric

    Read the article
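
    For reference, "bin/instance run <script>" executes a script with the Zope application root bound to the name app. A sketch follows; the site id "Plone", the use of portal_catalog, and the Python 2 syntax of the era are assumptions.

        # count_types.py -- run with: bin/instance run count_types.py
        # 'app' (the Zope application root) is injected by the 'run' command.
        site = app.Plone                       # assumed site id
        counts = {}
        for brain in site.portal_catalog():    # catalog query: no object wake-up
            t = brain.portal_type
            counts[t] = counts.get(t, 0) + 1
        for t in sorted(counts, key=counts.get, reverse=True):
            print t, counts[t]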

  • Slow SelectSingleNode

    - by Simon
    I have a simple structured XML file like this:

        <ttest ID="ttest00001" NickName="map00001"/>
        <ttest ID="ttest00002" NickName="map00002"/>
        <ttest ID="ttest00003" NickName="map00003"/>
        <ttest ID="ttest00004" NickName="map00004"/>
        .....

    The XML file can be around 2.5 MB. In my source code I have a loop to get nicknames; in each iteration, I have something like this:

        nickNameLoopNum = MyXmlDoc.SelectSingleNode("//ttest[@ID='" + testLoopNum + "']").Attributes["NickName"].Value;

    This single line costs me 30 to 40 milliseconds. I found some old articles (dated back to 2002) saying that some sort of compiled XPath can help the situation, but that was 5 years ago. I wonder, is there a modern practice to make it faster? (I'm using .NET 3.5)

    Read the article
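
    The usual modern answer is to stop re-scanning the document: one pass builds a dictionary, and every later lookup is O(1). A sketch, with MyXmlDoc assumed already loaded:

        using System.Collections.Generic;
        using System.Xml;

        // Build the index once after loading MyXmlDoc.
        Dictionary<string, string> nickNames = new Dictionary<string, string>();
        foreach (XmlNode node in MyXmlDoc.SelectNodes("//ttest"))
        {
            nickNames[node.Attributes["ID"].Value] =
                node.Attributes["NickName"].Value;
        }

        // In the loop: a constant-time hash lookup replaces the document scan.
        string nickNameLoopNum = nickNames[testLoopNum];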

  • Java - SwingWorker - Can we call one SwingWorker from another SwingWorker instead of the EDT?

    - by Yatendra Goel
    I have a SwingWorker as follows:

        public class MainWorker extends SwingWorker<Void, MyObject> {
            ...
        }

    I invoked the above SwingWorker from the EDT:

        MainWorker mainWorker = new MainWorker();
        mainWorker.execute();

    Now, the mainWorker creates 10 instances of a MyTask class, so that each instance runs on its own thread and the work completes faster. But the problem is that I want to update the GUI from time to time while the tasks are running. I know that if the tasks were executed by the mainWorker itself, I could have used the publish() and process() methods to update the GUI. But as the tasks are executed by threads other than the SwingWorker thread, how can I update the GUI from intermediate results generated by those task threads?

    Read the article
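
    One commonly used sketch: keep the ten tasks on an executor owned by doInBackground() and have each task hand its result to the worker's publish(). The javadoc only describes calling publish() from doInBackground, but its implementation synchronizes internally and this cross-thread funneling is a widely used pattern; process() then runs on the EDT as usual. MyTask and its compute() method are placeholders.

        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        import javax.swing.SwingWorker;

        public class MainWorker extends SwingWorker<Void, MyObject> {
            @Override
            protected Void doInBackground() throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(10);
                for (int i = 0; i < 10; i++) {
                    final MyTask task = new MyTask();          // hypothetical task class
                    pool.submit(new Runnable() {
                        public void run() {
                            MyObject result = task.compute();  // hypothetical method
                            publish(result);                   // funnels back to process()
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS);      // wait for all tasks
                return null;
            }

            @Override
            protected void process(List<MyObject> chunks) {
                // Runs on the EDT: update the GUI with intermediate results here.
            }
        }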

  • Adjacency List Tree Using Recursive WITH (Postgres 8.4) instead of Nested Set

    - by Koobz
    I'm looking for a Django tree library and doing my best to avoid Nested Sets (they're a nightmare to maintain). The cons of the adjacency list model have always been an inability to fetch descendants without resorting to multiple queries. The WITH clause in Postgres seems like a solid solution to this problem. Has anyone seen any performance reports regarding WITH vs. Nested Set? I assume the Nested set will still be faster but as long as they're in the same complexity class, I could swallow a 2x performance discrepancy. Django-Treebeard interests me. Does anyone know if they've implemented the WITH clause when running under Postgres? Has anyone here made the switch away from Nested Sets in light of the WITH clause?

    Read the article
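
    For anyone comparing, a sketch of the descendant query under an assumed adjacency-list schema node(id, parent_id):

        -- All descendants of node 1, with depth, in one round-trip (Postgres 8.4+).
        WITH RECURSIVE subtree AS (
            SELECT id, parent_id, 0 AS depth
            FROM node
            WHERE id = 1
          UNION ALL
            SELECT child.id, child.parent_id, subtree.depth + 1
            FROM node AS child
            JOIN subtree ON child.parent_id = subtree.id
        )
        SELECT id, parent_id, depth
        FROM subtree
        ORDER BY depth, id;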

  • Microsoft T-SQL to Oracle PL/SQL translation

    - by Michael Prewecki
    I've worked with T-SQL for years, but I've just moved to an organisation that is going to require writing some Oracle stuff, probably just simple CRUD operations, at least until I find my feet. I'm not going to be migrating databases from one to the other, simply interacting with existing Oracle databases from an application-development perspective. Is there a tool or utility available to easily translate T-SQL into PL/SQL? A keyword mapper is the sort of thing I'm looking for. P.S. I'm too lazy to RTFM; besides, it's not going to be a big part of my role, so I just want something to get me up to speed a little faster.

    Read the article
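
    No tool endorsement here, but a sketch of a few of the keyword mappings such a cheat sheet would cover (emp and t are placeholder tables):

        -- T-SQL                              -- Oracle equivalent
        SELECT TOP 10 * FROM emp;             -- SELECT * FROM emp WHERE ROWNUM <= 10;
        SELECT GETDATE();                     -- SELECT SYSDATE FROM dual;
        SELECT ISNULL(col, 0) FROM t;         -- SELECT NVL(col, 0) FROM t;
        SELECT CONVERT(VARCHAR, col) FROM t;  -- SELECT TO_CHAR(col) FROM t;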

  • Best jQuery/Prototype book for complex ajax?

    - by Burton Kent
    I've been working on a complex app with one main dashboard. I don't particularly like the design because it tries to do too much on one page, so the lead developer thought it would be a good idea to use Ajax: because the page is so big, refreshing part of it is far faster than loading it all again. The problem is that there are several ways the data can be used: adding items; editing rows; performing actions on selected rows (selected using a checkbox); and changing single items (like location or phone). My problem is writing GENERALIZABLE Ajax code that can operate on the data in a div, using class names to assemble the proper information for the Ajax call. I did pretty well, but I can't help wanting to see if there's a better way to do it.

    Read the article
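
    Book aside, one generalizable pattern is event delegation with parameters read from the markup, so a single handler covers add/edit/bulk actions. A sketch, assuming jQuery 1.7+ for .on(); the selectors, data- attributes, and endpoint are invented for illustration:

        // One delegated handler for every ajax-enabled control on the dashboard.
        $('#dashboard').on('click', '.ajax-action', function (e) {
            e.preventDefault();
            var $row = $(this).closest('.data-row');          // hypothetical markup
            $.post('/dashboard/' + $(this).data('action'), {  // hypothetical endpoint
                id: $row.data('id'),
                selected: $row.find('input:checkbox:checked').length
            }).done(function (html) {
                $row.replaceWith(html);                       // refresh only this row
            });
        });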

  • Get an image from a UIView

    - by Monobono
    Hi, I want to perform a shrink animation on a UITableView. I experimented a bit and found that the animation runs much faster when I shrink a UIImageView holding an image of the table view's current state instead of shrinking the table view itself. I grab the image in a method in my main view controller prior to the animation:

        UIGraphicsBeginImageContext(mainTableView.bounds.size);
        [resizeContainer.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

    It works like a charm, or almost: on very rare occasions I get weird graphical glitches, where the UIImage starts to overlap a toolbar that lies underneath it. I just want to make sure that I am getting the image the right way; I lack the necessary understanding of graphics contexts to be sure about it. To cut a long story short: is my code correct? Thx

    Read the article
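
    The glitches may come from the mismatch above: the context is sized to mainTableView while renderInContext: is called on resizeContainer's layer, and plain UIGraphicsBeginImageContext ignores screen scale and opacity. A sketch of a safer helper, assuming iOS 4+ for the WithOptions variant (and QuartzCore imported for -[CALayer renderInContext:]):

        // Render any view into an image whose size, scale and opacity match it.
        - (UIImage *)imageFromView:(UIView *)view {
            UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
            [view.layer renderInContext:UIGraphicsGetCurrentContext()];
            UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            return image;   // ready to hand to a UIImageView for the animation
        }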

  • Move a million records from a MEMORY table to a MyISAM table

    - by Prashant
    Hi, I am looking for a fast way to move records from a MEMORY table to a MyISAM table. The MEMORY table has around 0.5 million records. Both tables have exactly the same structure (same number of columns, data types, etc.), but the MyISAM table is indexed (B-TREE) on a few columns. There are around 25 columns, most of which are unsigned integers. I have already tried an INSERT INTO ... SELECT * FROM ... query, but is there any faster way to do this? Appreciate your help. Prashant

    Read the article
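
    One commonly cited speed-up for exactly this bulk copy, sketched with placeholder table names: defer non-unique index maintenance until after the insert, so MyISAM rebuilds the B-TREEs in one pass instead of row by row.

        ALTER TABLE myisam_tbl DISABLE KEYS;              -- non-unique indexes only
        INSERT INTO myisam_tbl SELECT * FROM memory_tbl;
        ALTER TABLE myisam_tbl ENABLE KEYS;               -- bulk-rebuilds the indexes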

  • How can I run NUnit(Selenium Grid) tests in parallel?

    - by Benjamin Lee
    My current project uses NUnit for unit tests and to drive UATs written with Selenium. Developers normally run tests using ReSharper's test runner in VS.Net 2003, and our build box kicks them off via NAnt. We would like to run the UAT tests in parallel so that we can take advantage of Selenium Grid/RCs and they will run much faster. Does anyone have any thoughts on how this might be achieved, and/or best practices for running Selenium tests against multiple browser environments without writing duplicate tests? Thank you.

    Read the article
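
    One later option, sketched on the assumption that upgrading to NUnit 3 is acceptable (it postdates this question): parameterized fixtures per browser, marked parallelizable, so the grid receives concurrent sessions without duplicated test code.

        using NUnit.Framework;

        // Each browser gets its own fixture instance; fixtures run in parallel.
        [TestFixture("*firefox")]
        [TestFixture("*chrome")]
        [Parallelizable(ParallelScope.Fixtures)]
        public class LoginUat
        {
            private readonly string browser;
            public LoginUat(string browser) { this.browser = browser; }

            [Test]
            public void UserCanLogIn()
            {
                // Start a Selenium session against the Grid hub for 'browser'
                // and drive the UAT steps here.
            }
        }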

  • Python optimization

    - by Rami Jarrar
    Hi, I do it like this:

        f = open('wl4.txt', 'w')
        hh = 0
        ######################################
        for n in range(1,5):
            for l in range(33,127):
                if n==1:
                    b = chr(l) + '\n'
                    f.write(b)
                    hh += 1
                elif n==2:
                    for s0 in range(33, 127):
                        b = chr(l) + chr(s0) + '\n'
                        f.write(b)
                        hh += 1
                elif n==3:
                    for s0 in range(33, 127):
                        for s1 in range(33, 127):
                            b = chr(l) + chr(s0) + chr(s1) + '\n'
                            f.write(b)
                            hh += 1
                elif n==4:
                    for s0 in range(33, 127):
                        for s1 in range(33, 127):
                            for s2 in range(33,127):
                                b = chr(l) + chr(s0) + chr(s1) + chr(s2) + '\n'
                                f.write(b)
                                hh += 1
        ######################################
        print "We Made %d Words." %(hh)
        ######################################
        f.close()

    So, is there any method to make it faster?

    Read the article
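
    A sketch of the same enumeration with itertools.product, which removes the per-word branching and the duplicated nested loops; the output order matches the original (Python 2, like the question):

        from itertools import product

        chars = [chr(c) for c in range(33, 127)]
        hh = 0
        f = open('wl4.txt', 'w')
        for n in range(1, 5):
            for combo in product(chars, repeat=n):   # all words of length n
                f.write(''.join(combo) + '\n')
                hh += 1
        f.close()
        print "We Made %d Words." % hh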

  • Quicker searching in JScript using the Bash

    - by gentlesea
    I am using the following JScript code to search for a string inside a file:

        var myFile = aqFile.OpenTextFile(fileToSearchIn, aqFile.faRead, aqFile.ctANSI);
        while (!myFile.IsEndOfFile())
        {
            s = myFile.ReadLine();
            if (aqString.Find(s, searchString) != -1)
                Log.Checkpoint(searchString + " found.", s);
        }
        myFile.Close();

    This is rather slow, so I was thinking about using bash commands to speed up the in-file search:

        var WshShell = new ActiveXObject("WScript.Shell");
        var oExec = WshShell.Exec("C:\\cygwin\\bin\\bash.exe -c 'cat \"" + folderName + "/" + fileName + "\"'");
        while (!oExec.StdOut.AtEndOfStream)
            Log.Checkpoint(oExec.StdOut.ReadLine());
        while (!oExec.StdErr.AtEndOfStream)
            Log.Error(oExec.StdErr.ReadLine());

    But since a new window opens every time bash.exe is started, the search is no faster than before. Is there a way to run bash in the background, using another switch?

    Read the article
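
    Independent of the window question, most of the cost above is shipping the whole file through cat and matching it line by line in JScript. A sketch that pushes the matching into grep instead (cygwin path as in the original; searchString is assumed shell-safe):

        // Let grep do the search; JScript only reads back the matching lines.
        var shell = new ActiveXObject("WScript.Shell");
        var cmd = "C:\\cygwin\\bin\\bash.exe -c \"grep -n '" + searchString + "' '"
                + folderName + "/" + fileName + "'\"";
        var oExec = shell.Exec(cmd);
        while (!oExec.StdOut.AtEndOfStream)
            Log.Checkpoint(searchString + " found.", oExec.StdOut.ReadLine());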

  • Differences between wmode="transparent", "opaque", and "window" for an embedded object on a webpage

    - by Jian Lin
    When embedding a Flash object with the <object> and <embed> tags, there is an attribute called "wmode". It seems that most of the time wmode="transparent" looks the same as wmode="opaque", because the Flash movie doesn't actually have any transparent region through which the underlying HTML element would show. As a result, "opaque" should be faster than "transparent", since it requires no processing for transparency; yet most of the time I see Flash objects embedded with "transparent" instead of "opaque". "opaque" is needed so that other HTML elements won't be covered up by the Flash object (such as a menu item that pops up an extra sub-menu). By the way, is there formal documentation for wmode's "opaque", "transparent", and "window"? I was only able to find blogs that describe it, not the formal documentation. Thanks.

    Read the article
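
    For reference, a sketch of where the attribute actually lives in the markup; the <param> for <object> and the attribute on <embed> must agree (movie.swf and the dimensions are placeholders):

        <object type="application/x-shockwave-flash" data="movie.swf"
                width="400" height="300">
          <param name="movie" value="movie.swf" />
          <param name="wmode" value="opaque" />
          <embed src="movie.swf" type="application/x-shockwave-flash"
                 width="400" height="300" wmode="opaque" />
        </object>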

  • Statistics based marketing campaign measurement tools

    - by AFHood
    Currently using SAS as the measurement engine and Business Objects as the display layer. Looking to develop a new, faster, slicker solution. Has anyone developed or purchased a campaign measurement reporting system? This solution should measure everything from email stats, web stats, and customer activity to lift, ROI, etc. OK, I'm researching and finding nada. We are working with a team from India and they want to re-write everything from scratch. Any solutions out there at all?

    Read the article

  • Understanding memory and cpu speed

    - by tipu
    Firstly, I am working on a Windows XP 64 machine with 4 GB of RAM and a quad-core 2.29 GHz CPU. I am indexing 220,000 lines of text that are more or less the same length, divided into 15 equally sized files. File 1 of 15 takes 1 minute to index, but as the script indexes more files it seems to take much longer, with file 15 of 15 taking 40 minutes. My understanding is that the more I put in memory, the faster the script is. The dictionary is indexed in a hash, so fetch operations should be O(1). I am not sure where the script is burning the CPU. I have the script here.

    Read the article
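
    Before blaming memory, it may be worth profiling an early file against a late one; a sketch, where index_file stands in for whatever the script's per-file entry point actually is:

        # Compare where the time goes on a fast file vs. a slow one.
        import cProfile
        import pstats

        cProfile.run('index_file("file01.txt")', 'fast.prof')   # the ~1 minute file
        cProfile.run('index_file("file15.txt")', 'slow.prof')   # the ~40 minute file
        for name in ('fast.prof', 'slow.prof'):
            pstats.Stats(name).sort_stats('cumulative').print_stats(10)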

  • Multiple modules under one solution

    - by Vicky
    I have a project under which various distributed applications are placed, and the whole project builds into a single DLL. Hence, if any issue occurs we need to re-build the whole application to run it or import it to our server. I am looking for the best possible way to make the build process faster, and I'm also a bit worried about the size of the DLL. Is it better to have a separate DLL for each module, or to split the application into multiple projects? Can anyone suggest the best way, or an example, to deal with this situation? Any help would be appreciated. Thanks in advance.

    Read the article

  • Fastest way to list all primes below N in python

    - by jbochi
    This is the best algorithm I could come up with after struggling with a couple of Project Euler questions.

        def get_primes(n):
            numbers = set(range(n, 1, -1))
            primes = []
            while numbers:
                p = numbers.pop()
                primes.append(p)
                numbers.difference_update(set(range(p*2, n+1, p)))
            return primes

        >>> timeit.Timer(stmt='get_primes.get_primes(1000000)', setup='import get_primes').timeit(1)
        1.1499958793645562

    Can it be made even faster? EDIT: This code has a flaw: since numbers is an unordered set, there is no guarantee that numbers.pop() will remove the lowest number from the set. Nevertheless, it works (at least for me) for some input numbers:

        >>> sum(get_primes(2000000))
        142913828922L  # the correct sum of all primes below 2 million
        >>> 529 in get_primes(1000)
        False
        >>> 529 in get_primes(530)
        True

    EDIT: The rank so far (pure Python, no external sources, all primes below 1 million): my own implementation of Sundaram's Sieve: 327 ms; Daniel's Sieve: 435 ms; Alex's recipe from the Cookbook: 710 ms. EDIT: ~unutbu is leading the race.

    Read the article
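
    For comparison, a sketch of a plain list-based Sieve of Eratosthenes, which sidesteps the unordered-pop flaw entirely (the slice assignment does the striking-out at C speed):

        def primes_below(n):
            """All primes < n, via a boolean sieve."""
            if n < 3:
                return []
            sieve = [True] * n
            sieve[0] = sieve[1] = False
            for p in range(2, int(n ** 0.5) + 1):
                if sieve[p]:
                    sieve[p*p:n:p] = [False] * len(range(p*p, n, p))
            return [i for i, prime in enumerate(sieve) if prime]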

  • Is yield break equivalent to returning Enumerable.Empty<T>() from a method returning IEnumerable<T>?

    - by Mike Two
    These two methods appear to behave the same to me:

        public IEnumerable<string> GetNothing()
        {
            return Enumerable.Empty<string>();
        }

        public IEnumerable<string> GetLessThanNothing()
        {
            yield break;
        }

    I've profiled each in test scenarios and I don't see a meaningful difference in speed, but the yield break version is slightly faster. Are there any reasons to use one over the other? Is one easier to read than the other? Is there a behavior difference that would matter to a caller?

    Read the article
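
    One observable difference, sketched below against the two methods above: Enumerable.Empty<T>() is documented to cache a single empty sequence per T, while each call to the iterator method builds a fresh compiler-generated state machine, so the yield break version allocates where the other usually doesn't.

        // Compares instance identity across repeated calls.
        Console.WriteLine(object.ReferenceEquals(
            Enumerable.Empty<string>(), Enumerable.Empty<string>()));   // True
        Console.WriteLine(object.ReferenceEquals(
            GetLessThanNothing(), GetLessThanNothing()));               // typically False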

  • One table, need multiple values from different rows/tuples

    - by WmasterJ
    I have tables like:

        'profile_values'
        userID | fid | value
        -------+-----+----------------
        1      | 3   | [email protected]
        1      | 45  | 203-234-2345
        2      | 3   | [email protected]
        2      | 45  | 123-456-7890

    And:

        'users'
        userID | name
        -------+------
        1      | joe
        2      | jane
        3      | jake

    I want to join them and have one row with two of the values, like:

        userID | name | email          | phone
        -------+------+----------------+--------------
        1      | joe  | [email protected] | 203-234-2345
        2      | jane | [email protected] | 123-456-7890

    I have solved it, but it feels clumsy and I want to know if there is a better way to do it, meaning solutions that are either more readable, faster (optimized), or simply best practice. Current solution: multiple tables selected, many conditional statements:

        SELECT u.userID AS memberid,
               u.name,
               pv1.value AS email,
               pv2.value AS phone
        FROM users AS u, profile_values AS pv1, profile_values AS pv2
        WHERE u.userID = pv1.userID AND pv1.fid = 3
          AND u.userID = pv2.userID AND pv2.fid = 45;

    Thanks for the help!

    Read the article
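
    A sketch of the usual cleanup: one explicit JOIN per profile field, with LEFT JOIN so users missing a value still appear.

        SELECT u.userID       AS memberid,
               u.name,
               email.value    AS email,
               phone.value    AS phone
        FROM users AS u
        LEFT JOIN profile_values AS email
               ON email.userID = u.userID AND email.fid = 3
        LEFT JOIN profile_values AS phone
               ON phone.userID = u.userID AND phone.fid = 45;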

  • Speed up compilation with mockito on Android

    - by pbreault
    I am currently developing an Android app in Eclipse using one project for the app and one project for the tests (instrumentation and POJO tests). In the test project, I am importing the Mockito library for standard POJO testing. However, when I import the library, the compilation time skyrockets from 1 second to about 30 seconds in Eclipse. The cause seems to be that the whole library is converted each time. So basically, each time I make a modification that I want to test, I have to wait 30 seconds. The only workarounds I have found so far would be to: disable "Build Automatically"; create a project that includes only the POJO tests and put Mockito only there; or use another library that compiles faster (e.g. EasyMock). Any other suggestion?

    Read the article

  • Efficient algorithm to generate all solutions of a linear diophantine equation with ai=1

    - by Ben
    I am trying to generate all the solutions to the following equations for a given H. With H = 4:

        1) ALL solutions for x_1 + x_2 + x_3 + x_4 = 4
        2) ALL solutions for x_1 + x_2 + x_3 = 4
        3) ALL solutions for x_1 + x_2 = 4
        4) ALL solutions for x_1 = 4

    For my problem, there are always 4 equations to solve (each independently of the others), and there are a total of 2^(H-1) solutions. For the example above, the solutions are:

        1) 1 1 1 1
        2) 1 1 2 and 1 2 1 and 2 1 1
        3) 1 3 and 3 1 and 2 2
        4) 4

    Here is an R algorithm which solves the problem:

        library(gtools)
        H <- 4
        solutions <- NULL
        for (i in seq(H)) {
          res <- permutations(H - i + 1, i, repeats.allowed = TRUE)
          resum <- apply(res, 1, sum)
          id <- which(resum == H)
          print(paste("solutions with ", i, " variables", sep = ""))
          print(res[id, ])
        }

    However, this algorithm does more work than needed: it generates every tuple and then filters out those whose sum is not H. I am sure it is possible to go faster by never generating the non-solutions. Any idea of a better algorithm for a given H?

    Read the article
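
    A sketch of a direct generator in R: a composition of H into k positive parts is exactly a choice of k-1 cut points among the H-1 gaps between H units, so nothing is generated and thrown away (2^(H-1) rows in total).

        compositions <- function(H, k) {
          if (k == 1) return(matrix(H, nrow = 1))
          cuts <- combn(H - 1, k - 1)            # each column: one set of cut points
          t(apply(cuts, 2, function(cp) diff(c(0, cp, H))))  # gap widths = the parts
        }

        H <- 4
        for (k in seq_len(H)) {
          print(compositions(H, k))              # all solutions of x_1 + ... + x_k = H
        }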
