Search Results

Search found 4580 results on 184 pages for 'faster'.

  • C++ String pointers

    - by gnm
    In my previous app I had an object like this:

        class myType {
        public:
            int a;
            string b;
        };

    It had a lot of instances scattered everywhere and was passed around to nearly every function. The app was slow, and profiling said that 95% of the time was eaten by the string allocator function. I was told to use a pointer instead, as below, but I know how to work with the object above, not with string pointers:

        class myType {
        public:
            int a;
            string* b;
        };

    How much faster is it with a string pointer? What is copied when I copy the object? And how do I do the following using the class with the pointer: access the string value; modify the string value without modifying the one in the object (a copy?); and in general, what changes if I use string pointers?

    Read the article

  • In Python, how do I search a flat file for the closest match to a particular numeric value?

    - by kaushik
    I have a file with data in the format:

        3.343445 1
        3.54564 1
        4.345535 1
        2.453454 1

    and so on, up to 1000 lines. Given a number such as a = 2.44443, I need to find the row number of the value in the file that is closest to "a". At present I do this by loading the whole file into a list and comparing each element to find the closest one. Is there a better, faster method? I need to run this for a different file each time, around 20,000 times, so I want a fast method. My code:

        p = os.path.join("c:/begpython/wavnk/", str(save_a[1]).replace('phone', 'text') + '.pm')
        x = open(p, 'r')
        for i in range(6):
            x.readline()
        j = 0
        o = []
        for line in x:
            oj = line.rstrip('\n').split(' ')
            o = o + [oj]
            j = j + 1
        temp = long(1232332)
        end_time = save_a[4]
        for i in range((j - 1)):
            diff = float(o[i][0]) - float(end_time)
            if diff < 0:
                diff = diff * (-1)
            if temp > diff:
                temp = diff
                pm_row = i
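
    A single-pass alternative (a sketch, assuming the target value is in the first column and that six header lines should be skipped, as in the code above):

        def closest_row(path, a, skip=6):
            # 0-based row number (after the header) whose first column is closest to a
            with open(path) as f:
                for _ in range(skip):
                    f.readline()
                values = (float(line.split()[0]) for line in f)
                row, _ = min(enumerate(values), key=lambda iv: abs(iv[1] - a))
            return row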

    Read the article

  • Combining XSLT transforms

    - by Flynn1179
    Is there a way to combine two XSLT documents into a single XSLT document that does the same as transforming with the original two in sequence? That is, combining XSLTA and XSLTB into XSLTC such that XSLTB(XSLTA(xml)) == XSLTC(xml)? There are three reasons I'd like to be able to do this:

    1. It simplifies development. Some operations need sequential transforms, and although I can write a combined one by hand, it's a lot more difficult to maintain than two much simpler, separate transforms.
    2. Speed: one transform is in most cases hopefully faster than two.
    3. I'm currently working on a program that transforms an XML data file into an XHTML page capable of editing it, using one XSLT, and a second XSLT that transforms the XHTML page back into the data file when it's saved. One test I hope to be able to do is to combine the two and easily confirm that the combined XSLT leaves the data unchanged.

    Read the article

  • What is a good way to do countif in Python?

    - by tolomea
    I want to count how many members of an iterable meet a given condition, in a way that is clear, simple, and preferably reasonably optimal. My current best ideas are:

        sum(meets_condition(x) for x in my_list)

    and

        len([x for x in my_list if meets_condition(x)])

    The first one, being iterator-based, is presumably faster for big lists, and it's the same form as you'd use for testing any and all. However, it depends on the fact that int(True) == 1, which is somewhat ugly. The second one seems easier to read to me, but it differs from the any and all forms. Does anyone have better suggestions? Is there a library function somewhere that I am missing?
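
    A common middle ground (a sketch using the question's names) keeps the lazy generator form but counts explicitly, so nothing rests on int(True) == 1:

        count = sum(1 for x in my_list if meets_condition(x))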

    Read the article

  • How to improve the speed of a loop containing a sqlalchemy query statement as a conditional

    - by LtPinback
    This loop checks whether a record is in the sqlite database, builds a list of dictionaries for the records that are missing, and then executes a multiple-insert statement with the list. This works, but it is very slow (at least I think it is slow): it takes 5 minutes to loop over 3500 queries. I am a complete newbie with Python, sqlite, and sqlalchemy, so I wonder if there is a faster way of doing this.

        list_dict = []
        session = Session()
        for data in data_list:
            if session.query(Class_object)\
                      .filter(Class_object.column_name_01 == data[2])\
                      .filter(Class_object.column_name_00 == an_id)\
                      .count() == 0:
                list_dict.append({'column_name_00': a_id, 'column_name_01': data[2]})
        conn = engine.connect()
        conn.execute(prices.insert(), list_dict)
        conn.close()
        session.close()

    Edit: I moved session = Session() outside the loop. It did not make a difference.
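
    One way to avoid a round trip per row (a sketch reusing the question's names; untested against the real schema) is to fetch the already-stored values once and test membership in a Python set:

        session = Session()
        # one query: every column_name_01 value already stored for this id
        existing = set(value for (value,) in
                       session.query(Class_object.column_name_01)
                              .filter(Class_object.column_name_00 == an_id))
        list_dict = [{'column_name_00': a_id, 'column_name_01': data[2]}
                     for data in data_list if data[2] not in existing]
        session.close()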

    Read the article

  • Javascript: parseInt() with trailing characters

    - by chris_l
    parseInt("7em", 10); returns 7 in all browsers I tested [*]. But can I rely on this? The reason I ask is, that I want to perform some calculations based on em, like /* elem1.style.top uses em units */ elem2.style.top = parseInt(elem1.style.top, 10) + 1 + "em"; I could do this with regular expressions, but parseInt is easier to use, and probably a bit faster. Or is there another solution (maybe using jQuery)? [*] Tested so far on: IE 6, IE 8, Safari 4, Firefox 3.6, Opera 10.5

    Read the article

  • PHP image resize on the fly vs storing resized images

    - by Pablo
    I'm building an image-sharing site and would like to know the pros and cons of resizing images on the fly with PHP versus storing the resized images. Which is faster? Which is more reliable? How big is the gap between the two methods in speed and performance? Please note that either way the images go through a PHP script, for statistics like views or to check whether hotlinking is allowed, etc., so it's not as if images will be served by a direct link if I opt to store the resized images. I'd appreciate your comments or any helpful links on the subject. Thanks.

    Read the article

  • MySQL Integer vs DateTime index

    - by David Kuridža
    Let me start by saying I have looked at many similar questions, but all of them relate to the Timestamp and DateTime field types without indexing. At least that is my understanding. As we all know, there are certain advantages when it comes to DateTime. Putting them aside for a minute, and assuming the table's engine is InnoDB with 10+ million records, which query would perform faster when the criterion is based on:

    1. DateTime with an index
    2. int with an index

    In other words, is it better to store date and time as DateTime or as a UNIX timestamp in an int? Keep in mind there is no need for any built-in MySQL functions to be used.

    Read the article

  • iPhone: Get value in multi-dimensional array

    - by Nic Hubbard
    I have an array that is many levels deep, and I am wondering about the best way to get at one of the child element values deep in the array. I assume I need to use a recursive method, but what is the best way to do this? Or is there a faster way? The array comes from an XML parser I am using, which builds everything into an array like this (using NSLog to show the structure):

        {
            children = (
                {
                    children = (
                        {
                            children = (
                                {
                                    children = ( );
                                    data = 12;
                                    element = AssetID;
                                }
                            );
                            data = "";
                            element = "ns1:GetUserIdByUsernameResponse";
                        }
                    );
                    data = "";
                    element = "SOAP-ENV:Body";
                }
            );
            data = "";
            element = "SOAP-ENV:Envelope";
        }

    What I would like to get at is the AssetID data, which in this case is 12.

    Read the article

  • mysql stored routine vs. mysql-alternative?

    - by user522962
    We are using a MySQL database with about 150,000 records (names) in total. Our searches on the 'names' field are done through an autocomplete function in PHP. We have the table indexed, but the searching still feels a bit sluggish (a few full seconds, versus something like Google Finance with its near-instant response). We came up with two possibilities, but wanted to get more insight:

    1. Can we create a bunch (many thousands or more) of stored procedures to speed up searches, or will creating that many stored procedures bog down the db?
    2. Is there a faster alternative to MySQL for SELECT statements? (Speed of inserting and updating rows isn't too important, so we can sacrifice that if necessary.) I've vaguely heard of BigTable and others that don't support JOIN statements... we need JOIN statements for some of our other queries.

    Thanks.

    Read the article

  • Getting Factors of a Number

    - by Dave
    Hi. Problem: I'm trying to refactor this algorithm to make it faster. What would be the first refactoring here for speed?

        public int GetHowManyFactors(int numberToCheck)
        {
            // we know 1 and numberToCheck itself are factors
            int factorCount = 2;
            // start from 2, as we know 1 is a factor, and stop below numberToCheck, as it is a factor too
            for (int i = 2; i < numberToCheck; i++)
            {
                if (numberToCheck % i == 0)
                    factorCount++;
            }
            return factorCount;
        }
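
    The classic first step is to count divisors in pairs (i, n / i) up to the square root, which turns the O(n) scan into O(sqrt(n)). A sketch of the idea (in Python for brevity; the translation to C# is mechanical):

        import math

        def count_factors(n):
            count = 0
            for i in range(1, math.isqrt(n) + 1):
                if n % i == 0:
                    # i and n // i form a factor pair; a square root counts once
                    count += 1 if i == n // i else 2
            return count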

    Read the article

  • Display graph without saving using pydot

    - by user506710
    Hello all. I am trying to display a simple graph using pydot. Is there any way to display the graph without writing it to a file? Currently I use the write function to draw it, and then have to use the Image module to show the file. As an update to the same question: I notice that while the image gets saved very quickly, when I use the show command of the Image module it takes noticeable time for the image to appear. Sometimes I also get an error saying the image couldn't be opened because it was either deleted or saved in an unavailable location, which is not correct, as I am saving it to my Desktop. Does anyone know what's happening, and is there a faster way to get the image loaded? Thanks a lot.
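
    One way to skip the temporary file entirely (a sketch; it assumes Graphviz is installed and uses pydot's create_png() together with PIL):

        import io
        import pydot
        from PIL import Image

        graph = pydot.Dot(graph_type='digraph')
        graph.add_edge(pydot.Edge('a', 'b'))

        png_bytes = graph.create_png()            # rendered in memory, no file written
        Image.open(io.BytesIO(png_bytes)).show()  # display straight from the buffer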

    Read the article

  • Is there a 'catch' with FastFormat?

    - by Roddy
    I just read about the FastFormat C++ I/O formatting library, and it seems too good to be true: faster even than printf, typesafe, and with what I consider a pleasing interface:

        // prints: "This formats the remaining arguments based on their order - in this case we put 1 before zero, followed by 1 again"
        fastformat::fmt(std::cout, "This formats the remaining arguments based on their order - in this case we put {1} before {0}, followed by {1} again", "zero", 1);

        // prints: "This writes each argument in the order, so first zero followed by 1"
        fastformat::write(std::cout, "This writes each argument in the order, so first ", "zero", " followed by ", 1);

    This looks almost too good to be true. Is there a catch? Have you had good, bad or indifferent experiences with it? CW on this question, as there's probably no right answer...

    Read the article

  • Can I replicate some of the optimisations done by the JVM by hand?

    - by Subb
    I'm working on a Sudoku solver at school and we're having a little performance contest. Right now, my algorithm is pretty fast on the first run (about 2.5 ms), but even faster when I solve the same puzzle 10,000 times (about 0.5 ms per run). Those timings, of course, depend on the puzzle being solved. I know the JVM does some optimization when a method is called multiple times, and this is what I suspect is happening. I don't think I can further optimize the algorithm itself (though I'll keep looking), so I was wondering if I could replicate some of the optimizations done by the JVM. Note: compiling to native code is not an option. Thanks!

    Read the article

  • Styled Javascript Popup that Connects to Database

    - by user269799
    I want to create a JavaScript popup box that contains text fields. I want to be able to style this box using CSS, and I want the text-field entries to be inserted into a MySQL database. Is this possible? I'm familiar with doing this through web forms and server-side scripting, but I need it to be a bit more client-side this time, to make things seem a bit faster. I am thinking I may need to learn some AJAX, but any pointers would be a help. GF

    Read the article

  • How to calculate the state of a graph?

    - by zcb
    Given a graph G = (V, E), each node i is associated with Ci objects. At each step, for every node i, the Ci objects are taken away by the neighbors of i in equal shares. After K steps, output the number of objects held by each of the top five nodes, i.e. those with the most objects. Some constraints: |V| < 10^5, |E| < 2*10^5, K < 10^7, Ci < 1000. My current idea is to represent the transformation in each step with a matrix, converting the problem into computing the power of a matrix. But this solution is much too slow, considering |V| can be 10^5. Is there any faster way to do it?
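
    For small graphs, the matrix formulation is easy to sanity-check (a sketch; it ignores the |V| constraint, treats object counts as real numbers, and assumes every node has at least one neighbor):

        import numpy as np

        def state_after_k_steps(adj, c, k):
            # T[j, i] = share of node i's objects handed to neighbor j each step
            deg = adj.sum(axis=0)
            T = adj / deg
            # matrix_power uses repeated squaring: O(|V|^3 log K), not K multiplies
            return np.linalg.matrix_power(T, k) @ c

        adj = np.array([[0, 1, 1],
                        [1, 0, 0],
                        [1, 0, 0]], dtype=float)
        c = np.array([6.0, 2.0, 4.0])
        print(state_after_k_steps(adj, c, 3))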

    Read the article

  • What's the recommended way to create an HTML element and bind a listener to it using jQuery?

    - by Bytecode Ninja
    At the moment I achieve this using something like:

        var myElem = "<tr id='tr-1'><td>content</td></tr>";
        $("#myTable").append(myElem);
        $("#tr-1").click(function() {
            // blah blah
        });

    Traditionally, when I wasn't using jQuery, I used to do something like this:

        var myElem = document.createElement(...);
        var myTable = document.getElementById("myTable");
        myTable.appendChild(myElem);
        myElem.onclick = function() {
            // blah blah
        }

    The thing is, in the second approach I already have a reference to myElem and I don't have to scan the DOM ($("#tr-1")) to find it, as in the jQuery approach, so it should be much faster, especially on big pages. Isn't there a better, more jQuery-ish way to accomplish this task?

    Read the article

  • Fastest way to compare Objects of type DateTime

    - by radbyx
    I made this. Is it the fastest way to find the latest DateTime in my collection of DateTimes? I'm wondering if there is a method for what I'm doing inside the foreach, but even if there is, I can't see how it could be faster than what I already have.

        List<StateLog> stateLogs = db.StateLog.Where(p => p.ProductID == product.ProductID).ToList();
        DateTime lastTimeStamp = DateTime.MinValue;
        foreach (var stateLog in stateLogs)
        {
            int result = DateTime.Compare(lastTimeStamp, stateLog.TimeStamp);
            if (result < 0)
                lastTimeStamp = stateLog.TimeStamp; // set because this timestamp is later
        }

    Read the article

  • Why is my numpy C extension slow?

    - by Bitwise
    I am working with large numpy arrays, and some native numpy operations are too slow for my needs (for example, simple operations such as "bitwise" A & B). I started looking into writing C extensions to try to improve performance. As a test case, I tried the example given here, implementing a simple trace calculation. I was able to get it to work, but was surprised by the performance: for a (1000, 1000) numpy array, numpy.trace() was about 1000 times faster than the C extension! This happens whether I run it once or many times. Is this expected? Is the C extension overhead that bad? Any ideas how to speed things up?
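
    A gap that large often points at per-element overhead rather than the arithmetic itself. A rough way to see the scale of that overhead (a sketch; it compares numpy.trace against a loop that touches one boxed Python float per element, much as a naive extension going through generic object calls would):

        import timeit
        import numpy as np

        a = np.random.rand(1000, 1000)

        def boxed_trace(m):
            # one boxed Python float per diagonal element
            return sum(m[i, i] for i in range(m.shape[0]))

        print(timeit.timeit(lambda: np.trace(a), number=100))
        print(timeit.timeit(lambda: boxed_trace(a), number=100))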

    Read the article

  • Find all numbers that appear in each of a set of lists

    - by Ankur
    I have several ArrayLists of Integer objects, stored in a HashMap. I want to get a list (ArrayList) of all the numbers (Integer objects) that appear in every list. My thinking so far:

    1. Iterate through each ArrayList and put all the values into a HashSet. This gives a "listing" of all the values in the lists, each only once.
    2. Iterate through the HashSet, and for each value perform ArrayList.contains() on every list. If none of the ArrayLists returns false for the operation, add the number to a "master list" which contains all the final values.

    If you can come up with something faster or more efficient, please share it. Funny thing is, as I wrote this I came up with a reasonably good solution, but I'll still post it just in case it is useful for someone else. Of course, if you have a better way, please do let me know.
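
    The set-based shortcut is to intersect the lists directly (Java's Set.retainAll() plays the same role). A sketch of the idea, in Python for brevity:

        def in_all(lists):
            it = iter(lists)
            common = set(next(it, []))   # start from the first list's values
            for lst in it:
                common &= set(lst)       # keep only values seen in every list
            return sorted(common)

        print(in_all([[1, 2, 3], [2, 3, 4], [0, 2, 3]]))  # [2, 3]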

    Read the article

  • Can I use WCF in this case?

    - by BDotA
    We have a third-party application that provides its web services to us via ASMX; it was created back in the .NET 1.1 days. We were using VB 6.0 and connected to it with PocketSOAP, etc., but now we want to replace the VB 6.0 client with a C# 3.5 Windows application and still use that third-party web service. I wish to know: what are my options for doing this? Which one do you recommend, and which one has a faster learning curve? Thanks all.

    Read the article

  • Is it bad practice to select upstream servers based upon the HTTP method?

    - by PartlyCloudy
    I'm wondering if it is bad practice to have a reverse proxy select the upstream server depending on the HTTP method used. The background is that I have an arbitrary web server that handles POST requests with some logic behind them. The same resources also contain static content that can be retrieved using GET. After some benchmarking, I realized that nginx would serve the static content much faster than my arbitrary web server does. I checked the option of forwarding incoming requests internally using nginx, which is feasible. But this would mean that different servers serve the same resource, depending only on whether the request is a GET or a POST, possibly with different header fields.

    Read the article

  • Is there a way to make this C# method shorter and more readable with the help of Linq?

    - by Hamish Grubijan
    The following works, but I figured that since it is all based on IEnumerable, LINQ could come in handy here as well. By the way, is there an equivalent to Directory.GetFiles() which returns an IEnumerable instead of an array? If it exists, would it make the code run any faster? The last part of the question is inspired by the Python language, which favors lightweight generators over concrete lists.

        private IEnumerable<string> getFiles(string strDirectory, bool bCompressedOnly)
        {
            foreach (var strFile in Directory.GetFiles(strDirectory))
            {
                // Don't add any existing Zip files since we don't want to delete previously compressed files.
                if (!bCompressedOnly || Path.GetExtension(strFile).ToLower().Equals(".zip"))
                {
                    yield return strFile;
                }
            }
            foreach (var strDir in Directory.GetDirectories(strDirectory))
            {
                foreach (var strFile in getFiles(strDir, bCompressedOnly))
                {
                    yield return strFile;
                }
            }
        }

    Read the article

  • Are spinlocks a good choice for a memory allocator?

    - by dsimcha
    I've suggested to the maintainers of the D programming language runtime a few times that the memory allocator/garbage collector should use spinlocks instead of regular OS critical sections. This hasn't really caught on. Here are the reasons I think spinlocks would be better:

    1. At least in synthetic benchmarks that I did, they're several times faster than OS critical sections when there's contention for the memory allocator/GC lock. Edit: empirically, using spinlocks didn't even have measurable overhead in a single-core environment, probably because locks need to be held for such a short period of time in a memory allocator.
    2. Memory allocations and similar operations usually take a small fraction of a timeslice, and even a small fraction of the time a context switch takes, making it silly to context switch in the case of contention.
    3. A garbage collection in the implementation in question stops the world anyhow. There won't be any spinning during a collection.

    Are there any good reasons not to use spinlocks in a memory allocator/garbage collector implementation?

    Read the article

  • Twisted Matrix and telnet server implementation

    - by ypercube
    I have a project which is essentially a game server where users connect and send text commands via telnet. The code is in C, really old, unmodular, with several bugs and missing features; the main function alone is half the code. I came to the conclusion that rewriting it with Twisted could actually lead to faster completion, besides other benefits. So here is the question: which packages and modules should I use? I see a "telnet" module inside the "protocols" package. I also see a "conch" package with "ssh" and another "telnet" module. I'm a complete novice regarding Python.
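
    For orientation, a minimal sketch of the shape such a server takes in Twisted (plain line-based TCP here; twisted.conch.telnet adds real telnet option negotiation on top):

        from twisted.internet import protocol, reactor
        from twisted.protocols import basic

        class GameProtocol(basic.LineReceiver):
            # one instance per connected player; each line is a text command
            def lineReceived(self, line):
                self.sendLine(b"you said: " + line)  # placeholder for game logic

        class GameFactory(protocol.Factory):
            def buildProtocol(self, addr):
                return GameProtocol()

        reactor.listenTCP(4000, GameFactory())
        reactor.run()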

    Read the article
