Search Results

Search found 6670 results on 267 pages for 'speed dial'.


  • How to speed up a SQL query to MySQL?

    - by fayer
    In my MySQL database I have the GeoNames data set, containing all countries, states and cities. I am using it to create a cascading menu so the user can select where he is from: country - state - county - city. But the main problem is that the query searches through all 7 million rows in that table each time I want to get the list of child rows, and that takes a while, 10-15 seconds. I wonder how I could speed this up: caching? Table views? Reorganizing the table structure somehow? And most importantly, how do I do these things? Are there good tutorials you could link me to? I appreciate all help and feedback discussing smart ways of handling this issue!
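
    A common first step for this pattern is an index on the column used to find child rows, so each menu level becomes an index lookup instead of a scan over all 7 million rows. A minimal sketch, assuming a hypothetical geoname table with a parent_id column (the real GeoNames schema may name these differently):

        -- Index the parent-reference column used by the cascading menu
        CREATE INDEX idx_geoname_parent ON geoname (parent_id);

        -- Each level then becomes a cheap indexed lookup
        SELECT id, name FROM geoname WHERE parent_id = 12345;

    Running EXPLAIN on the SELECT should then show an index lookup (type "ref") rather than a full table scan.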

    Read the article

  • Generator speed in Python 3

    - by Will
    Hello all, I am going through a link about generators that someone posted. In the beginning he compares the two functions below. On his setup he showed a speed increase of 5% with the generator. I'm running Windows XP and Python 3.1.1, and cannot seem to duplicate the results. I keep seeing the "old way" (logs1) as being slightly faster when tested with the provided logs and up to 1 GB of duplicated data. Can someone help me understand what's happening differently? Thanks!

        def logs1():
            wwwlog = open("big-access-log")
            total = 0
            for line in wwwlog:
                bytestr = line.rsplit(None, 1)[1]
                if bytestr != '-':
                    total += int(bytestr)
            return total

        def logs2():
            wwwlog = open("big-access-log")
            bytecolumn = (line.rsplit(None, 1)[1] for line in wwwlog)
            getbytes = (int(x) for x in bytecolumn if x != '-')
            return sum(getbytes)
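
    At differences this small, measurement noise easily exceeds 5%, so it helps to time both functions identically over several repetitions and compare the best runs. A minimal sketch using the standard library (function and file names taken from the question):

        import timeit

        # Best-of-three timings smooth out disk-cache and scheduler noise
        for fn in ('logs1', 'logs2'):
            t = timeit.repeat(fn + '()', setup='from __main__ import ' + fn,
                              repeat=3, number=1)
            print(fn, min(t))

    The first run of a session also pays to warm the OS file cache, so it is usually worth discarding.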

    Read the article

  • Unable to get Processor Speed in Device

    - by mukesh
    Hi, I am using QueryPerformanceFrequency to get the number of cycles, i.e. the processor speed, but it is showing me the wrong value. The specification says the processor is about 400 MHz, but what we are getting through code is about 16 MHz. Please provide any pointers. The code for the WinCE device is:

        LARGE_INTEGER FrequencyCounter;
        QueryPerformanceFrequency(&FrequencyCounter);
        CString temp;
        temp.Format(L"%lld", FrequencyCounter.QuadPart);
        AfxMessageBox(temp);

    Thanks, Mukesh

    Read the article

  • How to speed up this kind of for-loop?

    - by wok
    I would like to compute the maximum of translated images along the direction of a given axis. I know about ordfilt2, however I would like to avoid using the Image Processing Toolbox. So here is the code I have so far:

        imInput = imread('tire.tif');
        n = 10;
        imMax = imInput(:, n:end);
        for i = 1:(n-1)
            imMax = max(imMax, imInput(:, i:end-(n-i)));
        end

    Is it possible to avoid using a for-loop in order to speed the computation up, and, if so, how?

    Read the article

  • Python text file processing speed issues

    - by Anonymouslemming
    Hi all, I'm having a problem with processing a largeish file in Python. All I'm doing is:

        f = gzip.open(pathToLog, 'r')
        for line in f:
            counter = counter + 1
            if (counter % 1000000 == 0):
                print counter
        f.close()

    This takes around 10m25s just to open the file, read the lines and increment the counter. In Perl, dealing with the same file and doing quite a bit more (some regular expression stuff), the whole process takes around 1m17s. Perl code:

        open(LOG, "/bin/zcat $logfile |") or die "Cannot read $logfile: $!\n";
        while (<LOG>) {
            if (m/.*\[svc-\w+\].*login result: Successful\.$/) {
                $_ =~ s/some regex here/$1,$2,$3,$4/;
                push @an_array, $_
            }
        }
        close LOG;

    Can anyone advise what I can do to make the Python solution run at a similar speed to the Perl solution? I've tried just uncompressing the file and dealing with it using open instead of gzip.open, but that made a very small difference to the overall time.
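
    One commonly suggested workaround is to do what the Perl version already does: let an external zcat process handle the decompression and have Python read lines from the pipe, since older versions of the gzip module decompress in comparatively slow small reads. A minimal sketch, assuming a Unix-like system with zcat on the PATH:

        import subprocess

        # Decompression runs in a separate process; Python only iterates lines
        p = subprocess.Popen(['zcat', pathToLog], stdout=subprocess.PIPE)
        counter = 0
        for line in p.stdout:
            counter += 1
        p.stdout.close()
        p.wait()
        print(counter)

    Reading the stream in large binary chunks and counting newlines, rather than iterating line by line, is a further variant of the same idea.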

    Read the article

  • VB.NET HttpWebRequest is slow compared to Python urlopen

    - by regexhacks
    Hi, I am coding a web crawler which will crawl websites and selectively parse different sections of a site. I am a .NET developer, so the choice was obvious that I do it in .NET, but the speed was very slow, including downloading and parsing of HTML pages. Then I tried just downloading the contents first using .NET, and then the same domains using Python, and Python was very impressive at downloading data. I have achieved downloading using Python, but the later part is not that easy to code in Python, which obviously I don't want to do. The same batch of domains which took 100 seconds in Python was taking 20 minutes in the .NET-based crawler. I tried downloading http://www.eqlit.com/ and it took 8 seconds in Python while the same took 100 seconds in the .NET crawler. Does anyone have any idea why this is slow in .NET but fast in Python?
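
    Two frequently cited culprits for exactly this symptom in .NET are automatic proxy detection (each HttpWebRequest can spend seconds probing for a proxy before connecting) and the Expect: 100-continue handshake on POSTs. Both are cheap to rule out; a minimal sketch in VB.NET (url is a placeholder, and whether this helps depends on the environment):

        Imports System.Net

        Module CrawlerTuning
            Sub TuneAndFetch(ByVal url As String)
                ' Avoid the extra Expect: 100-continue round trip
                ServicePointManager.Expect100Continue = False

                ' Raise the default limit of two concurrent connections per host
                ServicePointManager.DefaultConnectionLimit = 10

                ' Skip automatic proxy discovery, often the main per-request delay
                Dim request As HttpWebRequest = CType(WebRequest.Create(url), HttpWebRequest)
                request.Proxy = Nothing
            End Sub
        End Module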

    Read the article

  • ASP.NET Speed up DataView sorting/paging

    - by rlb.usa
    I have a page in ASP.NET where I'm using a Repeater to display a record listing, but it's slow as molasses, and I've been tasked with speeding it up (sorting, paging). I've got it set up as follows: when the user enters the page, grab all of the data from the database (500 records, each with up to 4 related records) and store it all in Application["MyDataView"]; on sort or paging, simply use the DataView's internal sort/page methods (no DB calls) and rebind. I understand that database queries can take time, but simply having the DataView call its sort method (no DB calls) takes 10-ish seconds; that's alarmingly slow. Two questions: Why is it taking so long? How can I speed it up? A GridView is not possible.

    Read the article

  • SQL query is too slow, how to improve speed?

    - by user1289282
    I have run into a bottleneck when trying to update one of my tables. The player table has, among other things, id, skill, school and weight. What I am trying to do is:

        SELECT id, skill FROM player
        WHERE player.school = (current school of 4500)
          AND player.weight = (current weight of 14)

    to find the highest skill of all players returned from the query, then

        UPDATE player SET starter = 'TRUE' WHERE id = (highest skill)

    then move to the next weight and repeat; when all weights have been completed, move to the next school and start over; when all schools are completed, done. I have this code implemented and it works, but I have approximately 4500 schools totaling 172,000 players, and the way I have it now it would probably take half an hour or more to complete (I did not wait it out), which is way too slow. How can I speed this up? Short of reducing the scale of the system, I am willing to do anything that gets the intended result. Thanks! *The weights are the standard folkstyle wrestling weights, i.e. 103, 113, 120, 126, 132, 138, 145, 152, 160, 170, 182, 195, 220 and 285 pounds.
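
    The per-school, per-weight looping can usually be replaced by one set-based statement plus a supporting index, letting the database do the grouping instead of the application. A minimal sketch in MySQL syntax (assuming skill values are unique within a school/weight group; ties would mark several starters):

        -- Lets each (school, weight) group be resolved from the index
        CREATE INDEX idx_school_weight_skill ON player (school, weight, skill);

        UPDATE player p
        JOIN (
            SELECT school, weight, MAX(skill) AS top_skill
            FROM player
            GROUP BY school, weight
        ) best ON p.school = best.school
              AND p.weight = best.weight
              AND p.skill  = best.top_skill
        SET p.starter = 'TRUE';

    Other engines spell the same idea with a correlated subquery or window function, but the structure is identical: one pass over 172,000 rows instead of 4500 x 14 separate queries.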

    Read the article

  • Will using FILE_FLAG_NO_BUFFERING give a noticeable speed gain?

    - by 9dan
    I recently noticed the detailed description of the FILE_FLAG_NO_BUFFERING flag in MSDN, and read several Google search results about unbuffered I/O in Windows. http://msdn.microsoft.com/en-us/library/aa363858(v=vs.85).aspx I am wondering now: is it really important to consider the unbuffered option in file I/O programming? Many programs use plain old C stream I/O or C++ iostreams, so I never gave any attention to the FILE_FLAG_NO_BUFFERING flag before. Let's say we are developing a photo explorer program like Picasa. If we implement unbuffered I/O, would thumbnail display speed show a noticeable difference to ordinary users?
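
    For context, FILE_FLAG_NO_BUFFERING comes with alignment rules that C stream I/O hides: reads must be multiples of the volume sector size, into sector-aligned buffers. A minimal sketch of what that looks like (error handling trimmed; the 4096-byte sector size is an assumption, query the volume for the real value):

        #include <windows.h>
        #include <malloc.h>

        HANDLE h = CreateFileW(L"D:\\photos\\img.jpg", GENERIC_READ,
                               FILE_SHARE_READ, NULL, OPEN_EXISTING,
                               FILE_FLAG_NO_BUFFERING, NULL);

        /* Buffer address and read size must both be sector-aligned */
        DWORD sectorSize = 4096;
        void *buf = _aligned_malloc(64 * 1024, sectorSize);

        DWORD bytesRead = 0;
        ReadFile(h, buf, 64 * 1024, &bytesRead, NULL);

        _aligned_free(buf);
        CloseHandle(h);

    Because this bypasses the OS cache, it tends to pay off for large streams read once, not for small thumbnail files that the cache would otherwise serve for free, so for a Picasa-like viewer the difference is unlikely to be noticeable.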

    Read the article

  • Java deserialization speed

    - by celicni
    I am writing a Java application that, among other things, needs to read a dictionary text file (each line is one word) and store it in a HashSet. Each time I start the application, this same file is read all over again (a 6 megabyte Unicode file). That seemed expensive, so I decided to serialize the resulting HashSet and store it in a binary file. I expected my application to run faster after this. Instead it got slower: from ~2.5 seconds before serialization to ~5 seconds after. Is this the expected result? I thought that in similar cases serialization should increase speed.
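
    One thing worth ruling out before blaming serialization itself is an unbuffered stream: an ObjectInputStream reading straight from a FileInputStream performs many tiny reads. A minimal sketch of the buffered variant (the file name is a placeholder, exception handling reduced to a throws clause):

        import java.io.*;
        import java.util.HashSet;

        static HashSet<String> loadWords() throws Exception {
            try (ObjectInputStream in = new ObjectInputStream(
                    new BufferedInputStream(new FileInputStream("dict.bin"), 64 * 1024))) {
                return (HashSet<String>) in.readObject();
            }
        }

    Even buffered, though, deserializing a HashSet replays every insertion, rehashing each String, so it is not guaranteed to beat a plain buffered line-by-line read of the text file.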

    Read the article

  • How to speed up a query?

    - by Soroush Khosravi
    I have a table that stores every request to the server. For each request I check whether it is banned or not. For example, this is the query:

        select * from requests
        where request_sessID = '4bc0331d983000902b4718c80f12e9b3'
          AND request_time > (UNIX_TIMESTAMP() - 3600)
          AND request_isEnable = 1

    I also set the engine from InnoDB to MyISAM and row_format to Dynamic, but nothing changed. My hardware is very strong, but the query takes about a minute to execute! I am a programmer and a newbie to MySQL. How can I speed up this query? Thanks in advance.
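
    Without an index, every request forces a full scan of a table that grows with every request, which fits the one-minute runtime. A minimal sketch of the usual fix (column names taken from the query):

        CREATE INDEX idx_req_lookup
            ON requests (request_sessID, request_isEnable, request_time);

    With the equality columns first and the range condition (request_time) last, the query can be answered from a narrow slice of the index; selecting only the columns you need instead of * helps further.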

    Read the article

  • ASPX page with GridView runs very fast in IE but at 20% of that speed in Firefox

    - by frank2009
    Hi there, I have a simple aspx page with some search options which queries a SQL Express database, and the results are displayed in a GridView. For some reason it runs lightning fast in IE but very slow in Firefox. It has very little code: a GridView, a couple of images, a couple of textboxes and a search button. It was done with Expression Web, so no additional code was added. In production (not local) the speed difference is very noticeable when doing a search: IE displays the results almost instantly, while Firefox might take 3-5 seconds. And everything else runs super fast in IE as well (update, delete etc). Is there a reason for this? Thanks

    Read the article

  • Flash IO error while uploading photos on a low-speed internet connection

    - by Beck
    Actionscript:

        System.security.allowDomain("http://" + _root.tdomain + "/");
        import flash.net.FileReferenceList;
        import flash.net.FileReference;
        import flash.external.ExternalInterface;
        import flash.external.*;

        /* Main variables */
        var session_photos = _root.ph;
        var how_much_you_can_upload = 0;
        var selected_photos; // container for selected photos
        var inside_photo_num = 0; // for photo in_array selection
        var created_elements = _root.ph;
        var for_js_num = _root.ph;

        /* Functions & settings for javascript<->flash conversation */
        var methodName:String = "addtoflash";
        var instance:Object = null;
        var method:Function = addnewphotonumber;
        var wasSuccessful:Boolean = ExternalInterface.addCallback(methodName, instance, method);
        function addnewphotonumber() {
            session_photos--;
            created_elements--;
            for_js_num--;
        }

        /* Javascript hide and show flash button functions */
        function block() { getURL("Javascript: blocking();"); }
        function unblock() { getURL("Javascript:unblocking();"); }

        /* Creating HTML platform function */
        var result = false;

        /* Uploading */
        function uploadthis(photos:Array) {
            if(!photos[inside_photo_num].upload("http://" + _root.tdomain + "/upload.php?PHPSESSID=" + _root.phpsessionid)) {
                getURL("Javascript:error_uploading();");
            }
        }

        /* Flash button(applet) options and bindings */
        var fileTypes:Array = new Array();
        var imageTypes:Object = new Object();
        imageTypes.description = "Images (*.jpg)";
        imageTypes.extension = "*.jpg;";
        fileTypes.push(imageTypes);

        var fileListener:Object = new Object();
        var btnListener:Object = new Object();
        btnListener.click = function(eventObj:Object) {
            var fileRef:FileReferenceList = new FileReferenceList();
            fileRef.addListener(fileListener);
            fileRef.browse(fileTypes);
        }
        uploadButton.addEventListener("click", btnListener);

        /* Listeners */
        fileListener.onSelect = function(fileRefList:FileReferenceList):Void {
            // reseting values
            inside_photo_num = 0;
            var list:Array = fileRefList.fileList;
            var item:FileReference;
            // PHP photo counter
            how_much_you_can_upload = 3 - session_photos;
            if(list.length > how_much_you_can_upload) {
                getURL("Javascript:howmuch=" + how_much_you_can_upload + ";list_length=" + list.length + ";limit_reached();");
                return;
            }
            // if session variable isn't yet refreshed, we check inner counter
            if(created_elements >= 3) {
                getURL("Javascript:limit_reached();");
                return;
            }
            selected_photos = list;
            for(var i:Number = 0; i < list.length; i++) {
                how_much_you_can_upload--;
                item = list[i];
                trace("name: " + item.name);
                trace(item.addListener(this));
                if((item.size / 1024) > 5000) { getURL("Javascript:size_limit_reached();"); return; }
            }
            result = false;
            setTimeout(block, 500);
            /* Increment number for new HTML container and pass it to javascript,
               after javascript returns true and we start uploading */
            for_js_num++;
            if(ExternalInterface.call("create_platform", for_js_num)) {
                uploadthis(selected_photos);
            }
        }

        fileListener.onProgress = function(file:FileReference, bytesLoaded:Number, bytesTotal:Number):Void {
            getURL("Javascript:files_process(" + bytesLoaded + "," + bytesTotal + "," + for_js_num + ");");
        }

        fileListener.onComplete = function(file:FileReference, bytesLoaded:Number, bytesTotal:Number):Void {
            inside_photo_num++;
            var sendvar_lv:LoadVars = new LoadVars();
            var loadvar_lv:LoadVars = new LoadVars();
            loadvar_lv.onLoad = function(success:Boolean) {
                if(loadvar_lv.failed == 1) {
                    getURL("Javascript:type_failed();");
                    return;
                }
                getURL("Javascript:filelinks='" + loadvar_lv.json + "';fullname='" + loadvar_lv.fullname + "';completed(" + for_js_num + ");");
                created_elements++;
                if((inside_photo_num + 1) > selected_photos.length) { setTimeout(unblock, 1000); return; }
                // don't create empty containers anymore
                if(created_elements >= 3) { return; }
                result = false;
                /* Increment number for new HTML container and pass it to javascript,
                   after javascript returns true and we start uploading */
                for_js_num++;
                if(ExternalInterface.call("create_platform", for_js_num)) {
                    uploadthis(selected_photos);
                }
            }
            sendvar_lv.getnum = true;
            sendvar_lv.PHPSESSID = _root.phpsessionid;
            sendvar_lv.sendAndLoad("http://" + _root.tdomain + "/upload.php", loadvar_lv, "POST");
        }

        fileListener.onCancel = function(file:FileReference):Void { }
        fileListener.onOpen = function(file:FileReference):Void { }
        fileListener.onHTTPError = function(file:FileReference, httpError:Number):Void {
            getURL("Javascript:http_error(" + httpError + ");");
        }
        fileListener.onSecurityError = function(file:FileReference, errorString:String):Void {
            getURL("Javascript:security_error(" + errorString + ");");
        }
        fileListener.onIOError = function(file:FileReference):Void {
            getURL("Javascript:io_error();");
            selected_photos[inside_photo_num].cancel();
            uploadthis(selected_photos);
        }

    HTML embed:

        <PARAM name="allowScriptAccess" value="always">
        <PARAM name="swliveconnect" value="true">
        <PARAM name="movie" value="http://www.localh.com/fileref.swf?ph=0&phpsessionid=8mirsjsd75v6vk583vkus50qbb2djsp6&tdomain=www.localh.com">
        <PARAM name="wmode" value="opaque">
        <PARAM name="quality" value="high">
        <PARAM name="bgcolor" value="#ffffff">
        <EMBED swliveconnect="true" wmode="opaque" src="http://www.localh.com/fileref.swf?ph=0&phpsessionid=8mirsjsd75v6vk583vkus50qbb2djsp6&tdomain=www.localh.com" quality="high" bgcolor="#ffffff" width="100" height="22" name="fileref" align="middle" allowScriptAccess="always" type="application/x-shockwave-flash" pluginspage="http://www.macromedia.com/go/getflashplayer"></EMBED>

    My upload speed is 40 KB/sec. I get a Flash IO error while uploading photos bigger than 500 KB, and no error while uploading photos of roughly 100-500 KB or less. My friend has an 8 Mbit upload speed and gets no errors even while uploading photos of 3.2 MB and more. How do I fix this problem? I have tried re-uploading on the IO error trigger, but it stops at the same place. Any solution regarding this error? By the way, I was watching the process via a debugging proxy and figured out that the response headers don't come back at all on this IO error, and sometimes it shows a socket error. If needed, I will post the server-side PHP script as well, but it stops at if(isset($_FILES['Filedata'])) { so it won't help :) as all processing comes after this check.

    Read the article

  • How to speed up WPF programs?

    - by Sam
    I love programming with and for Windows Presentation Foundation. Mostly I write browser-like apps using WPF and XAML. But what really annoys me is the slowness of WPF. A simple page with only a few controls loads fast enough, but as soon as a page is a teeny weeny bit more complex, like containing a lot of data entry fields, one or two tab controls, and stuff, it gets painful. Loading such a page can take more than one second. Seconds, indeed; especially on not-so-fast computers (read: the customers' computers) it can take ages. The same goes for changing values on the page. Everything about the WPF UI is somehow sluggish. This is so mean! They give me this beautiful framework, but make it so excruciatingly slow that I'll have to apologize to our customers all the time! My question: How do you speed up WPF? How do you profile bottlenecks? How do you deal with the slowness? Since this seems to be a universal problem with WPF, I'm looking for general advice, useful for many situations and problems. Some other related questions: What tools do you use for WPF development? Tools to develop WPF or Silverlight applications?
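
    One general lever: slow-loading pages with many repeated items are often building every item container up front instead of virtualizing. A hedged XAML sketch of keeping virtualization on and enabling container recycling for a list (attached properties as in .NET 3.5 SP1 and later; whether it applies depends on the page's layout):

        <ListBox ItemsSource="{Binding Rows}"
                 VirtualizingStackPanel.IsVirtualizing="True"
                 VirtualizingStackPanel.VirtualizationMode="Recycling"
                 ScrollViewer.IsDeferredScrollingEnabled="True" />

    For finding the bottlenecks, the WPF Performance Suite (Perforator and Visual Profiler) shows where layout, render and binding time actually goes.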

    Read the article

  • Still about SSD potential: write and read speed

    - by Macroideal
    Hi gurus, I have been working with SSDs (solid state disks) for several months; problems and questions hit me unexpectedly, because I am completely new to SSDs. These days I was testing the write and read speed of an SSD, which I always cared about, but the results turned out worse than I expected. Three kinds of read/write were implemented in my test:
    1. Read and write directly from and into the SSD, opening the SSD as a whole device; in Windows: _open("\\:g", ***). It can be very tricky and hairy that you have to write data in multiples of 512 bytes, at disk positions that are multiples of 512 bytes. So, if you want to write just one byte or 4 bytes, you have to write at least a whole sector at a time.
    2. Read and write data from and into files located on the SSD.
    3. Read and write data from and into files on a mechanical disk.
    I compared these approaches and found that the SSD performs worse than the mechanical disk. So I am wondering how I can get at the potential performance of the SSD, since SSDs are said to be the future substitute for mechanical disks. Nevertheless, when I test the SSD with a professional hard disk tool, the SSD is about twice as fast as the mechanical disk. So, why? Thanks very much. If you know any SSD tips, let me know.

    Read the article

  • Speed up bitstring/bit operations in Python?

    - by Xavier Ho
    I wrote a prime number generator using the Sieve of Eratosthenes and Python 3.1. The code runs correctly and gracefully, at 0.32 seconds on ideone.com, to generate prime numbers up to 1,000,000.

        # from bitstring import BitString

        def prime_numbers(limit=1000000):
            '''Prime number generator. Yields the series
            2, 3, 5, 7, 11, 13, 17, 19, 23, 29 ...
            using Sieve of Eratosthenes.
            '''
            yield 2
            sub_limit = int(limit**0.5)
            flags = [False, False] + [True] * (limit - 2)
            # flags = BitString(limit)
            # Step through all the odd numbers
            for i in range(3, limit, 2):
                if flags[i] is False:
                # if flags[i] is True:
                    continue
                yield i
                # Exclude further multiples of the current prime number
                if i <= sub_limit:
                    for j in range(i*3, limit, i<<1):
                        flags[j] = False
                        # flags[j] = True

    The problem is, I run out of memory when I try to generate numbers up to 1,000,000,000:

        flags = [False, False] + [True] * (limit - 2)
        MemoryError

    As you can imagine, allocating 1 billion boolean values (4 or 8 bytes each in Python, see comments) is really not feasible, so I looked into bitstring. I figured using 1 bit for each flag would be much more memory-efficient. However, the program's performance dropped drastically: 24 seconds runtime for primes up to 1,000,000. This is probably due to the internal implementation of bitstring. You can comment/uncomment the three lines to see what I changed to use BitString, as in the code snippet above. My question is: is there a way to speed up my program, with or without bitstring?
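
    A middle ground between a list of Python objects and bitstring is a bytearray: one byte per flag, indexed at C speed, with whole ranges cleared by slice assignment. A minimal sketch of the same sieve on that idea (it also starts striking at i*i, which is safe because smaller multiples were already handled by smaller primes):

        def prime_numbers_bytearray(limit=1000000):
            '''Sieve of Eratosthenes using one byte per flag.'''
            yield 2
            flags = bytearray([1]) * limit
            flags[0] = flags[1] = 0
            sub_limit = int(limit**0.5)
            for i in range(3, limit, 2):
                if not flags[i]:
                    continue
                yield i
                if i <= sub_limit:
                    # Clear all remaining multiples of i in one slice assignment
                    flags[i*i::2*i] = bytes(len(range(i*i, limit, 2*i)))

    A billion flags still needs about 1 GB this way; a C-backed bit array (for example the third-party bitarray package) gets the same sieve down to roughly 125 MB while keeping most of the speed.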

    Read the article

  • Winforms: How to speed up Invalidate()?

    - by Pedery
    I'm developing a retained-mode drawing application in GDI+. The application can draw simple shapes to a canvas and perform basic editing. The math that does this is optimized to the last byte and is not an issue. I'm drawing on a panel that is using the built-in ControlStyles.DoubleBuffer. Now, my problem arises if I run my app maximized on a big monitor (HD in my case). If I try to draw a line from one corner of the (big) canvas to the diagonally opposite one, it starts to lag and CPU usage goes way up. Each graphical object in my app has a bounding box. Thus, when I invalidate the bounding box of a line that goes from one corner of the maximized app to the opposite diagonal one, that bounding box is virtually as big as the canvas. When a user is drawing a line, this invalidation of the bounding box happens on the MouseMove event, and there is a clear lag visible. This lag also exists if the line is the only object on the canvas. I've tried to optimize this in many ways: if I draw a shorter line, the CPU usage and the lag go down. If I remove the Invalidate() and keep all other code, the app is quick. If I use a Region (that only spans the figure) to invalidate instead of the bounding box, it is just as slow. If I split the bounding box into a range of smaller boxes that lie back to back, thus reducing the invalidation area, no visible performance gain can be seen. Thus I'm at a loss here. How can I speed up the invalidation? On a side note, both Paint.NET and MSPaint suffer from the same shortcomings. Word and PowerPoint, however, seem to be able to paint a line as described above with no lag and no CPU load at all. Thus it's possible to achieve the desired results; the question is how?
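
    A pattern that sidesteps large invalidations while a line is being dragged is to cache the settled scene in a bitmap and, on each mouse move, blit the cache and draw only the moving line on top. A minimal sketch of the idea (RenderStaticScene, _dragStart and _dragEnd are hypothetical members of the canvas control):

        private Bitmap _cache;

        private void RebuildCache()
        {
            _cache = new Bitmap(ClientSize.Width, ClientSize.Height);
            using (Graphics g = Graphics.FromImage(_cache))
                RenderStaticScene(g);   // hypothetical: draws all settled shapes
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            e.Graphics.DrawImageUnscaled(_cache, 0, 0);             // cheap blit
            e.Graphics.DrawLine(Pens.Black, _dragStart, _dragEnd); // only the live line
        }

    The per-move cost then no longer depends on how much of the canvas the line's bounding box covers, which is presumably how the Office applications stay responsive.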

    Read the article

  • AS3 microphone recording/saving works, in-flash PCM playback double speed

    - by Lowgain
    I have a working mic recording script in AS3 which I have been able to successfully use to save .wav files to a server through AMF. These files play back fine in any audio player with no weird effects. For reference, here is what I am doing to capture the mic's ByteArray (within a class called AudioRecorder):

        public function startRecording():void {
            _rawData = new ByteArray();
            _microphone.addEventListener(SampleDataEvent.SAMPLE_DATA, _samplesCaptured, false, 0, true);
        }

        private function _samplesCaptured(e:SampleDataEvent):void {
            _rawData.writeBytes(e.data);
        }

    This works with no problems. After the recording is complete I can take the _rawData variable and run it through a WavWriter class, etc. However, if I play this same ByteArray as a sound using the following code, which I adapted from the Adobe cookbook (within a class called WavPlayer):

        public function playSound(data:ByteArray):void {
            _wavData = data;
            _wavData.position = 0;
            _sound.addEventListener(SampleDataEvent.SAMPLE_DATA, _playSoundHandler);
            _channel = _sound.play();
            _channel.addEventListener(Event.SOUND_COMPLETE, _onPlaybackComplete, false, 0, true);
        }

        private function _playSoundHandler(e:SampleDataEvent):void {
            if(_wavData.bytesAvailable <= 0)
                return;

            for(var i:int = 0; i < 8192; i++) {
                var sample:Number = 0;
                if(_wavData.bytesAvailable > 0)
                    sample = _wavData.readFloat();
                e.data.writeFloat(sample);
            }
        }

    the audio file plays at double speed! I checked recording bitrates and such and am pretty sure those are all correct, and I tried changing the buffer size and whatever other numbers I could think of. Could it be a mono vs. stereo thing? Hope I was clear enough here, thanks!
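
    The mono vs. stereo hunch is a good one: a Sound object's SAMPLE_DATA handler is expected to supply 44.1 kHz stereo data, two floats (left, right) per frame, while the microphone delivers one float per frame. Writing each mono sample once therefore consumes the recording twice as fast as it plays. A sketch of the handler with each sample duplicated into both channels:

        private function _playSoundHandler(e:SampleDataEvent):void {
            if(_wavData.bytesAvailable <= 0)
                return;

            for(var i:int = 0; i < 8192; i++) {
                var sample:Number = 0;
                if(_wavData.bytesAvailable > 0)
                    sample = _wavData.readFloat();
                e.data.writeFloat(sample); // left channel
                e.data.writeFloat(sample); // right channel
            }
        }

    If the microphone also captured at a rate below 44.1 kHz, each sample would additionally need to be repeated (or properly resampled) to match the output rate.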

    Read the article

  • How to speed up marching cubes?

    - by Dan Vinton
    I'm using this marching cubes algorithm to draw 3D isosurfaces (ported into C#, outputting MeshGeometry3Ds, but otherwise the same). The resulting surfaces look great, but are taking a long time to calculate. Are there any ways to speed up marching cubes? The most obvious one is to simply reduce the spatial sampling rate, but this reduces the quality of the resulting mesh. I'd like to avoid this. I'm considering a two-pass system, where the first pass samples space much more coarsely, eliminating volumes where the field strength is well below my isolevel. Is this wise? What are the pitfalls? Edit: the code has been profiled, and the bulk of CPU time is split between the marching cubes routine itself and the field strength calculation for each grid cell corner. The field calculations are beyond my control, so speeding up the cubes routine is my only option... I'm still drawn to the idea of trying to eliminate dead space, since this would reduce the number of calls to both systems considerably.
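
    For the two-pass idea, a rough sketch of the coarse pass: walk the volume in blocks, sample the field at a coarse stride, and only run the full marching cubes inside blocks whose samples come anywhere near the isolevel. The main pitfall is that thin features can slip between coarse samples, so the margin must be generous relative to how quickly the field can change between samples (a gradient bound, if one is available). A sketch with hypothetical Field and MarchBlock methods and grid dimensions:

        int step = 4;           // 4x4x4 fine cells per coarse block
        double margin = 0.5;    // assumption: tied to the field's maximum gradient
        for (int x = 0; x < nx; x += step)
        for (int y = 0; y < ny; y += step)
        for (int z = 0; z < nz; z += step)
        {
            if (Math.Abs(Field(x, y, z) - isoLevel) < margin * step)
                MarchBlock(x, y, z, step);   // full-resolution marching cubes here
        }

    Caching the field samples taken in the coarse pass for reuse as block corners in the fine pass also avoids paying for the expensive field evaluation twice.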

    Read the article

  • database design to speed up hibernate querying of large dataset

    - by paddydub
    I currently have the below tables, representing a bus network, mapped in Hibernate and accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster; I load all of the below tables into Lists to perform the route planner logic. I would appreciate it if anyone has any ideas of how to improve my performance, or any suggestions of another method to approach this problem of handling a large set of data.

    Coordinate Connections table (INT, INT, INT), containing 50,000 coordinate connections:

        ID  FROMCOORDID  TOCOORDID
        1   1            2
        2   1            17
        3   1            63
        4   1            64
        5   1            65
        6   1            95

    Coordinate table (INT, DECIMAL, DECIMAL), containing 4,700 coordinates:

        ID  LAT        LNG
        0   59.352669  -7.264341
        1   59.352669  -7.264341
        2   59.350012  -7.260653
        3   59.337585  -7.189798
        4   59.339221  -7.193582
        5   59.341408  -7.205888

    Bus Stop table (INT, INT, INT), containing 15,000 stops:

        STOPID      ROUTEID  COORDINATEID
        1000100001  100      17
        1000100002  100      18
        1000100003  100      19
        1000100004  100      20
        1000100005  100      21
        1000100006  100      22
        1000100007  100      23

    This is how long it takes to load all the data from each table:

        stop.findAll = 148ms, stops.size: 15670
        Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
        coord.findAll = 51ms, coordinates.size: 4704
        Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_
        coordinateConnectionDao.findAll = 238ms; coordConnectioninates.size: 48132

    Hibernate annotations:

        @Entity
        @Table(name = "STOPS")
        public class Stop implements Serializable {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private Integer CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Table(name = "COORDINATES")
        public class Coordinate {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private Integer CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Entity
        @Table(name = "COORDCONNECTIONS")
        public class CoordConnection {
            @Id
            @GeneratedValue
            @Column(name = "COORDCONNECTIONID")
            private Integer CoordinateID;

            /**
             * From Coordinate_id value
             */
            @Column(name = "FROMCOORDID", nullable = false)
            private int fromCoordID;

            /**
             * To Coordinate_id value
             */
            @Column(name = "TOCOORDID", nullable = false)
            private int toCoordID;

            //private Coordinate toCoordID;
        }

    Read the article

  • Why does a conditional not affect query speed?

    - by Telos
    I have a stored procedure that was taking a "long" period of time to execute. The query only needs to return data in one case, so I figured I could check for that case and just return before hitting the actual query. The only problem is that it still takes the same amount of time to execute with an if statement. I have verified that the code inside the if is not executing, and that if I replace the complex query with a simple select the speed is fine... so now I'm confused. Why is the query being slowed down by code that doesn't get executed when the conditional is false? Here's the query itself:

        ALTER PROCEDURE [dbo].[pr_cbc_GetCokeInfo]
            @pa_record int,
            @pb_record int
        AS
        BEGIN
            SET NOCOUNT ON;

            declare @ticketRec int

            SELECT @ticketRec = TicketRecord
            FROM eservice_live..v_sdticket
            where TicketRecord = @pa_record
              AND serviceCompanyID = 1139
              AND @pb_record IS NULL

            if @ticketRec IS NULL return

            select record = null,
                   doc_ref = @pa_record,
                   memo_type = 'I',
                   memo = 'Bottler: ' + isnull(Bottler, '') + ' ' +
                          'Sales Loc: ' + isnull(SalesLocation, '') + ' ' +
                          'Outlet Desc: ' + isnull(OutletDesc, '') + ' ' +
                          'City: ' + isnull(OutletCity, '') + ' ' +
                          'EquipNo: ' + isnull(EquipNo, '') + ' ' +
                          'SerialNo: ' + isnull(SerialNo, '') + ' ' +
                          'PhaseNo: ' + isnull(cast(PhaseNo as varchar(255)), '') + ' ' +
                          'StaticIP: ' + isnull(StaticIP, '') + ' ' +
                          'Air Card: ' + isnull(AirCard, '')
            FROM eservice_live..v_SDExtendedInfoField ef
            JOIN eservice_live..CokeSNList csl ON ef.valueText = csl.SerialNo
            where ef.docType = 'CLH'
              AND ef.docref = @ticketRec
              AND ef.ExtendedDocNumber = 5

            SET NOCOUNT OFF;
        END
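
    One explanation consistent with these symptoms is compilation rather than execution: SQL Server compiles (or recompiles) a plan for the whole batch when the procedure runs, so a heavy statement can cost time even on calls that never reach it. A hedged way to test that theory:

        -- If compile time is the cost, timings will change noticeably here
        EXEC dbo.pr_cbc_GetCokeInfo @pa_record = 1, @pb_record = NULL WITH RECOMPILE;

    If that points to compilation, moving the big SELECT into its own sub-procedure keeps its plan work off the early-return path.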

    Read the article

  • Hibernate design to speed up querying of large dataset

    - by paddydub
    I currently have the below tables, representing a bus network, mapped in Hibernate and accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster; I load all of the below tables into Lists to perform the route planner logic. I would appreciate it if anyone has any ideas of how to improve my performance, or any suggestions of another method to approach this problem of handling a large set of data.

    Coordinate Connections table (INT, INT, INT, DOUBLE), containing 50,000 coordinate connections:

        ID  FROMCOORDID  TOCOORDID  DISTANCE
        1   1            2          0.383657
        2   1            17         0.173201
        3   1            63         0.258781
        4   1            64         0.013726
        5   1            65         0.459829
        6   1            95         0.458769

    Coordinate table (INT, DECIMAL, DECIMAL), containing 4,700 coordinates:

        ID  LAT        LNG
        0   59.352669  -7.264341
        1   59.352669  -7.264341
        2   59.350012  -7.260653
        3   59.337585  -7.189798
        4   59.339221  -7.193582
        5   59.341408  -7.205888

    Bus Stop table (INT, INT, INT), containing 15,000 stops:

        STOPID      ROUTEID  COORDINATEID
        1000100001  100      17
        1000100002  100      18
        1000100003  100      19
        1000100004  100      20
        1000100005  100      21
        1000100006  100      22
        1000100007  100      23

    This is how long it takes to load all the data from each table:

        stop.findAll = 148ms, stops.size: 15670
        Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
        coord.findAll = 51ms, coordinates.size: 4704
        Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_
        coordinateConnectionDao.findAll = 238ms; coordConnectioninates.size: 48132

    Hibernate annotations:

        @Entity
        @Table(name = "STOPS")
        public class Stop implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "STOPID")
            private int stopID;

            @Column(name = "ROUTEID", nullable = false)
            private int routeID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "COORDINATEID", nullable = false)
            private Coordinate coordinate;
        }

        @Table(name = "COORDINATES")
        public class Coordinate {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private int CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Entity
        @Table(name = "COORDCONNECTIONS")
        public class CoordConnection {
            @Id
            @GeneratedValue
            @Column(name = "COORDCONNECTIONID")
            private int CoordinateID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "FROMCOORDID", nullable = false)
            private Coordinate fromCoordID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "TOCOORDID", nullable = false)
            private Coordinate toCoordID;

            @Column(name = "DISTANCE", nullable = false)
            private double distance;
        }

    Read the article

  • Refactoring - Speed increase

    - by Michael G
    How can I make this function more efficient? It's currently running at 6-45 seconds. I've run the dotTrace profiler on this specific method, and its total time is anywhere between 6,000 ms and 45,000 ms. The majority of the time is spent on the MoveNext and GetEnumerator calls. An example of the times:

        71.55% CreateTableFromReportDataColumns - 18,533 ms - 190 calls
        -- 55.71% MoveNext - 14,422 ms - 10,775 calls

    What can I do to speed this method up? It gets called a lot, and the seconds add up:

        private static DataTable CreateTableFromReportDataColumns(Report report)
        {
            DataTable table = new DataTable();
            HashSet<String> colsToAdd = new HashSet<String> { "DataStream" };

            foreach (ReportData reportData in report.ReportDatas)
            {
                IEnumerable<string> cols = reportData.ReportDataColumns
                    .Where(c => !String.IsNullOrEmpty(c.Name))
                    .Select(x => x.Name)
                    .Distinct();

                foreach (var s in cols)
                {
                    if (!String.IsNullOrEmpty(s))
                        colsToAdd.Add(s);
                }
            }

            foreach (string col in colsToAdd)
            {
                table.Columns.Add(col);
            }

            return table;
        }
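
    Since the HashSet already de-duplicates, the per-ReportData Distinct() (which builds another set on every iteration) can be dropped, and the accumulation collapses into a single SelectMany pass. A hedged rewrite along those lines, assuming the same types as the original:

        private static DataTable CreateTableFromReportDataColumns(Report report)
        {
            DataTable table = new DataTable();
            var colsToAdd = new HashSet<string> { "DataStream" };

            // One pass; HashSet.Add already ignores duplicates
            foreach (var name in report.ReportDatas
                                       .SelectMany(rd => rd.ReportDataColumns)
                                       .Select(c => c.Name)
                                       .Where(n => !String.IsNullOrEmpty(n)))
            {
                colsToAdd.Add(name);
            }

            foreach (string col in colsToAdd)
                table.Columns.Add(col);

            return table;
        }

    If MoveNext still dominates afterwards, the cost is likely in materializing ReportDatas or ReportDataColumns themselves (for example lazy ORM collections), not in the LINQ plumbing.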

    Read the article

  • How to speed up a Python nested loop?

    - by erich
    I'm performing a nested loop in Python, included below. It serves as a basic way of searching through existing financial time series and looking for periods in the time series that match certain characteristics. In this case there are two separate, equally sized arrays representing the 'close' (i.e. the price of an asset) and the 'volume' (i.e. the amount of the asset that was exchanged over the period). For each period in time I would like to look forward at all future intervals with lengths between 1 and INTERVAL_LENGTH and see if any of those intervals have characteristics that match my search (in this case the ratio of the close values is greater than 1.0001 and less than 1.5, and the summed volume is greater than 100). My understanding is that one of the major reasons for the speedup when using NumPy is that the interpreter doesn't need to type-check the operands each time it evaluates something, so long as you're operating on the array as a whole (e.g. numpy_array * 2), but obviously the code below is not taking advantage of that. Is there a way to replace the internal loop with some kind of window function which could result in a speedup, or any other way using numpy/scipy to speed this up substantially in native Python? Alternatively, is there a better way to do this in general (e.g. will it be much faster to write this loop in C++ and use weave)?

        ARRAY_LENGTH = 500000
        INTERVAL_LENGTH = 15

        close = np.array( xrange(ARRAY_LENGTH) )
        volume = np.array( xrange(ARRAY_LENGTH) )
        close, volume = close.astype('float64'), volume.astype('float64')

        results = []
        for i in xrange(len(close) - INTERVAL_LENGTH):
            for j in xrange(i+1, i+INTERVAL_LENGTH):
                ret = close[j] / close[i]
                vol = sum( volume[i+1:j+1] )
                if ret > 1.0001 and ret < 1.5 and vol > 100:
                    results.append( [i, j, ret, vol] )
        print results
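
    The inner loop vectorizes per window length: for each offset k, the ratios for every starting point come from one shifted division, and the rolling volume sums come from a cumulative sum, so the Python-level work drops from roughly n*k iterations to k array operations. A minimal sketch of the idea (indices line up with the loop version):

        import numpy as np

        csum = np.cumsum(volume)
        starts = np.arange(len(close) - INTERVAL_LENGTH)
        results = []
        for k in range(1, INTERVAL_LENGTH):       # window length j - i
            i = starts
            j = starts + k
            ret = close[j] / close[i]
            vol = csum[j] - csum[i]               # equals sum(volume[i+1:j+1])
            hit = (ret > 1.0001) & (ret < 1.5) & (vol > 100)
            for ii, jj, r, v in zip(i[hit], j[hit], ret[hit], vol[hit]):
                results.append([ii, jj, r, v])

    The remaining Python loop only touches matches, so its cost is proportional to the number of hits rather than to the size of the search space.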

    Read the article

  • Speed up SQL INSERTs

    - by sean717
    I have the following method to insert millions of rows of data into a table (I use SQL Server 2008) and it seems slow. Is there any way to speed up the INSERTs? Here is the code snippet - I use the Microsoft Enterprise Library:

        public void InsertHistoricData(List<DataRow> dataRowList)
        {
            string sql = string.Format( @"INSERT INTO [MyTable]
                ([Date],[Open],[High],[Low],[Close],[Volumn])
                VALUES( @DateVal, @OpenVal, @High, @Low, @CloseVal, @Volumn )");

            DbCommand dbCommand = VictoriaDB.GetSqlStringCommand( sql );
            DB.AddInParameter(dbCommand, "DateVal", DbType.Date);
            DB.AddInParameter(dbCommand, "OpenVal", DbType.Currency);
            DB.AddInParameter(dbCommand, "High", DbType.Currency);
            DB.AddInParameter(dbCommand, "Low", DbType.Currency);
            DB.AddInParameter(dbCommand, "CloseVal", DbType.Currency);
            DB.AddInParameter(dbCommand, "Volumn", DbType.Int32);

            foreach (NasdaqHistoricDataRow dataRow in dataRowList)
            {
                DB.SetParameterValue( dbCommand, "DateVal", dataRow.Date );
                DB.SetParameterValue( dbCommand, "OpenVal", dataRow.Open );
                DB.SetParameterValue( dbCommand, "High", dataRow.High );
                DB.SetParameterValue( dbCommand, "Low", dataRow.Low );
                DB.SetParameterValue( dbCommand, "CloseVal", dataRow.Close );
                DB.SetParameterValue( dbCommand, "Volumn", dataRow.Volumn );
                DB.ExecuteNonQuery( dbCommand );
            }
        }
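
    For millions of rows, each single-row INSERT pays a network round trip and a log write; SqlBulkCopy streams rows to SQL Server in bulk and is typically orders of magnitude faster. A minimal sketch, assuming direct access to the connection string and a DataTable shaped like [MyTable] (column names as in the question, including 'Volumn'):

        using System.Data;
        using System.Data.SqlClient;

        public void BulkInsertHistoricData(DataTable table, string connectionString)
        {
            using (var bulk = new SqlBulkCopy(connectionString))
            {
                bulk.DestinationTableName = "MyTable";
                bulk.BatchSize = 10000;       // commit in chunks
                bulk.WriteToServer(table);    // column order must match, or set ColumnMappings
            }
        }

    If the Enterprise Library wrapper has to stay, wrapping the whole loop in a single transaction already removes a per-row log flush and helps considerably.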

    Read the article
