Search Results

Search found 80052 results on 3203 pages for 'data load performance'.


  • HttpWebRequest is extremely slow!

    - by Earlz
    Hello, I am using an open source library to connect to my webserver. I was concerned that the webserver was going extremely slow, so I tried a simple test in Ruby and got these results:

        Ruby program: 2.11 seconds for 100 HTTP GETs
        C# library:   20.81 seconds for 100 HTTP GETs

    I have profiled and found the problem to be this function:

        private HttpWebResponse GetRawResponse(HttpWebRequest request)
        {
            HttpWebResponse raw = null;
            try
            {
                raw = (HttpWebResponse)request.GetResponse(); // This line!
            }
            catch (WebException ex)
            {
                if (ex.Response is HttpWebResponse)
                {
                    raw = ex.Response as HttpWebResponse;
                }
            }
            return raw;
        }

    The marked line takes over 1 second to complete by itself, while the Ruby program making one request takes 0.3 seconds. I am also running all of these tests against 127.0.0.1, so network bandwidth is not an issue. What could be causing this huge slowdown?
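
    Not part of the original question, but automatic proxy discovery is a common explanation for a per-GET delay of roughly this size; a second setting, the Expect: 100-continue handshake, matters mainly for requests that send a body but is cheap to rule out too. The sketch below simply turns both off; the URL is a stand-in, and whether either setting is the culprit here is an assumption.

        using System;
        using System.Net;

        class SlowRequestCheck
        {
            static void Main()
            {
                // Skip the Expect: 100-continue round trip that .NET adds by default.
                ServicePointManager.Expect100Continue = false;

                var request = (HttpWebRequest)WebRequest.Create("http://127.0.0.1/");
                request.Proxy = null; // avoid automatic proxy detection, a frequent source of ~1 s stalls

                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine(response.StatusCode);
                }
            }
        }

    If the timing collapses after this change, the request setup rather than the server was the bottleneck.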

    Read the article

  • Retrieving FBO data in GLSL

    - by Tom Savage
    I'm trying to get MRT working in OpenGL to try out deferred rendering. Here's the procedure as I understand it:

    1. Create three render buffers (for example): two RGBA8 and one Depth32.
    2. Create an FBO.
    3. Attach the render buffers to the FBO: ColorAttachment0/1 for the color buffers, DepthAttachment for the depth buffer.
    4. Bind the FBO.
    5. Draw the geometry, sending data to the different attachments via gl_FragData[] in the fragment shader.

    At this point I want to consume that data in another pass using GLSL. How can I a) retrieve the data from the framebuffer color attachments, and b) get the data from the depth component?

    Read the article

  • How To Make RSS Data Accessible

    - by xarzu
    RSS data is nothing more than XML data, right? But is there some special format a webmaster has to follow in order for it to be read by an RSS reader? How do I format my web site's XML data for RSS readers?

    Read the article

  • stopwatch accuracy

    - by oo
    How accurate is System.Diagnostics.Stopwatch? I am trying to collect metrics for different code paths and I need the timings to be exact. Should I be using Stopwatch, or is there another solution that is more accurate? I have been told that Stopwatch sometimes gives incorrect information.
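
    Not from the question, but Stopwatch itself reports what it is measuring with: IsHighResolution tells you whether the high-resolution performance counter is in use, and Frequency gives the tick rate, so you can check the resolution on the machine where the metrics run. A minimal sketch:

        using System;
        using System.Diagnostics;

        class StopwatchResolution
        {
            static void Main()
            {
                // Is the high-resolution performance counter backing this Stopwatch?
                Console.WriteLine("High resolution: " + Stopwatch.IsHighResolution);
                Console.WriteLine("Ticks per second: " + Stopwatch.Frequency);

                var sw = Stopwatch.StartNew();
                // ... code path under test goes here ...
                sw.Stop();
                Console.WriteLine("Elapsed: " + sw.Elapsed.TotalMilliseconds + " ms");
            }
        }

    Averaging many runs, rather than trusting a single measurement, is the usual way around the jitter the poster has been warned about.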

    Read the article

  • Mysql latin1 turkish data and delphi 2010 utf8

    - by sabri.arslan
    Hello, I have tables with latin1_general_ci collation that hold Turkish character values, and I can use this data from Delphi 7 + ZeosLib with no problem. I want to upgrade to Delphi 2010, but ZeosLib seems too slow, so I would like to switch to ODBC+ADO or dbExpress. The dbExpress solution works fine: it displays my data as entered and writes it back as entered, without any change to the column charset. But dbExpress has its own problems; for example, SELECT * FROM a table with columns of type varchar, decimal, int, tinyint and text gives access violation errors on XP systems (Vista and 7 do not raise any error and seem to work, though not fully tested). The ADO solution (dbGo) works, but it does not show my data as entered; it wants everything to be UTF-8, and I don't want to convert my data to UTF-8 before I have tested everything. How can I see my data as entered, with the client side using UTF-8 while the storage stays latin1 (as ZeosLib and dbExpress do)? I have tried many other options, e.g. MySQL-side collation and charset parameters. Sorry for my bad English; I hope someone understands me. Thanks.

    Read the article

  • xlwt data garbled

    - by zhangzhong
    I retrieve data containing Chinese characters from the database and write it into an Excel file with xlwt, with code like this:

        ws0.write(unicode(cell, 'big5'))

    It works fine under Windows, but when I deployed it under Linux the data in the Excel file came out garbled. Could you help me with this?

    Read the article

  • Is regex too slow? Real life examples where simple non-regex alternative is better

    - by polygenelubricants
    I've seen people here make comments like "regex is too slow!" or "why would you do something so simple using regex!" (and then present a 10+ line alternative instead), etc. I haven't really used regex in an industrial setting, so I'm curious whether there are applications where regex is demonstrably just too slow, AND where a simple non-regex alternative exists that performs significantly (maybe even asymptotically!) better. Obviously many highly specialized string manipulations with sophisticated string algorithms will outperform regex easily, but I'm talking about cases where a simple solution exists and significantly outperforms regex. What counts as simple is subjective, of course, but I think a reasonable standard is that if it uses only String, StringBuilder, etc., then it's probably simple.
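
    Not part of the original question, but a minimal sketch of the kind of head-to-head it asks for: an "is this string all digits?" check written once with Regex and once with a plain loop. The absolute numbers depend on the runtime and on options such as RegexOptions.Compiled, so treat it as a benchmark template rather than a verdict.

        using System;
        using System.Diagnostics;
        using System.Text.RegularExpressions;

        class RegexVersusLoop
        {
            // Regex answer to "is this string all digits?"
            static bool AllDigitsRegex(string s)
            {
                return Regex.IsMatch(s, "^[0-9]+$");
            }

            // Plain-loop answer: same result, no regex engine involved.
            static bool AllDigitsLoop(string s)
            {
                if (s.Length == 0) return false;
                foreach (char c in s)
                    if (c < '0' || c > '9') return false;
                return true;
            }

            static void Main()
            {
                const string sample = "1234567890";
                const int iterations = 1000000;

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++) AllDigitsRegex(sample);
                Console.WriteLine("regex: " + sw.ElapsedMilliseconds + " ms");

                sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++) AllDigitsLoop(sample);
                Console.WriteLine("loop:  " + sw.ElapsedMilliseconds + " ms");
            }
        }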

    Read the article

  • NSOutlineview - strange behavior after reloading data

    - by matei
    I have an NSOutlineView which loads data from a data source. The table is displayed in a panel. The items shown in the table belong to an object (the relation between the object and the items is one-to-many). I have a list of objects in a combo box (in fact an NSPopupButton), and when I select another object in the combo box, I want its items to be shown in the table (the NSOutlineView). I managed to do all this; however, when I select another object from the combo box, not all of its items are displayed. I have put some logging messages in the data source, and it seems that some items are returned from the data source but are not shown in the table (and are not queried for children). Now the strange part: when I click the main window (as I said, everything described here is in an NSPanel loaded on top of the window), all the data is displayed correctly. It seems as if clicking the main window triggers something in the NSOutlineView that makes it display the missing items, but I can't tell what it is.

    Read the article

  • Linux 2.6.31 Scheduler and Multithreaded Jobs

    - by dsimcha
    I run massively parallel scientific computing jobs on a shared Linux computer with 24 cores. Most of the time my jobs are capable of scaling to 24 cores when nothing else is running on this computer. However, it seems that when even one single-threaded job that isn't mine is running, my 24-thread jobs (which I set to high nice values) only manage to get ~1800% CPU (using Linux notation). Meanwhile, about 500% of the CPU cycles (again, using Linux notation) are idle. Can anyone explain this behavior and what I can do about it to get all of the 23 cores that aren't being used by someone else? Notes: In case it's relevant, I have observed this on slightly different kernel versions, though I can't remember which off the top of my head. The CPU architecture is x64. Is it at all possible that the fact that my 24-core jobs are 32-bit and the other jobs I'm competing with are 64-bit is relevant? Edit: One thing I just noticed is that going up to 30 threads seems to alleviate the problem to some degree; it gets me up to ~2100% CPU.

    Read the article

  • Retrieving many huge sized EPS files and converting them to JPEG in ASP.NET application

    - by Ashish Gupta
    I have many (600) EPS files (300 KB - 1 MB each) in a database. In my ASP.NET application (using ASP.NET 4.0) I need to retrieve them one by one and call a web service which converts the content to JPEG and updates the database (the JPEGContent column with the JPEG content). However, retrieving the content for all 600 takes too long; even from SQL Management Studio it takes about 5 minutes for 10 EPS contents. So I have two issues:

    1) How to get the EPS content (unfortunately, selecting only a certain number of rows is not an option :-( ):

    Approach 1:

        foreach (var DataRow in DataTable.Rows)
        {
            // get the Id and byte[] of the EPS
            // call the web method to convert the EPS content to JPEG, which also updates the database
        }

    or

        foreach (var DataRow in DataTable.Rows)
        {
            // get only the Id of the EPS
            // hit the database to get the content of the EPS
            // call the web method to convert the EPS content to JPEG, which also updates the database
        }

    or any other approach?

    2) Converting EPS to JPEG using a web method for 600 contents. Of course, each call would be a long-running operation. Would the Task Parallel Library (TPL) be a better way to achieve this? Also, is doing the entire thing in a SQL CLR function a good idea?
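
    Not from the original question, but since TPL is mentioned: a minimal sketch of the second approach fanned out with Parallel.ForEach and a bounded degree of parallelism. ConvertEpsToJpeg and the "Id" column are hypothetical stand-ins for the poster's web method and schema.

        using System.Data;
        using System.Linq;
        using System.Threading.Tasks;

        static class EpsConversionSketch
        {
            // Hypothetical stand-in for the poster's web method; assumed to fetch the EPS
            // content for one Id, convert it, and update the JPEGContent column itself.
            static void ConvertEpsToJpeg(int id)
            {
                // web service call goes here
            }

            public static void ConvertAll(DataTable epsIds)
            {
                // Fan the long-running conversion calls out across a bounded number of workers,
                // so 600 calls don't all hit the web service (and the database) at once.
                var options = new ParallelOptions { MaxDegreeOfParallelism = 8 };
                Parallel.ForEach(epsIds.Rows.Cast<DataRow>(), options, row =>
                {
                    ConvertEpsToJpeg(row.Field<int>("Id"));
                });
            }
        }

    Whether 8 is the right ceiling depends on what the conversion service and the database can absorb.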

    Read the article

  • jQuery.getScript: data variable in callback undefined

    - by Hannes
    I'm trying to load an external JavaScript file using jQuery's getScript(), like this:

        $.getScript("http://api.recaptcha.net/js/recaptcha_ajax.js", function(data) {
            window.alert(data);
        });

    but as the alert window shows, the data variable in the callback function is undefined, contrary to what http://docs.jquery.com/Ajax/jQuery.getScript#urlcallback promises. Does anyone know why this might be? Thanks.

    Read the article

  • Extracting data from multiple servers SQL 2005 SSIS

    - by Raj
    I have created an SSIS package to connect to multiple SQL Servers and create a database, a table and a stored procedure on each. The package also creates a job and schedules it to run every 5 minutes. The requirement is to collect performance metrics. I am using an ADO object variable to get the server names, and all the above tasks are in a Foreach Loop; everything works fine. Now the problem: I need to create a Data Flow task which connects to each of these servers in turn, copies the performance metrics data over to a central server and purges the source table. I am unable to get this task to work; it fails with an "Unable to obtain Connection" error. Any help will be greatly appreciated. SQL Server version: 2005. Thanks, Raj

    Read the article

  • When should I open and close a website's cached WCF proxy?

    - by Brandon Linton
    I've browsed around the other articles on StackOverflow that relate to caching WCF proxies for reuse, and I've read this article explaining why I should explicitly open the proxy before calling anything on it. I'm still a little hazy on the best implementation details. My question is: when should I open and close proxies for service calls on a website, and what should their lifetime be (per call, per request, or per web app)? We aren't planning on leveraging cached security contexts at the moment (but it's not unforeseeable). Thanks!
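
    A pattern that comes up often for this question (not taken from the post itself): cache the ChannelFactory for the lifetime of the web app, since building it is the expensive part, but create, open and close a channel per call, and Abort instead of Close when the channel faults. The contract and endpoint name below are hypothetical.

        using System.ServiceModel;

        // Hypothetical contract, standing in for whatever the site actually calls.
        [ServiceContract]
        public interface IOrderService
        {
            [OperationContract]
            string GetStatus(int orderId);
        }

        public static class OrderServiceClient
        {
            // Factory cached per web app; "OrderServiceEndpoint" is an assumed config name.
            static readonly ChannelFactory<IOrderService> Factory =
                new ChannelFactory<IOrderService>("OrderServiceEndpoint");

            public static string GetStatus(int orderId)
            {
                IOrderService channel = Factory.CreateChannel();
                var comm = (ICommunicationObject)channel;
                try
                {
                    comm.Open();      // explicit Open, as the article the poster links recommends
                    string result = channel.GetStatus(orderId);
                    comm.Close();
                    return result;
                }
                catch
                {
                    comm.Abort();     // never Close a faulted channel
                    throw;
                }
            }
        }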

    Read the article

  • How to speed up marching cubes?

    - by Dan Vinton
    I'm using this marching cubes algorithm to draw 3D isosurfaces (ported into C#, outputting MeshGeometry3Ds, but otherwise the same). The resulting surfaces look great, but are taking a long time to calculate. Are there any ways to speed up marching cubes? The most obvious one is to simply reduce the spatial sampling rate, but this reduces the quality of the resulting mesh. I'd like to avoid this. I'm considering a two-pass system, where the first pass samples space much more coarsely, eliminating volumes where the field strength is well below my isolevel. Is this wise? What are the pitfalls? Edit: the code has been profiled, and the bulk of CPU time is split between the marching cubes routine itself and the field strength calculation for each grid cell corner. The field calculations are beyond my control, so speeding up the cubes routine is my only option... I'm still drawn to the idea of trying to eliminate dead space, since this would reduce the number of calls to both systems considerably.
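
    Not from the post, but a minimal sketch of the coarse culling pass the poster is considering, with its main pitfall noted in the comments: the skip margin has to exceed the largest change the field can make across a block, otherwise thin features near the isolevel are silently dropped. All names here are illustrative.

        using System;

        public class CoarseCullSketch
        {
            // Stand-in for the poster's (expensive) field strength calculation.
            public delegate double Field(double x, double y, double z);

            // Pass 1: one coarse sample per block. A block is skipped only when its centre
            // value sits more than `margin` below the isolevel; `margin` must bound how much
            // the field can vary across a block, or thin surfaces will be culled away.
            static bool BlockMightContainSurface(Field field, double isolevel, double margin,
                                                 double cx, double cy, double cz)
            {
                return field(cx, cy, cz) >= isolevel - margin;
            }

            public static void March(Field field, double isolevel, double margin,
                                     double min, double max, double blockSize)
            {
                for (double x = min; x < max; x += blockSize)
                for (double y = min; y < max; y += blockSize)
                for (double z = min; z < max; z += blockSize)
                {
                    double half = blockSize / 2;
                    if (!BlockMightContainSurface(field, isolevel, margin,
                                                  x + half, y + half, z + half))
                        continue; // dead space: no fine sampling, no cube table lookups here

                    // Pass 2: run the existing marching cubes routine at full resolution
                    // inside this block only (placeholder for the poster's code).
                }
            }
        }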

    Read the article

  • Python: Why is IDLE so slow?

    - by Adam Matan
    Hi, IDLE is my favorite Python editor. It offers a very nice and intuitive Python shell, which is extremely useful for unit testing and debugging, and a neat debugger. However, code executed under IDLE is insanely slow. By insanely I mean three orders of magnitude slow. In bash:

        time echo "for i in range(10000): print 'x'," | python

    takes 0.052 s. In IDLE:

        import datetime
        start = datetime.datetime.now()
        for i in range(10000): print 'x',
        end = datetime.datetime.now()
        print end - start

    takes:

        >>> 0:01:44.853951

    which is roughly 2,000 times slower. Any thoughts or ideas how to improve this? I guess it has something to do with the debugger in the background, but I'm not really sure. Adam

    Read the article

  • Eclipse JUnit Plugin Test very slow to re-execute Test Suite on Windows

    - by soundasleepful
    I'm having an odd and stressful problem running a large JUnit plugin test suite in Eclipse. When I try to re-run a JUnit plugin suite that has just been executed, Eclipse hangs for quite some time before it eventually wakes up and launches. It can take up to 5 minutes sometimes, and the delay increases with the size of the suite. Visually, it looks like a GC cleanup, except that I have plenty of GC space available (400 MB freely allocated). The size of the workspace it has to delete is well under 1 GB, and there are not too many files - definitely fewer than 20,000. While I was waiting for a new run to start, I decided to manually kill explorer.exe to see if it had any effect. Surprisingly, Eclipse instantly fell out of its freeze and ran as normal. This makes me think that Windows is somehow interfering with the deletion of these workspace files. They're not being put into the Recycle Bin, though. The workspace is in C:, which I think puts it out of the range of any workspace/domain stuff. Any ideas?

    Read the article

  • fast algorithm implementation to sort very small set

    - by aaa
    Hello. This is a problem I ran into a long time ago, and I thought I would ask for your ideas. Assume I have a very small set of numbers (integers), 4 or 8 elements, that need to be sorted, fast. What would be the best approach/algorithm? My approach was to use the max/min functions. I guess my question pertains more to implementation than to the type of algorithm. At this point it becomes somewhat hardware dependent, so let us assume an Intel 64-bit processor with SSE3. Thanks.
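
    Not from the question, and the SSE3 side of it is beyond what C# shows directly, but the min/max idea maps onto a fixed sorting network, which is the usual answer for tiny fixed-size inputs. Below is a sketch of the standard five-comparator network for four elements; a SIMD version would run the same compare/exchange steps on packed registers.

        using System;

        static class SortingNetwork
        {
            // Compare/exchange built from min/max, the primitive the question mentions.
            static void CompareExchange(ref int a, ref int b)
            {
                int lo = Math.Min(a, b);
                int hi = Math.Max(a, b);
                a = lo;
                b = hi;
            }

            // Fixed five-comparator network that sorts exactly four elements.
            public static void Sort4(ref int a, ref int b, ref int c, ref int d)
            {
                CompareExchange(ref a, ref b);
                CompareExchange(ref c, ref d);
                CompareExchange(ref a, ref c);
                CompareExchange(ref b, ref d);
                CompareExchange(ref b, ref c);
            }

            static void Main()
            {
                int w = 4, x = 1, y = 3, z = 2;
                Sort4(ref w, ref x, ref y, ref z);
                Console.WriteLine(w + " " + x + " " + y + " " + z); // prints: 1 2 3 4
            }
        }

    An eight-element input gets the same treatment with a 19-comparator network.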

    Read the article

  • Good way to make animations with cocos2d?

    - by Johnny Oin
    Hi there, I'm making a little iPhone game and I would like some clues. Let's imagine: two background sprites moving pretty fast from right to left, and moving up and down with the accelerometer. I guess I can't use animations here, because the movement of the background is recalculated at each frame. So I use a schedule with an interval of 0.025 s and move my sprites at each tick with:

        sprite.position = ccp(x, y);

    So here is my problem: the result is laggy, with only these two sprites. I tried both declaring the sprites in the header and getting them with CCNodes and tags; it's quite the same. So if someone can give me a hint on the best way to do this, that would be so nice. I wonder whether the problem could be that the sprites are moving very fast, but I'm not sure. Anyway, thanks for your time. J.

    Read the article

  • Linq to DataTable without enumerating fields

    - by Luciano
    Hi, I'm trying to query a DataTable object without specifying the fields, like this:

        var linqdata = from ItemA in ItemData.AsEnumerable()
                       select ItemA;

    but the returning type is System.Data.EnumerableRowCollection<System.Data.DataRow>, and I need a returning type like the standard anonymous type, i.e. System.Data.EnumerableRowCollection<<object,object>>. Any idea? Thanks
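
    Not from the question, but worth noting: an anonymous type can only be produced by naming its members at compile time, so without listing the columns the closest equivalents are the row's ItemArray or a per-row dictionary keyed by column name. A small sketch of the dictionary route; the table and columns are made up for illustration.

        using System;
        using System.Data;
        using System.Linq;

        class ProjectRows
        {
            static void Main()
            {
                // Hypothetical table standing in for the poster's ItemData.
                var itemData = new DataTable();
                itemData.Columns.Add("Id", typeof(int));
                itemData.Columns.Add("Name", typeof(string));
                itemData.Rows.Add(1, "alpha");

                // Each row becomes a dictionary of column name -> value, with no columns named in code.
                var rows = itemData.AsEnumerable()
                    .Select(r => itemData.Columns.Cast<DataColumn>()
                                         .ToDictionary(c => c.ColumnName, c => r[c]));

                foreach (var row in rows)
                    Console.WriteLine(string.Join(", ", row.Select(kv => kv.Key + "=" + kv.Value).ToArray()));
            }
        }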

    Read the article

  • data breakpoints in avr studio

    - by Art Spasky
    I want to set a data breakpoint for the TCNT1 register of the ATmega16 in AVR Studio 4.17 Build 666. I add the breakpoint by specifying the value IO@0x2C in the Location field of the "Add data breakpoint" window, but the breakpoint does not seem to work. Can someone tell me how to set up a data breakpoint for an I/O register?

    Read the article

  • Is there a fast way to jump to element using XMLReader?

    - by Derk
    I am using XMLReader to read a large XML file with about 1 million elements at the level I am reading from. However, I've calculated that it will take over 10 seconds to jump to, for instance, element 500,000 using XMLReader::next([string $localname]) or XMLReader::read(void). This is not very usable. Is there a faster way to do this?

    Read the article

  • Slow SelectSingleNode

    - by Simon
    I have a simply structured XML file like this:

        <ttest ID="ttest00001" NickName="map00001"/>
        <ttest ID="ttest00002" NickName="map00002"/>
        <ttest ID="ttest00003" NickName="map00003"/>
        <ttest ID="ttest00004" NickName="map00004"/>
        .....

    The XML file can be around 2.5 MB. In my source code I have a loop to get the nicknames, and in each iteration I have something like this:

        nickNameLoopNum = MyXmlDoc.SelectSingleNode("//ttest[@ID='" + testloopNum + "']").Attributes["NickName"].Value;

    This single line costs me 30 to 40 milliseconds. I found some old articles (dating back to 2002) saying that some sort of compiled XPath can help the situation, but that was five years ago. I wonder, is there a more modern practice to make it faster? (I'm using .NET 3.5)
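
    Not from the question, but the usual way to avoid paying for an XPath scan of the whole document on every iteration is to index it once up front. The sketch below selects every ttest node a single time and builds an ID-to-NickName dictionary, so each lookup in the loop becomes a hash probe; the file name is hypothetical and the document is assumed to be well formed with a single root element.

        using System;
        using System.Collections.Generic;
        using System.Xml;

        class NickNameLookup
        {
            // One pass over the document; after this, lookups no longer touch the XML at all.
            static Dictionary<string, string> BuildIndex(XmlDocument doc)
            {
                var index = new Dictionary<string, string>();
                foreach (XmlNode node in doc.SelectNodes("//ttest"))
                    index[node.Attributes["ID"].Value] = node.Attributes["NickName"].Value;
                return index;
            }

            static void Main()
            {
                var doc = new XmlDocument();
                doc.Load("ttest.xml"); // hypothetical file name
                Dictionary<string, string> nickNames = BuildIndex(doc);

                string nick;
                if (nickNames.TryGetValue("ttest00001", out nick))
                    Console.WriteLine(nick); // map00001
            }
        }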

    Read the article
