Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.


  • Doing a large number of upserts as fast as possible

    - by Jason Swett
    My app (which uses MySQL) is doing a large number of consecutive upserts. Right now my SQL looks like this:

    INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL','123','123','123')
    INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL','123','123','123')
    INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL OR','123','123','123')
    INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('TRACY L WALTER PERSONAL REP FOR','123','123','123')
    INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('TRACY L WALTER PERSONAL REP FOR','123','123','123')

    So far I've found INSERT IGNORE to be the fastest way to achieve upserts. Selecting a record to see whether it exists and then either updating it or inserting a new one is too slow. Even this is not as fast as I'd like, because I need a separate statement for each record. Sometimes I'll have around 50,000 of these statements in a row. Is there a way to take care of all of them in just one statement, without deleting any existing records? (A sketch of one option follows this entry.)

    Read the article
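
    One way to collapse these into a single statement, without REPLACE's delete-and-reinsert behavior, is a multi-row VALUES list: in MySQL, both INSERT IGNORE and INSERT ... ON DUPLICATE KEY UPDATE accept many tuples per statement. The question doesn't say what language the app is written in, so the following is only a hedged C# sketch using the MySql.Data ADO.NET provider, with illustrative names; chunk the batches (e.g. 1,000 rows per statement) to stay under MySQL's max_allowed_packet.

        using System.Collections.Generic;
        using System.Text;
        using MySql.Data.MySqlClient;

        static void BulkInsertIgnore(MySqlConnection conn, IList<string[]> rows)
        {
            // Build one statement: INSERT IGNORE ... VALUES (...),(...),...,(...)
            var sql = new StringBuilder(
                "INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ");
            var cmd = new MySqlCommand { Connection = conn };
            for (int i = 0; i < rows.Count; i++)
            {
                if (i > 0) sql.Append(',');
                sql.AppendFormat("(@n{0},@c{0},@s{0},@p{0})", i); // one parameterized tuple per row
                cmd.Parameters.AddWithValue("@n" + i, rows[i][0]);
                cmd.Parameters.AddWithValue("@c" + i, rows[i][1]);
                cmd.Parameters.AddWithValue("@s" + i, rows[i][2]);
                cmd.Parameters.AddWithValue("@p" + i, rows[i][3]);
            }
            cmd.CommandText = sql.ToString();
            cmd.ExecuteNonQuery(); // one round trip for the whole batch
        }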

  • Best way to retrieve a certain field of all documents returned by a Lucene search

    - by Philipp
    Hi, I was wondering what the best way is to retrieve a certain field from all documents returned by a Lucene Searcher. Background: each document has a date field (written on), and I would like to show a timeline of all found documents, so I need to extract the date (day) field of every document the search finds. I currently retrieve each document using Searcher.doc(int, FieldSelector), with the selector retrieving only that one field. I have indexed 250k documents; the search itself takes no time and returns about 10k document ids. Retrieving those, however, takes 20+ seconds. What can I do to speed things up while still getting all the values I need? Thanks in advance, Philipp

    Read the article

  • Easy way to observe user activity - how to improve my database structure

    - by Thomas
    Welcome. I need some advice to improve the performance of my web application. In the beginning I had this database structure:

    USER: -id (Primary Key) -name -password -email ...
    PROFILE: -user (Primary Key, Foreign Key(USER)) -birthday -region -photoFile ...
    PAGES: -id (Primary Key) -user (Foreign Key(USER)) -page -date
    COMMENTS: -id (Primary Key) -user (Foreign Key(USER)) -page (Foreign Key(PAGES)) -comment -date
    FAVOURITES_PAGES: -id (Primary Key) -user (Foreign Key(USER)) -favourite_page (Foreign Key(PAGES)) -date

    But now one of the most important pages of the website is the observatory, where everyone can observe the activity of other users. So I need to select all pages, comments and favourite pages of some users and display them in one list, sorted by date. For better performance (I think) I changed my structure to this: USER and PROFILE are unchanged, and I added an ACTIVITY table holding the common fields (user, date):

    ACTIVITY: -id (Primary Key) -user (Foreign Key(USER)) -date -page (Foreign Key(PAGES)) -comment (Foreign Key(COMMENTS)) -favourite_page (Foreign Key(FAVOURITES_PAGES))
    PAGES: -id (Primary Key) -page
    COMMENTS: -id (Primary Key) -page (Foreign Key(PAGES)) -comment
    FAVOURITES_PAGES: -id (Primary Key) -favourite_page (Foreign Key(PAGES))

    So now it is very easy to get sorted records from all the tables. But ACTIVITY does not have just one foreign key; there are about ten foreign key fields, and in each record only one has a value - the others are None:

    ACTIVITY:  id  user  date        page  comment ...
               1   2     2010-02-23  None  1
               2   1     2010-02-21  1     None

    Is this a correct solution? When I display about 40 records on one page (pagination) I must wait about one second, even though the database is almost empty (a few users and about 100 records in the other tables). The wait depends on the number of records per page - I have checked that - but why does it take so long? Is it because of the relationships? The website is built in Python/Django. Any advice/opinions?

    Read the article

  • Why is thread local storage so slow?

    - by dsimcha
    I'm working on a custom mark-release style memory allocator for the D programming language that works by allocating from thread-local regions. It seems that the thread-local storage bottleneck is causing a huge (~50%) slowdown in allocating memory from these regions compared to an otherwise identical single-threaded version of the code, even after designing my code to have only one TLS lookup per allocation/deallocation. This is based on allocating/freeing memory a large number of times in a loop, and I'm trying to figure out if it's an artifact of my benchmarking method. My understanding is that thread-local storage should basically just involve accessing something through an extra layer of indirection, similar to accessing a variable via a pointer. Is this incorrect? How much overhead does thread-local storage typically have? (A measurement sketch in another runtime follows this entry.) Note: although I mention D, I'm also interested in general answers that aren't specific to D, since D's implementation of thread-local storage will likely improve if it is slower than the best implementations.

    Read the article
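
    As a rough data point for how much overhead TLS typically has, the same micro-experiment is easy to reproduce in other runtimes. Below is a hedged C# sketch (not D): [ThreadStatic] maps the field to a TLS slot, and the two loops compare a TLS increment against a plain static-field increment. If the per-access difference is a few nanoseconds rather than anything close to a 50% allocator slowdown, that points back at the benchmark.

        using System;
        using System.Diagnostics;

        class TlsOverhead
        {
            [ThreadStatic] static int tlsCounter;
            static int plainCounter;

            static void Main()
            {
                const int n = 100000000;

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < n; i++) tlsCounter++;   // TLS slot access per iteration
                Console.WriteLine("TLS:   {0} ms", sw.ElapsedMilliseconds);

                sw.Restart();
                for (int i = 0; i < n; i++) plainCounter++; // ordinary static field access
                Console.WriteLine("Plain: {0} ms", sw.ElapsedMilliseconds);

                Console.WriteLine(tlsCounter + plainCounter); // keep both loops live
            }
        }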

  • Changing POST data used by Apache Bench per iteration

    - by Alabaster Codify
    I'm using ab to do some load testing, and it's important that the supplied querystring (or POST) parameters change between requests. I.e. I need to make requests to URLs like:

    http://127.0.0.1:9080/meth?param=0
    http://127.0.0.1:9080/meth?param=1
    http://127.0.0.1:9080/meth?param=2
    ...

    to properly exercise the application. ab seems to read the supplied POST data file only once, at startup, so changing its content during the test run is not an option. Any suggestions? (A minimal custom-client sketch follows this entry.)

    Read the article
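
    ab has no built-in way to vary the payload per request, so the usual workarounds are driving a list of distinct URLs with a different tool, or writing a small custom client. A minimal, hedged C# sketch of the latter; it is single-threaded, so it loses ab's concurrency and percentile statistics:

        using System;
        using System.Diagnostics;
        using System.Net;

        class MiniLoadTest
        {
            static void Main()
            {
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < 100; i++)
                {
                    // The querystring parameter changes on every iteration.
                    var req = (HttpWebRequest)WebRequest.Create(
                        "http://127.0.0.1:9080/meth?param=" + i);
                    using (var resp = (HttpWebResponse)req.GetResponse())
                    {
                        // Body intentionally discarded; the status code is enough here.
                    }
                }
                Console.WriteLine("100 requests in {0} ms", sw.ElapsedMilliseconds);
            }
        }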

  • SQL Query slow in .NET application but instantaneous in SQL Server Management Studio

    - by user203882
    Here is the SQL:

    SELECT tal.TrustAccountValue
    FROM TrustAccountLog AS tal
    INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
    INNER JOIN Users usr ON usr.UserID = ta.UserID
    WHERE usr.UserID = 70402
      AND ta.TrustAccountID = 117249
      AND tal.trustaccountlogid =
      (
        SELECT MAX(tal.trustaccountlogid)
        FROM TrustAccountLog AS tal
        INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
        INNER JOIN Users usr ON usr.UserID = ta.UserID
        WHERE usr.UserID = 70402
          AND ta.TrustAccountID = 117249
          AND tal.TrustAccountLogDate < '3/1/2010 12:00:00 AM'
      )

    Basically, there is a Users table, a TrustAccount table and a TrustAccountLog table. Users contains users and their details. A User can have multiple TrustAccounts. TrustAccountLog contains an audit of all TrustAccount "movements"; a TrustAccount is associated with multiple TrustAccountLog entries. Now this query executes in milliseconds inside SQL Server Management Studio, but for some strange reason it takes forever in my C# app and even times out (120 s) sometimes. Here is the code in a nutshell. It gets called multiple times in a loop and the statement gets prepared:

    cmd.CommandTimeout = Configuration.DBTimeout;
    cmd.CommandText = "SELECT tal.TrustAccountValue FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = @UserID1 AND ta.TrustAccountID = @TrustAccountID1 AND tal.trustaccountlogid = (SELECT MAX(tal.trustaccountlogid) FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = @UserID2 AND ta.TrustAccountID = @TrustAccountID2 AND tal.TrustAccountLogDate < @TrustAccountLogDate2)";
    cmd.Parameters.Add("@TrustAccountID1", SqlDbType.Int).Value = trustAccountId;
    cmd.Parameters.Add("@UserID1", SqlDbType.Int).Value = userId;
    cmd.Parameters.Add("@TrustAccountID2", SqlDbType.Int).Value = trustAccountId;
    cmd.Parameters.Add("@UserID2", SqlDbType.Int).Value = userId;
    cmd.Parameters.Add("@TrustAccountLogDate2", SqlDbType.DateTime).Value = TrustAccountLogDate;

    // And then...
    reader = cmd.ExecuteReader();
    if (reader.Read())
    {
        double value = (double)reader.GetValue(0);
        if (System.Double.IsNaN(value))
            return 0;
        else
            return value;
    }
    else
        return 0;

    (A sketch of one common explanation follows this entry.)

    Read the article
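
    A classic cause of exactly this symptom (hedged: confirm by comparing the actual execution plans from both sessions) is that SSMS runs with SET ARITHABORT ON while ADO.NET connections default to OFF, so the app gets its own cached plan, and that plan may have been compiled for unrepresentative parameter values (parameter sniffing). A quick experiment is to match the session setting from C#:

        using System.Data.SqlClient;

        static void MatchSsmsSettings(SqlConnection conn)
        {
            // Run once per connection before the slow query; if the query is
            // suddenly fast, the problem is a sniffed plan, not the query itself.
            using (var cmd = new SqlCommand("SET ARITHABORT ON;", conn))
            {
                cmd.ExecuteNonQuery();
            }
            // Alternatives: append OPTION (RECOMPILE) to the statement, or
            // update statistics so a representative plan gets cached.
        }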

  • Why are my basic Heroku apps taking 2 seconds to load?

    - by viatropos
    I have created two very simple Heroku apps to test out the service, but it often takes several seconds to load the page when I first visit them:

    Cropify - basic Sinatra app (on GitHub)
    Textile2HTML - even more basic Sinatra app (on GitHub)

    All I did was create a simple Sinatra app and deploy it. I haven't done anything to mess with or test the Heroku servers. What can I do to improve response time? It's very slow right now and I'm not sure where to start. The code for the projects is on GitHub if that helps. Thanks so much.

    Read the article

  • C# TabPage.Controls.Add takes too long

    - by Toto
    Hi, at run time I add one control to a TabPage, and I notice that it takes 0.5 s. That's rather long, and I would like to reduce this time. I tried Suspend/ResumeLayout, but for a single action it's not relevant and did not improve anything. Any ideas? (A handle-creation experiment follows this entry.) Thanks

    Read the article
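
    Since Suspend/ResumeLayout did not help, one hedged guess is that the half second is deferred window-handle creation rather than layout: WinForms creates a control's native handle lazily, often at the moment it is first parented. Touching Control.Handle forces creation early (for example at form load), so the later Controls.Add mostly just re-parents an existing window. Control and field names below are illustrative:

        using System;
        using System.Windows.Forms;

        public partial class MainForm : Form
        {
            private readonly UserControl heavy = new MyHeavyControl(); // hypothetical control type

            private void MainForm_Load(object sender, EventArgs e)
            {
                IntPtr _ = heavy.Handle; // the getter forces CreateHandle() now, off the hot path
            }

            private void ShowHeavyControl()
            {
                tabPage1.Controls.Add(heavy); // should now be cheaper; measure to confirm
            }
        }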

  • Benchmarking a particular method in Objective-C

    - by Jasconius
    I have a critical method in an Objective-C application that I need to optimize as much as possible. I first need to take some easy benchmarks on this one single method so I can compare my progress as I optimize. What is the easiest way to track the execution time of a given method in, say, milliseconds, and print that to the console?

    Read the article

  • Which is faster in Python: x**.5 or math.sqrt(x)?

    - by Casey
    I've been wondering this for some time. As the title says, which is faster: the actual function, or simply raising to the half power? UPDATE: This is not a matter of premature optimization. This is simply a question of how the underlying code actually works. What is the theory of how Python code works? I sent Guido van Rossum an email because I really wanted to know the differences between these methods. My email: "There are at least 3 ways to do a square root in Python: math.sqrt, the '**' operator and pow(x,.5). I'm just curious as to the differences in the implementation of each of these. When it comes to efficiency, which is better?" His response: "pow and ** are equivalent; math.sqrt doesn't work for complex numbers, and links to the C sqrt() function. As to which one is faster, I have no idea..."

    Read the article

  • Trying to speed up a SQLite UNION query

    - by user142683
    I have the SQLite query below:

    SELECT x.t,
           CASE WHEN S.Status='A' AND M.Nomorebets=0 THEN S.PriceText ELSE '-' END AS Show_Price
    FROM sb_Market M
    LEFT OUTER JOIN (select 2010 t union select 2020 t union select 2030 t
                     union select 2040 t union select 2050 t union select 2060 t
                     union select 2070 t) as x
    LEFT OUTER JOIN sb_Selection S
      ON S.MeetingId=M.MeetingId AND S.EventId=M.EventId AND S.MarketId=M.MarketId AND x.t=S.team
    WHERE M.meetingid=8051 AND M.eventid=3 AND M.Name='Correct Score'

    With the current interface restrictions, I have to use the above code to ensure that if one selection is missing, a '-' appears. The feed looks something like this:

    SelectionId  Name    Team  Status  PriceText
    ===========  ======  ====  ======  =========
    1            Barney  2010  A       10
    2            Jim     2020  A       5
    3            Matt    2030  A       6
    4            John    2040  A       8
    5            Paul    2050  A       15/2
    6            Frank   2060  S       10/11
    7            Tom     2070  A       15

    Is using the above SQL the quickest and most efficient approach? Please advise of anything that could help; replies with updated SQL would be preferable.

    Read the article

  • Is SQL Server DRI (ON DELETE CASCADE) slow?

    - by Aaronaught
    I've been analyzing a recurring "bug report" (perf issue) in one of our systems related to a particularly slow delete operation. Long story short: it seems that the CASCADE DELETE keys were largely responsible, and I'd like to know (a) if this makes sense, and (b) why it's the case. We have a schema of, let's say, widgets, those being at the root of a large graph of related tables and related-to-related tables and so on. To be perfectly clear, deleting from this table is actively discouraged; it is the "nuclear option" and users are under no illusions to the contrary. Nevertheless, it sometimes just has to be done. The schema looks something like this:

    Widgets
    |
    +--- Anvils (1:1)
    |    |
    |    +--- AnvilTestData (1:N)
    |
    +--- WidgetHistory (1:N)
         |
         +--- WidgetHistoryDetails (1:N)

    Nothing too scary, really. A Widget can be different types, an Anvil is a special type, so that relationship is 1:1 (or more accurately 1:0..1). Then there's a large amount of data - perhaps thousands of rows of AnvilTestData per Anvil collected over time, dealing with hardness, corrosion, exact weight, hammer compatibility, usability issues, and impact tests with cartoon heads. Then every Widget has a long, boring history of various types of transactions - production, inventory moves, sales, defect investigations, RMAs, repairs, customer complaints, etc. There might be 10-20k details for a single widget, or none at all, depending on its age. So, unsurprisingly, there's a CASCADE DELETE relationship at every level here. If a Widget needs to be deleted, it means something's gone terribly wrong and we need to erase any records of that widget ever existing, including its history, test data, etc. Again, nuclear option. Relations are all indexed, statistics are up to date. Normal queries are fast. The system tends to hum along pretty smoothly for everything except deletes. Getting to the point here, finally: for various reasons we only allow deleting one widget at a time, so a delete statement would look like this:

    DELETE FROM Widgets
    WHERE WidgetID = @WidgetID

    Pretty simple, innocuous looking delete... that takes over 2 minutes to run, for a widget with no data! After slogging through execution plans I was finally able to pick out the AnvilTestData and WidgetHistoryDetails deletes as the sub-operations with the highest cost. So I experimented with turning off the CASCADE (but keeping the actual FK, just setting it to NO ACTION) and rewriting the script as something very much like the following:

    DECLARE @AnvilID int
    SELECT @AnvilID = AnvilID
    FROM Anvils
    WHERE WidgetID = @WidgetID

    DELETE FROM AnvilTestData
    WHERE AnvilID = @AnvilID

    DELETE FROM WidgetHistory
    WHERE HistoryID IN (
        SELECT HistoryID
        FROM WidgetHistory
        WHERE WidgetID = @WidgetID)

    DELETE FROM Widgets
    WHERE WidgetID = @WidgetID

    Both of these "optimizations" resulted in significant speedups, each one shaving nearly a full minute off the execution time, so that the original 2-minute deletion now takes about 5-10 seconds - at least for new widgets, without much history or test data. Just to be absolutely clear, there is still a CASCADE from WidgetHistory to WidgetHistoryDetails, where the fanout is highest; I only removed the one originating from Widgets. Further "flattening" of the cascade relationships resulted in progressively less dramatic but still noticeable speedups, to the point where deleting a new widget was almost instantaneous once all of the cascade deletes to larger tables were removed and replaced with explicit deletes.
I'm using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test. I've disabled all triggers that might be causing further slowdowns (although those would show up in the execution plan anyway). And I'm testing against older widgets, too, and noticing a significant speedup there as well; deletes that used to take 5 minutes now take 20-40 seconds. Now I'm an ardent supporter of the "SELECT ain't broken" philosophy, but there just doesn't seem to be any logical explanation for this behaviour other than crushing, mind-boggling inefficiency of the CASCADE DELETE relationships. So, my questions are: Is this a known issue with DRI in SQL Server? (I couldn't seem to find any references to this sort of thing on Google or here in SO; I suspect the answer is no.) If not, is there another explanation for the behaviour I'm seeing? If it is a known issue, why is it an issue, and are there better workarounds I could be using?

    Read the article

  • HttpWebRequest is extremely slow!

    - by Earlz
    Hello, I am using an open source library to connect to my webserver. I was concerned that the webserver was going extremely slow, and then I tried a simple test in Ruby and got these results:

    Ruby program: 2.11 seconds for 100 HTTP GETs
    C# library: 20.81 seconds for 100 HTTP GETs

    I have profiled and found the problem to be this function:

    private HttpWebResponse GetRawResponse(HttpWebRequest request)
    {
        HttpWebResponse raw = null;
        try
        {
            raw = (HttpWebResponse)request.GetResponse(); // This line!
        }
        catch (WebException ex)
        {
            if (ex.Response is HttpWebResponse)
            {
                raw = ex.Response as HttpWebResponse;
            }
        }
        return raw;
    }

    The marked line takes over 1 second to complete by itself, while the Ruby program making one request takes 0.3 seconds. I am also doing all of these tests on 127.0.0.1, so network bandwidth is not an issue. What could be causing this huge slowdown? (Two common fixes are sketched after this entry.)

    Read the article
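
    Two well-known culprits for roughly a second per HttpWebRequest, even against 127.0.0.1, are automatic proxy discovery and the Expect: 100-continue handshake; a Ruby client does neither, which would explain the gap. A hedged sketch of both mitigations:

        using System.Net;

        static HttpWebRequest CreateFastRequest(string url)
        {
            // Process-wide settings; set once at startup.
            ServicePointManager.Expect100Continue = false;    // skip the 100-continue round trip on POSTs
            ServicePointManager.DefaultConnectionLimit = 100; // the default is only 2 per host

            var req = (HttpWebRequest)WebRequest.Create(url);
            req.Proxy = null; // avoid per-request automatic proxy discovery (often the ~1 s cost)
            return req;
        }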

  • Stopwatch accuracy

    - by oo
    How accurate is System.Diagnostics.Stopwatch? I am trying to do some metrics for different code paths and I need them to be exact. Should I be using Stopwatch, or is there another solution that is more accurate? I have been told that sometimes Stopwatch gives incorrect information. (A quick resolution check follows this entry.)

    Read the article
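
    Stopwatch uses the OS high-resolution performance counter when one is available, which is far more precise than DateTime.Now's roughly 15 ms tick. A hedged sketch for checking the resolution on a given machine and averaging over many iterations to absorb scheduling jitter (the measured method name is illustrative):

        using System;
        using System.Diagnostics;

        class StopwatchCheck
        {
            static void Main()
            {
                Console.WriteLine("High resolution: {0}", Stopwatch.IsHighResolution);
                Console.WriteLine("Ticks/second:    {0}", Stopwatch.Frequency);

                const int iterations = 1000;
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                    CodePathUnderTest(); // hypothetical method being measured
                sw.Stop();

                // Report the mean; a single run is dominated by jitter.
                Console.WriteLine("Mean: {0:F4} ms",
                    sw.Elapsed.TotalMilliseconds / iterations);
            }

            static void CodePathUnderTest() { /* ... */ }
        }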

  • Fast modulo 3 or division algorithm?

    - by aaa
    Hello, is there a fast algorithm for n % 3, similar to the bitwise tricks that work for powers of 2? Perhaps something that uses the fact that if the sum of a number's digits is divisible by three, then the number is also divisible by three. This leads to the next question: what is a fast way to add the digits of a number? E.g. 37 -> 3 + 7 -> 10. I am looking for something that has no conditionals, as those tend to inhibit vectorization. Thanks. (A branch-light sketch follows this entry.)

    Read the article
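
    The digit-sum fact generalizes from base 10 to base 4: since 4 mod 3 = 1, summing two-bit groups preserves a value's remainder mod 3. A hedged C# sketch of that folding trick follows; note that modern compilers usually turn a constant n % 3 into a multiply rather than a divide, so benchmark before preferring this.

        static uint Mod3(uint n)
        {
            // Fold base-4 digits: n = 4*q + r, and (4*q + r) mod 3 = (q + r) mod 3.
            while (n > 5)
                n = (n >> 2) + (n & 3u);
            // n is now in 0..5; the final reduction can be a 6-entry lookup
            // table {0,1,2,0,1,2} if even this one branch must go.
            return n >= 3 ? n - 3 : n;
        }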

  • Why does appending "" to a String save memory?

    - by hsmit
    I used a variable with a lot of data in it, say String data. I wanted to use a small part of this string in the following way:

    this.smallpart = data.substring(12,18);

    After some hours of debugging (with a memory visualizer) I found out that the object's field smallpart kept all the data from data, although it only contained the substring. When I changed the code to:

    this.smallpart = data.substring(12,18)+"";

    ...the problem was solved! Now my application uses very little memory! How is that possible? Can anyone explain this? I think this.smallpart kept referencing data, but why? UPDATE: How can I clear the big String then? Will data = new String(data.substring(0,100)) do the trick?

    Read the article

  • Is regex too slow? Real life examples where simple non-regex alternative is better

    - by polygenelubricants
    I've seen people here make comments like "regex is too slow!", or "why would you do something so simple using regex!" (and then present a 10+ line alternative instead), etc. I haven't really used regex in an industrial setting, so I'm curious whether there are applications where regex is demonstrably just too slow, AND where a simple non-regex alternative exists that performs significantly (maybe even asymptotically!) better. Obviously many highly-specialized string manipulations with sophisticated string algorithms will outperform regex easily, but I'm talking about cases where a simple solution exists and significantly outperforms regex. What counts as simple is subjective, of course, but I think a reasonable standard is that if it uses only String, StringBuilder, etc., then it's probably simple. (A micro-benchmark sketch follows this entry.)

    Read the article
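
    A hedged micro-benchmark sketch of the most common such case: splitting on a single character, where plain String.Split usually beats even a precompiled Regex by a wide margin. Numbers vary by input shape and runtime, so measure on your own data:

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.Text.RegularExpressions;

        class RegexVsSplit
        {
            static void Main()
            {
                string line = string.Join(",", Enumerable.Repeat("field", 1000));
                var regex = new Regex(",", RegexOptions.Compiled); // regex's best case

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < 10000; i++) regex.Split(line);
                Console.WriteLine("Regex.Split:  {0} ms", sw.ElapsedMilliseconds);

                sw.Restart();
                for (int i = 0; i < 10000; i++) line.Split(',');
                Console.WriteLine("String.Split: {0} ms", sw.ElapsedMilliseconds);
            }
        }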

  • How to profile a Silverlight MVVM application with a lot of custom controls

    - by tomo
    There is a quite big LOB Silverlight application, and we wrote a lot of custom controls which are rather heavy in drawing. All data is loaded by an RIA service, processed, and bound (using the INotifyPropertyChanged interface) to the view. The problem is that the first drawing takes a lot of time; subsequent calls to the service (server) and redrawing are quite fast. I used the EQATEC profiler to track the problem. I saw that processing takes only a couple of milliseconds, so my idea is that the drawing by the SL engine is slow. I'm wondering if it is possible to somehow profile the processes inside SL to check which drawing operations are taking too much time. Are there any guidelines on how to implement faster drawing of complex custom controls?

    Read the article

  • Is it possible to disable user sessions for guests in Joomla 1.5

    - by WebolizeR
    Hi, is it possible to disable session handling in Joomla 1.5 for guests? I do not use the user system in the frontend, so I assume there is no need to run queries like those below. The site will use APC or memcache as the caching system under heavy load, so this is important to me. Thanks for your comments.

    # DELETE FROM jos_session WHERE ( time < '1274709357' )
    # SELECT * FROM jos_session WHERE session_id = '70c247cde8dcc4dad1ce111991772475'
    # UPDATE jos_session SET time='1274710257',userid='0',usertype='',username='',gid='0',guest='1',client_id='0',data='__default|a:8:{s:15:\"session.counter\";i:5;s:19:\"session.timer.start\";i:1274709740;s:18:\"session.timer.last\";i:1274709749;s:17:\"session.timer.now\";i:1274709754;s:22:\"session.client.browser\";s:88:\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3\";s:8:\"registry\";O:9:\"JRegistry\":3:{s:17:\"_defaultNameSpace\";s:7:\"session\";s:9:\"_registry\";a:1:{s:7:\"session\";a:1:{s:4:\"data\";O:8:\"stdClass\":0:{}}}s:7:\"_errors\";a:0:{}}s:4:\"user\";O:5:\"JUser\":19:{s:2:\"id\";i:0;s:4:\"name\";N;s:8:\"username\";N;s:5:\"email\";N;s:8:\"password\";N;s:14:\"password_clear\";s:0:\"\";s:8:\"usertype\";N;s:5:\"block\";N;s:9:\"sendEmail\";i:0;s:3:\"gid\";i:0;s:12:\"registerDate\";N;s:13:\"lastvisitDate\";N;s:10:\"activation\";N;s:6:\"params\";N;s:3:\"aid\";i:0;s:5:\"guest\";i:1;s:7:\"_params\";O:10:\"JParameter\":7:{s:4:\"_raw\";s:0:\"\";s:4:\"_xml\";N;s:9:\"_elements\";a:0:{}s:12:\"_elementPath\";a:1:{i:0;s:74:\"C:\xampp\htdocs\sites\iv.mynet.com\libraries\joomla\html\parameter\element\";}s:17:\"_defaultNameSpace\";s:8:\"_default\";s:9:\"_registry\";a:1:{s:8:\"_default\";a:1:{s:4:\"data\";O:8:\"stdClass\":0:{}}}s:7:\"_errors\";a:0:{}}s:9:\"_errorMsg\";N;s:7:\"_errors\";a:0:{}}s:13:\"session.token\";s:32:\"a2b19c7baf223ad5fd2d5503e18ed323\";}' WHERE session_id='70c247cde8dcc4dad1ce111991772475'

    Read the article

  • Retrieving many huge EPS files and converting them to JPEG in an ASP.NET application

    - by Ashish Gupta
    I have many (600) EPS files (300 KB - 1 MB each) in the database. In my ASP.NET application (using ASP.NET 4.0) I need to retrieve them one by one and call a web service that converts the content to JPEG and updates the database (the JPEGContent column with the JPEG content). However, retrieving the content for all 600 takes too long even from SQL Server Management Studio itself (5 minutes for 10 EPS contents). So I have two issues:

    1) How to get the EPS content (unfortunately, selecting only a certain number of contents at a time is not an option :-( ). Either:

    foreach (var DataRow in DataTable.Rows)
    {
        // get the Id and byte[] of the EPS
        // call the web method to convert the EPS content to JPEG, which also updates the database
    }

    or:

    foreach (var DataRow in DataTable.Rows)
    {
        // get only the Id of the EPS
        // hit the database to get the content of the EPS
        // call the web method to convert the EPS content to JPEG, which also updates the database
    }

    or any other approach?

    2) Converting EPS to JPEG using a web method for 600 contents. Of course, each call would be a long-running operation. Would the Task Parallel Library (TPL) be a better way to achieve this? Also, is doing the entire thing in a SQL CLR function a good idea? (A streaming + TPL sketch follows this entry.)

    Read the article
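
    A hedged sketch of the second approach combined with TPL: stream one row at a time with SqlDataReader (so 600 blobs are never materialized at once, unlike a filled DataTable) and convert with a bounded degree of parallelism. Table, column and method names here are illustrative, not from the question:

        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Threading.Tasks;

        static IEnumerable<int> EpsIds(string connStr)
        {
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand("SELECT Id FROM EpsFiles", conn)) // hypothetical table
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        yield return reader.GetInt32(0); // ids only; content fetched per item
            }
        }

        static void ConvertAll(string connStr)
        {
            Parallel.ForEach(
                EpsIds(connStr),
                new ParallelOptions { MaxDegreeOfParallelism = 4 }, // tune to the converter's capacity
                id => ConvertEpsToJpeg(id)); // hypothetical call into the conversion web method
        }

        static void ConvertEpsToJpeg(int id)
        {
            // Fetch this one EPS blob by id, call the web service, let it update JPEGContent.
        }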

  • Python: Why is IDLE so slow?

    - by Adam Matan
    Hi, IDLE is my favorite Python editor. It offers a very nice and intuitive Python shell which is extremely useful for unit testing and debugging, and a neat debugger. However, code executed under IDLE is insanely slow. By insanely I mean 3 orders of magnitude slow. In bash:

    time echo "for i in range(10000): print 'x'," | python

    takes 0.052 s. In IDLE:

    import datetime
    start = datetime.datetime.now()
    for i in range(10000): print 'x',
    end = datetime.datetime.now()
    print end - start

    takes:

    >>> 0:01:44.853951

    which is roughly 2,000 times slower. Any thoughts, or ideas how to improve this? I guess it has something to do with the debugger in the background, but I'm not really sure. Adam

    Read the article
