Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.


  • Custom session state provider needed for DB storage?

    - by subt13
    I know this question is related to many others, but please bear with me. As an experiment, I am trying to store all information in database tables instead of in the ASP.NET session. In ASP.NET 4 one can create a custom provider for session state. So, again: should I implement a custom session-state provider, or should I just disable session state (in Web.config)? Thanks! From the comments, my question can be misunderstood; hopefully this tidbit will help clarify: I don't want to store the session in the database. I want to store information in the database that you would typically store in the session. One reason why: I don't want to carry a session around on every page, especially if that page doesn't care about 90 percent of the information in the session.
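
    If the decision is simply to switch session state off, that is a one-line Web.config change; a minimal sketch, assuming stock ASP.NET with no custom provider:

    ```xml
    <!-- Web.config: disable session state application-wide -->
    <configuration>
      <system.web>
        <!-- mode="Off" removes session entirely; any page that still touches
             Session will throw, which quickly flags leftover usages -->
        <sessionState mode="Off" />
      </system.web>
    </configuration>
    ```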

    Read the article

  • Syncing Data to Remote Services, Best Practices for Caching?

    - by viatropos
    I want to be able to publish events to Eventbrite, Eventful, and Google Calendar for my Google Apps. Each service has slightly different properties for events. I will be syncing many other things too, such as users with Google Contacts and MailChimp, documents with Google Docs and some other services, etc. So I'm wondering: what is the recommended way of retrieving the data for the end user, so that it's reasonably maintainable and optimized? Here are the options I'm having trouble choosing between:

    1. My app keeps a central database of all the models (Event, Document, User, Form, etc.), and whenever an admin creates an object (whether through Eventbrite or through our admin panel), we sync it and store a copy in our local database. When a user goes to /events, the app retrieves the events from the local database.
    2. Read events directly from a target feed, such as the Eventbrite or Eventful feed, and scrap the local database.

    Basically, I'm wondering: if we're storing all of the data on a remote service, do we really need to keep a local database copy of the data? When would we need a local database, and when wouldn't we?
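
    For option 1, the local copies usually need to remember which remote object they mirror and when they were last refreshed; a minimal sketch of such a table (all names here are illustrative, not from the question):

    ```sql
    -- Hypothetical local cache of remote events (option 1)
    CREATE TABLE cached_events (
      id         INTEGER PRIMARY KEY,
      service    VARCHAR(32)  NOT NULL,   -- 'eventbrite', 'eventful', 'google'
      remote_id  VARCHAR(255) NOT NULL,   -- ID assigned by the remote service
      title      VARCHAR(255),
      starts_at  DATETIME,
      payload    TEXT,                    -- raw service-specific properties
      synced_at  DATETIME NOT NULL,       -- when this copy was last refreshed
      UNIQUE (service, remote_id)         -- one local row per remote object
    );
    ```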

    Read the article

  • What can cause a large discrepancy between minor GC time and total pause time?

    - by cxcg
    We have a latency-sensitive application and are experiencing some GC-related pauses we don't fully understand. We occasionally have a minor GC that results in application pause times much longer than the reported GC time itself. Here is an example log snippet:

    485377.257: [GC 485378.857: [ParNew: 105845K->621K(118016K), 0.0028070 secs] 136492K->31374K(1035520K), 0.0028720 secs] [Times: user=0.01 sys=0.00, real=1.61 secs]
    Total time for which application threads were stopped: 1.6032830 seconds

    The total pause time here is orders of magnitude longer than the reported GC time. These are isolated and occasional events: the immediately preceding and succeeding minor GC events do not show this large discrepancy. The process is running on a dedicated machine with lots of free memory and 8 cores, running Red Hat Enterprise Linux ES Release 4 Update 8 with kernel 2.6.9-89.0.1EL-smp. We have observed this with (32-bit) JVM versions 1.6.0_13 and 1.6.0_18. We are running with these flags:

    -server -ea -Xms512m -Xmx512m -XX:+UseConcMarkSweepGC -XX:NewSize=128m -XX:MaxNewSize=128m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -XX:-TraceClassUnloading

    Can anybody offer an explanation as to what might be going on here, and/or some avenues for further investigation?
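
    The gap between the reported GC time (0.0028 s) and the total stopped time (1.6 s) is time spent outside the collection proper, often waiting for all threads to reach a safepoint. One way to test that theory, sketched here on the assumption that these HotSpot 1.6 JVMs accept the safepoint-statistics flags, is to add:

    ```
    -XX:+PrintSafepointStatistics        # per-safepoint breakdown, including
    -XX:PrintSafepointStatisticsCount=1  # how long threads took to reach the safepoint
    ```

    If time-to-safepoint turns out to be small as well, the large real vs. user+sys skew in the log would point at the OS level instead (paging or scheduling).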

    Read the article

  • Perl launched from Java takes forever

    - by Wade Williams
    I know this is an absolute shot in the dark, but we're absolutely perplexed. A Perl (5.8.6) script run by Java (1.5) is taking more than an hour to complete. The same script, when run manually from the command line, takes 12 minutes to complete. This is on a Linux host. Logging is the same in both cases, and the script is run with the same parameters in both cases. The script does some complex stuff like Oracle DB access, some scps, etc., but again, it does the exact same actions in both cases. We're stumped. Has anyone ever run into a similar situation? If not, and you were faced with the same situation, how would you go about debugging it?

    Read the article

  • Fast way to pass a simple java object from one thread to another

    - by Adal
    I have a callback which receives an object. I make a copy of this object, and I must pass it on to another thread for further processing. It's very important for the callback to return as fast as possible. Ideally, the callback will write the copy to some sort of lock-free container. I only have the callback called from a single thread and one processing thread. I only need to pass a bunch of doubles to the other thread, and I know the maximum number of doubles (around 40). Any ideas? I'm not very familiar with Java, so I don't know the usual ways to pass stuff between threads.
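
    Since there is exactly one producer (the callback thread) and one consumer, and the payload is a small bounded array of doubles, java.util.concurrent is usually enough; a minimal sketch (class and method names invented for illustration):

    ```java
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class DoubleHandoff {
        // Bounded handoff queue; size it to the worst backlog you can tolerate.
        private final BlockingQueue<double[]> queue =
                new ArrayBlockingQueue<double[]>(1024);

        // Producer side, called from the callback: offer() never blocks,
        // it returns false if the consumer has fallen 1024 batches behind.
        public boolean publish(double[] values) {
            return queue.offer(values.clone()); // clone = the copy the question makes
        }

        // Consumer side, called from the processing thread; parks until data arrives.
        public double[] next() throws InterruptedException {
            return queue.take();
        }
    }
    ```

    A true lock-free ring buffer can shave off a little more latency, but with one producer and one consumer the single lock inside ArrayBlockingQueue is rarely the bottleneck.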

    Read the article

  • What would be a better way of doing the following?

    - by thecoshman
    if (get_magic_quotes_gpc()) {
        $location_name = trim(mysql_real_escape_string(trim(stripslashes($_GET['location_name']))));
    } else {
        $location_name = trim(mysql_real_escape_string(trim($_GET['location_name'])));
    }

    That's the code I have so far. It seems to me this code is fundamentally OK. Do you think I can safely remove the inner trim()? Please try not to spam me with endless variations of this; I want to learn how to do this better.
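
    For what it's worth, the branch can be flattened by undoing magic quotes first and escaping once at the end; a sketch of the same logic (and note that it is arguably the outer trim() that is redundant, since mysql_real_escape_string() does not add leading or trailing whitespace):

    ```php
    <?php
    // Same behaviour, restructured: normalize first, escape last.
    $value = $_GET['location_name'];
    if (get_magic_quotes_gpc()) {
        $value = stripslashes($value);   // undo legacy magic quoting
    }
    $location_name = mysql_real_escape_string(trim($value));
    ```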

    Read the article

  • SQL Database Schema Design for a Large 3-Billion-Relationship Database

    - by K-Bell
    Get your geek on. Can you solve this? I am designing a products database for SQL Server 2008 R2 Ed. (not Enterprise Ed.) that will be used to store custom product configurations for over 30,000 distinct products. The database will have up to 500 users at a time. Here is the design problem:

    Each Product has a collection of Parts (up to 50 parts per product). So if I have 30,000 Products and each of them can have up to 50 Parts, that's 1.5 million distinct Product-to-Part relationships, or as an equation:

    30,000 (Products) x 50 (Parts) = 1.5 million Product-to-Part records

    And each Part can have up to 2,000 finish options (a finish is a paint color). NOTE: only one finish will be selected by a user at run-time. The 2,000 finish options I need to store are the allowed options for a specific part on a specific product. So if I have 1.5 million distinct product-to-part relationships/records, and each of those parts can have up to 2,000 finishes, that is 3 billion allowable product-to-part-to-finish relationships/records, or as an equation:

    1.5 million (Product-to-Parts) x 2,000 (Finishes) = 3 billion Product-to-Part-to-Finish records

    How can I design this database so that I can execute fast and efficient queries for a specific product and return its list of Parts and all the allowable Finishes for each part, without 3 billion Product-to-Part-to-Finish records? Read time is more important than write time. Please post your thoughts/suggestions if you have experience with large databases. Thanks!
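
    For reference, the naive schema implied by the question looks like this (table and column names are illustrative); the third table is the 3-billion-row one, so the design exercise is to factor it away, for example by letting many product-parts share one reusable set of allowed finishes:

    ```sql
    -- Naive schema implied by the question (names illustrative, T-SQL syntax)
    CREATE TABLE Product (product_id INT PRIMARY KEY /* ... */);
    CREATE TABLE Part    (part_id    INT PRIMARY KEY /* ... */);
    CREATE TABLE Finish  (finish_id  INT PRIMARY KEY /* paint color */);

    -- Up to 30,000 x 50 = 1.5 million rows
    CREATE TABLE ProductPart (
      product_part_id INT IDENTITY PRIMARY KEY,
      product_id      INT NOT NULL REFERENCES Product(product_id),
      part_id         INT NOT NULL REFERENCES Part(part_id)
    );

    -- Up to 1.5 million x 2,000 = 3 billion rows: the table to avoid
    CREATE TABLE ProductPartFinish (
      product_part_id INT NOT NULL REFERENCES ProductPart(product_part_id),
      finish_id       INT NOT NULL REFERENCES Finish(finish_id),
      PRIMARY KEY (product_part_id, finish_id)
    );
    ```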

    Read the article

  • Reusing of a PreparedStatement between methods?

    - by MRalwasser
    We all know that we should reuse a JDBC PreparedStatement rather than creating a new instance within a loop. But how should one deal with PreparedStatement reuse between different method invocations? Does the reuse rule still apply? Should I really consider using a field for the PreparedStatement, or should I close and re-create the prepared statement on every invocation? (Of course, an instance of such a class would be bound to a Connection, which might be a disadvantage.) I am aware that the ideal answer might be "it depends", but I am looking for a best practice for less experienced developers, so that they make the right choice in most cases.
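
    For concreteness, the two options being weighed look roughly like this (the DAO, SQL, and names are invented for illustration):

    ```java
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class UserDao {
        private final Connection connection;
        // Option B state: statement kept as a field, bound to this connection,
        // and not thread-safe without external synchronization.
        private PreparedStatement findNameStmt;

        public UserDao(Connection connection) {
            this.connection = connection;
        }

        // Option A: prepare and close on every invocation (simple, always safe).
        public String findNameA(int id) throws SQLException {
            PreparedStatement ps = connection.prepareStatement(
                    "SELECT name FROM users WHERE id = ?");
            try {
                ps.setInt(1, id);
                ResultSet rs = ps.executeQuery();
                return rs.next() ? rs.getString(1) : null;
            } finally {
                ps.close(); // also closes the ResultSet
            }
        }

        // Option B: prepare lazily once, reuse across invocations.
        public String findNameB(int id) throws SQLException {
            if (findNameStmt == null) {
                findNameStmt = connection.prepareStatement(
                        "SELECT name FROM users WHERE id = ?");
            }
            findNameStmt.setInt(1, id);
            ResultSet rs = findNameStmt.executeQuery(); // re-executing closes the previous ResultSet
            return rs.next() ? rs.getString(1) : null;
        }
    }
    ```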

    Read the article

  • WordPress website getting hung up for 10-15 seconds

    - by synergy989
    Problem: I have a website (which loads fine): http://testupg.videve.com/ Go to the blog section and it begins to load, gets hung up, then loads: http://testupg.videve.com/blog/ It's all one WordPress install. The only pages that are getting hung up are the blog page itself and any article page. The video pages load fine, etc.

    What I have done and noticed: If I disable all plugins, the blog page loads fine. I narrowed it down to the DisplayBuddy plugins (Featured Posts, Carousel, Slider); as soon as I enable any single one of them, the site loads slow. The thing that doesn't make sense is this: I disabled the sidebar (deleted it from the template) and the article page still loaded slow, even though it has ZERO instances of this plugin. I disable the plugin and the article page loads fine. How on earth can this be? I am hoping the case above can give just a little bit of insight! One other thing: the video pages use a custom taxonomy, so I am wondering if it's something to do with the default WordPress taxonomy. Any help is GREATLY appreciated; I have been at this thing for hours and it's time to call in some support. Cheers

    Read the article

  • MySQL left outer join is slow

    - by Ryan Doherty
    Hi, hoping to get some help with this query; I've worked at it for a while now and can't get it any faster:

    SELECT date, count(id) as 'visits'
    FROM dates
    LEFT OUTER JOIN visits
      ON (dates.date = DATE(visits.start) and account_id = 40)
    WHERE date >= '2010-12-13' AND date <= '2011-1-13'
    GROUP BY date
    ORDER BY date ASC

    That query takes about 8 seconds to run. I've added indexes on dates.date, visits.start, visits.account_id, and visits.start + visits.account_id, and can't get it to run any faster. Table structure (only showing relevant columns in the visits table):

    create table visits (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `account_id` int(11) NOT NULL,
      `start` DATETIME NOT NULL,
      `end` DATETIME NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    CREATE TABLE `dates` (
      `date` date NOT NULL,
      PRIMARY KEY (`date`)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    The dates table contains all days from 2010-1-1 to 2020-1-1 (~3k rows). The visits table contains about 400k rows dating from 2010-6-1 to yesterday. I'm using the dates table so the join will return 0 visits for days on which there were no visits. Results I want, for reference:

    +------------+--------+
    | date       | visits |
    +------------+--------+
    | 2010-12-13 |    301 |
    | 2010-12-14 |    356 |
    | 2010-12-15 |    423 |
    | 2010-12-16 |    332 |
    | 2010-12-17 |    346 |
    | 2010-12-18 |    226 |
    | 2010-12-19 |    213 |
    | 2010-12-20 |    311 |
    | 2010-12-21 |    273 |
    | 2010-12-22 |    286 |
    | 2010-12-23 |    241 |
    | 2010-12-24 |    149 |
    | 2010-12-25 |    102 |
    | 2010-12-26 |    174 |
    | 2010-12-27 |    258 |
    | 2010-12-28 |    348 |
    | 2010-12-29 |    392 |
    | 2010-12-30 |    395 |
    | 2010-12-31 |    278 |
    | 2011-01-01 |    241 |
    | 2011-01-02 |    295 |
    | 2011-01-03 |    369 |
    | 2011-01-04 |    438 |
    | 2011-01-05 |    393 |
    | 2011-01-06 |    368 |
    | 2011-01-07 |    435 |
    | 2011-01-08 |    313 |
    | 2011-01-09 |    250 |
    | 2011-01-10 |    345 |
    | 2011-01-11 |    387 |
    | 2011-01-12 |      0 |
    | 2011-01-13 |      0 |
    +------------+--------+

    Thanks in advance for any help!

    Read the article

  • Does having a longer string in a SQL LIKE expression hinder or help query execution speed?

    - by Allain Lalonde
    I have a db query that'll cause a full table scan using a LIKE clause, and came upon a question I was curious about: which of the following should run faster in MySQL, or would they both run at the same speed? Benchmarking might answer it in my case, but I'd like to know the why of the answer. The column being filtered contains a couple thousand characters, if that's important.

    SELECT * FROM users WHERE data LIKE '%=12345%'

    or

    SELECT * FROM users WHERE data LIKE '%proileId=12345%'

    I can come up with reasons why each of these might outperform the other, but I'm curious to know the logic.

    Read the article

  • Browser timing out attempting to load images

    - by notJim
    I've got a page on a webapp that has about 13 images that are generated by my application, which is written in the Kohana PHP framework. The images are actually graphs. They are cached so they are only generated once, but the first time the user visits the page and the images all have to be generated, about half of the images don't load in the browser. Once the page has been requested once and the images are cached, they all load successfully. Doing some ad-hoc testing, if I load an individual image in the browser, it takes 450-700 ms to load with an empty cache (I checked this using Google Chrome's resource tracking feature). For reference, it takes around 90-150 ms to load a cached image. Even if the image cache is empty, I have the data and some of the application's startup tasks cached, so that after the first request none of that data needs to be fetched. My questions are:

    1. Why are the images failing to load? It seems like the browser just decides not to download the images after a certain point, rather than waiting for them all to finish loading.
    2. What can I do to get them to load the first time, with an empty cache? Obviously one option is to decrease the load times, and I could figure out how to do that by profiling the app, but are there other options?

    As I mentioned, the app is in the Kohana PHP framework, and it's running on Apache. As an aside, I've solved this problem for now by fetching the page as soon as the data is available (it comes from a batch process), so that the images are always cached by the time the user sees them. That feels like a kludgey solution to me, though, and I'm curious about what's actually going on.

    Read the article

  • Best practice for handling memory leaks in large Java projects?

    - by knorv
    In almost all larger Java projects I've been involved with I've noticed that the quality of service of the application degrades with the uptime of the container. This is most probably due to memory leaks in the code. The correct way to solve this problem is obviously to trace back to the root cause of the problem and fix the leaks in the code. The quick and dirty way of solving the problem is simply restarting Tomcat (or whichever servlet container you're using). These are my three questions:

    1. Assume that you choose to solve the problem by tracing the root cause of the problem (the memory leaks): how would you collect data to zoom in on the problem?
    2. Assume that you choose the quick and dirty way of speeding things up by simply restarting the container: how would you collect data to choose the optimal restart cycle?
    3. Have you been able to deploy and run projects over an extended period of time without ever restarting the servlet container to regain snappiness? Or is an occasional servlet restart something that one has to simply accept?
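
    For question 1, a common starting point, offered here as a sketch rather than anything from the question, is to take heap dumps from the running container some hours apart and diff them in an analyzer such as Eclipse MAT or jhat:

    ```sh
    # Dump the live heap of a running JVM (JDK 6+); <pid> is the container's process id
    jmap -dump:live,format=b,file=heap-before.hprof <pid>
    # ... let the suspected leak accumulate, then snapshot again for comparison ...
    jmap -dump:live,format=b,file=heap-after.hprof <pid>

    # Alternatively, have the JVM dump automatically when the leak finally bites:
    # java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp ...
    ```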

    Read the article

  • How to copy files without slowing down my app?

    - by Kevin Gebhardt
    I have a bunch of little files in my assets which need to be copied to the SD card on the first start of my app. The copy code I got from here, placed in an IntentService, works like a charm. However, when I start to copy many little files, the whole app gets incredibly slow (I'm not really sure why, by the way), which is a really bad experience for the user on first start. As I noticed other apps running normally during that time, I tried to start a child process for the service, which didn't work, as I can't access my assets from another process, as far as I understood. Has anybody out there an idea how a) to copy the files without blocking my app, b) to get through to my assets from a private process (process=":myOtherProcess" in the Manifest), or c) to solve the problem in a completely different way?

    Edit: To make this clearer: the copying already takes place in a separate thread (started automatically by IntentService). The problem is not to separate the task of copying, but that the copying in a dedicated thread somehow affects the rest of the app (e.g. blocking too many app-specific resources?) but not other apps (so it's not blocking the whole CPU or something).

    Edit 2: Problem solved. It turns out there wasn't really a problem. See my answer below.
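
    On the chance that the copy loop is winning scheduling and I/O contention against the UI thread, one cheap experiment for (a), assuming a stock IntentService, is to demote the worker thread before copying:

    ```java
    // Fragment of the IntentService subclass (Intent import etc. as usual).
    // THREAD_PRIORITY_BACKGROUND asks the scheduler to favor the UI thread.
    @Override
    protected void onHandleIntent(Intent intent) {
        android.os.Process.setThreadPriority(
                android.os.Process.THREAD_PRIORITY_BACKGROUND);
        // ... existing asset-copy loop ...
    }
    ```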

    Read the article

  • Time complexity of a powerset generating function

    - by Lirik
    I'm trying to figure out the time complexity of a function that I wrote (it generates a power set for a given string):

    public static HashSet<string> GeneratePowerSet(string input)
    {
        HashSet<string> powerSet = new HashSet<string>();
        if (string.IsNullOrEmpty(input))
            return powerSet;

        int powSetSize = (int)Math.Pow(2.0, (double)input.Length);

        // Start at 1 to skip the empty string case
        for (int i = 1; i < powSetSize; i++)
        {
            string str = Convert.ToString(i, 2);
            string pset = str;
            for (int k = str.Length; k < input.Length; k++)
            {
                pset = "0" + pset;
            }
            string set = string.Empty;
            for (int j = 0; j < pset.Length; j++)
            {
                if (pset[j] == '1')
                {
                    set = string.Concat(set, input[j].ToString());
                }
            }
            powerSet.Add(set);
        }
        return powerSet;
    }

    So my attempt is this: let the size of the input string be n. The outer for loop must iterate 2^n times (because the set size is 2^n), and the inner for loops must iterate 2*n times (at worst).

    1. So Big-O would be O((2^n)*n) (since we drop the constant 2)... is that correct? And n*(2^n) is worse than n^2:

       if n = 4 then 4*(2^4) = 64 and 4^2 = 16
       if n = 10 then 10*(2^10) = 10240 and 10^2 = 100

    2. Is there a faster way to generate a power set, or is this about optimal?
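
    On question 2: the enumeration itself is inherently O(n * 2^n), but the constant factor can be cut by testing the bits of i directly instead of round-tripping through Convert.ToString(i, 2) and "0"-padding. A sketch (same output set; the character order within each subset may differ):

    ```csharp
    using System.Collections.Generic;
    using System.Text;

    public static class PowerSet
    {
        public static HashSet<string> Generate(string input)
        {
            HashSet<string> powerSet = new HashSet<string>();
            if (string.IsNullOrEmpty(input)) return powerSet;

            int powSetSize = 1 << input.Length;        // 2^n, no floating point needed
            StringBuilder sb = new StringBuilder(input.Length);

            for (int i = 1; i < powSetSize; i++)       // start at 1 to skip the empty set
            {
                sb.Length = 0;                         // reuse the builder, no padding
                for (int j = 0; j < input.Length; j++)
                {
                    if ((i & (1 << j)) != 0)           // bit j set => include input[j]
                        sb.Append(input[j]);
                }
                powerSet.Add(sb.ToString());
            }
            return powerSet;
        }
    }
    ```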

    Read the article

  • How to avoid web server traffic peak resulting from iOS Newsstand app receiving a remote notification?

    - by thomers
    I'm developing an iOS Newsstand app. If it is suspended or not running and the device is connected to a WLAN, Newsstand apps can be triggered by a push remote notification to download the latest issue (in our case around 100 MB) in the background. I'm using Urban Airship for delivery of the push broadcast. I'm now worried about many, many iOS devices hitting the web server for one big download more or less at the same time, because I expect the majority of the devices to receive the notification within a very short timeframe. Instead of broadcasting to all devices, should I rather send individual notifications to small batches of devices, spreading them out over a longer period of time? And/or would a CDN like Amazon CloudFront solve that issue more easily anyway?

    Read the article

  • Is it faster to filter and get data, or filter then get data?

    - by remi bourgarel
    Hi, I have this kind of request:

    SELECT myTable.ID,
           myTable.Adress,
           -- 20 more columns of all kinds of types
    FROM myTable
    WHERE EXISTS (SELECT *
                  FROM myLink
                  WHERE myLink.FID = myTable.ID AND myLink.FID2 = 666)

    myLink has a lot of rows. Do you think it's faster to do it like this?

    SELECT myLink.FID INTO @result
    FROM myLink
    WHERE myLink.FID2 = 666

    UPDATE @result
    SET Adress = myTable.Adress,
        -- 20 more columns of all kinds of types
    FROM myTable
    WHERE myTable.ID = @result.ID

    Read the article

  • What are the suggested alternatives for Class<T>.isAssignableFrom(Class<?> cls)?

    - by Wing C. Chen
    Currently I am profiling a piece of code. During the profiling, I discovered that this one method call, Class<T>.isAssignableFrom(Class<?> cls), takes up quite a large share of the entire time. Because this is a reflection method, it takes a lot of time compared to normal keywords or method calls. I am wondering if there are some good alternatives to this method call?
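
    When the same class pairs recur, one workaround, sketched here as a suggestion rather than a known-good fix, is to memoize the answers so the reflective check runs once per class; where the target type is fixed at compile time, a plain instanceof test on the instance is cheaper still:

    ```java
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    final class AssignabilityCache {
        private final Class<?> target;
        private final Map<Class<?>, Boolean> cache =
                new ConcurrentHashMap<Class<?>, Boolean>();

        AssignabilityCache(Class<?> target) {
            this.target = target;
        }

        boolean isAssignableFrom(Class<?> cls) {
            Boolean cached = cache.get(cls);
            if (cached == null) {
                cached = Boolean.valueOf(target.isAssignableFrom(cls)); // pay once
                cache.put(cls, cached); // benign race: the value is deterministic
            }
            return cached.booleanValue();
        }
    }
    ```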

    Read the article

  • C++, using one byte to store two variables

    - by 2di
    Hi all, I am working on a representation of the chess board, and I am planning to store it in a 32-byte array, where each byte will be used to store two pieces (that way only 4 bits are needed per piece). Doing it that way results in overhead for accessing a particular index of the board. Do you think this code can be optimised, or that a completely different method of accessing indexes could be used?

    #include <cstdlib>
    #include <iostream>
    using namespace std;

    char getPosition(unsigned char* c, int index) {
        // move the pointer to the byte holding this index
        c += (index >> 1);
        if (index & 1) {
            // odd index: take the right (low) nibble
            return *c & 0xF;
        } else {
            // even index: take the left (high) nibble
            return *c >> 4;
        }
    }

    void setValue(unsigned char* board, char value, int index) {
        // move the pointer to the byte holding this index
        board += (index >> 1);
        if (index & 1) {
            // odd index: replace the right nibble, keeping the left 4 bits
            *board = (*board & 0xF0) + value;
        } else {
            // even index: replace the left nibble
            *board = (*board & 0xF) + (value << 4);
        }
    }

    int main() {
        char* c = (char*)malloc(32);
        for (int i = 0; i < 64; i++) {
            setValue((unsigned char*)c, i % 8, i);
        }
        for (int i = 0; i < 64; i++) {
            cout << (int)getPosition((unsigned char*)c, i) << " ";
            if (((i + 1) % 8 == 0) && (i > 0)) {
                cout << endl;
            }
        }
        return 0;
    }

    I am equally interested in your opinions regarding chess representations and in optimisation of the method above as a standalone problem. Thanks a lot.

    Read the article

  • Slow first page load on asp.net site

    - by Tabloo Quijico
    Hi, every now and then (always after a long period of idle time, e.g. overnight), when I access a site built using ASP.NET, it takes around 15 seconds to load the page (15 seconds before I see any progress whatsoever; then the page comes up fast). Further pages on that site, or refreshes, are quick as usual; they are also fast on other machines. Only the first request seems to take the hit. Page tracing never threw anything up (the whole cycle was a fraction of a second). So my question is: where else should I be looking? Perhaps IIS? Or could it still be my ASP.NET app, and I'm just looking in the wrong place (the trace) for clues? As I don't have much control over the IIS server, anything I can check through ASP.NET would be more helpful before I go ask that particular admin. Cheers :D

    Read the article

  • How to speed up this kind of for-loop?

    - by wok
    I would like to compute the maximum of translated images along the direction of a given axis. I know about ordfilt2, however I would like to avoid using the Image Processing Toolbox. So here is the code I have so far:

    imInput = imread('tire.tif');
    n = 10;
    imMax = imInput(:, n:end);
    for i = 1:(n-1)
        imMax = max(imMax, imInput(:, i:end-(n-i)));
    end

    Is it possible to avoid using a for-loop in order to speed the computation up, and, if so, how?

    Read the article

  • Counting context switches per thread

    - by Sarmun
    Is there a way to see how many context switches each thread generates (both in and out, if possible)? Either as a rate (per second), or letting it run and giving aggregated data after some time. (On either Linux or Windows.) I have found only tools that give an aggregated context-switch number for the whole OS or per process. My program makes many context switches (50k/s), probably a lot of them unnecessary, but I am not sure where to start optimizing, i.e. where most of them happen.
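
    On Linux, one tool that does report per-thread numbers, mentioned here as a suggestion rather than anything from the question, is pidstat from the sysstat package:

    ```sh
    # Per-thread (-t) context-switch rates (-w), sampled every second for <pid>
    pidstat -wt -p <pid> 1
    # cswch/s   = voluntary switches (thread blocked: I/O, locks, sleeps)
    # nvcswch/s = non-voluntary switches (thread preempted by the scheduler)
    ```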

    Read the article

  • Any reason not to use USE_ETAGS with CommonMiddleware in Django?

    - by allyourcode
    The only reason I can think of is that calculating ETags might be expensive. If pages change very quickly, the browser's cache is likely to be invalidated by the ETag anyway; in that case, calculating the ETag would be a waste of time. On the other hand, giving a 304 response when possible minimizes the time spent in transmission. What are some good guidelines for when ETags are likely to be a net winner when implemented with Django's CommonMiddleware?
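
    For reference, wiring this up takes two settings; a minimal sketch of the relevant lines of settings.py, assuming a 1.x-era Django where CommonMiddleware handles ETags:

    ```python
    # settings.py: with USE_ETAGS on, CommonMiddleware computes an ETag for
    # each response and answers matching conditional requests with 304.
    USE_ETAGS = True

    MIDDLEWARE_CLASSES = (
        'django.middleware.common.CommonMiddleware',
        # ... other middleware ...
    )
    ```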

    Read the article
