Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.


  • Slow retrieval of data from SQLite takes a long time using ContentProvider

    - by Arlyn
    I have an application in Android (running 4.0.3) that stores a lot of data in Table A, which resides in an SQLite database. I am using a ContentProvider as an abstraction layer above the database. A lot of data here means almost 80,000 records per month. Table A is structured like this:

        String SQL_CREATE_TABLE = "CREATE TABLE IF NOT EXISTS " + TABLE_A + " ( "
            + COLUMN_ID + " INTEGER PRIMARY KEY NOT NULL"
            + "," + COLUMN_GROUPNO + " INTEGER NOT NULL DEFAULT(0)"
            + "," + COLUMN_TIMESTAMP + " DATETIME UNIQUE NOT NULL"
            + "," + COLUMN_TAG + " TEXT"
            + "," + COLUMN_VALUE + " REAL NOT NULL"
            + "," + COLUMN_DEVICEID + " TEXT NOT NULL"
            + "," + COLUMN_NEW + " NUMERIC NOT NULL DEFAULT(1)"
            + " )";

    Here is the index statement:

        String SQL_CREATE_INDEX_TIMESTAMP = "CREATE INDEX IF NOT EXISTS "
            + TABLE_A + "_" + COLUMN_TIMESTAMP
            + " ON " + TABLE_A + " (" + COLUMN_TIMESTAMP + ") ";

    I have defined the columns as well as the table name as String constants. I am already experiencing a significant slowdown when retrieving this data from Table A. When I retrieve data from this table, I first put it in an ArrayList and then display it; obviously, this is possibly the wrong way of doing things, and I am trying to find a better approach using a ContentProvider. But this is not the problem that bothers me. The problem is that, for some reason, it takes a lot longer to retrieve data from the other tables, which have only up to 12 records each, and I see this delay increase as the number of records in Table A increases. This does not make any sense: I can understand the delay when retrieving data from Table A, but why the delay in retrieving data from the other tables? To clarify, I do not experience this delay if Table A is empty or has fewer than 3,000 records. What could be the problem?

    Read the article

  • devArt's dotConnect for Oracle vs. ODP.NET/OCI performance

    - by Sieg
    Does anybody have any experience going from ODP.NET to devArt's dotConnect for Oracle? Some initial testing shows Direct Connect in 64-bit dotConnect at times running 30% slower than our original 32-bit ODP.NET/OCI solution. I'm trying to determine whether that's normal or whether something may be wrong in my testing approach. Thanks!
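
    One way to rule out the testing approach is to run the exact same query loop against both providers through ADO.NET's provider-factory abstraction, so the only variable is the driver itself. A minimal sketch, assuming both providers are registered on the machine; the invariant names, connection strings, and the sample query are placeholders to adapt:

        using System;
        using System.Data.Common;
        using System.Diagnostics;

        class ProviderBenchmark
        {
            static void Main()
            {
                TimeProvider("Oracle.DataAccess.Client", "User Id=scott;Password=tiger;Data Source=orcl");
                TimeProvider("Devart.Data.Oracle", "User Id=scott;Password=tiger;Server=orcl;Direct=true");
            }

            static void TimeProvider(string invariantName, string connectionString)
            {
                DbProviderFactory factory = DbProviderFactories.GetFactory(invariantName);
                using (DbConnection conn = factory.CreateConnection())
                {
                    conn.ConnectionString = connectionString;
                    conn.Open();

                    // Warm up once so connection setup and plan caching don't skew the numbers.
                    RunQuery(conn);

                    Stopwatch sw = Stopwatch.StartNew();
                    for (int i = 0; i < 1000; i++)
                        RunQuery(conn);
                    sw.Stop();
                    Console.WriteLine("{0}: {1} ms", invariantName, sw.ElapsedMilliseconds);
                }
            }

            static void RunQuery(DbConnection conn)
            {
                using (DbCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "SELECT * FROM employees WHERE rownum <= 100";
                    using (DbDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read()) { /* drain the result set */ }
                    }
                }
            }
        }

    Because the identical code path exercises both drivers, a consistent gap here points at the provider (or the 32/64-bit difference) rather than the harness.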

    Read the article

  • Using TCP Acks to measure latency to a server?

    - by Ted Graham
    I am trying to measure latency to a server that I don't control. This is in a colocated environment, so the latency is on the order of 500 us (.5 ms). I understand that Cisco gear frequently deprioritizes ICMP traffic, making ping times unreliable. Is there a way for me to tell if this is the case on the gear I am traversing? Can I use TCP acknowledgements to determine the minimum latency to the remote server? To do this, I would somehow need to force the remote server to send a TCP ack immediately on receiving my data.
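
    One approximation that sidesteps ICMP entirely: time the TCP three-way handshake. The SYN/SYN-ACK exchange is answered by the remote kernel's TCP stack, not the server application, so the minimum over many samples approaches the base network RTT. A sketch (the host and port are hypothetical; any open TCP port on the target works):

        using System;
        using System.Diagnostics;
        using System.Net;
        using System.Net.Sockets;

        class TcpRttProbe
        {
            static void Main()
            {
                const string host = "remote-server.example.com"; // hypothetical target
                const int port = 443;                            // any open TCP port
                const int samples = 20;

                // Resolve once so DNS lookups don't pollute the timing.
                IPEndPoint endpoint = new IPEndPoint(Dns.GetHostAddresses(host)[0], port);

                double best = double.MaxValue;
                for (int i = 0; i < samples; i++)
                {
                    using (Socket socket = new Socket(AddressFamily.InterNetwork,
                                                      SocketType.Stream, ProtocolType.Tcp))
                    {
                        Stopwatch sw = Stopwatch.StartNew();
                        socket.Connect(endpoint); // SYN -> SYN/ACK serviced by the remote kernel
                        sw.Stop();
                        best = Math.Min(best, sw.Elapsed.TotalMilliseconds);
                    }
                }
                Console.WriteLine("Estimated minimum RTT: {0:F3} ms", best);
            }
        }

    This won't expose individual ACK timestamps for your data packets (that needs a packet capture), but as a minimum-latency estimate it avoids the ICMP deprioritization problem.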

    Read the article

  • Cache of Objects or Output in View? Which is better?

    - by Felipe
    Hi everybody, I have an ecommerce site built in ASP.NET MVC. I'm using caching to improve performance in my pages and it's working fine. I'd like to know which performs better: I can set OutputCache on my views and use that cache for the whole page, OR I can get my list of products in the controller, put it in the cache (like the code below) and send it to the view to render for the user:

        private IEnumerable<Products> GetProductsCache(string key, ProductType type)
        {
            if (HttpContext.Cache[key] == null)
                HttpContext.Cache.Insert(key, ProductRepository.GetProducts(type), null,
                    DateTime.Now.AddMinutes(10), Cache.NoSlidingExpiration);
            return (IEnumerable<Products>)HttpContext.Cache[key];
        }

        public ActionResult Index()
        {
            var home = new HomeViewModel()
            {
                Products = GetProductsCache("ProductHomeCache", ProductType.Product),
                Services = GetProductsCache("ServiceHomeCache", ProductType.Service)
            };
            return View(home);
        }

    Both work fine, but I'd like to know which is the suggested way to get more performance, or whether there is a better way to do it. PS: Sorry for my English! Thanks all... Cheers
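
    For comparison, the whole-page alternative is a sketch like the following, using the standard OutputCache action filter and reusing the types from the question; the duration and VaryByParam values are illustrative:

        using System.Web.Mvc;

        public class HomeController : Controller
        {
            // Caches the fully rendered HTML for 10 minutes; on a cache hit,
            // neither this action body nor the view executes at all.
            [OutputCache(Duration = 600, VaryByParam = "none")]
            public ActionResult Index()
            {
                var home = new HomeViewModel()
                {
                    Products = ProductRepository.GetProducts(ProductType.Product),
                    Services = ProductRepository.GetProducts(ProductType.Service)
                };
                return View(home);
            }
        }

    The trade-off: on a hit, output caching skips model building and view rendering entirely, so it is generally the faster of the two, but it serves the same HTML to every user; the object cache keeps per-request rendering, so any user-specific parts of the page stay dynamic.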

    Read the article

  • Strategy Pattern with Type Reflection affecting Performance?

    - by Aurélien Ribon
    Hello! I am building graphs. A graph consists of nodes linked to each other with links (indeed, my dear). In order to assign a given behavior to each node, I implemented the strategy pattern:

        class Node
        {
            public BaseNodeBehavior Behavior { get; set; }
        }

    As a result, in many parts of the application, I am extensively using type reflection (the "is" operator) to know which behavior a node has:

        if (node.Behavior is NodeDataOutputBehavior)
            workOnOutputNode(node);
        ...

    My graph can get thousands of nodes. Is this type checking greatly affecting performance? Should I use something other than the strategy pattern? I'm using strategy because I need behavior inheritance. For example, a behavior can be Data or Operator; a Data behavior can be IO, Const or Intermediate; and finally an IO behavior can be Input or Output. So if I used an enumeration, I wouldn't be able to test whether a node's behavior is of the Data kind; I would need to test whether it is [Input, Output, Const or Intermediate]. And if later I want to add another behavior of the Data kind, I'm screwed: every data-testing method will need to be changed.
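
    One way to avoid the scattered type tests while keeping the behavior hierarchy is to push the dispatch into the behavior classes themselves via a virtual method. A minimal sketch along those lines; the Process/GraphWalker names are illustrative, not from the original code:

        abstract class BaseNodeBehavior
        {
            // Each concrete behavior knows how to process its own node,
            // so callers never need an "is" cascade.
            public abstract void Process(Node node);
        }

        class NodeDataOutputBehavior : BaseNodeBehavior
        {
            public override void Process(Node node)
            {
                // ... the output-specific work formerly in workOnOutputNode(node)
            }
        }

        class Node
        {
            public BaseNodeBehavior Behavior { get; set; }
        }

        static class GraphWalker
        {
            public static void Work(Node node)
            {
                node.Behavior.Process(node); // one virtual call replaces the type checks
            }
        }

    For what it's worth, the "is" operator is a simple runtime type test, not reflection proper, so it is cheap even over thousands of nodes; but the virtual call above is both faster and easier to extend, since adding a new Data-kind behavior means adding one class rather than touching every testing method.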

    Read the article

  • Good .NET library for fast streaming / batching trigonometry (Atan)?

    - by Sean
    I need to call Atan on millions of values per second. Is there a good library to perform this operation in batch very fast? For example, a library that streams the low-level logic using something like SSE? I know that there is support for this in OpenCL, but I would prefer to do this operation on the CPU; the target machine might not support OpenCL. I also looked into using OpenCV, but its accuracy for Atan angles is only ~0.3 degrees, and I need accurate results.
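
    Absent a vectorized library, a simple way to get most of the throughput on a multi-core CPU is to batch the calls and let the TPL spread contiguous chunks across cores; Math.Atan stays full double precision, unlike table- or polynomial-based approximations. A minimal sketch:

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        static class BatchTrig
        {
            // Fills results[i] = Math.Atan(values[i]) using all available cores.
            public static void AtanBatch(double[] values, double[] results)
            {
                if (values.Length != results.Length)
                    throw new ArgumentException("Arrays must have the same length.");

                // Explicit range partitioning keeps each core on a contiguous,
                // cache-friendly chunk and avoids per-element delegate overhead.
                Parallel.ForEach(Partitioner.Create(0, values.Length), range =>
                {
                    for (int i = range.Item1; i < range.Item2; i++)
                        results[i] = Math.Atan(values[i]);
                });
            }
        }

    A true SSE path would still need a native library or a hand-rolled polynomial approximation, which is exactly where the accuracy trade-off (as with OpenCV) comes back in.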

    Read the article

  • MySQL: Performance of NOT EXISTS. Is it possible to improve performance?

    - by petRUShka
    I have two tables, posts and comments. The comments table has a post_id attribute. I need to get all posts with type "open" for which there are no comments with type "good" and a created date of May 1. Is it optimal to use an SQL query like this:

        SELECT posts.*
        FROM posts
        WHERE NOT EXISTS (
            SELECT comments.id
            FROM comments
            WHERE comments.post_id = posts.id
              AND comments.comment_type = 'good'
              AND comments.created_at BETWEEN '2010-05-01 00:00:00'
                                          AND '2010-05-01 23:59:59')

    I'm not sure that NOT EXISTS is the best construct in this situation.

    Read the article

  • How can "set timestamp" be a slow query?

    - by Peder
    My slow query log is full of entries like the following:

        # Query_time: 1.016361  Lock_time: 0.000000  Rows_sent: 0  Rows_examined: 0
        SET timestamp=1273826821;
        COMMIT;

    I guess the SET timestamp command is issued by replication, but I don't understand how SET timestamp can take over a second. Any ideas?

    Read the article

  • Threading cost - the minimum execution time at which threads would add speed

    - by Lukas
    I am working on a C# application that works with an array. It walks through the array, meaning that at any one time only a narrow part of the array is used. I am considering adding threads to make it perform faster (it runs on a dual-core computer). The problem is that I do not know if that would actually help, because threads cost something, and this cost could easily be more than the parallel gain... So how do I determine whether threading would help?
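
    The most direct answer is to measure both versions at realistic input sizes and find the break-even point. A rough sketch; the DoWork body is a stand-in for the real per-element computation:

        using System;
        using System.Diagnostics;
        using System.Threading.Tasks;

        class ThreadingBreakEven
        {
            static void Main()
            {
                foreach (int n in new[] { 1000, 10000, 100000, 1000000 })
                {
                    double[] data = new double[n];

                    Stopwatch seq = Stopwatch.StartNew();
                    for (int i = 0; i < n; i++)
                        data[i] = DoWork(i);
                    seq.Stop();

                    Stopwatch par = Stopwatch.StartNew();
                    Parallel.For(0, n, i => { data[i] = DoWork(i); });
                    par.Stop();

                    Console.WriteLine("n={0,8}: sequential {1,6} ms, parallel {2,6} ms",
                        n, seq.ElapsedMilliseconds, par.ElapsedMilliseconds);
                }
            }

            // Placeholder for the real per-element computation.
            static double DoWork(int i)
            {
                return Math.Sqrt(i) * Math.Log(i + 1);
            }
        }

    Threading helps once the parallel column drops below the sequential one; with trivial per-element work, thread and scheduling overhead usually dominates until n gets large, which is exactly the cost-versus-gain trade-off in question.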

    Read the article

  • Getting "on the wire" Size of Messages in WCF

    - by Mystagogue
    While I'm making SOAP or REST invocations to WCF, I'd like to have the channel stack on either end (client and server) record the on-the-wire size of the data received. So I'm guessing I need to add a custom behavior to the channel stack on either side. That is, on the server side I'd record the IP-header advertised size that was received. On the client side I'd record the IP-header advertised size that was returned from the server. But this presupposes that this information is visible to a custom WCF behavior at the channel stack level. Perhaps it is only visible at the level of ASP.NET (at a layer beneath WCF)? In short, does anyone have any further insight on if and how this information is accessible? I must qualify that this "size" data will be collected in a production environment, as part of regular business logic calls. This question is related to my earlier bandwidth question.
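
    At the channel-stack level, a custom behavior can at least capture the serialized message size (the encoded bytes WCF hands to the transport); the IP-header-advertised size sits below WCF's view. A hedged sketch of the client side using the standard IClientMessageInspector extension point; the logging is illustrative:

        using System;
        using System.IO;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Dispatcher;
        using System.Xml;

        class SizeLoggingInspector : IClientMessageInspector
        {
            public object BeforeSendRequest(ref Message request, IClientChannel channel)
            {
                // Messages are read-once, so buffer a copy and hand a fresh one back.
                MessageBuffer buffer = request.CreateBufferedCopy(int.MaxValue);
                request = buffer.CreateMessage();

                using (MemoryStream ms = new MemoryStream())
                using (XmlDictionaryWriter writer = XmlDictionaryWriter.CreateTextWriter(ms))
                {
                    buffer.CreateMessage().WriteMessage(writer);
                    writer.Flush();
                    // Serialized SOAP size; the wire adds HTTP and IP framing on top.
                    Console.WriteLine("Request: ~{0} bytes", ms.Length);
                }
                return null;
            }

            public void AfterReceiveReply(ref Message reply, object correlationState)
            {
                // The same buffered-copy measurement can be applied to the reply here.
                MessageBuffer buffer = reply.CreateBufferedCopy(int.MaxValue);
                reply = buffer.CreateMessage();
            }
        }

    Wiring this in takes an IEndpointBehavior that adds the inspector to clientRuntime.MessageInspectors. Note the caveat: this measures the encoder's output, not the IP-advertised length; for the true on-the-wire size you would need a capture below the application, e.g. at the network driver.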

    Read the article

  • Optimizing MySQL for ALTER TABLE of InnoDB

    - by schuilr
    Sometime soon we will need to make schema changes to our production database. We need to minimize downtime for this effort; however, the ALTER TABLE statements are going to run for quite a while. Our largest tables have 150 million records, and the largest table file is 50 GB. All tables are InnoDB, and it was set up as one big data file (instead of file-per-table). We're running MySQL 5.0.46 on an 8-core machine with 16 GB memory and a RAID 10 config. I have some experience with MySQL tuning, but it usually focuses on reads or writes from multiple clients. There is lots of information to be found on the Internet on this subject; however, there seems to be very little available on best practices for (temporarily) tuning your MySQL server to speed up ALTER TABLE on InnoDB tables, or for INSERT INTO .. SELECT FROM (we will probably use this instead of ALTER TABLE to have some more opportunities to speed things up a bit). The schema change we are planning is to add an integer column to all tables and make it the primary key in place of the current primary key. We need to keep the 'old' column as well, so overwriting the existing values is not an option. What would be the ideal settings to get this task done as quickly as possible?

    Read the article

  • SQL Server Multiple Joins Are Taxing The CPU

    - by durilai
    I have a stored procedure on SQL Server 2005. It is pulling from a table function and has two joins. When the query is run under a load test, it pushes the CPU to 100% across all 16 cores! I have determined that removing one of the joins makes the query run fine, but with both it taxes the CPU:

        SELECT SKey
        FROM dbo.tfnGetLatest(@ID) a
        LEFT JOIN [STAGING].dbo.RefSrvc b ON a.LID = b.ESIID
        LEFT JOIN [STAGING].dbo.RefSrvc c ON a.EID = c.ESIID

    Any help is appreciated; note the joins happen on the same table in a different database on the same server.

    Read the article

  • PL/SQL profiler missing data

    - by user289429
    We are using the PL/SQL profiler to collect metrics. We noticed that in one of the environments the plsql_profiler_runs table is populated with the total execution time, but the finer details that get collected in the plsql_profiler_data table are missing. Any idea why this would be happening? We do call dbms_profiler.flush_data() before stopping the profiler, and have seen this work fine in another environment.

    Read the article

  • Very slow Eclipse 4.2, how to make it more responsive?

    - by Laurent
    I'm using Eclipse PDT on a rather large PHP project and the IDE is almost unusable. It takes nearly 30 seconds to open a file, and other actions, like selecting a folder in the file explorer, editing some text, etc., are equally slow. I followed various instructions to speed it up but nothing seems to work. This is my current eclipse.ini file. Any idea how I can improve it?

        -startup
        plugins/org.eclipse.equinox.launcher_1.3.0.v20120522-1813.jar
        --launcher.library
        plugins/org.eclipse.equinox.launcher.win32.win32.x86_1.1.200.v20120522-1813
        -showsplash
        org.eclipse.platform
        --launcher.XXMaxPermSize
        256m
        --launcher.defaultAction
        openFile
        -vmargs
        -server
        -Dosgi.requiredJavaVersion=1.7
        -Xmn128m
        -Xms1024m
        -Xmx1024m
        -Xss2m
        -XX:PermSize=128m
        -XX:MaxPermSize=128m
        -XX:+UseParallelGC

    System: Eclipse 4.2.0, Windows 7, 4 GB RAM

    Read the article

  • What's holding up my PHP script?

    - by gAMBOOKa
    We've got a PHP crawler running on our web server. When the crawler is running, there are no CPU, memory or network bandwidth spikes; everything is normal. But our website (also PHP), hosted on the same server, stops responding. Basically the crawler blocks any other PHP script from running. What could be the problem? EDIT: fsockopen is being used to download files in the crawler!

    Read the article

  • Detecting Connection Speed / Bandwidth in .NET/WCF

    - by Mystagogue
    I'm writing both client and server code using WCF, where I need to know the "perceived" bandwidth of traffic between the client and server. I could use ping statistics to gather this information separately, but I wonder if there is a way to configure the channel stack in WCF so that the same statistics can be gathered simultaneously while performing my web service invocations. This would be particularly useful in cases where ICMP is disabled (e.g. ping won't work). In short, while making my regular business-related web service calls (REST calls to be precise), is there a way to collect connection speed data implicitly? Certainly I could time the web service round trip, compared to the size of data used in the round-trip, to give me an idea of throughput - but I won't know how much of that perceived bandwidth was network related, or simply due to server-processing latency. I could perhaps solve that by having the server send back a time delta, representing server latency, so that the client can compute the actual network traffic time. If a more sophisticated approach is not available, that might be my answer...
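
    As a client-side sketch of that last idea, assuming the service can be made to report its own processing time alongside the payload (the TimedResponse shape and field names are hypothetical):

        using System;
        using System.Diagnostics;

        sealed class TimedResponse
        {
            public byte[] Payload;                 // the data returned by the call
            public TimeSpan ServerProcessingTime;  // reported back by the service itself
        }

        static class ThroughputEstimator
        {
            // Estimates perceived network throughput for one call by subtracting
            // the server-reported latency from the measured round trip.
            public static double EstimateBytesPerSecond(Func<TimedResponse> invokeService)
            {
                Stopwatch sw = Stopwatch.StartNew();
                TimedResponse response = invokeService(); // wraps the actual WCF/REST call
                sw.Stop();

                TimeSpan networkTime = sw.Elapsed - response.ServerProcessingTime;
                if (networkTime <= TimeSpan.Zero)
                    throw new InvalidOperationException("Clock skew or measurement noise.");

                return response.Payload.Length / networkTime.TotalSeconds;
            }
        }

    The estimate still lumps client-side serialization and dispatch into the "network" share, so it is best read as perceived bandwidth, which matches the framing of the question; averaging over several calls smooths out the noise.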

    Read the article

  • Why is insertion into my tree faster on sorted input than random input?

    - by Juliet
    Now I've always heard binary search trees are faster to build from randomly selected data than from ordered data, simply because ordered data requires explicit rebalancing to keep the tree height at a minimum. Recently I implemented an immutable treap, a special kind of binary search tree which uses randomization to keep itself relatively balanced. In contrast to what I expected, I found I can consistently build a treap about 2x faster and generally better balanced from ordered data than unordered data -- and I have no idea why. Here's my treap implementation: http://pastebin.com/VAfSJRwZ And here's a test program:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Diagnostics;

        namespace ConsoleApplication1
        {
            class Program
            {
                static Random rnd = new Random();
                const int ITERATION_COUNT = 20;

                static void Main(string[] args)
                {
                    List<double> rndTimes = new List<double>();
                    List<double> orderedTimes = new List<double>();

                    rndTimes.Add(TimeIt(50, RandomInsert));
                    rndTimes.Add(TimeIt(100, RandomInsert));
                    rndTimes.Add(TimeIt(200, RandomInsert));
                    rndTimes.Add(TimeIt(400, RandomInsert));
                    rndTimes.Add(TimeIt(800, RandomInsert));
                    rndTimes.Add(TimeIt(1000, RandomInsert));
                    rndTimes.Add(TimeIt(2000, RandomInsert));
                    rndTimes.Add(TimeIt(4000, RandomInsert));
                    rndTimes.Add(TimeIt(8000, RandomInsert));
                    rndTimes.Add(TimeIt(16000, RandomInsert));
                    rndTimes.Add(TimeIt(32000, RandomInsert));
                    rndTimes.Add(TimeIt(64000, RandomInsert));
                    rndTimes.Add(TimeIt(128000, RandomInsert));
                    string rndTimesAsString = string.Join("\n", rndTimes.Select(x => x.ToString()).ToArray());

                    orderedTimes.Add(TimeIt(50, OrderedInsert));
                    orderedTimes.Add(TimeIt(100, OrderedInsert));
                    orderedTimes.Add(TimeIt(200, OrderedInsert));
                    orderedTimes.Add(TimeIt(400, OrderedInsert));
                    orderedTimes.Add(TimeIt(800, OrderedInsert));
                    orderedTimes.Add(TimeIt(1000, OrderedInsert));
                    orderedTimes.Add(TimeIt(2000, OrderedInsert));
                    orderedTimes.Add(TimeIt(4000, OrderedInsert));
                    orderedTimes.Add(TimeIt(8000, OrderedInsert));
                    orderedTimes.Add(TimeIt(16000, OrderedInsert));
                    orderedTimes.Add(TimeIt(32000, OrderedInsert));
                    orderedTimes.Add(TimeIt(64000, OrderedInsert));
                    orderedTimes.Add(TimeIt(128000, OrderedInsert));
                    string orderedTimesAsString = string.Join("\n", orderedTimes.Select(x => x.ToString()).ToArray());

                    Console.WriteLine("Done");
                }

                static double TimeIt(int insertCount, Action<int> f)
                {
                    Console.WriteLine("TimeIt({0}, {1})", insertCount, f.Method.Name);
                    List<double> times = new List<double>();
                    for (int i = 0; i < ITERATION_COUNT; i++)
                    {
                        Stopwatch sw = Stopwatch.StartNew();
                        f(insertCount);
                        sw.Stop();
                        times.Add(sw.Elapsed.TotalMilliseconds);
                    }
                    return times.Average();
                }

                static void RandomInsert(int insertCount)
                {
                    Treap<double> tree = new Treap<double>((x, y) => x.CompareTo(y));
                    for (int i = 0; i < insertCount; i++)
                    {
                        tree = tree.Insert(rnd.NextDouble());
                    }
                }

                static void OrderedInsert(int insertCount)
                {
                    Treap<double> tree = new Treap<double>((x, y) => x.CompareTo(y));
                    for (int i = 0; i < insertCount; i++)
                    {
                        tree = tree.Insert(i + rnd.NextDouble());
                    }
                }
            }
        }

    And here's a chart comparing random and ordered insertion times in milliseconds:

        Insertions    Random         Ordered        RandomTime / OrderedTime
        50            1.031665       0.261585       3.94
        100           0.544345       1.377155       0.4
        200           1.268320       0.734570       1.73
        400           2.765555       1.639150       1.69
        800           6.089700       3.558350       1.71
        1000          7.855150       4.704190       1.67
        2000          17.852000      12.554065      1.42
        4000          40.157340      22.474445      1.79
        8000          88.375430      48.364265      1.83
        16000         197.524000     109.082200     1.81
        32000         459.277050     238.154405     1.93
        64000         1055.508875    512.020310     2.06
        128000        2481.694230    1107.980425    2.24

    I don't see anything in the code which makes ordered input asymptotically faster than unordered input, so I'm at a loss to explain the difference. Why is it so much faster to build a treap from ordered input than random input?

    Read the article

  • What is faster with PictureBox? Many small redraws or a complete redraw?

    - by kornelijepetak
    I have a PictureBox (WinMobile 6 WinForms) on which I draw some images. There is a background image that does not change. However, objects drawn on the PictureBox move during the application, so I need to refresh the background. Since the items that are redrawn fill from 50% to 80% of the surface, the question is which of the two is faster:

    1) Redraw only the parts of the background image that have changed (previous + next location of the moving object).
    2) Redraw the complete background and then draw all the objects in their current positions.

    The reason for asking is that I am not sure how much processor power a single DrawImage operation needs and what the time-consuming factors are. I am aware that if there is almost complete coverage of the background, it would be pointless to redraw portions of it, because by drawing the portions I would have drawn the complete picture anyway. But since sometimes only half of the image has changed (some objects remained in their old positions), it may (perhaps) be beneficial to redraw only those regions. But I need your insight on this... Thanks.
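
    For option 1, the usual WinForms mechanism is to invalidate just the dirty rectangles and clip drawing to them in the paint handler. A rough desktop-WinForms sketch (the sprite fields are illustrative; the Compact Framework surface is similar but worth verifying on WinMobile):

        using System.Drawing;
        using System.Windows.Forms;

        class SpriteSurface : Control  // stands in for the PictureBox drawing surface
        {
            private Image background;      // static backdrop, assumed loaded elsewhere
            private Image sprite;          // the moving object
            private Rectangle spriteRect;  // current sprite position

            public void MoveSprite(Point newLocation)
            {
                Rectangle old = spriteRect;
                spriteRect = new Rectangle(newLocation, sprite.Size);

                // Only the union of the old and new positions needs repainting.
                Invalidate(Rectangle.Union(old, spriteRect));
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                // e.ClipRectangle is the invalidated region, so the background blit
                // is restricted to the dirty area rather than the whole surface.
                Rectangle dirty = e.ClipRectangle;
                e.Graphics.DrawImage(background, dirty, dirty, GraphicsUnit.Pixel);

                if (spriteRect.IntersectsWith(dirty))
                    e.Graphics.DrawImage(sprite, spriteRect);
            }
        }

    The per-DrawImage cost is roughly proportional to the number of pixels blitted, so once the dirty regions approach the full surface (as in the 80% case), the two options converge and the full redraw's simpler bookkeeping tends to win.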

    Read the article

  • Delphi: How to diagnose sluggish UI?

    - by Ian Boyd
    I have a form, which you can pretend is laid out like Windows Explorer: a panel on the left, a splitter, and a client panel:

        +------------+#+-----------------------+
        |            |#|                       |
        |            |#|                       |
        |            |#|                       |
        |            |#|                       |
        |    Left    |#|        Client         |
        |            |#|                       |
        |            |#|                       |
        |            |#|                       |
        |            |#|                       |
        |            |#|                       |
        +------------+#+-----------------------+
                      ^
                      |
                      +----splitter

    The left and client area panels are each rich with controls. The problem is that using the splitter is very sluggish. I would expect that a modern 2 GHz computer can re-display the form as fast as a human can push the mouse around. But that's definitely not the case, and it takes about 200-300 ms before the form is fully re-adjusted. The form has about 100 visual controls on it, and no code or custom controls. How do I go about tracing the cause of the sluggishness?

    Read the article

  • Every 3rd Insert Is Slow on MS SQL 2008

    - by Chris
    I have a function that writes 3 rows into an empty table like so:

        INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (1, 8, 1)
        INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (2, 8, 4)
        INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (3, 8, 3)

    For some reason only the third query takes a long time to execute, and with each insert it grows longer (a profiler trace was attached as an image). I have tried disabling all constraints on the table, with the same result. I just can't figure out why the first two would run so fast and the last one would take so long. Any help would be greatly appreciated. Here is the batch I ran in SSMS to gather statistics (the statistics themselves were attached as an image):

        ALTER TABLE [dbo].[yaf_ForumAccess] NOCHECK CONSTRAINT ALL
        INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (1, 9, 1)
        INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (2, 9, 4)
        INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (3, 9, 3)
        ALTER TABLE [dbo].[yaf_ForumAccess] CHECK CONSTRAINT ALL

    Read the article

  • Java NIO Servlet to File

    - by Gandalf
    Is there a way (without buffering the whole InputStream) to take the HttpServletRequest from a Java servlet and write it out to a file using all NIO? Is it even worth trying? Will it be any faster reading from a normal java.io stream and writing to a java.nio channel, or do they both really need to be pure NIO to see a benefit? Thanks.

    Read the article

  • How many layers are between my program and the hardware?

    - by sub
    I somehow have the feeling that modern systems, including runtime libraries, this exception handler and that built-in debugger, build up more and more layers between my (C++) programs and the CPU/rest of the hardware. I'm thinking of something like this:

        1 + 2
        OS top layer
        Runtime library/helper/error handler
        A hell of a lot of DLL modules
        OS kernel layer
        "Do you really want to run 1 + 2?" Windows popup (don't take this seriously)
        OS kernel layer
        Hardware abstraction
        Hardware
        Go through at least 100 miles of circuits
        Eventually arrive at the CPU
        ADD 1, 2
        Go all the way back to my program

    Nearly all the technical details are simply wrong and in some random order, but you get my point, right? How much longer/shorter is this chain when I run a C++ program that calculates 1 + 2 at runtime on Windows? How about when I do this in an interpreter? (Python|Ruby|PHP) Is this chain really as dramatic in reality? Does Windows really try "not to stand in the way"? E.g., is there a direct connection: my binary <-> hardware?

    Read the article

  • MySQL query cache vs caching result-sets in the application layer

    - by GetFree
    I'm running a PHP/MySQL-driven website with a lot of visits, and I'm considering the possibility of caching result sets in shared memory in order to reduce database load. However, right now MySQL's query cache is enabled, and it seems to be doing a pretty good job, since if I disable query caching, CPU usage jumps to 100% immediately. Given that situation, I don't know if caching result sets (or even the generated HTML code) locally in shared memory with PHP would result in any noticeable performance improvement. Does anyone out there have any experience on this matter? PS: Please avoid suggesting heavy-artillery solutions like memcached. Right now I'm looking for simple solutions that don't require too much time to implement, deploy and maintain.

    Read the article

  • Efficient data structure design

    - by Sway
    Hi there, I need to match a series of user-entered words against a large dictionary of words (to ensure the entered value exists). So if the user entered "orange", it should match the entry "orange" in the dictionary. Now the catch is that the user can also enter a wildcard or a series of wildcard characters, say "or__ge", which would also match "orange". The key requirements are:

    * This should be as fast as possible.
    * It should use the smallest amount of memory achievable.

    If the size of the word list were small, I could use a string containing all the words and use regular expressions. However, given that the word list could contain potentially hundreds of thousands of entries, I'm assuming this wouldn't work. So would some sort of 'tree' be the way to go for this...? Any thoughts or suggestions on this would be totally appreciated! Thanks in advance, Matt
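
    A trie does fit both requirements: shared prefixes keep memory down, and a wildcard simply branches across all children at that depth instead of following one edge. A minimal sketch, assuming '_' is the single-character wildcard as in the example:

        using System.Collections.Generic;

        class TrieNode
        {
            public Dictionary<char, TrieNode> Children = new Dictionary<char, TrieNode>();
            public bool IsWord; // true if a dictionary word ends at this node
        }

        class WildcardTrie
        {
            private readonly TrieNode root = new TrieNode();

            public void Add(string word)
            {
                TrieNode node = root;
                foreach (char c in word)
                {
                    TrieNode child;
                    if (!node.Children.TryGetValue(c, out child))
                        node.Children[c] = child = new TrieNode();
                    node = child;
                }
                node.IsWord = true;
            }

            // Supports '_' as a single-character wildcard, e.g. "or__ge" matches "orange".
            public bool Matches(string pattern)
            {
                return Matches(root, pattern, 0);
            }

            private static bool Matches(TrieNode node, string pattern, int index)
            {
                if (index == pattern.Length)
                    return node.IsWord;

                char c = pattern[index];
                if (c == '_')
                {
                    // Wildcard: try every child at this depth.
                    foreach (TrieNode child in node.Children.Values)
                        if (Matches(child, pattern, index + 1))
                            return true;
                    return false;
                }

                TrieNode next;
                return node.Children.TryGetValue(c, out next)
                    && Matches(next, pattern, index + 1);
            }
        }

    Usage: after trie.Add("orange"), both trie.Matches("orange") and trie.Matches("or__ge") return true. Exact lookups stay O(pattern length); only wildcard positions fan out, and the fan-out is bounded by the alphabet size rather than the dictionary size.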

    Read the article
