Search Results

Search found 14282 results on 572 pages for 'performance counter'.

Page 171 of 572

  • Java reduce CPU usage

    - by steve
    Greets - we've got a few nutters at work who enjoy using while(true) { //Code } in their code. As you can imagine, this maxes out the CPU. Does anyone know ways to reduce the CPU utilization so that other people can use the server as well? The code itself just constantly polls the internet for updates on sites, so I'd imagine a little sleep would greatly reduce the CPU usage. Also, all manipulation is being done on String objects (Java) - does anyone know how much StringBuilder would reduce the overhead by? Thanks for any pointers.
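
    A minimal sketch of both ideas (pollSite() and handle() are hypothetical stand-ins for the existing code; the sleep interval is an assumption to tune against how fresh the data needs to be):

        static void pollLoop() throws InterruptedException {
            while (true) {
                // StringBuilder avoids allocating a new String per concatenation
                StringBuilder sb = new StringBuilder();
                sb.append(pollSite());        // pollSite() is hypothetical
                handle(sb.toString());        // handle() is hypothetical
                Thread.sleep(5_000);          // yield the CPU between polls; 5s is an assumption
            }
        }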

    Read the article

  • java increase xmx dynamically at runtime

    - by Tomer
    Hi, I have a JVM server on my machine, and I want to have two app servers of mine sitting on the same machine. However, I want the standby one to have a really low amount of memory allocated with -Xmx, because it's passive. Once the main (active) server goes down, I want to allocate more memory to my passive server, which is already up, without restarting it. I can't just give them both a large -Xmx - note that they would consume memory at startup, and I can't allow the possibility of an OutOfMemoryError. So I want the passive server to start with a low -Xmx, and once the active one goes down, I want the passive one to receive a much larger -Xmx. Is there a way for me to achieve that? Thanks
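
    For reference, a sketch of the two launch configurations and how the ceiling can be inspected from inside a running JVM (standard HotSpot flags; the sizes are assumptions):

        // Heap ceilings are set per JVM at launch (values are assumptions):
        //   java -Xms512m -Xmx4g   ActiveServer
        //   java -Xms64m  -Xmx256m PassiveServer
        // From inside, the -Xmx ceiling can be read, but not raised:
        long maxHeap = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxHeap / (1024 * 1024)) + " MB");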

    Read the article

  • In Java, is there a gain in using interfaces for complex models?

    - by Gnoupi
    The title is hardly understandable, but I'm not sure how to summarize it another way; any edit to clarify is welcome. I have been told, and recommended, to use interfaces to improve performance, even in a case which doesn't especially call for the regular "interface" role. In this case, the objects are big models (in the MVC sense), with many methods and fields. The "good use" that has been recommended to me is to create an interface with a unique implementation. There won't be any other class implementing this interface, for sure. I have been told that this is better because it "exposes less" (or something close) to the other classes which use methods from this class, as these objects refer to the object through its interface (every public method of the implementation being reproduced in the interface). This seems quite strange to me, as it looks like a C++ idiom (with header files). There I see the point, but in Java? Is there really a point in making an interface for such a unique implementation? I would really appreciate some clarification on the topic, so I could justify not following this kind of practice, given the hassle it creates by duplicating all declarations.
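
    For concreteness, a sketch of the pattern being recommended (all names are hypothetical):

        // The interface mirrors every public method of its single implementation.
        public interface CustomerModel {
            String name();
            int orderCount();
        }

        public final class DefaultCustomerModel implements CustomerModel {
            private final String name;
            private final int orderCount;

            public DefaultCustomerModel(String name, int orderCount) {
                this.name = name;
                this.orderCount = orderCount;
            }

            @Override public String name() { return name; }
            @Override public int orderCount() { return orderCount; }
        }

        // Callers are given the interface type, never the concrete class:
        CustomerModel model = new DefaultCustomerModel("Acme", 3);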

    Read the article

  • How to set up a load/stress test for a web site?

    - by Ryan
    I've been tasked with stress/load testing our company web site out of the blue and know nothing about doing so. Every search I make on Google for "how to load test a web site" just comes back with various companies and software to physically do the load testing. For now I'm more interested in how to actually go about setting up a load test: what I should take into account prior to load testing, what pages within my site I should be testing load against, and what things I'm going to want to monitor when doing the test. Our web site is a multi-tier system complete with a separate database server (IIS 7 web server, SQL Server 2000 db). I imagine I'd want to monitor both the web server and the database server during testing; however, when setting up scenarios to load test the web server, I'd have to use pages that query the database to see any load on the database server at the same time. Are web servers and database servers generally tested simultaneously, or are they done as separate tests? As you can see I'm pretty clueless as to the whole operation, so any insight as to how to go about this would be very helpful. FYI I have been tinkering with Pylot and was able to create and run a scenario against our site, but I'm not sure what I should be looking for in the results or if the scenario I created is even worth measuring for our site. Thanks in advance.
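
    To make the shape of a load test concrete, a bare-bones sketch (Python standard library only; the URL and the user/request counts are placeholders - dedicated tools like Pylot add ramp-up, think time, and reporting on top of this idea):

        import time
        from concurrent.futures import ThreadPoolExecutor
        from urllib.request import urlopen

        URL = "http://example.com/page-that-queries-the-db"   # placeholder

        def hit(_):
            start = time.time()
            with urlopen(URL) as resp:       # one simulated page view
                resp.read()
            return time.time() - start

        # 20 concurrent "users", 200 requests total (both numbers are assumptions)
        with ThreadPoolExecutor(max_workers=20) as pool:
            latencies = list(pool.map(hit, range(200)))

        print("avg %.3fs  max %.3fs" % (sum(latencies) / len(latencies), max(latencies)))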

    Read the article

  • Multi-threaded random_r is slower than single threaded version.

    - by Nixuz
    The following program is essentially the same as the one described here. When I compile and run the program with two threads (NTHREADS == 2), I get the following run times:

        real 0m14.120s
        user 0m25.570s
        sys  0m0.050s

    When it is run with just one thread (NTHREADS == 1), I get significantly better run times even though it is only using one core:

        real 0m4.705s
        user 0m4.660s
        sys  0m0.010s

    My system is dual core, I know random_r is thread safe, and I am pretty sure it is non-blocking. When the same program is run without random_r, with a calculation of cosines and sines as a replacement, the dual-threaded version runs in about half the time, as expected.

        #include <pthread.h>
        #include <stdlib.h>
        #include <stdio.h>

        #define NTHREADS 2
        #define PRNG_BUFSZ 8
        #define ITERATIONS 1000000000

        void* thread_run(void* arg) {
            int r1, i, totalIterations = ITERATIONS / NTHREADS;
            for (i = 0; i < totalIterations; i++) {
                random_r((struct random_data*)arg, &r1);
            }
            printf("%i\n", r1);
            return NULL;   /* pthread start routines must return a value */
        }

        int main(int argc, char** argv) {
            struct random_data* rand_states = (struct random_data*)calloc(NTHREADS, sizeof(struct random_data));
            char* rand_statebufs = (char*)calloc(NTHREADS, PRNG_BUFSZ);
            pthread_t* thread_ids;
            int t = 0;
            thread_ids = (pthread_t*)calloc(NTHREADS, sizeof(pthread_t));
            /* create threads */
            for (t = 0; t < NTHREADS; t++) {
                initstate_r(random(), &rand_statebufs[t], PRNG_BUFSZ, &rand_states[t]);
                pthread_create(&thread_ids[t], NULL, &thread_run, &rand_states[t]);
            }
            for (t = 0; t < NTHREADS; t++) {
                pthread_join(thread_ids[t], NULL);
            }
            free(thread_ids);
            free(rand_states);
            free(rand_statebufs);
            return 0;
        }

    I am confused why, when generating random numbers, the two-threaded version performs much worse than the single-threaded version, considering random_r is meant to be used in multi-threaded applications.
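
    One experiment worth trying (a hypothesis only, not a confirmed diagnosis): the random_data structs and the 8-byte state buffers are carved out of two contiguous allocations, so the threads' hot PRNG state likely shares cache lines. Giving each thread its own cache-line-aligned bundle would rule false sharing in or out (GCC syntax; 64-byte lines assumed):

        /* each thread's PRNG state lives on its own cache line */
        struct thread_prng {
            struct random_data rd;
            char statebuf[PRNG_BUFSZ];
        } __attribute__((aligned(64)));

        struct thread_prng prng[NTHREADS];
        /* then: initstate_r(random(), prng[t].statebuf, PRNG_BUFSZ, &prng[t].rd);
                 pthread_create(&thread_ids[t], NULL, &thread_run, &prng[t].rd); */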

    Read the article

  • How to speed up dumping a DataTable into an Excel worksheet?

    - by AngryHacker
    I have the following routine that dumps a DataTable into an Excel worksheet.

        private void RenderDataTableOnXlSheet(DataTable dt, Excel.Worksheet xlWk,
                                              string[] columnNames, string[] fieldNames)
        {
            // render the column names (e.g. headers)
            for (int i = 0; i < columnNames.Length; i++)
                xlWk.Cells[1, i + 1] = columnNames[i];

            // render the data
            for (int i = 0; i < fieldNames.Length; i++)
            {
                for (int j = 0; j < dt.Rows.Count; j++)
                {
                    xlWk.Cells[j + 2, i + 1] = dt.Rows[j][fieldNames[i]].ToString();
                }
            }
        }

    For whatever reason, dumping a DataTable of 25 columns and 400 rows takes about 10-15 seconds on my relatively modern PC, and even longer on testers' machines. Is there anything I can do to speed up this code? Or is interop just inherently slow?
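
    One commonly suggested experiment (a sketch, assuming C# 4-style interop indexers): every Cells[...] assignment is a separate COM round-trip, and 25 x 400 cells means roughly 10,000 of them. Building a managed 2D array and assigning it to a Range in a single call crosses the COM boundary once:

        object[,] buffer = new object[dt.Rows.Count, fieldNames.Length];
        for (int j = 0; j < dt.Rows.Count; j++)
            for (int i = 0; i < fieldNames.Length; i++)
                buffer[j, i] = dt.Rows[j][fieldNames[i]].ToString();

        // one COM call instead of rows * cols calls
        Excel.Range target = xlWk.Range["A2"].Resize[dt.Rows.Count, fieldNames.Length];
        target.Value2 = buffer;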

    Read the article

  • General question: Filesystem or database?

    - by poeschlorn
    Hey guys, I want to create a small document management system. There are several users who store their files; each file which is uploaded carries info about which user uploaded it, plus the document content itself. In a view, all files of ONE specific user are displayed, ordered by date. What would be better: 1) giving the documents a name or metadata (XML) which contains the date and user (and iterating through them to get the metadata), or 2) giving the files a random/unique name and storing the metadata in a DB? Something like this: date | user | filename. What would you say, and why? The programming language used is Java and the DB is MySQL.
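
    A sketch of what option 2 could look like in MySQL (table and column names are hypothetical):

        CREATE TABLE documents (
            id          INT AUTO_INCREMENT PRIMARY KEY,
            user_id     INT NOT NULL,
            stored_name CHAR(36) NOT NULL,   -- random/unique name on disk, e.g. a UUID
            uploaded_at DATETIME NOT NULL,
            INDEX idx_user_date (user_id, uploaded_at)
        );

        -- the per-user view, ordered by date, served straight from the index:
        SELECT stored_name, uploaded_at
        FROM documents
        WHERE user_id = 42
        ORDER BY uploaded_at DESC;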

    Read the article

  • Time complexity of a powerset generating function

    - by Lirik
    I'm trying to figure out the time complexity of a function that I wrote (it generates a power set for a given string):

        public static HashSet<string> GeneratePowerSet(string input)
        {
            HashSet<string> powerSet = new HashSet<string>();

            if (string.IsNullOrEmpty(input))
                return powerSet;

            int powSetSize = (int)Math.Pow(2.0, (double)input.Length);

            // Start at 1 to skip the empty string case
            for (int i = 1; i < powSetSize; i++)
            {
                string str = Convert.ToString(i, 2);
                string pset = str;
                for (int k = str.Length; k < input.Length; k++)
                {
                    pset = "0" + pset;
                }

                string set = string.Empty;
                for (int j = 0; j < pset.Length; j++)
                {
                    if (pset[j] == '1')
                    {
                        set = string.Concat(set, input[j].ToString());
                    }
                }
                powerSet.Add(set);
            }
            return powerSet;
        }

    So my attempt is this: let the size of the input string be n. The outer for loop must iterate 2^n times (because the set size is 2^n), and the inner for loops together iterate 2*n times (at worst). 1. So Big-O would be O((2^n)*n) (since we drop the constant 2)... is that correct? And n*(2^n) is worse than n^2: if n = 4 then (4*(2^4)) = 64 vs (4^2) = 16; if n = 10 then (10*(2^10)) = 10240 vs (10^2) = 100. 2. Is there a faster way to generate a power set, or is this about optimal?
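
    For question 2, a sketch of a leaner variant (the O(n * 2^n) bound is unavoidable, since the output itself has 2^n elements, but this skips the base-2 string conversion and zero-padding):

        public static HashSet<string> GeneratePowerSetBits(string input)
        {
            var powerSet = new HashSet<string>();
            if (string.IsNullOrEmpty(input)) return powerSet;

            for (int i = 1; i < (1 << input.Length); i++)   // skip 0 = empty set
            {
                var sb = new System.Text.StringBuilder();
                for (int j = 0; j < input.Length; j++)
                    if ((i & (1 << j)) != 0)                // bit j set -> include char j
                        sb.Append(input[j]);
                powerSet.Add(sb.ToString());
            }
            return powerSet;
        }

    Note (1 << input.Length) overflows int for n >= 31, but at that size the power set wouldn't fit in memory anyway.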

    Read the article

  • Oracle: delete suddenly taking a long time

    - by Damo
    Hi, we have a feed process which runs every day of the year. As part of that, we delete every row from a table (approx 1 million rows) every day, repopulate it using 5 different stored procedures, and then commit the transaction. This is the only commit statement that we call. All of a sudden the delete has started taking about 2 hours to complete. The delete is also very simple (delete from T_PROFILE_WORK). This has worked perfectly well for the past year, but in the past week I have noticed this issue. Any help on this is greatly appreciated. Thanks, Damien

    Read the article

  • Why would restarting MySQL make my site faster?

    - by beagleguy
    Hey all, my site started dragging lately, with queries taking exceptionally longer than I would expect with properly tuned indexes. I just restarted the MySQL server after 31 days of uptime, and every query is now substantially faster; the whole site renders 3-4 times faster. Does anything jump out at you as to why this may have been? Improper settings in my.cnf perhaps? Any ideas as to what I can start looking at to try and pinpoint it? Thanks
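
    A few built-in starting points for the next time it degrades (standard MySQL commands; which counters matter depends on the storage engine and the my.cnf in use):

        SHOW GLOBAL STATUS LIKE 'Qcache%';        -- query cache churn/fragmentation
        SHOW GLOBAL STATUS LIKE 'Created_tmp%';   -- temp tables spilling to disk
        SHOW GLOBAL STATUS LIKE 'Threads%';       -- connection/thread pressure
        SHOW FULL PROCESSLIST;                    -- what is actually running right now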

    Read the article

  • If a table has two xml columns, will inserting records be a lot slower?

    - by Lieven Cardoen
    Is it a bad thing to have two xml columns in one table? And how much slower are these xml columns in terms of updating/inserting/reading data? In Profiler this kind of insert normally takes 0 ms, but sometimes it goes up to 160 ms:

        declare @p8 xml
        set @p8=convert(xml,N'<interactions><interaction correct="false" score="0" id="0" gapid="0" x="61" y="225"><feedback/><element id="0" position="0" elementtype="1"><asset/></element></interaction><interaction correct="false" score="0" id="1" gapid="1" x="64" y="250"><feedback/><element id="0" position="0" elementtype="1"><asset/></element></interaction><interaction correct="false" score="0" id="2" gapid="2" x="131" y="250"><feedback/><element id="0" position="0" elementtype="1"><asset/></element></interaction></interactions>')

        declare @p14 xml
        set @p14=convert(xml,N'<contentinteractions/>')

        exec sp_executesql N'INSERT INTO [dbo].[PackageSessionNodes](
            [dbo].[PackageSessionNodes].[PackageSessionId],
            [dbo].[PackageSessionNodes].[TreeNodeId],
            [dbo].[PackageSessionNodes].[Duration],
            [dbo].[PackageSessionNodes].[Score],
            [dbo].[PackageSessionNodes].[ScoreMax],
            [dbo].[PackageSessionNodes].[Interactions],
            [dbo].[PackageSessionNodes].[BrainTeaser],
            [dbo].[PackageSessionNodes].[DateCreated],
            [dbo].[PackageSessionNodes].[CompletionStatus],
            [dbo].[PackageSessionNodes].[ReducedScore],
            [dbo].[PackageSessionNodes].[ReducedScoreMax],
            [dbo].[PackageSessionNodes].[ContentInteractions])
        VALUES (
            @ins_dboPackageSessionNodesPackageSessionId,
            @ins_dboPackageSessionNodesTreeNodeId,
            @ins_dboPackageSessionNodesDuration,
            @ins_dboPackageSessionNodesScore,
            @ins_dboPackageSessionNodesScoreMax,
            @ins_dboPackageSessionNodesInteractions,
            @ins_dboPackageSessionNodesBrainTeaser,
            @ins_dboPackageSessionNodesDateCreated,
            @ins_dboPackageSessionNodesCompletionStatus,
            @ins_dboPackageSessionNodesReducedScore,
            @ins_dboPackageSessionNodesReducedScoreMax,
            @ins_dboPackageSessionNodesContentInteractions);
        SELECT SCOPE_IDENTITY() as new_id

    This is the table:

        CREATE TABLE [dbo].[PackageSessionNodes](
            [PackageSessionNodeId] [int] IDENTITY(1,1) NOT NULL,
            [PackageSessionId] [int] NOT NULL,
            [TreeNodeId] [int] NOT NULL,
            [Duration] [int] NULL,
            [Score] [float] NOT NULL,
            [ScoreMax] [float] NOT NULL,
            [Interactions] [xml] NOT NULL,
            [BrainTeaser] [bit] NOT NULL,
            [DateCreated] [datetime] NULL,
            [CompletionStatus] [int] NOT NULL,
            [ReducedScore] [float] NOT NULL,
            [ReducedScoreMax] [float] NOT NULL,
            [ContentInteractions] [xml] NOT NULL,
            CONSTRAINT [PK_PackageSessionNodes] PRIMARY KEY CLUSTERED
            (
                [PackageSessionNodeId] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[PackageSessionNodes] WITH CHECK
            ADD CONSTRAINT [FK_PackageSessionNodes_PackageSessions]
            FOREIGN KEY([PackageSessionId])
            REFERENCES [dbo].[PackageSessions] ([PackageSessionId])
            ON UPDATE CASCADE ON DELETE CASCADE
        GO
        ALTER TABLE [dbo].[PackageSessionNodes] CHECK CONSTRAINT [FK_PackageSessionNodes_PackageSessions]
        GO
        ALTER TABLE [dbo].[PackageSessionNodes] WITH CHECK
            ADD CONSTRAINT [FK_PackageSessionNodes_TreeNodes]
            FOREIGN KEY([TreeNodeId])
            REFERENCES [dbo].[TreeNodes] ([TreeNodeId])
        GO
        ALTER TABLE [dbo].[PackageSessionNodes] CHECK CONSTRAINT [FK_PackageSessionNodes_TreeNodes]
        GO
        ALTER TABLE [dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_Score] DEFAULT ((-1)) FOR [Score]
        GO
        ALTER TABLE [dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_ScoreMax] DEFAULT ((-1)) FOR [ScoreMax]
        GO
        ALTER TABLE [dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_DateCreated] DEFAULT (getdate()) FOR [DateCreated]
        GO
        ALTER TABLE [dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_ReducedScore] DEFAULT ((-1)) FOR [ReducedScore]
        GO
        ALTER TABLE [dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_ReducedScoreMax] DEFAULT ((-1)) FOR [ReducedScoreMax]
        GO

    Read the article

  • Efficiently remove points with same slope

    - by Ram
    Hi, in one of my applications I am dealing with graphics objects. I am using the open source GPC library to clip/merge two shapes. To improve accuracy I am sampling (adding multiple points between two edges of) the existing shapes. But before displaying the merged shape back, I need to remove all the added points between the edges. I have not been able to find an efficient algorithm that removes all points lying between two edges with the same slope, with minimum CPU utilization. Currently all points are of type PointF. Any pointer on this would be a great help.
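
    A minimal sketch of one pass over the outline (assumes the points are ordered along the path; the cross-product test avoids computing slopes, so vertical edges need no special case):

        using System.Collections.Generic;
        using System.Drawing;

        static List<PointF> RemoveCollinear(IList<PointF> pts, float eps = 1e-6f)
        {
            if (pts.Count < 3) return new List<PointF>(pts);
            var kept = new List<PointF> { pts[0] };
            for (int i = 1; i < pts.Count - 1; i++)
            {
                PointF a = kept[kept.Count - 1], b = pts[i], c = pts[i + 1];
                // cross product of (a->b) and (a->c); ~0 means the three are collinear
                float cross = (b.X - a.X) * (c.Y - a.Y) - (b.Y - a.Y) * (c.X - a.X);
                if (System.Math.Abs(cross) > eps) kept.Add(b);   // keep real corners only
            }
            kept.Add(pts[pts.Count - 1]);
            return kept;
        }

    This is O(n) with one test per point; eps may need scaling to the coordinate range.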

    Read the article

  • Fast serialization/deserialization of structs

    - by user256890
    I have a huge amount of geographic data represented in a simple object structure consisting only of structs. All of my fields are of value type.

        public struct Child
        {
            readonly float X;
            readonly float Y;
            readonly int myField;
        }

        public struct Parent
        {
            readonly int id;
            readonly int field1;
            readonly int field2;
            readonly Child[] children;
        }

    The data is chunked up nicely into small portions of Parent[]-s. Each array contains a few thousand Parent instances. I have way too much data to keep it all in memory, so I need to swap these chunks to disk back and forth. (One file would be approx. 200-300 KB.) What would be the most efficient way of serializing/deserializing a Parent[] to a byte[] for dumping to disk and reading back? Concerning speed, I am particularly interested in fast deserialization; write speed is not that critical. Would a simple BinarySerializer be good enough? Or should I hack around with StructLayout (see the accepted answer)? I am not sure if that would work with the array field Parent.children. UPDATE: Response to comments - yes, the objects are immutable (code updated), and indeed the children field is not a value type. 300 KB sounds like not much, but I have zillions of files like that, so speed does matter.
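
    A sketch of hand-rolled binary I/O (assumes the fields are made accessible to the serializer, e.g. via internal accessors; BinaryWriter/BinaryReader skip reflection entirely, and deserialization becomes a straight sequential read):

        static void WriteChunk(BinaryWriter w, Parent[] chunk)
        {
            w.Write(chunk.Length);
            foreach (var p in chunk)
            {
                w.Write(p.id); w.Write(p.field1); w.Write(p.field2);
                w.Write(p.children.Length);
                foreach (var c in p.children)
                {
                    w.Write(c.X); w.Write(c.Y); w.Write(c.myField);
                }
            }
        }

        // Reading mirrors the writes: ReadInt32/ReadSingle in exactly the same order.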

    Read the article

  • resort on a std::vector vs std::insert

    - by Abruzzo Forte e Gentile
    I have a sorted std::vector of relatively small size (from 5 to 20 elements). I used std::vector since the data is contiguous, so I get speed from the cache. At a specific point I need to remove an element from this vector. I now have a doubt: which is the fastest way to remove this value between the 2 options below? 1. Setting that element to 0 and calling sort to reorder: this has the sort's complexity, but the elements stay on the same cache line. 2. Calling erase, which will copy (or memcpy, who knows?) all elements after it back by one place (I need to investigate the behind-the-scenes of erase). Do you know which one is faster? I think the same reasoning applies to inserting a new element without hitting the max capacity of the vector. Regards, AFG
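
    For reference, a sketch of the erase/insert pair (standard library only; for 5-20 elements the shift is a handful of moves within one or two cache lines, so erase is hard to beat):

        #include <algorithm>
        #include <vector>

        std::vector<int> v = {1, 3, 5, 7, 9};

        // remove a value, keeping the vector sorted
        auto it = std::lower_bound(v.begin(), v.end(), 5);
        if (it != v.end() && *it == 5)
            v.erase(it);                       // shifts the tail left by one

        // insert a value, keeping the vector sorted
        v.insert(std::lower_bound(v.begin(), v.end(), 6), 6);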

    Read the article

  • JavaScript and CSS loading

    - by Mike
    I was wondering: if I have, let's say, 6 JavaScript includes on a page, and 4-5 CSS includes on the same page as well, does it actually make the page load more optimally if I combine them into one file, or perhaps two, instead of having a bunch of them?

    Read the article

  • Can I optimize this at all?

    - by Moshe
    I'm working on an iOS app and I'm using the following code for one of my tables to return the number of rows in a particular section: return [[kSettings arrayForKey:@"views"] count]; Is there any other way to write that line of code so that it is more memory efficient? EDIT: kSettings = [NSUserDefaults standardUserDefaults]. Is there any way to rewrite my line of code so that whatever memory it occupies is released sooner than it is released now?

    Read the article

  • Can I have a CASE statement within a WHILE loop?

    - by John
    This is what I'm doing:

        while (@counter < 3 and @newBalance > 0)
        begin
            CASE
                when @counter = 1 then
                    (@monFee1 = @monthlyFee, @newBalance = @newBalance - @fee)
                when @counter = 2 then
                    (@monFee2 = @monthlyFee, @newBalance = @newBalance - @fee)
            END
            @counter = @counter + 1
        end

    I get this error: Incorrect syntax near the keyword 'CASE'. No idea why. Please help!
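
    For context, T-SQL's CASE is an expression that yields a value, not a control-flow statement, so it can't perform assignments like this. A sketch of equivalent logic (variable names taken from the question):

        WHILE (@counter < 3 AND @newBalance > 0)
        BEGIN
            IF @counter = 1
                SET @monFee1 = @monthlyFee;
            ELSE IF @counter = 2
                SET @monFee2 = @monthlyFee;

            SET @newBalance = @newBalance - @fee;
            SET @counter = @counter + 1;
        END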

    Read the article

  • Treeview Slow in IE?!?!

    - by Mike
    I have a treeview with around 200 records that needs to be fully expanded at all times (so no loading on demand). It is inside an UpdatePanel with UpdateMode set to Conditional. There are other UpdatePanels on the page as well that are set to Conditional. Depending on user actions, the tree may need to be rebuilt by calling DataBind and updating the UpdatePanel. Everything works fine in Firefox; the longest postback is about 2 seconds. With IE I sometimes have to wait up to 30 seconds, and the action may have nothing to do with the tree - just changing a dropdown in its own UpdatePanel takes forever. I have considered that the size of the ViewState and the raw HTML generated may be causing the delay, but wouldn't that affect both browsers? Anyone have any ideas what is making it so slow in IE? Thanks!

    Read the article

  • What to have in mind when building a AJAX-based webapp

    - by Industrial
    Hi everyone, we're in the first steps of what will be an AJAX-based webapp, where information and generated HTML will be sent back and forth with the help of JSON/POST techniques. We're able to get the data out quickly without putting too much load on the database, with the help of a cache layer that features memcached as well as a disc-based cache. Besides that, what's essential to keep in mind when designing AJAX-heavy webapps? Thanks a lot.

    Read the article

  • Having to insert a record, then update the same record warrants 1:1 relationship design?

    - by dianovich
    Let's say an Order has many Line items, and we're storing the total cost of an order (based on the sum of prices on order lines) in the orders table.

        orders          lines
        ------          -----
        id              id
        ref             order_id
        total_cost      price

    In a simple application, the order and its lines are created during the same step of the checkout process. So this means:

        INSERT INTO orders ....
        -- Get ID of inserted order record
        INSERT INTO lines VALUES (null, order_id, ...), ...

    where we get the order ID after creating the order record. The problem I'm having is trying to figure out the best way to store the total cost of an order. I don't want to have to:

        1. create an order
        2. create lines on the order
        3. calculate the cost on the order based on the lines, then
        4. update the record created in 1. in the orders table

    This would mean a nullable total_cost field on orders, for starters... My solution thus far is to have an order_totals table with a 1:1 relationship to the orders table, but I think it's redundant. Ideally, since everything required to calculate total costs (the lines on an order) is in the database, I would work out the value every time I need it, but this is very expensive. What are your thoughts? A compute-on-read sketch follows below.
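
    For reference, a sketch of the compute-on-read option against the schema above:

        -- totals derived from the lines; no stored total_cost needed
        SELECT o.id, o.ref, COALESCE(SUM(l.price), 0) AS total_cost
        FROM orders o
        LEFT JOIN lines l ON l.order_id = o.id
        GROUP BY o.id, o.ref;

    Whether this is really "very expensive" mostly comes down to an index on lines(order_id) and how often totals are read versus written.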

    Read the article

  • SQL Database Schema Design For Large 3 Billion Relationship Database.

    - by K-Bell
    Get your geek on. Can you solve this? I am designing a products database for SQL Server 2008 R2 Ed. (not Enterprise Ed.) that will be used to store custom product configurations for over 30,000 distinct products. The database will have up to 500 users at a time. Here is the design problem: each Product has a collection of Parts (up to 50 parts per product). So if I have 30,000 Products and each of them can have up to 50 Parts, that's 1.5 million distinct Product-to-Part relationships, or as an equation: 30,000 (Products) x 50 (Parts) = 1.5 million Product-to-Part records. And if each Part can have up to 2,000 finish options (a finish is a paint color) - NOTE: only one finish will be selected by a user at run-time; the 2,000 finish options I need to store are the allowed options for a specific part on a specific product - then those 1.5 million product-to-part records, each with up to 2,000 finishes, make 3 billion allowable Product-to-Part-to-Finish relationships, or as an equation: 1.5 million (Product-to-Parts) x 2,000 (Finishes) = 3 billion Product-to-Part-to-Finish records. How can I design this database so that I can execute fast and efficient queries for a specific product and return its list of Parts and all the allowable Finishes for each part, without 3 billion Product-to-Part-to-Finish records? Read time is more important than write time. Please post your thoughts/suggestions if you have experience with large databases. Thanks!
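
    One possible direction (a sketch only - it assumes many part-product combinations share the same list of allowed finishes, which the question doesn't confirm): factor the repeated finish lists into shared "finish sets", so each product-part row points at one set instead of up to 2,000 rows.

        CREATE TABLE ProductPart (
            ProductId   INT NOT NULL,
            PartId      INT NOT NULL,
            FinishSetId INT NOT NULL,
            PRIMARY KEY (ProductId, PartId)
        );

        CREATE TABLE FinishSetMember (
            FinishSetId INT NOT NULL,
            FinishId    INT NOT NULL,
            PRIMARY KEY (FinishSetId, FinishId)
        );

        -- parts plus allowed finishes for one product:
        SELECT pp.PartId, fsm.FinishId
        FROM ProductPart pp
        JOIN FinishSetMember fsm ON fsm.FinishSetId = pp.FinishSetId
        WHERE pp.ProductId = @ProductId;

    If the finish lists really are all distinct, the 3 billion rows are irreducible and the problem becomes partitioning and indexing rather than schema shape.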

    Read the article

  • Fastest Way to generate 1,000,000+ random numbers in python

    - by Sandro
    I am currently writing an app in Python that needs to generate a large amount of random numbers, FAST. Currently I have a scheme going that uses numpy to generate all of the numbers in a giant batch (about ~500,000 at a time). While this seems to be faster than Python's implementation, I still need it to go faster. Any ideas? I'm open to writing it in C and embedding it in the program, or doing whatever it takes. Constraints on the random numbers (see the sketch below this list):

        1. A set of 7 numbers that can all have different bounds, e.g. [0-X1, 0-X2, 0-X3, 0-X4, 0-X5, 0-X6, 0-X7]. Currently I am generating a list of 7 numbers with random values from [0-1) then multiplying by [X1..X7].
        2. A set of 13 numbers that all add up to 1. Currently I'm just generating 13 numbers then dividing by their sum.

    Any ideas? Would pre-calculating these numbers and storing them in a file make this faster? Thanks!
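
    A sketch of both constraints done in vectorized numpy batches (the batch size and bound values are placeholders; one numpy call per matrix amortizes the per-call overhead that dominates many small requests):

        import numpy as np

        BATCH = 500_000                                              # rows per batch (assumption)
        bounds = np.array([10.0, 20.0, 5.0, 8.0, 3.0, 100.0, 7.0])   # placeholders for X1..X7

        # 1) BATCH x 7 numbers, each column scaled to its own bound
        a = np.random.random((BATCH, 7)) * bounds

        # 2) BATCH x 13 numbers, each row normalized to sum to 1
        b = np.random.random((BATCH, 13))
        b /= b.sum(axis=1)[:, np.newaxis]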

    Read the article
