Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.

Page 33/594 | < Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >

  • C# debug vs release performance

    - by sagie
    Hi. I've come across the following paragraph: “Debug vs Release setting in the IDE when you compile your code in Visual Studio makes almost no difference to performance… the generated code is almost the same. The C# compiler doesn’t really do any optimisation. The C# compiler just spits out IL… and at the runtime it’s the JITer that does all the optimisation. The JITer does have a Debug/Release mode and that makes a huge difference to performance. But that doesn’t key off whether you run the Debug or Release configuration of your project, that keys off whether a debugger is attached.” The source is here and the podcast is here. Can someone direct me to a Microsoft article that can actually prove this?
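
    A minimal sketch (not from the question) of how the quoted claim can be checked: run the same timed loop with and without a debugger attached, and compare that against switching the Debug/Release configuration alone. The loop body is purely illustrative.

      using System;
      using System.Diagnostics;

      class JitTimingDemo
      {
          static void Main()
          {
              // The JIT keys off whether a debugger is attached,
              // not the Debug/Release project configuration by itself.
              Console.WriteLine("Debugger attached: " + Debugger.IsAttached);

              Stopwatch sw = Stopwatch.StartNew();
              long sum = 0;
              for (int i = 0; i < 100000000; i++)
              {
                  sum += i % 7;   // simple work the JIT is free to optimise
              }
              sw.Stop();
              Console.WriteLine(sum + " computed in " + sw.ElapsedMilliseconds + " ms");
          }
      }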

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think) I want. A lot of tutorials, and even SO questions, have similar tips, generally covering: use GL face culling (the OpenGL function, not the scene logic); only send 1 matrix to the GPU (the projectionModelView combination), therefore decreasing the MVP calculations from per vertex to once per model (as it should be); use interleaved vertices; minimise the number of GL calls, batching where appropriate; and possibly a few/many others. I am (for curiosity reasons) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge), and received almost no performance change. Whilst I am getting around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use? My CPU is idling at around 20-50% during rendering, therefore I assume I am GPU bound for increasing performance. Note: I am looking into gDEBugger at the moment. Cross-posted at Game Development.

    Read the article

  • server performance: multiple external connections and performance

    - by websiteguru
    I am creating a PHP script that requires the server to make several cURL requests per run. I'll be running this script through cron every 3 minutes, and I'm looking to maximize the number of cURL requests I can make in a 24-hour period. What I am wondering is whether it would be better, from a performance standpoint, to get a dedicated server or several small shared hosting accounts. Since the limiting factor is the number of external connections rather than system resources, I'm wondering which is the better approach.

    Read the article

  • javascript object access performance

    - by youdontmeanmuch
    In JavaScript, when you're getting a property of an object, is there a performance penalty to getting the whole object vs only getting a property of that object? Also keep in mind I'm not talking about DOM access; these are pure, simple JavaScript objects. For example, is there some kind of performance difference between the following code? Assumed to be faster, but not sure:

      var length = some.object[key].length;
      if (length === condition) {
          // Do something that doesn't need anything inside of some.object[key]
      } else {
          var object = some.object[key];
          // Do something that requires stuff inside of some.object[key]
      }

    I think this would be slower, but I'm not sure if it matters:

      var object = some.object[key];
      if (object.length === condition) {
          // Do something that doesn't need anything inside of some.object[key]
      } else {
          // Do something that requires stuff inside of some.object[key]
      }

    Read the article

  • Is it possible to programmatically control c# health monitoring without using the web.config file?

    - by Adam
    I have developed my own custom provider for health monitoring; however, I use parameters in the constructor, and this is not allowed when wiring up health monitoring from the web.config file. Does anyone know if I can turn the monitoring on/off and have it watch properly through code (possibly in my global.asax file on application startup)? Or is it possible for me to create my own watcher that will do the same thing as the health monitor? Or, finally, can I just pass variables from the web.config setup (I'm not familiar with the public token part of the provider type declaration)? Thanks in advance
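
    For context only (not from the question): the programmatic side of ASP.NET health monitoring is defining and raising a custom WebBaseEvent in code, for example from Application_Start. The class, message and event code offset below are hypothetical, and the provider/rule wiring still lives in web.config - this sketch does not replace it.

      using System.Web.Management;

      // Hypothetical custom event; the event code offset is illustrative.
      public class MyAppStartupEvent : WebBaseEvent
      {
          public MyAppStartupEvent(string message, object source)
              : base(message, source, WebEventCodes.WebExtendedBase + 1) { }
      }

      // In Global.asax:
      //   protected void Application_Start(object sender, EventArgs e)
      //   {
      //       new MyAppStartupEvent("Application started", this).Raise();
      //   }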

    Read the article

  • Android -- Object Creation/Memory Allocation vs. Performance

    - by borg17of20
    Hello all, This is probably an easy one. I have about 20 TextViews/ImageViews in my current project that I access like this:

      ((TextView) multiLayout.findViewById(R.id.GameBoard_Multi_Answer1_Text)).setText("");
      // or
      ((ImageView) multiLayout.findViewById(R.id.GameBoard_Multi_Answer1_Right)).setVisibility(View.INVISIBLE);

    My question is this: am I better off, from a performance standpoint, just assigning these to object variables? Further, am I losing some performance to the constant "search" process that goes on as part of the findViewById(...) method? (i.e. does findViewById(...) use some sort of hashtable/hashmap for look-ups, or does it implement an iterative search over the view hierarchy?) At present, my program never uses more than 2.5MB of RAM, so will assigning 20 or so more object variables drastically affect this? I don't think so, but I figured I'd ask. Thanks.

    Read the article

  • java statistics collection for performance evaluation

    - by user384706
    What is the most efficient way to collect and report performance statistics from an application? Say I have an application that uses a series of network APIs, and I want to report statistics at runtime, e.g. method doA() was called 3 times and consumed 500 ms on average, method doB() was called 5 times and consumed 1200 ms on average, etc. I thought of using a well-defined data structure (or collection) that each thread updates per remote call, which can then be used for the report. But I think that will make performance worse, because of the time spent on statistics collection. Am I correct? How would I proceed if I used a background thread for this, so that the threads doing the remote calls were unaware of this collection gathering? Thanks
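
    The question is about Java, but as a hedged illustration of the underlying idea - keep the per-call cost down to a couple of atomic updates and do the reporting elsewhere - here is a minimal C# sketch; all names are made up. A background thread can call Report() periodically, so the threads doing the remote calls never block on reporting.

      using System;
      using System.Collections.Concurrent;
      using System.Threading;

      class CallStats
      {
          class Entry { public long Calls; public long TotalMs; }

          static readonly ConcurrentDictionary<string, Entry> Stats =
              new ConcurrentDictionary<string, Entry>();

          // Called by worker threads around each remote call; only atomic updates, no lock.
          public static void Record(string method, long elapsedMs)
          {
              Entry e = Stats.GetOrAdd(method, _ => new Entry());
              Interlocked.Increment(ref e.Calls);
              Interlocked.Add(ref e.TotalMs, elapsedMs);
          }

          // Called from a background/reporting thread.
          public static void Report()
          {
              foreach (var kv in Stats)
                  Console.WriteLine("{0}: {1} calls, avg {2} ms",
                      kv.Key, kv.Value.Calls, kv.Value.TotalMs / Math.Max(1, kv.Value.Calls));
          }
      }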

    Read the article

  • Best way to handle MySQL date for performance with thousands of users

    - by bitLost
    I am currently part of a team designing a site that will potentially have thousands of users who will be doing a number of date-related searches. During the design phase we have been trying to determine which makes more sense for performance optimization: should we store the datetime field as a MySQL datetime, or should we break it up into a number of fields (year, month, day, hour, minute, ...)? The question is, with a large data set and a potentially large set of users, would we gain performance-wise by breaking the datetime into multiple fields and saving on relying on MySQL date functions? Or is MySQL already optimized for this?

    Read the article

  • Performance considerations of a large hard-coded array in the .cs file

    - by terence
    I'm writing some code where performance is important. In one part of it, I have to compare a large set of pre-computed data against dynamic values. Currently, I'm storing that pre-computed data in a giant array in the .cs file: Data[] data = { /* my data set */ }; The data set is about 90kb, or roughly 13k elements. I was wondering if there's any downside to doing this, as opposed to loading it in from an external file? I'm not entirely sure how C# works internally, so I just wanted to be aware of any performance issues I might encounter with this method.
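
    Not part of the question, but if the inline initializer ever becomes a concern, here is a minimal sketch of the embedded-resource alternative (compile the data file into the assembly and read it once, lazily); the resource name "MyApp.data.bin" is hypothetical:

      using System.IO;
      using System.Reflection;

      static class DataStore
      {
          // Loaded once, on first access to the field.
          public static readonly byte[] Data = Load();

          static byte[] Load()
          {
              Assembly asm = Assembly.GetExecutingAssembly();
              using (Stream s = asm.GetManifestResourceStream("MyApp.data.bin"))
              using (MemoryStream ms = new MemoryStream())
              {
                  s.CopyTo(ms);   // .NET 4+; use a manual read loop on older frameworks
                  return ms.ToArray();
              }
          }
      }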

    Read the article

  • PHP include(): File size & performance

    - by Tom
    An inexperienced PHP question: I've got a PHP script file that I need to include on different pages, lots of times in lots of places. I have the option of either breaking the included file down into several smaller files and including these on an as-needed basis... OR... I could just keep it all together in a single PHP file. I'm wondering if there's any performance impact of using a larger vs. smaller file for include() in this context? For example, is there any performance difference between a 200KB file and a 20KB file? Thank you.

    Read the article

  • Recommend a free temperature-monitoring utility for cores + video card, on Vista?

    - by smci
    Looking for your recommendations for a free temperature-monitoring utility for my PC (Core 2) and graphics card on Vista. (Question reposted with the hyperlinks now I have 10 reputation.) I don't want all the geeky details, I don't overclock, and I don't see the need to mess with my fan speeds or motherboard settings; I just want something fairly basic to help with basic troubleshooting of intermittent overheats on the video card and/or mobo:
      - must run on Windows Vista (yes, don't laugh)
      - ideally displays temperature when minimized to the toolbar, and/or automatically alerts me when the temperature on either a core or the video card exceeds a threshold
      - ideally measures the temperature of the video card and system as well, not just the cores; HDD temperature is not necessary, I think
      - logging is nice, graphs are also nice
      - portability to Linux and Mac is nice
    Apparently Everest is the best paid option, but I'm not prepared to spend $40. I found the following free options, but no head-to-head at-a-glance comparison:
      - CoreTemp (only does cores, not the video card?)
      - Open Hardware Monitor (nice graphs, displays when minimized to the toolbar, no alerts)
      - RealTemp (has alerts, works minimized, lightweight install)
      - HWMonitor from CPUID (no alerts; CNET: "[free version is] simple but effective")
      - CPUCool (not free: 21-day trialware, then $18)
      - SpeedFan from Almico (too geeky, detail overload; CNET: "most users won't be able to make head or tail of the data this utility provides")
      - Motherboard Monitor (CNET: not recommended, requires expert knowledge of your mobo, dangerous)
      - Intel Thermal Analysis Tool (only does cores, not the video card? has logging)
    Useful discussions I found: hardwarecanucks.com, superuser.com 1, 2, forums.techarena.in
    (Update: I downloaded Real Temp 3.60 and it meets all my needs; the customizable alert temperature is great. Open Hardware Monitor seems to be the other one that mostly meets my needs, except no alerts; but it is portable. I tried SpeedFan but the interface is very cluttered, with too much unnecessary detail (it needs a Basic/Advanced mode and a revamp of the interface). The answer to my underlying issue is the nVidia GeForce LE 7500 video card, which runs very hot.)

    Read the article

  • Is there a monitoring software suite that will alert me if it has received no activity in a time period?

    - by matt b
    This might be a very basic question, but I am not very familiar with the exact features of Nagios versus Munin versus other monitoring tools. Let's say we have a process that needs to run daily for some very important infrastructure reasons. We've had cases where the process did not run or was otherwise down for a number of days before anyone noticed. I'd like to set up a system that will enable me to easily know when the daily run did not take place for some reason. I can set up this process to send an email on every successful run (or every failed run), but I do not trust that the people receiving this email would notice an absence of an "I'm OK" message. What I am envisioning is some type of "tripwire" service which this V.I.P. (very-important-process) can send a status message to each time it runs, whether successfully or not; and if the "tripwire" service has not received any word from the VIP within a configurable amount of time, it can then send an alert to someone. (The difference between what I envision and the first approach I outlined is a service that sends a message only in abnormal conditions, rather than a service that sends messages each day that the status is normal/OK). Can Nagios be set up to send an alert like this, if it has not heard from a certain service/device/process in N days? Is there another tool out there which does have this feature?
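
    For what it's worth, Nagios supports this for passive checks via check_freshness / freshness_threshold, which raises an alert when no result has been submitted within a configured window. As a hedged, minimal sketch of the "tripwire" idea itself (the path and the 25-hour window are made up): the important process touches a stamp file on every run, and a separately scheduled checker alerts if the stamp is stale.

      using System;
      using System.IO;

      class Tripwire
      {
          static void Main()
          {
              // The VIP writes/touches this file each time it runs (path is illustrative).
              var stamp = new FileInfo("/var/run/vip.lastrun");

              if (!stamp.Exists || DateTime.UtcNow - stamp.LastWriteTimeUtc > TimeSpan.FromHours(25))
              {
                  Console.Error.WriteLine("ALERT: daily process has not checked in within 25 hours");
                  // hook mail/pager notification in here
                  Environment.Exit(1);
              }
          }
      }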

    Read the article

  • What's the format of Real World Performance Day?

    - by william.hardie
    A question that has cropped up a lot of late is "what's the format of Real World Performance Day?" Not an unreasonable question, you might think. Sure enough, a quick check of the Independent Oracle User Group's website tells us a bit about the Real World Performance Day event, but no formal agenda? This was one of the questions I posed to Tom Kyte (one of the main presenters) in our recent podcast. Tom tells us that this isn't your traditional event where one speaker follows another with loads of slides. In fact, the Real World Performance Day features Tom and fellow Oracle performance experts - Andrew Holdsworth and Graham Wood - continuously on stage throughout the day. All three will be discussing database performance challenges and solutions from development, architectural design and management perspectives. There are going to be multi-terabyte demos on show, less of the traditional slides, and more interactive debate and discussion. Tune in and hear what else Tom has to say about this fairly unique event!

    Read the article

  • Analysing Group & Individual Member Performance -RUP

    - by user23871
    I am writing a report which requires the analysis of the performance of each individual team member. This is for a software development project developed using the Unified Process (UP). I was just wondering if there are any existing group and individual appraisal metrics in use, so I don't have to reinvent the wheel... EDIT This is by no means correct, but something like: Individual Contribution (IC) = time spent (individual) / time spent (total); Performance = ? (should use individual contribution (IC) combined with something to gain a measure of overall performance). Maybe I am talking complete hash, and I know it's generally really difficult to analyse performance with numbers, but are there any mathematicians out there who can lend a hand, or know a somewhat more accurate method of analysing performance than arbitrary marking (e.g. 8 out of 10)?

    Read the article

  • Strange performance behaviour for 64 bit modulo operation

    - by codymanix
    The last three of these method calls take approximately double the time of the first four. The only difference is that their arguments don't fit in an integer any more. But should this matter? The parameter is declared to be long, so it should use long for the calculation anyway. Does the modulo operation use another algorithm for numbers > maxint? I am using an AMD Athlon64 3200+, WinXP SP3 and VS2008.

      Stopwatch sw = new Stopwatch();
      TestLong(sw, int.MaxValue - 3l);
      TestLong(sw, int.MaxValue - 2l);
      TestLong(sw, int.MaxValue - 1l);
      TestLong(sw, int.MaxValue);
      TestLong(sw, int.MaxValue + 1l);
      TestLong(sw, int.MaxValue + 2l);
      TestLong(sw, int.MaxValue + 3l);
      Console.ReadLine();

      static void TestLong(Stopwatch sw, long num)
      {
          long n = 0;
          sw.Reset();
          sw.Start();
          for (long i = 3; i < 20000000; i++)
          {
              n += num % i;
          }
          sw.Stop();
          Console.WriteLine(sw.Elapsed);
      }

    EDIT: I now tried the same with C and the issue does not occur here; all modulo operations take the same time, in release and in debug mode, with and without optimizations turned on:

      #include "stdafx.h"
      #include "time.h"
      #include "limits.h"

      static void TestLong(long long num)
      {
          long long n = 0;
          clock_t t = clock();
          for (long long i = 3; i < 20000000LL * 100; i++)
          {
              n += num % i;
          }
          printf("%d - %lld\n", clock() - t, n);
      }

      int main()
      {
          printf("%i %i %i %i\n\n", sizeof(int), sizeof(long), sizeof(long long), sizeof(void*));
          TestLong(3);
          TestLong(10);
          TestLong(131);
          TestLong(INT_MAX - 1L);
          TestLong(UINT_MAX + 1LL);
          TestLong(INT_MAX + 1LL);
          TestLong(LLONG_MAX - 1LL);
          getchar();
          return 0;
      }

    EDIT2: Thanks for the great suggestions. I found that both .NET and C (in debug as well as in release mode) don't use a single CPU instruction to calculate the remainder; instead they call a function that does it. In the C program I could get the name of it, which is "_allrem". It also displayed full source comments for this file, so I found the information that this algorithm special-cases 32-bit divisors, instead of dividends, which was the case in the .NET application. I also found out that the performance of the C program really is only affected by the value of the divisor, not the dividend. Another test showed that the performance of the remainder function in the .NET program depends on both the dividend and the divisor. BTW: even simple additions of long long values are calculated by consecutive add and adc instructions. So even if my processor calls itself 64-bit, it really isn't :(

    EDIT3: I now ran the C app on a Windows 7 x64 edition, compiled with Visual Studio 2010. The funny thing is, the performance behaviour stays the same, although now (I checked the assembly source) true 64-bit instructions are used.

    Read the article

  • C# performance varying due to memory

    - by user1107474
    Hope this is a valid post here; it's a combination of C# issues and hardware. I am benchmarking our server because we have found problems with the performance of our quant library (written in C#). I have simulated the same performance issues with some simple C# code performing very heavy memory usage. The code below is in a function which is spawned from a threadpool, up to a maximum of 32 threads (because our server has 4 CPUs with 8 cores each). This is all on .NET 3.5. The problem is that we are getting wildly differing performance. I run the below function 1000 times. The average time taken for the code to run could be, say, 3.5s, but the fastest will only be 1.2s and the slowest will be 7s - for the exact same function! I have graphed the memory usage against the timings and there doesn't appear to be any correlation with the GC kicking in. One thing I did notice is that when running in a single thread the timings are identical and there is no wild deviation. I have also tested CPU-bound algorithms and the timings are identical too. This has made us wonder if the memory bus just cannot cope. I was wondering, could this be another .NET or C# problem, or is it something related to our hardware? Would this be the same experience if I had used C++ or Java? We are using 4x Intel X7550 with 32GB RAM. Is there any way around this problem in general?

      Stopwatch watch = new Stopwatch();
      watch.Start();
      List<byte> list1 = new List<byte>();
      List<byte> list2 = new List<byte>();
      List<byte> list3 = new List<byte>();
      int Size1 = 10000000;
      int Size2 = 2 * Size1;
      int Size3 = Size1;
      for (int i = 0; i < Size1; i++)
      {
          list1.Add(57);
      }
      for (int i = 0; i < Size2; i = i + 2)
      {
          list2.Add(56);
      }
      for (int i = 0; i < Size3; i++)
      {
          byte temp = list1.ElementAt(i);
          byte temp2 = list2.ElementAt(i);
          list3.Add(temp);
          list2[i] = temp;
          list1[i] = temp2;
      }
      watch.Stop();

    (The code is just meant to stress the memory.) I would include the threadpool code, but we used a non-standard threadpool library. EDIT: I have reduced "Size1" to 100000, which basically doesn't use much memory, and I still get a lot of jitter. This suggests it's not the amount of memory being transferred, but the frequency of memory grabs?

    Read the article

  • Solution to route/proxy SNMP Traps (or Netflow, generic UDP, etc) for network monitoring?

    - by Christopher Cashell
    I'm implementing a network monitoring solution for a very large network (approximately 5000 network devices). We'd like to have all devices on our network send SNMP traps to a single box (technically this will probably be an HA pair of boxes) and then have that box pass the SNMP traps on to the real processing boxes. This will allow us to have multiple back-end boxes handling traps, and to distribute load among those back-end boxes. One key feature that we need is the ability to forward the traps to a specific box depending on the source address of the trap. Any suggestions for the best way to handle this? Among the things we've considered are:
      - Using snmptrapd to accept the traps, and have it pass them off to a custom-written Perl handler script to rewrite the trap and send it to the proper processing box
      - Using some sort of load-balancing software running on a Linux box to handle this (we're having some difficulty finding many load-balancing programs that will handle UDP)
      - Using a load-balancing appliance (F5, etc.)
      - Using iptables on a Linux box to route the SNMP traps with NATing
    We've currently implemented and are testing the last solution, with a Linux box with iptables configured to receive the traps and then, depending on the source address of the trap, rewrite it with a destination NAT (DNAT) so the packet gets sent to the proper server. For example:

      # Range: 10.0.0.0/19   Site: abc01   Destination: foo01
      iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.0.0.0/19 -j DNAT --to-destination 10.1.2.3
      # Range: 10.0.33.0/21  Site: abc01   Destination: foo01
      iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.0.33.0/21 -j DNAT --to-destination 10.1.2.3
      # Range: 10.1.0.0/16   Site: xyz01   Destination: bar01
      iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.1.0.0/16 -j DNAT --to-destination 10.3.2.1

    This should work with excellent efficiency for basic trap routing, but it leaves us completely limited to what we can match and filter on with iptables, so we're concerned about flexibility for the future. Another feature that we'd really like, but isn't quite a "must have", is the ability to duplicate or mirror the UDP packets. Being able to take one incoming trap and route it to multiple destinations would be very useful. Has anyone tried any of the possible solutions above for SNMP traps (or Netflow, general UDP, etc.) load balancing? Or can anyone think of any other alternatives to solve this?

    Read the article

  • BizTalk Server Monitoring – SharePoint Web Part

    - by SURESH GIRIRAJAN
    I have worked with customers using BizTalk as shared infrastructure in the enterprise, where two or more BizTalk apps run on it for different business groups. These customers are not using the BizTalk ESB portal, even though they are using the BizTalk ESB exception framework. So the main issue for all these business groups is that they don't have visibility into the BizTalk apps running in production, even though they have SCOM and other monitoring in place. So I am trying to address a few issues and how I mitigate them; the first on the list is how to get visibility into production, how to provision that access to the BizTalk resources with minimal effort, and how we can take advantage of the resources we have today. I was working on creating REST data services for BizTalk RFID a year ago (available on CodePlex), and I thought to extend that idea to take advantage of the BizTalk Data Services available on CodePlex. I have extended the BizTalk data services and will upload the updated service soon. So let me walk through how my solution works. As a first step I am using the BizTalk data service (a REST service) which exposes most of the BizTalk artifacts as resources, such as applications, orchestrations, send ports, receive ports, host instances and in-process instances.

    BizTalk Server Monitoring – SharePoint Web Part

    I am hosting the BizTalk data service in IIS, with the application pool configured to run under BizTalk administrator credentials. With this setup I make the service accessible anonymously. As the next step of this solution I have created a SharePoint visual web part which consumes the BizTalk data service and displays all the BizTalk application and platform settings in read-only mode. The BizTalk data services also offer the ability to browse resources as well as perform actions like starting and stopping orchestrations, send ports, receive locations, host instances, etc.

    [Screenshots: Host Instances, BizTalk Applications, BizTalk Running / Suspended Instances]

    This BizTalk Monitoring SharePoint web part is then added to SharePoint. This eliminates the need to grant access to the BizTalk users explicitly: when a BizTalk contractor or a BizTalk application user needs access to the BizTalk environment, all they need is access to the SharePoint website. You can configure the web part to point to different endpoints based on your environment. I am making this read-only to keep it easier for the users and simpler in terms of provisioning. This removes the dependency on the BizTalk admin, at least for viewing the BizTalk application status, errors, etc. If we need to make any changes to a BizTalk application, it is the application owner's responsibility to coordinate with the BizTalk admins. There are options like the BizTalk ESB portal, BizTalk 360, etc., but this is one approach to reduce the number of steps required to give access to BizTalk application users and also to maximize the resources we have in the enterprise today. You can also expose this data service through the Azure Service Bus and access it from other apps like mobile devices, or create a web site hosted in Azure, etc. One last thing: I have tested this only with BizTalk Server 2010 on an x64 VM, but it should work on other versions. I will try to upload the code shortly with instructions on how to set it up... I welcome thoughts and suggestions... Hope this helps...
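
    As a rough illustration of the read-only consumption described above (not the actual code from the post - the URL, resource path and class name are placeholders), a web part can simply issue an HTTP GET against the data service and render the result:

      using System.IO;
      using System.Net;
      using System.Web.UI;
      using System.Web.UI.WebControls.WebParts;

      // Placeholder names throughout; the real solution parses the feed and binds it to grids.
      public class BizTalkMonitorPart : WebPart
      {
          protected override void RenderContents(HtmlTextWriter writer)
          {
              var request = (HttpWebRequest)WebRequest.Create(
                  "http://bts-monitor/BizTalkDataServices/Applications");

              using (var response = (HttpWebResponse)request.GetResponse())
              using (var reader = new StreamReader(response.GetResponseStream()))
              {
                  // Read-only: just fetch and display, no start/stop actions exposed.
                  writer.WriteEncodedText(reader.ReadToEnd());
              }
          }
      }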

    Read the article

  • AS 400 Performance from .Net iSeries Provider

    - by Nathan
    Hey all, First off, I am not an AS/400 guy - at all. So please forgive me for asking any noobish questions here. Basically, I am working on a .NET application that needs to access the AS/400 for some real-time data. Although I have the system working, I am getting very different performance results between queries. Typically, when I make the first request against a SPROC on the AS/400, I am seeing ~14 seconds to get the full data set. After that initial call, any subsequent calls usually only take ~1 second to return. This performance improvement remains for ~20 mins or so before it takes 14 seconds again. The interesting part is that, if the stored procedure is executed directly in iSeries Navigator, it always returns within milliseconds (no change in response time). I wonder if it is a caching / execution plan issue, but I can only apply my SQL Server logic to the AS/400, which is not always a match. Any suggestions on what I can do to receive a more consistent response time, or simply insight as to why the AS/400 is acting in this manner when I am using the iSeries Data Provider for .NET? Is there a better access method that I should use? Just in case, here's the code I am using to connect to the AS/400:

      Dim Conn As New IBM.Data.DB2.iSeries.iDB2Connection(ConnectionString)
      Dim Cmd As New IBM.Data.DB2.iSeries.iDB2Command("SPROC_NAME_HERE", Conn)
      Cmd.CommandType = CommandType.StoredProcedure
      Using Conn
          Conn.Open()
          Dim Reader = Cmd.ExecuteReader()
          Using Reader
              While Reader.Read()
                  'Do Something
              End While
              Reader.Close()
          End Using
          Conn.Close()
      End Using

    Read the article

  • MySQL performance - 100Mb ethernet vs 1Gb ethernet

    - by Rob Penridge
    Hi All, I've just started a new job and noticed that the analysts' computers are connected to the network at 100 Mbps. The ODBC queries we run against the MySQL server can easily return 500MB+, and it seems that at times when the servers are under high load the DBAs kill low-priority jobs as they are taking too long to run. My question is this... How much of this server time is spent executing the request, and how much time is spent returning the data to the client? Could the query speeds be improved by upgrading the network connections to 1Gbps? (Updated for the why): The database in question was built to accommodate reporting needs and contains massive amounts of data. We usually work with subsets of this data at a granular level in external applications such as SAS or Excel, hence the large amounts of data being transmitted. The queries are not poorly structured - they are very simple and the appropriate joins/indexes etc. are being used. I've removed 'query' from the title of the post as I realised this question is more to do with general MySQL performance than query-related performance. I was kind of hoping that someone with a Gigabit connection might be able to quantify some results for me here by running a query that returns a decent amount of data, then limiting their connection speed to 100Mb and rerunning the same query. Hopefully this could be done in an environment where loads are reasonably stable so as not to skew the results. If ethernet speed can improve the situation I wanted some quantifiable results to help argue my case for upgrading the network connections. Thanks Rob
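
    As a rough back-of-the-envelope check (my own numbers, ignoring protocol overhead and query/disk time): a 500 MB result set is about 4000 megabits, so the wire time alone is roughly 4000 / 100 ≈ 40 seconds on 100 Mbps Ethernet versus roughly 4 seconds at 1 Gbps. If the server spends most of its time executing the query rather than streaming the result, the upgrade buys much less than that 10x difference.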

    Read the article

  • iPhone openGLES performance tuning

    - by genesys
    Hey there! I've been trying for quite a while now to optimize the framerate of my game without really making progress. I'm running on the newest iPhone SDK and have an iPhone 3G 3.1.2 device. I invoke around 150 draw calls, rendering about 1900 triangles in total (all objects are textured using two texture layers and multitexturing; most textures come from the same textureAtlasTexture, stored as a pvrtc 2bpp compressed texture). This renders on my phone at around 30 fps, which appears to me to be way too low for only 1900 triangles. I have tried many things to optimize the performance, including batching the objects together, transforming the vertices on the CPU and rendering them in a single draw call. This yields 8 draw calls (as opposed to 150 draw calls), but performance is about the same (fps drops to around 26 fps). I'm using 32-byte vertices stored in an interleaved array (12 bytes position, 12 bytes normals, 8 bytes uv). I'm rendering triangle lists and the vertices are ordered in tri-strip order. I did some profiling but I don't really know how to interpret it.
    Instruments sampling: using Instruments and Sampling yields this result: http://neo.cycovery.com/instruments_sampling.gif telling me that a lot of time is spent in "mach_msg_trap". I googled for it and it seems this function is called in order to wait for some other things. But wait for what??
    Instruments OpenGL: Instruments with the OpenGL module yields this result: http://neo.cycovery.com/intstruments_openglES_debug.gif but here I really have no idea what those numbers are telling me.
    Shark profiling: profiling with Shark didn't tell me much either: http://neo.cycovery.com/shark_profile_release.gif the largest number is 10%, spent by DrawTriangles - and the whole rest is spent in very small percentage functions.
    Can anyone tell me what else I could do in order to figure out the bottleneck, and help me interpret this profiling information? Thanks a lot!
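
    The question's code is C/Objective-C, but just to make the 32-byte interleaved layout above concrete, here is a sketch of the equivalent vertex struct (written in C# only for consistency with the other examples on this page; the field layout is what matters):

      using System;
      using System.Runtime.InteropServices;

      // One interleaved vertex: position (12 bytes) + normal (12 bytes) + UV (8 bytes) = 32 bytes.
      [StructLayout(LayoutKind.Sequential, Pack = 1)]
      struct Vertex
      {
          public float PosX, PosY, PosZ;     // 12 bytes
          public float NormX, NormY, NormZ;  // 12 bytes
          public float U, V;                 //  8 bytes
      }

      class LayoutCheck
      {
          static void Main()
          {
              // A single array of Vertex keeps position/normal/UV adjacent in memory,
              // which is what submitting "interleaved vertices" means for the GPU.
              Console.WriteLine(Marshal.SizeOf(typeof(Vertex)));   // prints 32
          }
      }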

    Read the article

  • Java performance problem with LinkedBlockingQueue

    - by lofthouses
    Hello, this is my first post on stackoverflow... I hope someone can help me. I have a big performance regression with the Java 6 LinkedBlockingQueue. In the first thread I generate some objects which I push into the queue; in the second thread I pull these objects out. The performance regression occurs when the take() method of the LinkedBlockingQueue is called frequently. I monitored the whole program and the take() method claimed the most time overall, and the throughput goes from ~58Mb/s to 0.9Mb/s... The queue's put and take methods are called via static methods on this class:

      import java.util.concurrent.LinkedBlockingQueue;

      public class C_myMessageQueue {

          private static final LinkedBlockingQueue<C_myMessageObject> x_queue =
                  new LinkedBlockingQueue<C_myMessageObject>( 50000 );

          /**
           * @param message
           * @throws InterruptedException
           * @throws NullPointerException
           */
          public static void addMyMessage( C_myMessageObject message )
                  throws InterruptedException, NullPointerException {
              x_queue.put( message );
          }

          /**
           * @return The first message of the message queue
           * @throws InterruptedException
           */
          public static C_myMessageObject getMyMessage() throws InterruptedException {
              return x_queue.take();
          }
      }

    How can I tune the take() method to achieve at least 25Mb/s, or is there another class I can use which will block when the "queue" is full or empty? Kind regards, Bart. P.S.: Sorry for my bad English, I'm from Germany ;)
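
    Not from the question: one common way to cut the per-item blocking cost is to drain the queue in batches - in Java, LinkedBlockingQueue.drainTo(collection, max) does this directly after a single take(). A minimal C# sketch of the same idea, with made-up names:

      using System.Collections.Concurrent;
      using System.Collections.Generic;

      static class QueueBatching
      {
          public static List<T> TakeBatch<T>(BlockingCollection<T> queue, int max)
          {
              // Block for the first item only...
              var batch = new List<T> { queue.Take() };

              // ...then drain whatever else is already available without blocking.
              T item;
              while (batch.Count < max && queue.TryTake(out item))
                  batch.Add(item);

              return batch;
          }
      }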

    Read the article

  • Reporting System architecture for better performance

    - by pauloya
    Hi, We have a product that runs SQL Server Express 2005 and uses mainly ASP.NET. The database has around 200 tables, with a few (4 or 5) that can grow from 300 to 5000 rows per day and keep a history of 5 years, so they can grow to 10 million rows. We have built a reporting platform that allows customers to build reports based on templates, fields and filters. We have faced performance problems almost since the beginning; we try to keep report display times under 10 seconds, but some of them go up to 25 seconds (especially for customers with a long history). We keep checking indexes and trying to improve the queries, but we get the feeling that there's only so much we can do. Of course, the fact that the queries are generated dynamically doesn't help with the optimization. We also added a few tables that keep redundant data, but then we have the added problem of keeping this data up to date, and SQL Express also has a limit on database size. We are now at a point where we have to decide if we want to give up real-time reports, or maybe cut the history to get better performance. I would like to ask what the recommended approach is for this kind of system. Also, should we start looking at third-party tools/platforms? I know OLAP can be an option, but can we make it work on SQL Server Express, or at least with a license that is cheap enough to distribute to thousands of deployments? Thanks

    Read the article
