Search Results

Search found 13608 results on 545 pages for 'performance dashboard'.

Page 442/545

  • Class.Class vs Namespace.Class for top level general use class libraries?

    - by Joan Venge
    Which one is more acceptable (best practice)?

        // Option 1: a namespace containing top-level static classes
        namespace NP
        {
            public static class IO { ... }
            public static class Xml { ... }
            // extension methods
        }
        // usage:
        using NP;
        IO.GetAvailableResources();

        // Option 2: nested static classes inside one top-level static class
        public static class NP
        {
            public static class IO { ... }
            public static class Xml { ... }
            // extension methods
        }
        // usage:
        NP.IO.GetAvailableResources();

    For option 2, file size stays manageable through partial classes, so each nested class can live in its own file; the same goes for the extension methods (except that there is no nested class for them). I prefer option 2, for reasons such as being able to use type names that are already in common use, like IO, without having to rename them or risk collisions. Which one do you prefer? What are the pros and cons of each, and what is the best practice for this case? EDIT: Also, would there be a performance difference between the two?

    Read the article

  • Does HTML5 only replace the video aspects of Flash/Silverlight?

    - by John
    I see a lot of talk about how the HTML5 video tag will kill Flash. But while video is the most widely used part of Flash/Silverlight, it is only a small part of their technical capabilities. For instance, you can write a game using full 3D graphics and socket connections in Flex, serious business applications, and so on. Is the thinking that JavaScript will kill those parts of Flash/Flex/Silverlight too? That seems feasible now even for quite rich web apps, but what about high-performance applications like real-time graphics?

    Read the article

  • Which are the current/emerging desktop development technologies worth looking into?

    - by heeboir
    Greetings. With all the current momentum behind web development and the emerging technologies in that area, I'm left wondering: what is a state-of-the-art way to implement desktop applications in this day and age? If you were to start a new application of considerable size from scratch, what technology would you invest your efforts in (focusing on cross-platform portability, decent performance and interoperability with existing standards)? I've looked into the Adobe AIR platform, which appears quite impressive but seems rather too limited to support a large application. Would something like Java/SWT still be the sensible choice? Do things like GWT fit the bill? Thanks. P.S. I'm leaving my question a bit open-ended in an effort to gather diverse answers. Surely this is a subjective matter and there is no right or wrong answer.

    Read the article

  • Using pthread condition variable with rwlock

    - by Doomsday
    Hello, I'm looking for a way to use the pthread rwlock structure together with the condition-variable routines in C++. I have two questions. First: how can this be done, and if it can't, why not? Second: why doesn't the current POSIX pthread API implement this behaviour? To explain my purpose: I have a producer-consumer model working on one shared array. The consumer should cond_wait when the array is empty, but take a rdlock when reading some elements. The producer should take a wrlock when adding (+signal) or removing elements from the array. The benefit of using rdlock instead of mutex_lock is performance: with mutex_lock several readers would block one another, whereas with rdlock they would not.
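
    For reference, pthread_cond_wait() is specified to take a pthread_mutex_t, so a rwlock cannot be handed to it directly; a minimal sketch of the conventional mutex-plus-condition producer/consumer pairing (all names here are illustrative, not from the question) looks like this:

        #include <pthread.h>

        static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
        static int item_count = 0;                    /* elements currently in the shared array */

        void *consumer(void *arg)
        {
            for (;;) {
                pthread_mutex_lock(&lock);
                while (item_count == 0)               /* re-check: spurious wakeups are allowed */
                    pthread_cond_wait(&not_empty, &lock);
                /* ... read elements while holding the mutex ... */
                pthread_mutex_unlock(&lock);
            }
            return NULL;
        }

        void producer_add(void)
        {
            pthread_mutex_lock(&lock);
            /* ... append an element to the array ... */
            item_count++;
            pthread_cond_signal(&not_empty);          /* wake one waiting consumer */
            pthread_mutex_unlock(&lock);
        }

    Getting reader-reader concurrency on top of this (the rwlock semantics) is exactly the part POSIX does not provide out of the box, which is what the question is about.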

    Read the article

  • Forced to use too many hidden fields; looking for an alternative approach

    - by harisri786
    I am looking for a better approach to this. I have around 70 to 80 hidden fields on my page. These hidden fields are initialized on the server side and then used on the client side for validations, calculations, etc., via JavaScript. I would like to know whether there is an alternative to using hidden fields in ASP.NET. I suspect that this many hidden fields increase the page size and therefore hurt the performance of my web page, and I want to do away with them. FYI: I am working on an ASP.NET web application.
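
    One commonly suggested alternative, sketched below with made-up names, is to serialize all of the values into a single hidden field (or emit them as one JavaScript object) so the page carries one blob instead of 70-80 separate inputs; the client then parses it once with JSON.parse before running its validations:

        // Server-side sketch (Page_Load or similar); hfPageSettings is a hypothetical
        // asp:HiddenField control, and the property names/values are placeholders.
        var settings = new { maxAmount = 1000m, vatRate = 0.2m, currency = "USD" };
        hfPageSettings.Value =
            new System.Web.Script.Serialization.JavaScriptSerializer().Serialize(settings);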

    Read the article

  • Capture IP packets on Dialup connection - Windows 7

    - by Assaf Levy
    Our product utilizes (the wonderful) WinPcap to capture IP packets from all devices with an IP address and analyze them in real time. Unfortunately, we discovered that it does NOT capture any packets on dialup (e.g. PPP) connections on Windows 7, and that there are no near-term plans for enabling this (1). So we need something else. Microsoft Network Monitor and Windows Packet Filter are two options that surfaced during a bit of googling, but before delving into research I wanted to ask the experienced: what are our options, given the following requirements?

        - Capture all in/outbound IP packets on the machine.
        - Complete background processing - no UI should be involved.
        - Support for Windows Vista / 7.
        - Performance (the user should not feel the difference).

    Thanks in advance.

    Read the article

  • Restricting deletion with NHibernate

    - by FrontSvin
    I'm using NHibernate (Fluent) to access an old third-party database with a bunch of tables that are not related in any explicit way. That is, the child tables do have parentID columns which contain the primary key of the parent table, but there are no foreign key constraints enforcing these relationships. Ideally I would like to add some foreign keys, but I cannot touch the database schema. My application works fine, but I would really like to impose a referential-integrity rule that prohibits deletion of parent objects if they have children, i.e. something similar to 'ON DELETE RESTRICT' but maintained by NHibernate. Any ideas on how to approach this would be appreciated. Should I look into the OnDelete() method of the IInterceptor interface, or are there other ways to solve this? Of course any solution will come with a performance penalty, but I can live with that.
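
    For what it's worth, a rough sketch of the interceptor route mentioned above might look like the following; this is an assumption-laden outline rather than a tested solution - Parent and the ChildLookup.CountChildrenOf() call are hypothetical, and the check itself costs an extra query per delete:

        using System;
        using NHibernate;
        using NHibernate.Type;

        public class RestrictDeleteInterceptor : EmptyInterceptor
        {
            public override void OnDelete(object entity, object id, object[] state,
                                          string[] propertyNames, IType[] types)
            {
                // Veto the delete by throwing before NHibernate issues the DELETE statement.
                if (entity is Parent && ChildLookup.CountChildrenOf(id) > 0)
                    throw new InvalidOperationException(
                        "Cannot delete a parent row that still has child rows.");
            }
        }

        // Sessions would then be opened with the interceptor attached, e.g.:
        // var session = sessionFactory.OpenSession(new RestrictDeleteInterceptor());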

    Read the article

  • Looking for a source code management system with a good GUI client

    - by Anders Öhrt
    We are currently using CS-RCS Pro for source code management and are looking to replace it due to performance issues. It is based on client-side file access with no protocol of its own, which makes it painfully slow over a slow VPN line since it always rewrites the whole history of a file. It does, however, have a GUI client which is very simple and gives a great overview. We have three main requirements for an SCM:

        - Fast: it must have a server-side service, or some other smart approach, so that working with files with a large history is fast.
        - A good Windows GUI client (not Explorer shell integration, not VS or Eclipse IDE integration), so that working with files and branches is easy.
        - The possibility to have several branches checked out at once in different directories.

    Does anyone have a recommendation for an SCM which fulfils these requirements?

    Read the article

  • Django: common template subsections

    - by Parand
    What's a good way to handle commonly occurring subsections of templates? For example, there is a sub-header section that's used across 4 different pages. The pages are different enough that template inheritance doesn't work well (i.e. "extends" doesn't fit). Is "include" the recommended method here? It feels a bit heavyweight, requiring each subsection or snippet to be in its own file. Are there any performance issues in using include, or is it smart about assembling the template from the subsections (i.e. if I make extensive use of it, do I pay any penalties)? I think what I'm looking for is something like template tags, but without the programming - a simple way to create a library of HTML template tags I can sprinkle into other templates.
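
    For reference, the built-in {% include %} tag the question refers to is used like this (the file and variable names below are made up):

        {# page_one.html #}
        {% include "partials/subheader.html" %}

        {# page_two.html - newer Django versions also allow passing extra context #}
        {% include "partials/subheader.html" with section="reports" %}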

    Read the article

  • Oracle spatial search within distance

    - by KA_lin
    I have the following table, Cities: ID (int), City (char), latitude (float), longitude (float). Now, based on a user's longitude (ex: 44.8) and latitude (ex: 46.3), I want to search for all the cities within 100 miles/km of him. I have found some examples but don't know how to adapt them to my case:

        SELECT *
        FROM   GEO.Cities a
        WHERE  SDO_WITHIN_DISTANCE([I don't know],
                   MDSYS.SDO_GEOMETRY(2001, 8307, MDSYS.SDO_POINT_TYPE(44.8, 46.3, NULL), NULL, NULL),
                   'distance = 1000') = 'TRUE';

    Any help would be appreciated. P.S.: If possible, I'd also like the distance returned and the results sorted by it. P.P.S.: I want to do it this way for performance reasons; I have already done it the way described at http://www.scribd.com/doc/2569355/Geo-Distance-Search-with-MySQL but it takes too long...
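
    As a hedged sketch of what the "[I don't know]" argument usually is: the first parameter of SDO_WITHIN_DISTANCE is normally the spatially indexed SDO_GEOMETRY column of the table being searched. Assuming a geom column were added to Cities (populated from longitude/latitude and registered with a spatial index), the query and a distance-sorted variant might look roughly like this:

        SELECT c.ID, c.City,
               SDO_GEOM.SDO_DISTANCE(c.geom,
                   MDSYS.SDO_GEOMETRY(2001, 8307, MDSYS.SDO_POINT_TYPE(44.8, 46.3, NULL), NULL, NULL),
                   0.005, 'unit=MILE') AS dist_miles
        FROM   GEO.Cities c
        WHERE  SDO_WITHIN_DISTANCE(c.geom,
                   MDSYS.SDO_GEOMETRY(2001, 8307, MDSYS.SDO_POINT_TYPE(44.8, 46.3, NULL), NULL, NULL),
                   'distance=100 unit=MILE') = 'TRUE'
        ORDER BY dist_miles;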

    Read the article

  • The case of the mysterious MySQL caching across restarts

    - by shanusmagnus
    I found a very slow MySQL query in my web app. The weird thing is that the query is only slow the first time it's executed, despite the fact that the query cache is set to its default (query_cache_size = 0), like so:

        mysql> show variables like 'query%';
        +------------------------------+---------+
        | Variable_name                | Value   |
        +------------------------------+---------+
        | query_alloc_block_size       | 8192    |
        | query_cache_limit            | 1048576 |
        | query_cache_min_res_unit     | 4096    |
        | query_cache_size             | 0       |
        | query_cache_type             | ON      |
        | query_cache_wlock_invalidate | OFF     |
        | query_prealloc_size          | 8192    |
        +------------------------------+---------+

    The even weirder thing is that this speedup persists even after the MySQL server has been stopped and restarted (I'm using OS X and perform this restart from the System Preferences pane). The only way I can re-create the poor performance of the initial query is by rebooting the system. So my question is: how is this happening? Obviously some sort of caching is at work, but where? And how does it persist across database restarts? This query is mediated through our web app, which runs via PHP/Apache, but there are no extra bells and whistles, and the curious caching also persists across Apache restarts. Help?

    Read the article

  • Objective-C style question: do "release" or "nil" properties in dealloc?

    - by Piotr Czapla
    Hi, Apple usually releases ivars in dealloc, but is there anything wrong with nilling the properties there instead? I mean, instead of this:

        - (void)dealloc {
            [myRetainedProperty release];
            [super dealloc];
        }

    write code like this:

        - (void)dealloc {
            self.myRetainedProperty = nil;
            [super dealloc];
        }

    I know it is one additional method call, but on the other hand it is safer, since it doesn't crash when you change your property from retain to assign and forget to amend dealloc. What do you think? Can you think of any other reason to use release instead of setting nil, besides performance?

    Read the article

  • Take advantage of multiple cores executing SQL statements

    - by willvv
    I have a small application that reads XML files and inserts the information into a SQL database. There are ~300,000 files to import, each one with ~1,000 records. I started the application on 20% of the files and it has been running for 18 hours now; I hope I can improve this time for the rest of the files. I'm not using a multi-threaded approach, but since the computer I'm running the process on has 4 cores I was thinking of doing so to get some performance improvement (although I guess the main problem is the I/O and not just the processing). I was thinking of using the BeginExecuteNonQuery() method on the SqlCommand object I create for each insertion, but I don't know if I should limit the maximum number of simultaneous threads (nor do I know how to do it). What's your advice for getting the best CPU utilization? Thanks
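
    A rough sketch of one way to cap the number of in-flight asynchronous inserts follows; the connection string, SQL text and the cap of 4 are placeholders, error handling is omitted, and on older .NET versions the connection string needs "Asynchronous Processing=true" for BeginExecuteNonQuery to work:

        using System.Data.SqlClient;
        using System.Threading;

        static readonly Semaphore InFlight = new Semaphore(4, 4);   // at most 4 concurrent commands

        static void InsertAsync(string connectionString, string insertSql)
        {
            InFlight.WaitOne();                       // block the reader thread while 4 inserts are pending
            var conn = new SqlConnection(connectionString);
            conn.Open();
            var cmd = new SqlCommand(insertSql, conn);
            cmd.BeginExecuteNonQuery(ar =>
            {
                try { cmd.EndExecuteNonQuery(ar); }
                finally { cmd.Dispose(); conn.Dispose(); InFlight.Release(); }
            }, null);
        }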

    Read the article

  • C# asp.net MVC: When to update LastActivityDate?

    - by Oskar Kjellin
    I'm using ASP.NET MVC to create a public website. I need to keep track of which users are online. I see that the standard way of doing this in ASP.NET is to keep track of a LastActivityDate. My question is: when should I update it? If I update it every time the user clicks somewhere, I will take a performance hit. However, if I don't do that, people who only surf around will be listed as offline. What is the best way to do this in ASP.NET MVC?
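
    One common compromise, sketched below, is to write the timestamp only when the stored value is older than some window, so a burst of clicks costs at most one update per window; the helper name and the 5-minute window are arbitrary choices, not an established API:

        using System;
        using System.Web.Security;

        public static class ActivityTracker
        {
            // Call this from a global action filter (or Application_AuthenticateRequest).
            public static void Touch(MembershipUser user)
            {
                if (user == null)
                    return;

                // Only hit the database when the stored timestamp is stale.
                if (DateTime.UtcNow - user.LastActivityDate > TimeSpan.FromMinutes(5))
                {
                    user.LastActivityDate = DateTime.UtcNow;
                    Membership.UpdateUser(user);       // persists the new activity timestamp
                }
            }
        }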

    Read the article

  • SQL Server, fetching data from multiple joined tables. Why is it slow?

    - by user562192
    I have a performance problem when retrieving data from SQL Server. My SQL query looks something like this:

        SELECT table_1.id, table_1.value, table_2.id, table_2.value, ..., table_20.id, table_20.value
        FROM   table_1
               INNER JOIN table_2 ON table_1.id = table_2.table_1_id
               INNER JOIN table_3 ON table_2.id = table_3.table_2_id
               ...
        WHERE  table_1.row_number BETWEEN 1 AND 20

    So I am fetching 20 results, and this query takes about 5 seconds to execute. When I select only table_1.id, it returns results instantly. Because of that, I guess the problem is not in the JOINs but in retrieving the data from multiple tables. Any suggestions on how I could speed up this query?

    Read the article

  • Go, AppEngine: How to structure templates for application

    - by laslowh
    How are people handling the use of templates in their Go-based AppEngine applications? Specifically, I'm looking for a project structure that affords the following:

        - Hierarchical (directory) structure of templates and partial templates
        - Allow me to use HTML tools/editors on my templates (embedding template text in xxx.go files makes this difficult)
        - Automatic reload of template text when on the dev server

    Potential stumbling blocks are:

        - template.ParseGlob() will not traverse recursively.
        - For performance reasons it has been recommended not to upload your templates as raw text files (because those text files reside on different servers than the executing code).

    Please note that I am not looking for a tutorial/examples of the use of the template package. This is more of an app structure question. That being said, if you have code that solves the above problems, I would love to see it. Thanks in advance.
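
    On the first stumbling block, one hedged workaround (a sketch under the assumption that templates live under a templates/ directory and use an .html suffix) is to walk the tree yourself and feed each file into a single template set:

        package templates

        import (
            "html/template"
            "os"
            "path/filepath"
            "strings"
        )

        // LoadAll walks root (e.g. "templates/") and parses every .html file it finds
        // into one template set, so partials in subdirectories can still be looked up by name.
        // Note: files sharing a base name would overwrite each other in this scheme.
        func LoadAll(root string) (*template.Template, error) {
            t := template.New("root")
            err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
                if err != nil || info.IsDir() || !strings.HasSuffix(path, ".html") {
                    return err
                }
                _, err = t.ParseFiles(path)
                return err
            })
            return t, err
        }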

    Read the article

  • Can I replicate some of the optimisations done by the JVM by hand?

    - by Subb
    I'm working on a Sudoku solver at school and we're having a little performance contest. Right now my algorithm is pretty fast on the first run (about 2.5 ms), but even faster when I solve the same puzzle 10,000 times (about 0.5 ms per run). Those timings, of course, depend on the puzzle being solved. I know the JVM does some optimization when a method is called many times, and this is what I suspect is happening. I don't think I can further optimize the algorithm itself (though I'll keep looking), so I was wondering if I could replicate some of the optimizations done by the JVM by hand. Note: compiling to native code is not an option. Thanks!
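
    If the contest allows code to run before the timed solve, the effect described above can be reproduced deliberately with a warm-up phase inside the same JVM; a sketch (solveOnce stands in for the actual solver call, and 5000 iterations is an arbitrary choice):

        // Warm the JIT up on the same code path, then time a single run.
        static void benchmark(Runnable solveOnce) {
            for (int i = 0; i < 5000; i++) {
                solveOnce.run();                       // warm-up: lets the JIT compile the hot path
            }
            long start = System.nanoTime();
            solveOnce.run();                           // the run that is actually reported
            System.out.printf("%.3f ms%n", (System.nanoTime() - start) / 1e6);
        }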

    Read the article

  • C#. Struct design. Why is 16 bytes the recommended size?

    - by maxima120
    I read the Cwalina book (recommendations on the development and design of .NET applications). He says that a well-designed struct should be less than 16 bytes in size (for performance reasons). My question is: why exactly is this? And (more importantly) can I have a larger struct with the same efficiency if I run my .NET 3.5 (soon to be .NET 4.0) 64-bit application on an i7 under Win7 x64 (i.e. is this limitation CPU/OS based)? Just to stress it again: I need the struct to be as efficient as possible. I try to keep it on the stack all the time; the application is heavily multi-threaded and runs on sub-millisecond intervals, and the current size of the struct is 64 bytes.

    Read the article

  • Worried about spiders repeatedly hitting high-demand page

    - by Matt Thrower
    Due to some rather bizarre architectural considerations, I've had to set up as a web page something that really ought to run as a console application. It does the job of writing a large variety of text files and XML feeds from our site data for various other services to pick up, so obviously it takes a little while to run and is pretty processor intensive. However, before I deploy it I'm rather worried that it might get hit repeatedly by spiders and the like. It's fine for the data to be re-written, but continual hits on this page are going to cause performance issues for obvious reasons. Is this something I ought to worry about? Or in reality is spider traffic unlikely to be intensive enough to cause problems?
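
    If it does get deployed as a page, one cheap mitigation (honoured only by well-behaved crawlers) is to disallow it in robots.txt; the path below is made up for illustration:

        User-agent: *
        Disallow: /internal/rebuild-feeds.aspx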

    Read the article

  • Making simple tabs in android

    - by user2910566
    I am new to Android. I am making a tab activity that has 3 tabs in it. I came across some interesting articles saying that tabs can be built in three ways:

        - A regular TabHost
        - Simple Fragments
        - ActionBarSherlock

    I have a set of questions:

        - Which is the better choice, and why?
        - Which gives more flexibility, efficiency and performance?
        - Which would be the preferred choice if requirements change in the future?

    My research indicates that ActionBarSherlock is better! Is there something better than this? If so, what is it?

    Read the article

  • Stopwatch vs. using System.DateTime.Now for timing events

    - by Randy Minder
    I wanted to track the performance of a piece of my application so I initially stored the start time using System.DateTime.Now and the end time also using System.DateTime.Now. The difference between the two was how long my code took to execute. I noticed though that the difference didn't appear to be accurate. So I tried using a Stopwatch object. This turned out to be much, much more accurate. Can anyone tell me why Stopwatch would be more accurate than calculating the difference between a start and end time using System.DateTime.Now? Thanks.
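
    For reference, the Stopwatch pattern being compared against is roughly the following sketch (doWork is a placeholder for the code under measurement):

        using System;
        using System.Diagnostics;

        static void Measure(Action doWork)
        {
            var sw = Stopwatch.StartNew();    // wraps the high-resolution performance counter when available
            doWork();                          // the code being timed
            sw.Stop();
            Console.WriteLine("Elapsed: {0:F3} ms", sw.Elapsed.TotalMilliseconds);
        }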

    Read the article

  • c++ Sorting a vector based on values of other vector, or what's faster?

    - by pollux
    Hi, there are a couple of other posts about sorting a vector A based on values in another vector B. Most of the other answers say to create a struct or a class to combine the values into one object and use std::sort. I'm curious about the performance of such solutions, though, as I need to optimize code which currently implements bubble sort to sort these two vectors. I'm thinking of using a vector<pair<int,int>> and sorting that. I'm working on a blob-tracking application (image analysis) where I try to match previously tracked blobs against newly detected blobs in video frames, checking each frame against a couple of previously tracked frames and, of course, the blobs I found in those frames. I'm doing this 60 times per second (the speed of my webcam). Any advice on optimizing this is appreciated. The code I'm trying to optimize is here: http://code.google.com/p/projectknave/source/browse/trunk/knaveAddons/ofxBlobTracker/ofCvBlobTracker.cpp?spec=svn313&r=313 Thanks
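
    For comparison, the vector<pair<int,int>> idea boils down to something like this sketch (function and variable names are illustrative; it assumes a and b have the same length):

        #include <algorithm>
        #include <utility>
        #include <vector>

        // Sort the values of `a` by the matching keys in `b`, in O(n log n).
        std::vector<int> sortByOther(const std::vector<int>& a, const std::vector<int>& b)
        {
            std::vector<std::pair<int,int> > order(a.size());
            for (std::size_t i = 0; i < a.size(); ++i)
                order[i] = std::make_pair(b[i], a[i]);      // key first, payload second
            std::sort(order.begin(), order.end());          // sorts by b, ties broken by a

            std::vector<int> sorted(a.size());
            for (std::size_t i = 0; i < a.size(); ++i)
                sorted[i] = order[i].second;
            return sorted;
        }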

    Read the article

  • How to handle a huge table?

    - by misha-moroshko
    I would like to display to the user a table which contains ~500,000 rows. The table is calculated from a file (i.e. the table is known at the beginning and does not change). I guess that building the HTML text of the whole table is not a good idea in terms of memory and performance. So how can I build such a table? I would like the user to be able to scroll it with a vertical scroll bar. Is it possible to build only the visible part of the table on the fly? I'm afraid of the delays this might cause. Would it be a better idea to use server-side programming rather than JavaScript? Any advice would be appreciated.

    Read the article

  • When is BIG, big enough for a database?

    - by David ???
    I'm developing a Java application that has performance at its core. I have a list of some 40,000 "final" objects, i.e. initialization input data of 40,000 vectors. This data is unchanged throughout the program's run. I always perform lookups against a single ID property to retrieve the proper vectors. Currently I am using a HashMap over a sub-sample of 1,000 vectors, but I'm not sure it will scale to production. When is big actually big enough to warrant a database? One more thing: an SQLite DB is a viable option, as no concurrency is involved, so I guess the "threshold" for DB use is perhaps lower.

    Read the article

  • Is it a problem if I query SQL Server 2005 and 2000 again and again?

    - by learner
    The Windows app I am constructing is for very low-end machines (a Celeron with at most 128 MB of RAM). Which of the following two approaches is better? (I don't want the application to become a memory hog on low-end machines.)

    Approach one: query the database with

        SELECT GUID FROM Table1 WHERE DateTime <= @givendate

    which returns more than 300 thousand records (but only one field, i.e. GUID - 300 thousand GUIDs), then run a loop over those GUIDs to drive the next stage of the software.

    Approach two: query the database with

        SELECT TOP 1 GUID FROM Table1 WHERE DateTime <= @givendate

    again and again until all 300 thousand records are done. It returns only one GUID at a time, and I can then do my next step of the operation.

    Which approach do you suggest will use less memory? (Speed/performance is not the issue here.)
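
    A middle ground that is often suggested, sketched here with a placeholder batch size and variable names, is to page through the rows in modest chunks so that neither 300 thousand GUIDs sit in memory at once nor 300 thousand round trips are made:

        -- Sketch: process the rows 1,000 at a time; @lastGuid starts at '00000000-0000-0000-0000-000000000000'.
        SELECT TOP 1000 GUID
        FROM   Table1
        WHERE  DateTime <= @givendate
          AND  GUID > @lastGuid          -- keyset paging; relies on a stable ordering column
        ORDER BY GUID;
        -- After each batch, set @lastGuid to the last GUID returned and repeat until no rows come back.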

    Read the article
