Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.


  • Why should I use an N-tier approach when using a SqlDataSource is a lot easier?

    - by The_AlienCoder
    When it comes to web development I have always tried to work SMART, not HARD. So for a long time my approach to interacting with databases in my ASP.NET projects has been this:

    1) Create my stored procedures
    2) Drag a SqlDataSource control onto my aspx page
    3) Bind a DataList control to my SqlDataSource
    4) Insert, update and delete by using my DataList, or programmatically using built-in SqlDataSource methods, e.g.

        MySqlDataSource.InsertParameters["author"].DefaultValue = TextBox1.Text;
        MySqlDataSource.Insert();

    Recently, however, I got a relatively easy web project, so I decided to employ a 3-tier model... but I got exhausted halfway through and it just didn't seem worth it! It felt like I was working too HARD on a project that could have been easily accomplished with a couple of SqlDataSource controls. So why is the N-tier model better than my approach? Has it anything to do with performance? What are the advantages of the ObjectDataSource control over the SqlDataSource control?

    Read the article

  • mod_perl memory leak

    - by marghi
    Hello, I recently discovered that one of our sites has a memory leak in it. It's very strange, because it happened all of a sudden. I've used GTop to measure the memory size per process, and it tells me that the real value is somewhere around 65 MB (on the server) per request, plus an additional 5 MB shared. I tried preloading the modules in the startup.pl file as indicated in the performance tuning article for mod_perl. Nothing happened; in fact, the shared memory decreased to 3.7 MB. At that point I suspected my application was leaking memory, so before any line of code got executed I measured the memory, just to find out that the total value was in fact 64 MB. My questions are: Is there a default preallocation of memory for each process? Is there a configuration issue? Is mod_perl leaking memory? Thank you very much.

    Read the article

  • NSArray vs. SQLite for Complex Queries on iPhone

    - by GingerBreadMane
    Developing for iPhone, I have a collection of points that I need to make complex queries on. For example: "How many points have a y-coordinate of 10?" and "Return all points with an x-coordinate between 3 and 5 and a y-coordinate of 7". Currently, I am just cycling through each element of an NSArray and checking whether each element matches my query. It's a pain to write the queries, though; SQLite would be much nicer. I'm not sure which would be more efficient, however, since a SQLite database resides on disk and not in memory (to my understanding). Would SQLite be as efficient or more efficient here? Or is there a better way to do it than either of these methods that I haven't thought of? I would need to perform the multiple queries, with multiple sets of points, thousands of times, so the best performance is important.
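    (A sketch for comparison, not from the question: the SQLite side of this trade-off, written against Python's sqlite3 module because it exposes the same SQL and indexing behaviour as the C API on the iPhone; the table and index names are hypothetical. An index leading with y lets both example queries avoid a full scan.)

        import sqlite3

        conn = sqlite3.connect(":memory:")  # a file-backed DB works the same way
        conn.execute("CREATE TABLE points (x REAL, y REAL)")
        conn.execute("CREATE INDEX idx_points_yx ON points (y, x)")
        conn.executemany("INSERT INTO points VALUES (?, ?)",
                         [(3.5, 7.0), (4.2, 7.0), (9.0, 10.0)])

        # "How many points have a y-coordinate of 10?"
        count = conn.execute("SELECT COUNT(*) FROM points WHERE y = ?",
                             (10.0,)).fetchone()[0]

        # "All points with x between 3 and 5 and y = 7": equality on y, then
        # a range on x, which is exactly what the (y, x) index serves.
        rows = conn.execute(
            "SELECT x, y FROM points WHERE y = ? AND x BETWEEN ? AND ?",
            (7.0, 3.0, 5.0)).fetchall()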

    Read the article

  • assignment vs std::swap and merging and keeping duplicates in a separate object

    - by rubenvb
    Say I have two std::set<std::string>s. The first one, old_options, needs to be merged with additional options, contained in new_options. I can't just use std::merge (well, I do, but not only that) because I also check for duplicates and warn the user about them accordingly. To this effect, I have:

        void merge_options( set<string> &old_options, const set<string> &new_options )
        {
            // find duplicates and create merged_options,
            // a set<string> containing the merged options
            // handle duplicates the way I want to
            // ...
            old_options = merged_options;
        }

    Is it better to use std::swap( merged_options, old_options ); or the assignment I have? Is there a better way to filter duplicates and return the merged set than consecutive calls to std::set_intersection and std::set_union to detect dupes and merge the sets? I know it's slower than doing both in one traversal, but these sets are small (performance is not critical) and I trust the Standard more than I trust myself.

    Read the article

  • Are you using a Virtual Machine as your primary development environment?

    - by Click Ok
    Recently I purchased a notebook that came with Windows Home Basic (which doesn't come with ASP.NET/IIS). I thought about upgrading the Windows version to one with ASP.NET/IIS, but then I thought of another possibility: I have a hard disk case with a 360 GB HD. I could create a virtual machine with Windows Ultimate (also installing ASP.NET, IIS and Visual Studio 2008) on this HD case; then I could access my "development environment" on any computer I work on (my desktop machine and my notebook). But I was worried about the performance. I don't have experience working in virtual machines (I use them just for quick compatibility tests)... Are you using a virtual machine as your primary development environment? What are your findings?

    Update: Thanks for your answers! They really did help me! I would also like to know about portability, i.e.: will the virtual machine that I created on my laptop work on the desktop? Will I need to re-activate Windows?

    Read the article

  • What are some "mental steps" a developer must take to begin moving from SQL to NO-SQL (CouchDB, Fath

    - by Byron Sommardahl
    I have my mind firmly wrapped around relational databases and how to code efficiently against them. Most of my experience is with MySQL and SQL. I like many of the things I'm hearing about document-based databases, especially when someone in a recent podcast mentioned huge performance benefits. So, if I'm going to go down that road, what are some of the mental steps I must take to shift from SQL to NO-SQL? If it makes any difference in your answer, I'm a C# developer primarily (today, anyhow). I'm used to ORMs like EF and LINQ to SQL. Before ORMs, I rolled my own objects with generics and datareaders. Maybe that matters, maybe it doesn't. Here are some more specific questions:

    How do I need to think about joins?
    How will I query without a SELECT statement?
    What happens to my existing stored objects when I add a property in my code?

    (feel free to add questions of your own here)
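    (An illustration of the join question, not from the post: in a document store you typically denormalize, embedding what SQL would join. A sketch using Python dicts as stand-ins for JSON documents; the blog-post example is hypothetical.)

        # Relational shape: two tables plus a join on post_id.
        posts = [{"id": 1, "title": "Hello"}]
        comments = [{"post_id": 1, "body": "First!"},
                    {"post_id": 1, "body": "Nice post"}]
        joined = [dict(p, comments=[c for c in comments
                                    if c["post_id"] == p["id"]])
                  for p in posts]

        # Document shape: one self-contained record, so the join disappears;
        # "query" often means fetching the document (or a precomputed view).
        post_doc = {
            "_id": "post-1",
            "title": "Hello",
            "comments": [{"body": "First!"}, {"body": "Nice post"}],
        }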

    Read the article

  • Implementing Tagging using Core Data on the iPhone

    - by Jonathan Penn
    I have an application that uses Core Data and I'm trying to figure out the best way to implement tagging and filtering by tag. For my purposes, if I were doing this in raw SQLite I would only need three tables: tags, item_tags and of course my items table. Then filtering would be as simple as joining between the three tables where only items are related to the given tags. Quite straightforward. But is there a way to do this in Core Data while utilizing NSFetchedResultsController? It doesn't seem that NSPredicate gives you the ability to filter through joins. NSPredicates aren't full SQL anyway, so I'm probably barking up the wrong tree there. I'm trying to avoid reimplementing my app in raw SQLite without Core Data, since I'm enjoying the performance Core Data gives me in other areas. Yes, I did consider (and built a test implementation) diving into the raw SQLite that Core Data generates, but that's not future-proof and I want to avoid it, too. Has anyone else tried to tackle tagging/filtering with Core Data in a UITableView with NSFetchedResultsController?

    Read the article

  • How can I configure different worker pools using celery?

    - by Chris R
    I need to deploy a queued execution service with (generally) the following three classes of worker:

    1. A periodic, low-priority job class that takes a long time and can be processed serially; these jobs should only use 0..2 workers in the system at most.
    2. A periodic, deadline-sensitive job class that takes a short to medium amount of time (say, topping out at 5 minutes).
    3. An ad-hoc job class that is higher priority than #1, but can interleave with #2. Any workers from class #2 that are inactive when this type of job comes in should handle it, without ever starving the pool of workers for #2.

    All three job classes are the same task; the only difference between them is how they're requested. They'll take the same input and generate the same output, but each one has different performance guarantees. How can I implement this using celery? A sketch of one possible setup follows.
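    (Not from the question: a minimal sketch of one way to express this with Celery's queue routing, assuming a reasonably recent Celery. The app name, broker URL and queue names are all hypothetical. The concurrency cap for class 1 comes from running a dedicated worker with -c 2; classes 2 and 3 share a queue so idle class-2 workers pick up ad-hoc jobs.)

        from celery import Celery

        app = Celery("jobs", broker="amqp://localhost")  # broker is an assumption

        @app.task
        def process(payload):
            # identical work for all three job classes; only the routing differs
            return {"done": payload}

        # Start two worker pools (shell commands, not Python):
        #   celery -A jobs worker -Q slow     -c 2   # class 1: at most 2 workers
        #   celery -A jobs worker -Q standard -c 8   # classes 2 and 3

        # Enqueue by class:
        process.apply_async(args=[{"job": "nightly-rollup"}], queue="slow")
        process.apply_async(args=[{"job": "ad-hoc"}], queue="standard")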

    Read the article

  • GetHashCode Method reliability in Silverlight/WP7.1

    - by abhinav
    I am attempting to hash, and keep the hash of, an object of type IEnumerable<anotherobject> which has about a 1000 entries. I'll be generating another such object, but this time I'd like to check for any changes in the values of the entries using the hash codes of the two objects. Basically, I was wondering if GetHashCode() is apt for this, both from a performance perspective and a reliability perspective (always getting different values for different object values, and the same value for the same object values). If I have to override it, what would be a good way to do so? Does it always depend on the type of anotherobject and what Equals means when comparing two anotherobjects? Is there a generic way to do it? I'm concerned because my object can be quite big.

    Read the article

  • How do I calculate a good hash code for a list of strings?

    - by Ian Ringrose
    Background: I have a short list of strings. The number of strings is not always the same, but it is nearly always on the order of a "handful". Our database will store these strings in a 2nd normalised table. These strings are never changed once they are written to the database. We wish to be able to match on these strings quickly in a query, without the performance hit of doing lots of joins. So I am thinking of storing a hash code of all these strings in the main table and including it in our index, so the joins are only processed by the database when the hash code matches. So how do I get a good hash code? I could:

    - XOR the hash codes of all the strings together
    - XOR, but multiply the result after each string (say by 31)
    - Concatenate all the strings, then get the hash code of the result
    - Some other way

    So what do people think? (If you care, we are using .NET and SqlServer.)
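    (An illustration of why the second option is usually preferred, sketched in Python rather than .NET: plain XOR is order-insensitive and duplicate entries cancel to zero, while the multiply-and-add combiner is order-sensitive. Note that Python randomizes str hashes per process, so a value persisted to the database would need a stable hash function on the .NET side.)

        def xor_hash(strings):
            # order-insensitive, and a string appearing twice cancels itself out
            h = 0
            for s in strings:
                h ^= hash(s)
            return h

        def mult_hash(strings):
            # the "multiply by 31 after each string" variant; the same shape as
            # Java's String.hashCode and common .NET hash combiners
            h = 17
            for s in strings:
                h = (h * 31 + hash(s)) & 0xFFFFFFFF  # clamp to 32 bits
            return h

        assert xor_hash(["a", "b"]) == xor_hash(["b", "a"])    # guaranteed collision
        assert mult_hash(["a", "b"]) != mult_hash(["b", "a"])  # distinct, near certainly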

    Read the article

  • ExtJS Grid slow with 3000+ records

    - by Oliver Watkins
    I am using the ExtJS grid and it's getting pretty slow with 3000+ records: sorting takes about 4 seconds, which is slow compared to other JavaScript tables. I am thinking of using pagination in my table. However, after reading the documentation, I am still a bit unsure about how pagination works in ExtJS. Does it pull data from the server each time you turn a page? I would prefer that weren't the case; I would prefer the 3000 records be kept in the browser and only a portion of those rows rendered. Also, I am using ExtJS version 4.2.1: if I upgrade to version 5, will I get some performance improvements?

    Read the article

  • Multivalue MySQL Inserts using HibernateTemplate

    - by Langali
    I am using Spring's HibernateTemplate and need to insert hundreds of records into a MySQL database every second. I'm not sure what the most performant way of doing it is, but I am trying to see how multi-value MySQL inserts do under Hibernate:

        final String query = "insert into user(age, name, birth_date) values"
            + "(24, 'Joe', '2010-05-19 14:33:14'), (25, 'Joe1', '2010-05-19 14:33:14')";

        getHibernateTemplate().execute(new HibernateCallback() {
            public Object doInHibernate(Session session) throws HibernateException, SQLException {
                return session.createSQLQuery(query).executeUpdate();
            }
        });

    But I get this error: "could not execute native bulk manipulation query. Please check your query..." Any idea how I can use a multi-value MySQL insert with Hibernate? Or is my query incorrect? Any other ways I can improve the performance? I did try the saveOrUpdateAll() method, and that wasn't good enough!

    Read the article

  • Is there any significant benefit to reading a string directly from a control instead of moving it into a string variable first?

    - by Kevin
        sqlInsertFrame.Parameters.AddWithValue("@UserName", txtUserName.Text);

    Given the code above: if I don't have any need to move the textbox data into a string variable, is it best to read the data directly from the control? In terms of performance, it would seem smartest not to create any unnecessary variables, which use up memory when they're not needed. Or is this a situation where that's technically true but doesn't yield any real-world results due to the size of the data in question? Forgive me, I know this is a very basic question.

    Read the article

  • Which is the best API/library to use when accessing a webcam in .NET?

    - by Doctor Jones
    Which is the best API to use when accessing a webcam in .NET? (I know they can be webcam-specific; I am willing to buy a new webcam if it means better results.) I want to write a desktop application that will take video from a webcam and store it in MPEG4 formats (DivX, Xvid, etc...). I would also like to access bitmap stills from the device so I can do image comparison between frames. I have tried various libraries, and none have really been a great fit: some have performance issues (very inconsistent framerates), some have image quality limitations, and some just crash out for seemingly no reason. I want to get high-quality video (as high as I can get) and a decent framerate. My webcam is more than up to the job, and I was hoping that there would be a nice managed .NET library around that would help my cause. Are webcam APIs all just incredibly bad?

    Read the article

  • Is it possible to find out what Flash Builder is doing during compilation?

    - by justkevin
    I've found that Flash Builder 4 (formerly Flex Builder) has trouble working with large projects. After a certain point, builds seem to take longer and longer. I've tried many different ways of improving build time, including:

    - Moving embedded resources into externally linked projects.
    - Using -incremental.
    - Tweaking the .ini JVM settings, including memory and -server.
    - Turning off automatic build (I'd prefer not to have to do this, because one of the main reasons for using an IDE is to be told about errors as you make them).
    - Deleting the project and re-checking it out from the repository.

    While some of these may help a bit, the performance is still annoyingly slow. I feel that if I knew what was taking so long, I could refactor my projects to build faster. Is there some setting that tells Flash Builder to show me which parts of the build process take so much time?

    Read the article

  • Implications of Fulltext Search over many columns

    - by Alex
    Hello, I have a really wide table which includes separate columns for billing address, shipping address, primary address, names, aliases, etc. (I can't normalize this table further, and that's not the question here anyway.) I'm implementing SQL Server fulltext search, and I'm wondering whether I should limit the search ability to just the primary fields (primary address and names, for example), or if I can extend the search across all columns without incurring too much of a performance or memory penalty. I've done some basic testing with 10,000 sample rows and it's quite fast, but I don't have much experience with fulltext indexing, especially its dictionary internals, so I don't know if the index is going to grow over time, or if there is anything else to consider. Thoughts?

    Read the article

  • Designing a table to store EXIF data

    - by rafale
    I'm looking to get the best performance out of querying a table containing EXIF data. The queries in question will only search the EXIF data for the specified strings and return the row index on a match. With that said, would it be better to store the EXIF data in a table with separate columns for each of the tags, or would storing all of the tags in a single column, as one long delimited string, suit me just as well? There are around 115 EXIF tags I'll be storing, and each record would be around 1500 to 2000 characters in length if concatenated into a single string.

    Read the article

  • Why hasn't anybody started a hosted continuous integration service?

    - by Teflon Ted
    There are a dozen services that provide hosted version control, hosted ticket tracking, hosted project management, and combinations of all of the above; there are even hosted web-based IDEs. But nobody has yet offered a hosted continuous integration service, at least none that I can find. The concept seems simple enough: I register and provide the URL to my source code repository, it grabs my code and builds it via ant/rake/whatever, then runs the suite of tests and some metrics (code coverage, performance, etc.). Is there some prohibitive barrier to entry I'm not considering?

    Read the article

  • Dealing with Windows line-endings in Python

    - by Adam Nelson
    I've got a 700 MB XML file coming from a Windows provider. As one might expect, the line endings are '\r\n' (or ^M in vi). What is the most efficient way to deal with this situation, aside from getting the supplier to send over '\n' :-)

    - Use os.linesep
    - Use rstrip() (requiring opening the file ... which seems crazy)
    - Universal newline support is not standard on my Mac Snow Leopard, so it isn't an option.

    I'm open to anything that requires Python 2.6+, but it needs to work on Snow Leopard and Ubuntu 9.10 with minimal external requirements. I don't mind a small performance penalty, but I am looking for the standard best way to deal with this.
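    (One possibility, sketched rather than prescribed: stream-convert the file in fixed-size chunks so the 700 MB never has to fit in memory; the function name and chunk size are mine. The only subtlety is a '\r\n' pair split across a chunk boundary, handled by holding back a trailing '\r'. The nested with blocks keep it Python 2.6-compatible.)

        def crlf_to_lf(src_path, dst_path, chunk_size=1 << 20):
            # Stream-convert '\r\n' line endings to '\n', one chunk at a time.
            with open(src_path, "rb") as src:
                with open(dst_path, "wb") as dst:
                    carry = b""
                    while True:
                        chunk = src.read(chunk_size)
                        if not chunk:
                            dst.write(carry)  # flush a dangling final '\r', if any
                            break
                        chunk = carry + chunk
                        # hold back a trailing '\r' whose '\n' may begin the next chunk
                        if chunk.endswith(b"\r"):
                            carry, chunk = b"\r", chunk[:-1]
                        else:
                            carry = b""
                        dst.write(chunk.replace(b"\r\n", b"\n"))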

    Read the article

  • Managing Data Prefetching and Dependencies with .NET Typed Datasets

    - by Derek Morrison
    I'm using .NET typed datasets on a project, and I often get into situations where I prefetch data from several tables into a dataset and then pass that dataset to several methods for processing. It seems cleaner to let each method decide exactly which data it needs and then load the data itself. However, several of the methods work with the same data, and I want the performance benefit of loading data in the beginning only once. My problem is that I don't know of a good way or pattern to use for managing dependencies (I want to be sure I load all the data that I'm going to need for each class/method that will use the dataset). Currently, I just end up looking through the code for the various classes that will use the dataset to make sure I'm loading everything appropriately. What are good approaches or patterns to use in this situation? Am I doing something fundamentally wrong? Although I'm using typed datasets, this seems like it would be a common situation where prefetching data is used. Thanks!

    Read the article

  • Interview Question: .Any() vs if (.Length > 0) for testing if a collection has elements

    - by Chris
    In a recent interview I was asked what the difference between .Any() and .Length > 0 is, and why I would use either when testing whether a collection has elements. This threw me a little, as it seems obvious, but I feel I may be missing something. I suggested that you use .Length when you simply need to know that a collection has elements, and .Any() when you wish to filter the results. Presumably .Any() takes a performance hit too, as it has to do a loop / query internally.
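    (A language-agnostic way to see the distinction, sketched in Python rather than C#: a length check needs a materialized collection, while an Any()-style check only has to pull a single element from a possibly lazy sequence. The generator here is hypothetical.)

        def results():
            # stand-in for an expensive query or deferred enumeration
            for i in range(10**9):
                yield i

        # length-style check: would have to realize the whole sequence first
        # has_elements = len(list(results())) > 0

        # Any()-style check: pulls at most one element and stops
        has_elements = next(iter(results()), None) is not None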

    Read the article

  • Data Warehouse: One Database or many?

    - by drrollins
    At my new company, they keep all data associated with the data warehouse, including import, staging, audit, dimension and fact tables, together in the same physical database. I've been a database developer for a number of years now, and this consolidation of function and form seems counter to everything I know. It seems to make security, backup/restore and performance management more manually intensive. Is this something that is done in the industry? Are there substantial reasons for doing or not doing it? The platform is Netezza; the size is in terabytes, hundreds of millions of rows. What I'm looking to get from answers to this question is a solid understanding of how right or wrong this path is. From your experience, what are the issues I should focus on when arguing that this is a path that will cause trouble for us down the road? If it is no big deal, then I'd like to know that as well.

    Read the article

  • parsing/matching string occurrence in C

    - by David
    I have the following string:

        const char *str = "\"This is just some random text\" 130 28194 \"Some other string\" \"String 3\"";

    I would like to get the integer 28194. Of course the integer varies, so I can't do strstr("28194"). So I was wondering what would be a good way to get that part of the string? I was thinking of using #include <regex.h>, for which I already have a procedure to match regexps, but I'm not sure what the regexp in C would look like using the POSIX-style notation, something like [:alpha:]+[:digit:], and whether performance will be an issue. Or would it be better to use strchr/strstr? Any ideas will be appreciated.
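    (Two notes, neither from the question. First, POSIX character classes are only valid inside a bracket expression, so the pattern needs double brackets: [[:digit:]]+, not [:digit:]+. Second, a sketch of the extraction logic in Python, which is quicker to experiment with than regex.h; a rough POSIX ERE equivalent is given in the comment.)

        import re

        s = ('"This is just some random text" 130 28194 '
             '"Some other string" "String 3"')

        bare = re.sub(r'"[^"]*"', '', s)   # drop the quoted sections
        nums = re.findall(r'\d+', bare)    # ['130', '28194']
        value = int(nums[1])               # 28194

        # Rough equivalent for regcomp(..., REG_EXTENDED) in C, capturing the
        # second bare integer into pmatch[1]:
        #   [[:digit:]]+[[:space:]]+([[:digit:]]+)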

    Read the article

  • Different i18n in Spring according to URL

    - by Fanooos
    I have a Spring web application that is required to work as follows: the application will be accessed from two different URLs, www.domain1.com and www.domain2.com, and it is required that the two URLs look like two different applications, with different CSS and i18n. The CSS part is done, but I am stuck with the i18n part: how do I make Spring load different i18n properties files according to the domain name? One solution I thought of is to implement a filter that checks the request URL and, according to the URL, clears the message source bean and loads the required i18n file, but that does not look good for performance. By the way, I am using a ReloadableResourceBundleMessageSource as the message source. Another solution is to implement two different message sources. The problem with this solution is that from the source code I can manage which bean I use, but how can I tell the fmt:message tag which message source to use? Thanks in advance and best regards.

    Read the article

  • CakePHP repeats same queries

    - by Rytis
    I have a model structure: Category hasMany Product hasMany Stockitem belongsTo Warehouse, Manufacturer. I fetch data with this code, using Containable to be able to filter on the deeper associated models:

        $this->Category->find('all', array(
            'conditions' => array('Category.id' => $category_id),
            'contain' => array(
                'Product' => array(
                    'Stockitem' => array(
                        'conditions' => array('Stockitem.warehouse_id' => $warehouse_id),
                        'Warehouse',
                        'Manufacturer',
                    )
                )
            ),
        ));

    The data structure is returned just fine; however, I get multiple repeating queries like the following, sometimes hundreds of them in a row, depending on the dataset:

        SELECT `Warehouse`.`id`, `Warehouse`.`title`
        FROM `beta_warehouses` AS `Warehouse`
        WHERE `Warehouse`.`id` = 2

    Basically, when building the data structure Cake is fetching data from MySQL over and over again, for each row. We have datasets of several thousand rows, and I have a feeling that it's going to impact performance. Is it possible to make it cache results and not repeat the same queries?

    Read the article
