Search Results

Search found 14643 results on 586 pages for 'performance comparison'.

  • Possible Performance Considerations using Linq to SQL Repositories

    - by Robert Harvey
    I have an ASP.NET MVC application that uses Linq to SQL repositories for all interactions with the database. To deal with data security, I do trimming to filter data down to only those items to which the user has access. This occurs in several places: data in list views; links in a menu bar; a treeview on the left-hand side containing links to content; role-based security; and a special security attribute, inheriting from AuthorizeAttribute, that implements content-based authorization on every controller method. Each of these places instantiates a repository, which opens a Linq to SQL DataContext and accesses the database. So, by my count, each page request opens at least six separate Linq to SQL DataContexts. Should I be concerned about this from a performance perspective, and if so, what can be done to mitigate it?
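
    One common mitigation (a sketch only; MyDataContext and the repository shown are hypothetical names, not taken from the question) is to create a single DataContext per HTTP request and hand that one instance to every repository, so the six consumers share one context instead of opening six:

      using System.Web;

      // Stores one DataContext per request in HttpContext.Items; wire
      // Dispose() into Application_EndRequest in Global.asax so the context
      // is released when the request ends.
      public static class PerRequestContext
      {
          private const string Key = "__dataContext";

          public static MyDataContext Current
          {
              get
              {
                  var ctx = (MyDataContext)HttpContext.Current.Items[Key];
                  if (ctx == null)
                  {
                      ctx = new MyDataContext();
                      HttpContext.Current.Items[Key] = ctx;
                  }
                  return ctx;
              }
          }

          public static void Dispose()
          {
              var ctx = (MyDataContext)HttpContext.Current.Items[Key];
              if (ctx != null) ctx.Dispose();
          }
      }

      // Repositories take the shared context instead of newing one up:
      public class PersonRepository
      {
          private readonly MyDataContext _db = PerRequestContext.Current;
          // ...query methods use _db...
      }

    That said, DataContexts are fairly lightweight to construct, so measuring first (for example with SQL Profiler) is worth doing before restructuring.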

    Read the article

  • What is a performant way to 'tree-walk' through my Entity Framework data?

    - by Greg
    Hi, I have an Entity Framework design with a few tables that define a "graph", so there can be a long chain of relationships between objects in those few tables via parent/child relationships. What is a performant way to walk this tree of Entity Framework data? I assume I wouldn't want to load the full set of all NODES and RELATIONSHIPS from the database just to walk the tree, when the end result may only be identifying leaf nodes. Or would this be OK with the way lazy loading works at the column/parameter level? Otherwise, how could I load just the skeleton of the objects and then have any attributes lazy load when I need to refer to them?

    Read the article

  • HTML Audio performance

    - by user1888309
    I'm working on an HTML drum machine and I've run into some performance issues: the rhythm starts to break if the BPM is higher than 110, but I'm expecting it to work at BPMs over 180. I guess this could be related to the format or codec of the audio files, but it may also be that my code is not very optimised (as I can see from JS CPU profiling, it isn't). So I'm hoping you can give me a code review or some hints on optimisation, although all the similar projects I've found on the internet didn't work well either, so maybe it's just a restriction of the Audio API. By the way, it's very raw and the sound works only in Chrome under Mac OS, so any advice on audio encoding for the web would also be great. Project on Github pages. Screenshot of the groove which breaks. UPDATE: I've found that I was encoding the audio files incorrectly; after fixing that the rhythm stopped breaking, and it also started working in Mozilla. But there are still issues on Windows.

    Read the article

  • Performance of joins in LINQ

    - by swapna
    Hi, I am going to rewrite a stored procedure in LINQ. What this SP does is join 12 tables, get the data, and insert it into another table; it has 7 left outer joins and 4 inner joins, and returns one row of data. Now the questions: 1) What is the best way to achieve these joins in LINQ? 2) Do you think this affects performance (it only retrieves one row of data at any given point in time)? Please advise. Thanks, SNA.
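
    For reference, here is a minimal sketch of how inner and left outer joins typically look in LINQ query syntax (the Orders/Customers/Shippers names, db, and orderId are placeholders, not taken from the question); a 12-table statement just repeats these two patterns:

      // db is a LINQ to SQL DataContext and orderId a placeholder key (both hypothetical).
      var row = (from o in db.Orders
                 join c in db.Customers on o.CustomerId equals c.Id        // inner join
                 join s in db.Shippers on o.ShipperId equals s.Id into sj  // left outer join
                 from sh in sj.DefaultIfEmpty()
                 where o.Id == orderId
                 select new
                 {
                     o.Id,
                     CustomerName = c.Name,
                     ShipperName = sh != null ? sh.Name : null
                 }).SingleOrDefault();

    Whether this performs on par with the stored procedure depends mostly on the SQL the provider generates, which you can inspect via DataContext.Log or SQL Profiler before committing to the rewrite.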

    Read the article

  • Performance testing on .xap files...

    - by Radhi
    Hi all, I want to know whether I can use a profiler to do performance testing of .xap files. If you have any articles on this topic, please point me to them, and if there are any other tools available for this, please tell me. In my project we found that when we log into the Silverlight 4.0 application, the screen takes 5 seconds to load, so I have to check which method is taking the time. In our project there are services which call other services too, and we have used CAL, so I need to identify the bottleneck. Please help.

    Read the article

  • Is writing shorter code/algorithms more efficient (performance-wise)?

    - by Carlos
    After coming across the code golf trivia around the site, it is obvious people try to find ways to write code and algorithms as short as they possibly can in terms of characters, lines and total size, even if that means writing something like:
      n=input()
      while n>1:n=(n/2,n*3+1)[n%2];print n
    So as a beginner I start to wonder whether size actually matters :D. It is obviously a very subjective question, highly dependent on the actual code being used, but what is the rule of thumb in the real world? In the case that size doesn't matter, why don't we focus more on performance rather than size?

    Read the article

  • Measuring the performance of a classification algorithm

    - by Silver Dragon
    I've got a classification problem on my hands which I'd like to address with a machine learning algorithm (Bayes, or Markovian probably; the question is independent of the classifier to be used). Given a number of training instances, I'm looking for a way to measure the performance of an implemented classifier, while taking the overfitting problem into account. That is: given N[1..100] training samples, if I run the training algorithm on every one of the samples and use these very same samples to measure fitness, it might run into an overfitting problem: the classifier will know the exact answers for the training instances without having much predictive power, rendering the fitness results useless. An obvious solution would be separating the hand-tagged samples into training and test samples; I'd like to learn about methods for selecting statistically significant samples for training. White papers, book pointers, and PDFs much appreciated!

    Read the article

  • MySQL Prepared Statements vs Stored Procedures Performance

    - by amardilo
    Hi there, I have an old MySQL 4.1 database with a table that has a few million rows, and an old Java application that connects to this database and returns several thousand rows from this table on a frequent basis via a simple SQL query (i.e. SELECT * FROM people WHERE first_name = 'Bob'; I think the Java application uses client-side prepared statements but I was looking at switching this to the server side, and in the example mentioned, the value for first_name will vary depending on what the user enters). I would like to speed up the performance of the select query and was wondering if I should switch to prepared statements or stored procedures. Is there a general rule of thumb about which is quicker/less resource intensive (or whether a combination of both is better)?

    Read the article

  • LINQ - Using where or join - Performance difference?

    - by Patrick Säuerl
    Hi. Based on this question: http://stackoverflow.com/questions/3013034/what-is-difference-between-where-and-join-in-linq my question is the following: is there a performance difference between these two statements?
      from order in myDB.OrdersSet
      from person in myDB.PersonSet
      from product in myDB.ProductSet
      where order.Persons_Id == person.Id && order.Products_Id == product.Id
      select new { order.Id, person.Name, person.SurName, product.Model, UrunAdi = product.Name };
    and
      from order in myDB.OrdersSet
      join person in myDB.PersonSet on order.Persons_Id equals person.Id
      join product in myDB.ProductSet on order.Products_Id equals product.Id
      select new { order.Id, person.Name, person.SurName, product.Model, UrunAdi = product.Name };
    I would always use the second one, just because it's clearer. My question now is: is the first one slower than the second one? Does it build a Cartesian product and filter it afterwards with the where clauses? Thank you.

    Read the article

  • Will MySQL caching cause performance problems?

    - by Camran
    I am about to upload my website to a VPS. It is a classifieds website where all data is stored in MySQL and Solr. I wonder whether, when using MySQL's query cache, the server will slow down. That is, if somebody makes a search for the first time and MySQL caches the query, will the caching make the server slower than if it didn't cache anything? After the caching is done I know things will improve in terms of performance, but I would like to know whether I should even use the cache or not. What do you think? Thanks

    Read the article

  • Lots of pointer casts in QGraphicsView framework and performance

    - by kleimola
    Since most of the convenience functions of QGraphicsScene and QGraphicsItem (such as items(), collidingItems(), childItems() etc.) return a QList, you're forced to do lots of qgraphicsitem_cast or static_cast and QGraphicsItem::type() checks to get hold of the actual items when you have lots of different types of items in the scene. I thought doing lots of subclass casts was not a desirable coding style, but I guess in this case there is no other viable way, or is there?
      QList<QGraphicsItem *> itemsHit = someItem->collidingItems(Qt::IntersectsItemShape);
      foreach (QGraphicsItem *item, itemsHit) {
          if (item->type() == QGraphicsEllipseItem::Type) {
              QGraphicsEllipseItem *ellipse = qgraphicsitem_cast<QGraphicsEllipseItem *>(item);
              // do something
          } else if (item->type() == MyItemSubclass::Type) {
              MyItemSubclass *myItem = qgraphicsitem_cast<MyItemSubclass *>(item);
              // do something
          }
          // etc
      }
    The qgraphicsitem_cast above could be replaced by static_cast since the correct type is already verified. When doing lots of these all the time (a very dynamic scene), will the numerous casts affect performance beyond the normal if-else evaluation?

    Read the article

  • Disk IO performance limitations based on the number of folders/files

    - by Josh
    I have an application where users are allowed to upload images to the server. Our web server is a Windows 2008 server, and we have a site (images.mysite.com) that points to a shared drive on a Unix box. The code used to do the uploading is C# 3.5. The system currently supports a workflow where, after a threshold is met, a new subfolder can be generated. The question we have is: how many files and/or subfolders can you have in a single folder before there is a degradation in performance, both in serving the images up through IIS 7 and in reading/writing through code?
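
    Independent of where the exact threshold lies, a common way to keep any single directory from growing unbounded is to bucket files by a hash of their name; the sketch below only illustrates that idea (the ImageStore class, share path, and two-level bucket scheme are assumptions, not part of the question):

      using System.IO;
      using System.Security.Cryptography;
      using System.Text;

      static class ImageStore
      {
          // Derives a two-level bucket such as "a3\7f" from the file name, so
          // uploads are spread across at most 256 * 256 subfolders.
          static string BucketFor(string fileName)
          {
              using (MD5 md5 = MD5.Create())
              {
                  byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(fileName));
                  return Path.Combine(hash[0].ToString("x2"), hash[1].ToString("x2"));
              }
          }

          public static string Save(string rootShare, string fileName, byte[] content)
          {
              string dir = Path.Combine(rootShare, BucketFor(fileName));
              Directory.CreateDirectory(dir);        // no-op if it already exists
              string fullPath = Path.Combine(dir, fileName);
              File.WriteAllBytes(fullPath, content);
              return fullPath;
          }
      }

    With a scheme like this, each folder stays small regardless of the total number of images, which tends to matter more than any single magic limit.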

    Read the article

  • Performance improvement to a big if clause in SQL Server function

    - by Miles D
    I am maintaining a function in SQL Server 2005 that, based on an integer input parameter, needs to call different functions, e.g.
      IF @rule_id = 1
          -- execute function 1
      ELSE IF @rule_id = 2
          -- execute function 2
      ELSE IF @rule_id = 3
      -- ... etc
    The problem is that there are a fair few rules (about 100), and although the above is fairly readable, its performance isn't great. At the moment it's implemented as a series of IFs that do a binary chop, which is much faster but becomes fairly unpleasant to read and maintain. Any alternative ideas for something that performs well and is fairly maintainable?

    Read the article

  • Performance side effect with static internal Util classes?

    - by Fostah
    For a util class that contains a bunch of static functionality that's related to the same component but has different purposes, I like to use static internal classes to organize the functionality, like so:
      class ComponentUtil {
          static class Layout {
              static int calculateX(/* ... */) { /* ... */ }
              static int calculateY(/* ... */) { /* ... */ }
          }
          static class Process {
              static int doThis(/* ... */) { /* ... */ }
              static int doThat(/* ... */) { /* ... */ }
          }
      }
    Is there any performance degradation in using these internal classes vs. just having all the functionality directly in the Util class?

    Read the article

  • Simple performance testing tool in C#?

    - by Tomas
    Hi, first of all: I need to do this as a university project, so I am not interested in using existing tools. I would like to know whether it is even possible to write a very simple tool that I could use for performance testing of web applications. It would only record actions (I don't know, maybe just packet sniffing?) and then replay them. However, while I have a basic idea (record packets on port 80 and send them again), I do not know how to measure the time for each transaction, as they are not differentiated. Any help is greatly appreciated, thank you!
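
    As one possible direction (a sketch only; it replays at the HTTP level instead of resending raw packets, and the URL list is hypothetical), timing each replayed request with Stopwatch sidesteps the problem of telling transactions apart in a packet capture:

      using System;
      using System.Diagnostics;
      using System.Net;

      class ReplayTimer
      {
          static void Main()
          {
              // Recorded during the capture phase; hard-coded here for the sketch.
              string[] recordedUrls =
              {
                  "http://localhost/app/page1",
                  "http://localhost/app/page2",
              };

              foreach (string url in recordedUrls)
              {
                  Stopwatch sw = Stopwatch.StartNew();
                  var request = (HttpWebRequest)WebRequest.Create(url);
                  using (var response = (HttpWebResponse)request.GetResponse())
                  using (var stream = response.GetResponseStream())
                  {
                      // Drain the body so the measurement covers the full response.
                      var buffer = new byte[8192];
                      while (stream.Read(buffer, 0, buffer.Length) > 0) { }
                  }
                  sw.Stop();
                  Console.WriteLine("{0} took {1} ms", url, sw.ElapsedMilliseconds);
              }
          }
      }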

    Read the article

  • Performance: Subquery or Joining

    - by Auro
    Hello, I've got a little question about the performance of a subquery vs. joining another table:
      INSERT INTO Original.Person ( PID, Name, Surname, SID )
      ( SELECT ma.PID_new, TBL.Name, ma.Surname, TBL.SID
        FROM Copy.Person TBL, original.MATabelle MA
        WHERE TBL.PID = p_PID_old
          AND TBL.PID = MA.PID_old );
    This is my SQL, and this thing runs around 1 million times or more. Now my question is: what would be faster? If I change TBL.SID to (SELECT new FROM helptable WHERE old = TBL.SID), or if I add helptable to the FROM clause and do the joining in the WHERE? Greets, Auro

    Read the article

  • C++ performance when accessing class members

    - by Dr. Acula
    I'm writing something performance-critical and wanted to know if it could make a difference whether I use:
      int test( int a, int b, int c ) {
          // Do millions of calculations with a, b, c
      }
    or:
      class myStorage {
      public:
          int a, b, c;
      };

      int test( myStorage values ) {
          // Do millions of calculations with values.a, values.b, values.c
      }
    Does this basically result in similar code? Is there any extra overhead in accessing the class members? I'm sure this is clear to an expert in C++, so I won't try to write an unrealistic benchmark for it right now.

    Read the article

  • SqlCeCommand ExecuteNonQuery performance issue

    - by Michael
    I've been asked to resolve an issue with a .NET/SQL Server CE application. Specifically, after repeated inserts against the db, performance becomes increasingly degraded: in one instance at ~200 rows, in another at ~1000 rows. In the latter case the code being used looks like this:
      Dim cm1 As System.Data.SqlServerCe.SqlCeCommand = cn1.CreateCommand
      cm1.CommandText = "INSERT INTO Table1 Values(?,?,?,?,?,?,?,?,?,?,?,?,?)"
      For j = 0 To ds.Tables(0).Rows.Count - 1 'this is 3110
          For i = 0 To 12
              cm1.Parameters(tbl(i, 0)).Value = Vals(j, i) 'values taken from a different db
          Next
          cm1.ExecuteNonQuery()
      Next
    The specifics aren't super important (like what 'tbl' is, etc.), but rather whether or not this code should be expected to handle this number of inserts, or whether the crawl I'm witnessing is to be expected.
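
    One thing worth trying (a sketch in C#; the nvarchar parameter type and the values array are assumptions, not taken from the original VB code) is to prepare the command once and wrap the whole batch in a single transaction, so the engine commits once rather than after every row:

      using System.Data;
      using System.Data.SqlServerCe;

      static class BatchInsert
      {
          // values[j, i] stands in for the data read from the other database.
          public static void Run(SqlCeConnection cn, object[,] values)
          {
              using (SqlCeTransaction tx = cn.BeginTransaction())
              using (SqlCeCommand cmd = cn.CreateCommand())
              {
                  cmd.Transaction = tx;
                  cmd.CommandText = "INSERT INTO Table1 VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)";
                  for (int i = 0; i < 13; i++)
                      cmd.Parameters.Add("@p" + i, SqlDbType.NVarChar, 255);
                  cmd.Prepare();                     // compile the plan once

                  for (int j = 0; j < values.GetLength(0); j++)
                  {
                      for (int i = 0; i < 13; i++)
                          cmd.Parameters[i].Value = values[j, i];
                      cmd.ExecuteNonQuery();
                  }
                  tx.Commit();                       // one commit for the whole batch
              }
          }
      }

    If the slowdown persists even with batching, it is also worth checking whether an index on Table1 is being maintained on every insert or whether the .sdf file is growing in small increments.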

    Read the article

  • Performance effect of using print statements in Python script

    - by Sudar
    I have a Python script that processes a huge text file (around 4 million lines) and writes the data into two separate files. I have added a print statement that outputs a string for every line, for debugging. I want to know how bad this could be from a performance perspective. If it is going to be very bad, I can remove the debugging line. Edit: It turns out that having a print statement for every line in a file with 4 million lines increases the time far too much.

    Read the article

  • C++ STL: Array vs Vector: Raw element accessing performance

    - by oh boy
    I'm building an interpreter, and since I'm aiming for raw speed this time, every clock cycle matters to me in this (raw) case. Do you have any experience or information on which of the two is faster: vector or array? All that matters is the speed at which I can access an element (opcode fetching); I don't care about inserting, allocation, sorting, etc. I'm going to lean out of the window now and say: arrays are at least a bit faster than vectors in terms of accessing an element i. It seems really logical to me: with vectors you have all that safety and checking overhead which doesn't exist for arrays. (Why) am I wrong? No, I can't ignore the performance difference, even if it is tiny; I have already optimized and minimized every other part of the VM which executes the opcodes :)

    Read the article

  • Do async and await increase the performance of an ASP.NET application?

    - by Kerezo
    I recently read an article about C# 5 and its nice new asynchronous programming support. I can see it works great in Windows applications. The question that came to me is whether this feature can increase ASP.NET performance. Consider this code:
      public T GetData()
      {
          var d = GetSomeData();
          return d;
      }
    and
      public async Task<T> GetData2()
      {
          var d = await GetSomeDataAsync();
          return d;
      }
    Is there any difference between these two pieces of code in an ASP.NET application? Thanks
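
    For context, here is a minimal sketch of where the difference shows up in ASP.NET (the controller and the HttpClient call are illustrative assumptions, not the asker's code). Awaiting genuinely asynchronous I/O releases the request thread back to the pool while the work is in flight, which helps scalability under load rather than the latency of a single request:

      using System.Net.Http;
      using System.Threading.Tasks;
      using System.Web.Mvc;

      public class DataController : Controller
      {
          // The request thread is freed while the HTTP call is pending; ASP.NET
          // resumes the action on a pool thread when the await completes.
          public async Task<ActionResult> Index()
          {
              using (var client = new HttpClient())
              {
                  string body = await client.GetStringAsync("http://example.org/api/data");
                  return Content(body);
              }
          }
      }

    If GetSomeData is purely CPU-bound, wrapping it in async/await will not make it faster; the win only appears when the awaited operation is I/O that would otherwise block a request thread.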

    Read the article

  • Does 'throw' or 'try...catch' hinder performance?

    - by Richard
    I've been reading all over the place (including here) about when exceptions should/shouldn't be used. I now want to change my code that would throw so that the method returns false instead, and handle it like that, but my question is: is it the throwing or the try...catch-ing that can hinder performance? What I mean is, would this be acceptable:
      bool SomeMethod()
      {
          try
          {
              // ...do something
          }
          catch (Exception ex) // Don't care too much what, at the moment...
          {
              // Output error
              return false;
          }
          return true; // No errors
      }
    Or would there be a better way to do it? (I'm bloody sick of seeing "Unhandled exception..." LOL!)
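
    As a point of reference, the cost is normally in the throw itself (capturing the stack, unwinding), not in entering a try block; the framework's own TryXxx pattern is the usual way to keep expected failures off the exception path. The sketch below uses int.Parse/int.TryParse purely as an illustration:

      using System;

      class ParseDemo
      {
          static void Main()
          {
              string userInput = "not a number";

              // TryParse reports failure through its return value, so the
              // expected failure path never pays the cost of an exception.
              int value;
              if (int.TryParse(userInput, out value))
                  Console.WriteLine("Parsed: " + value);
              else
                  Console.WriteLine("Invalid input handled without an exception.");

              // int.Parse throws FormatException for the same input; the throw
              // is what costs time, not the presence of the try/catch.
              try
              {
                  Console.WriteLine(int.Parse(userInput));
              }
              catch (FormatException)
              {
                  Console.WriteLine("Exception path taken; noticeably slower if frequent.");
              }
          }
      }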

    Read the article

  • Performance problem with a query

    - by yapiskan
    Hi, I have a performance problem with a query. The first table is a Customer table, which has millions of records in it; it has a column for email address and some other information about the customer. The second table is a CommunicationInfo table, which contains just email addresses. What I want here is: how many times does each email address in the CommunicationInfo table appear in the Customer table? What would be the best-performing query? The basic query that describes this situation is:
      SELECT ci.Email, COUNT(*)
      FROM Customer c
      LEFT JOIN CommunicationInfo ci
          ON c.Email1 = ci.Email OR c.Email2 = ci.Email
      GROUP BY ci.Email
    But sure enough, it takes about 5-6 minutes to execute. Thanks in advance.

    Read the article
