Search Results

Search found 23323 results on 933 pages for 'worst is better'.


  • In .NET, which loop runs faster: for or foreach?

    - by Binoj Antony
    In C#/VB.NET/.NET, which loop runs faster: for or foreach? Ever since I read, a long time ago, that a for loop works faster than a foreach loop, I assumed it held true for all collections: generic collections, arrays, and so on. I scoured Google and found a few articles, but most of them are inconclusive (read the comments on the articles) and open ended. What would be ideal is to have each scenario listed along with the best solution for it, e.g. (just an example of how it should look):

        for iterating an array of 1000+ strings: for is better than foreach
        for iterating over an IList of (non-generic) strings: foreach is better than for

    A few references found on the web for the same:

        the original grand old article by Emmanuel Schanzer
        CodeProject: FOREACH Vs. FOR
        Blog: To foreach or not to foreach, that is the question
        ASP.NET forum: .NET 1.1 C# for vs. foreach

    [Edit] Apart from the readability aspect of it, I am really interested in facts and figures; there are applications where that last mile of squeezed-out performance optimization does matter.
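
    For what it's worth, a quick way to get your own facts and figures is a Stopwatch micro-benchmark. The sketch below is a hypothetical harness (not from the post) comparing the two loops over a string array; on arrays the JIT typically compiles foreach down to the same indexed loop as for, so measure your actual collection types before optimizing.

        using System;
        using System.Diagnostics;

        class LoopBenchmark
        {
            static void Main()
            {
                var items = new string[1000];
                for (int i = 0; i < items.Length; i++)
                    items[i] = "item" + i;

                const int reps = 100000;
                long total = 0;

                // Time a classic indexed for loop.
                var sw = Stopwatch.StartNew();
                for (int r = 0; r < reps; r++)
                    for (int i = 0; i < items.Length; i++)
                        total += items[i].Length;
                Console.WriteLine("for:     {0} ms", sw.ElapsedMilliseconds);

                // Time the equivalent foreach loop.
                sw = Stopwatch.StartNew();
                for (int r = 0; r < reps; r++)
                    foreach (var s in items)
                        total += s.Length;
                Console.WriteLine("foreach: {0} ms", sw.ElapsedMilliseconds);

                GC.KeepAlive(total); // keep the loop work from being optimized away
            }
        }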

    Read the article

  • Weird UIView transforms in Retina iPhone

    - by ggambett
    I'm having a problem I don't understand. I'm developing an OpenGL app for iOS. Because at some points I want to force the orientation of the view programmatically, and Apple for whatever reason doesn't make it easy (or even possible), I'm doing it by hand. I always return NO in shouldAutorotateToInterfaceOrientation:, and when I want to change the orientation (to portrait, for example), I do something like this in the UIView:

        [self setTransform:CGAffineTransformMake(1, 0, 0, 1, 0, 0)];
        [self setBounds:CGRectMake(0, 0, 768, 1024)];

    This works fine. In order to support Retina devices, I started checking [UIScreen mainScreen].scale and setting self.contentScaleFactor accordingly. I also modified the code above to account for the new dimensions, like this:

        [self setTransform:CGAffineTransformMake(1, 0, 0, 1, 0, 0)];
        [self setBounds:CGRectMake(0, 0, 2*768, 2*1024)];

    Same rotation, different size. The weird result is that I get a "screen" of the right size, but offset half a screen to the bottom and the left. To correct for this, I need to do the following:

        [self setTransform:CGAffineTransformMake(1, 0, 0, 1, 0, 0)];
        [self setBounds:CGRectMake(-768, -1024, 2*768 - 768, 2*1024 - 1024)];

    This works, but it's ugly; I also need to make similar corrections when I get touch coordinates; and worst of all, I don't understand what's going on or why the above "correction" works. Can anyone shed some light on this issue?

    Read the article

  • Relating data stored in a NoSQL DB to data stored in a SQL DB

    - by seanbrant
    What's the best way to use a SQL DB alongside a NoSQL DB? I want to keep my users and other data in Postgres, but have some data that would be better suited to a NoSQL DB like Redis. I see a lot of talk about switching to NoSQL, but little talk about integrating it with existing systems. I think it would be foolish to throw the baby out with the bathwater and ditch SQL altogether, unless doing so makes things easier to maintain and develop. I'm wondering what the best approach is for relating the data stored in SQL to my data in Redis. I was thinking of something along these lines:

        User object stored in SQL
        Book object in Redis; the key is a SHA1 hash of the value, the value is a JSON string
        Relations stored in Redis; the key is User.pk:books, the value is a Redis set of SHA1s

    Anyone have experience, tips, better ways?
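
    For illustration only, here is a C# sketch of that layout using the StackExchange.Redis client (a swapped-in client library; the key names, the user id, and the book JSON are all made up): the book lives under its SHA1, and the user-to-books relation is a Redis set of those hashes.

        using System;
        using StackExchange.Redis;

        class BookStore
        {
            static void Main()
            {
                var redis = ConnectionMultiplexer.Connect("localhost");
                IDatabase db = redis.GetDatabase();

                string bookJson = "{\"title\":\"Example\"}";
                string sha1 = Sha1Hex(bookJson);

                db.StringSet("book:" + sha1, bookJson);  // Book object, keyed by SHA1
                db.SetAdd("user:42:books", sha1);        // relation: user 42 owns it

                // Walk the relation back to the book objects.
                foreach (RedisValue id in db.SetMembers("user:42:books"))
                    Console.WriteLine(db.StringGet("book:" + (string)id));
            }

            static string Sha1Hex(string s)
            {
                using (var sha = System.Security.Cryptography.SHA1.Create())
                {
                    byte[] hash = sha.ComputeHash(System.Text.Encoding.UTF8.GetBytes(s));
                    return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
                }
            }
        }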

    Read the article

  • How to structure a Visual Studio project for the data access layer

    - by Akk
    I currently have a project that uses various DB access technologies, mainly for showcasing or for demos. Currently we have:

        Namespace App.Data (App.Data.dll)
            Folder NHibernate
            Folder EntityFramework
            Folder LinqToSql

    The above structure is OK as we only use SQL Server as the DB. But going forward we will be including Oracle, MySql, etc. So what would be a better structure with this in mind? I thought about:

        Namespace App.Data.SqlServer (App.Data.SqlServer.dll)
            Folder NHibernate
            Folder EntityFramework
            Folder LinqToSql

    Or would it just be better to have separate assemblies for each database and access technology?

        Namespace App.Data.SqlServer.NHibernate (App.Data.SqlServer.NHibernate.dll)
        Namespace App.Data.SqlServer.EntityFramework (App.Data.SqlServer.EntityFramework.dll)
        Namespace App.Data.Oracle.NHibernate (App.Data.Oracle.NHibernate.dll)
        Namespace App.Data.MySql.NHibernate (App.Data.MySql.NHibernate.dll)

    Read the article

  • Dynamic searchable fields: best practice?

    - by boblu
    I have a Lexicon model, and I want users to be able to add dynamic features to every lexicon. I also have a complicated search interface that lets users search on every single feature (including the dynamic ones) belonging to the Lexicon model. I could have used a serialized text field to save all the dynamic information if it weren't for searching. Since I want to let users search on all fields, I have created a DynamicField model to hold all dynamically created features. But imagine I have 1,000,000,000 lexicons: if someone creates a dynamic feature for every lexicon, this will create 1,000,000,000 rows in the DynamicField model, and the SQL search function will become quite inefficient once a lot of dynamic features have been created. Is there a better solution for this situation? Which way should I take?

        search for a better DB design for dynamic fields
        try tuning MySQL (add cache fields, add indexes, ...) with the current DB design

    Read the article

  • Multiple Table Joins to Improve Performance?

    - by EdenMachine
    If I have a table structure like this:

        Transaction [TransID, ...]
        Document    [DocID, TransID, ...]
        Signer      [SignerID, ...]
        Signature   [SigID, DocID, SignerID, ...]

    and the business logic is like this:

        Transactions can have multiple documents
        Documents can have multiple signatures
        The same signer can have multiple signatures in multiple documents within the same transaction

    now to my actual question: if I wanted to find all the documents in a particular transaction for a particular signer, would it be better, performance-wise, to also store the TransID and the DocID in the Signer table so I have smaller joins? Otherwise, I'd have to join through Signature → Document → Transaction to get all the documents in the transaction for that signer. I think it's really messy to have that many relationships in the Signer table, though, and it doesn't seem "correct" to do it that way (it also looks like an update nightmare), but I can see that direct joins might perform better. Thoughts? TIA!
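
    For concreteness, here is a minimal LINQ-to-objects sketch of the normalized path (hypothetical types, not the poster's actual schema). The signer's documents in one transaction come from a single Signature-to-Document join, since Document already carries TransID; with indexes on the foreign keys, the equivalent SQL join is usually cheap enough that denormalizing Signer is hard to justify.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class SignerDocs
        {
            class Document  { public int DocID; public int TransID; }
            class Signature { public int SigID; public int DocID; public int SignerID; }

            // All documents a given signer signed within one transaction, via the
            // normalized path Signature -> Document; Transaction is reached through
            // Document.TransID, so no extra join is needed here.
            static IEnumerable<Document> DocsForSigner(
                IEnumerable<Signature> signatures, IEnumerable<Document> documents,
                int signerId, int transId)
            {
                return (from sig in signatures
                        where sig.SignerID == signerId
                        join doc in documents on sig.DocID equals doc.DocID
                        where doc.TransID == transId
                        select doc)
                       .Distinct(); // a signer may sign the same document more than once
            }
        }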

    Read the article

  • What's the difference between Zend Framework and WordPress as a framework?

    - by justjoe
    I only know WordPress, and I've started to seek an alternative framework: Zend. I've heard hearsay that Zend is better than other frameworks, and that if you're "a serious coder", or trying to act like one, you need to use it to build your web apps. Some say Zend is better, but that's subjective; they say it's fast and secure, but nobody tells me the reason, or at least compares it with WordPress. Ultimate question: does Zend have themes or plugins just like WordPress? Any hint will be helpful.

    Read the article

  • PHP Framework Benefits / Downfalls

    - by Lizard
    I have been a PHP developer for about 10 years now, and until about a month ago I had never used a framework. The framework I am now using, due to an existing codebase, is CakePHP 1.2. I can see certain benefits of frameworks, with basic helpers like default layouts, and I can definitely see the benefits of MVC, keeping the logic separate, etc. But the query building just seems to be bloated. Is this expected? Am I likely to be able to build better queries than the framework could build? I just feel I could get my apps running better without a framework. What are your thoughts?

    Read the article

  • Show friendly message on ASP.NET Ajax error

    - by balexandre
    You all know how annoying this is: I have a log system, and the correct error is spelled out there, but I want to give a better message to the user. I keep trying several ways, but I'm using Telerik components as well as jQuery, and I ended up using both ASP.NET Ajax methods and jQuery, so I use:

        function pageLoad() {
            try {
                var manager = Sys.WebForms.PageRequestManager.getInstance();
                manager.add_endRequest(endRequest);
                manager.add_beginRequest(OnBeginRequest);
            } catch (err) {
                alert(err);
            }
        }

    as well as:

        $(document).ready(function() { ... });

    That alert(err) is never fired, even upon OnClick events. What's the best approach to avoid these error messages and provide a cleaner way? All this happens in an <asp:UpdatePanel>, which I used back when I didn't know better (3 years ago!), and I really don't want to mess things up and rebuild it all from scratch :( Any help is greatly appreciated. Updated with more error windows after volpav's solution.

    Read the article

  • Javascript code in ASP.NET MVC Partial Views (ASCX) or not?

    - by Alex
    Is there a "best practice" for placing Javascript code when you have many partial views and JS code that's specific to them? I feel like I'm creating a maintenance nightmare by having many partial views and then a bunch of independent Javascript files for them which need to be synced up when there is a partial view change. It appears, for maintenance purposes, better to me to put the JS code with the partial view. But then I'm violating generally accepted practices that all JS code should be at the bottom of the page and not mixed in, and also I'd end up with multiple references to the same JS file (as I'd include a reference in each ASCX for intellisense purposes). Does anyone have a better idea? Thank you!

    Read the article

  • assignment vs. std::swap, and merging and keeping duplicates in a separate object

    - by rubenvb
    Say I have two std::set<std::string>s. The first one, old_options, needs to be merged with additional options contained in new_options. I can't just use std::merge (well, I do, but not only that) because I also check for duplicates and warn the user about them accordingly. To this effect, I have:

        void merge_options( set<string> &old_options, const set<string> &new_options )
        {
            // find duplicates and create merged_options, a set<string>
            // containing the merged options
            // handle the duplicates the way I want to
            // ...
            old_options = merged_options;
        }

    Is it better to use

        std::swap( merged_options, old_options );

    than the assignment I have? And is there a better way to filter duplicates and return the merged set than consecutive calls to std::set_intersection and std::set_union to detect duplicates and merge the sets? I know it's slower than doing both in one traversal, but these sets are small (performance is not critical) and I trust the Standard more than I trust myself.

    Read the article

  • Check if a string substitution rule will ever generate another string.

    - by Mgccl
    Given two strings S and T of the same length, and given a set of replacement rules, each of which finds a substring A in S and replaces it with a string B of the same length: is there a sequence of rule applications that turns string S into string T? I believe there is no better way to answer this than to try every rule in every reachable state, which would take exponential time. But I don't know if there are better solutions.
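
    A minimal sketch of that exhaustive search, written as a breadth-first walk over reachable strings with a visited set (a hypothetical C# helper, not from the post). It is correct but still worst-case exponential; reachability for length-preserving rewriting systems is PSPACE-complete in general, so a polynomial algorithm is unlikely.

        using System;
        using System.Collections.Generic;

        class RewriteSearch
        {
            static bool CanRewrite(string s, string t, IList<(string A, string B)> rules)
            {
                var seen = new HashSet<string> { s };
                var queue = new Queue<string>();
                queue.Enqueue(s);
                while (queue.Count > 0)
                {
                    string cur = queue.Dequeue();
                    if (cur == t) return true;
                    foreach (var (a, b) in rules)
                    {
                        // apply the rule at every position where A occurs
                        for (int i = cur.IndexOf(a, StringComparison.Ordinal); i >= 0;
                             i = cur.IndexOf(a, i + 1, StringComparison.Ordinal))
                        {
                            string next = cur.Substring(0, i) + b + cur.Substring(i + a.Length);
                            if (seen.Add(next)) queue.Enqueue(next); // enqueue unseen states only
                        }
                    }
                }
                return false;
            }
        }

    For example, CanRewrite("abc", "abd", new[] { ("c", "d") }) returns true after one rewrite.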

    Read the article

  • What's the Best Practice for a Search SQL Query?

    - by Marc V
    I have a SQL Server 2008 Express database with the following tables:

        CREATE TABLE Videos (
            VideoID bigint NOT NULL,
            Title varchar(100) NULL,
            Description varchar(MAX) NULL,
            isActive bit NULL
        )
        CREATE TABLE Tags (
            TagID bigint NOT NULL,
            Tag varchar(100) NULL
        )
        CREATE TABLE VideoTags (
            VideoID bigint NOT NULL,
            TagID bigint NOT NULL
        )

    Now I need a SQL query to search for a phrase (e.g. "Beyonce Halo Music Video") against these tables, scoring videos as follows:

        exact phrase in Title: 0.5 points
        exact phrase in Description: 0.4 points
        exact phrase in Tags: 0.3 points
        all words in Title: 0.2 points
        all words in Description: 0.2 points
        one or more words in Title: 0.1 points
        one or more words in Description: 0.1 points

    And I will rank the videos by points. What would the SQL query for this be? A LINQ query would be even better. If you know a better way to achieve this, please help.
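
    Not from the post, but a LINQ-to-objects sketch of that exact scoring scheme (a hypothetical Video type with its tags preloaded; a LINQ to SQL version would have the same shape, though there Contains translates to LIKE and is usually case-insensitive under the default collation):

        using System;
        using System.Linq;

        class VideoSearch
        {
            class Video
            {
                public string Title = "";
                public string Description = "";
                public string[] Tags = new string[0];
            }

            static void Main()
            {
                var videos = new[]
                {
                    new Video { Title = "Beyonce Halo Music Video", Tags = new[] { "halo" } },
                    new Video { Title = "Halo gameplay", Description = "music video" },
                };

                string phrase = "Beyonce Halo Music Video";
                string[] words = phrase.Split(' ');

                // Note: string.Contains is case-sensitive in memory.
                var ranked =
                    from v in videos
                    let score =
                        (v.Title.Contains(phrase) ? 0.5 : 0.0) +
                        (v.Description.Contains(phrase) ? 0.4 : 0.0) +
                        (v.Tags.Any(t => t.Contains(phrase)) ? 0.3 : 0.0) +
                        (words.All(w => v.Title.Contains(w)) ? 0.2 : 0.0) +
                        (words.All(w => v.Description.Contains(w)) ? 0.2 : 0.0) +
                        (words.Any(w => v.Title.Contains(w)) ? 0.1 : 0.0) +
                        (words.Any(w => v.Description.Contains(w)) ? 0.1 : 0.0)
                    where score > 0
                    orderby score descending
                    select new { v.Title, score };

                foreach (var hit in ranked)
                    Console.WriteLine("{0:0.0}  {1}", hit.score, hit.Title);
            }
        }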

    Read the article

  • How do I create a point system in a Rails app that assigns points to users and non-authenticated users?

    - by codyvbrown
    I'm building a question-and-answer application on top of Twitter, and I'm hitting some snags because I'm inevitably dealing with two classes of users: authenticated and non-authenticated. The site enables users to give points to other users, who may or may not be authenticated, and I want to create a site-wide point system where the application stores and displays this information on their profile. I want to save this point data to the user because that would be faster and more efficient, but non-authenticated users aren't in our system; we only have the Twitter handle. So instead we display the points in our system like this:

        @points = Point.all(
          :select     => "tag, count(*) AS count",   # return tag and count
          :group      => 'tag',                      # group by the tag
          :order      => "2 desc",
          :conditions => { :twitter_handle => params[:username] }
        )

    Is there a better way to do this? Is there a better way to associate data with non-authenticated users?

    Read the article

  • Is this a correct iText design?

    - by Lucas
    I'm making some PDF reports to be used in a web app, and I wonder if the way I'm building the designs is appropriate. This is a screenshot of the way I'm doing things: as you can see, I'm using tables to position everything in the document, which I think is a pretty similar design to HTML. But I want to know if there is a better way to get the same result. This is the document without cell borders. I could post the code if necessary. By the way, why should I spend long hours programming this kind of stuff with iText when I could do things faster, and maybe better looking, with iReport? I like iText; it's just a question. Sorry for my English, and thanks!
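
    Tables are in fact the standard layout device in iText. For reference, here is a minimal sketch using iTextSharp (the C# port; the Java API is analogous) that builds a borderless layout table like the one described; the file name and cell text are made up.

        using System.IO;
        using iTextSharp.text;
        using iTextSharp.text.pdf;

        class ReportSketch
        {
            static void Main()
            {
                var doc = new Document(PageSize.A4);
                PdfWriter.GetInstance(doc, new FileStream("report.pdf", FileMode.Create));
                doc.Open();

                // Two-column layout table spanning the page width.
                var table = new PdfPTable(2);
                table.WidthPercentage = 100;

                // A full-width header row with the borders switched off.
                var header = new PdfPCell(new Phrase("Report title"));
                header.Colspan = 2;
                header.Border = Rectangle.NO_BORDER;
                table.AddCell(header);

                var left = new PdfPCell(new Phrase("Left block"));
                left.Border = Rectangle.NO_BORDER;
                table.AddCell(left);

                var right = new PdfPCell(new Phrase("Right block"));
                right.Border = Rectangle.NO_BORDER;
                table.AddCell(right);

                doc.Add(table);
                doc.Close();
            }
        }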

    Read the article

  • What to return when making an Ajax request

    - by Russell
    When we return data from an Ajax call, is it better to return a document containing HTML to display on the page, or to return XML/JSON data which can then be processed? I know different circumstances may determine what 'better' means, but I really want to know which is more appropriate in different circumstances. I am working on the framework for a large ASP.NET application, using jQuery Ajax (the forms plugin). My initial thought was to return the data as XML and then process it accordingly, but this increases the processing required in JavaScript to populate the page. I am trying to balance flexible, clear and simple. Thanks in advance for your knowledge and information.
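
    As one concrete shape for the data option, here is a hypothetical ASP.NET generic handler (.ashx) that serializes an anonymous object to JSON with JavaScriptSerializer (the handler name and payload are made up). The client then owns the rendering, which keeps the payload small but moves templating work into JavaScript; returning ready-made HTML is the mirror-image trade-off.

        using System.Web;
        using System.Web.Script.Serialization;

        // Hypothetical endpoint, e.g. requested via jQuery as /GetSummary.ashx.
        public class GetSummaryHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                var payload = new { Name = "example", Count = 3 }; // stand-in data
                context.Response.ContentType = "application/json";
                context.Response.Write(new JavaScriptSerializer().Serialize(payload));
            }

            public bool IsReusable
            {
                get { return true; }
            }
        }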

    Read the article

  • A better way to represent the same value given multiple values (C# 3.0)

    - by Newbie
    I have a situation for which I am looking for a more elegant solution. Consider the cases below:

        "BKP", "bkp", "book-to-price" will all represent BOOK-TO-PRICE
        "aop", "aspect oriented program" will all represent ASPECT-ORIENTED-PROGRAM

    i.e. if the user enters BKP, bkp or book-to-price, the program should treat it as BOOK-TO-PRICE. The same holds for the second example (ASPECT-ORIENTED-PROGRAM). I have the solution below:

        if (str == "BKP" || str == "bkp" || str == "book-to-price")
            return "BOOK-TO-PRICE";

    But I think there can be many better solutions. Could you please give some suggestions (an example would be better)? I am using C# 3.0 and the .NET Framework 3.5.
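
    One common alternative, sketched below: a single case-insensitive lookup table (the alias entries are illustrative), which keeps all the synonyms in one place instead of a chain of comparisons. This compiles under C# 3.0.

        using System;
        using System.Collections.Generic;

        static class OptionNames
        {
            // One alias table instead of chained comparisons;
            // OrdinalIgnoreCase makes "BKP" and "bkp" a single entry.
            static readonly Dictionary<string, string> Aliases =
                new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
                {
                    { "BKP",                     "BOOK-TO-PRICE" },
                    { "book-to-price",           "BOOK-TO-PRICE" },
                    { "AOP",                     "ASPECT-ORIENTED-PROGRAM" },
                    { "aspect oriented program", "ASPECT-ORIENTED-PROGRAM" },
                };

            public static string Canonicalize(string input)
            {
                string canonical;
                return Aliases.TryGetValue(input, out canonical) ? canonical : input;
            }
        }

    Canonicalize("bkp") and Canonicalize("book-to-price") both return "BOOK-TO-PRICE"; unknown inputs pass through unchanged.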

    Read the article

  • .NET Performance: Deep Recursion vs Queue

    - by JeffN825
    I'm writing a component that needs to walk large object graphs, sometimes 20-30 levels deep. What is the most performant way of walking the graph: (a) enqueueing "steps" so as to avoid deep recursion, or (b) a DFS (depth-first search), which may step many levels deep and have a "deep" stack trace at times? I guess the question I'm asking is: is there a performance hit in .NET for doing a DFS that causes a "deep" stack trace? If so, what is the hit? And would I be better off with a BFS, by means of queueing up steps that would otherwise have been handled recursively in a DFS? Sorry if I'm being unclear. Thanks.
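
    A third option worth noting: you can keep depth-first order while avoiding deep call stacks by managing an explicit Stack<T>. A minimal sketch with a hypothetical node type:

        using System;
        using System.Collections.Generic;

        class GraphWalker
        {
            public interface INode
            {
                IEnumerable<INode> Children { get; }
            }

            // Depth-first traversal without recursion: the explicit Stack<T> lives
            // on the heap, so graph depth no longer consumes the thread's call stack.
            public static void Walk(INode root, Action<INode> visit)
            {
                var stack = new Stack<INode>();
                var seen = new HashSet<INode>(); // guards against cycles in the graph
                stack.Push(root);
                while (stack.Count > 0)
                {
                    INode node = stack.Pop();
                    if (!seen.Add(node))
                        continue;
                    visit(node);
                    foreach (INode child in node.Children)
                        stack.Push(child);
                }
            }
        }

    That said, 20-30 frames is nowhere near the default 1 MB thread stack limit, so plain recursion is unlikely to be the bottleneck here; switching to an explicit stack or queue mostly changes traversal order, not cost.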

    Read the article

  • Algorithm for grouping friends at the cinema [closed]

    - by Tim Skauge
    I've got a brain teaser for you - it's not as simple as it sounds, so please read it and try to solve the issue. Before you ask if it's homework: it's not! I just wish to see if there's an elegant way of solving this. Here's the issue: X friends want to go to the cinema and wish to be seated in the best available groups. The best case is that everyone sits together, and the worst case is that everyone sits alone. Fewer groups are preferred over more groups; sitting alone is least preferred. Input is the number of people going to the cinema, and output should be an array of integer arrays that contains:

        ordered combinations (most preferred first)
        the number of people in each group

    Below are some examples of the number of people going to the cinema and the list of preferred combinations these people can be seated in:

        1 person:  1
        2 persons: 2, 1+1
        3 persons: 3, 2+1, 1+1+1
        4 persons: 4, 2+2, 3+1, 2+1+1, 1+1+1+1
        5 persons: 5, 3+2, 4+1, 2+2+1, 3+1+1, 2+1+1+1, 1+1+1+1+1
        6 persons: 6, 3+3, 4+2, 2+2+2, 5+1, 3+2+1, 2+2+1+1, 2+1+1+1+1, 1+1+1+1+1+1

    The number of combinations explodes beyond 7 persons, but I think you get the point by now. The question is: what does an algorithm that solves this problem look like? My language of choice is C#, so if you could give an answer in C# that would be fantastic!
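
    These groupings are exactly the integer partitions of X, ordered by a preference rule. A sketch in C# as requested; the ordering key is one reading of the examples above (fewest singletons first, then fewest groups, then more balanced splits), and it also emits partitions the example lists skip, such as 4+1+1 for six people.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class CinemaGroups
        {
            // All integer partitions of n with parts in non-increasing order.
            static IEnumerable<int[]> Partitions(int n, int max)
            {
                if (n == 0)
                {
                    yield return new int[0];
                    yield break;
                }
                for (int first = Math.Min(n, max); first >= 1; first--)
                    foreach (int[] rest in Partitions(n - first, first))
                    {
                        var result = new int[rest.Length + 1];
                        result[0] = first;
                        rest.CopyTo(result, 1);
                        yield return result;
                    }
            }

            static int[][] PreferredSeatings(int people)
            {
                return Partitions(people, people)
                    .OrderBy(p => p.Count(size => size == 1)) // sitting alone is least preferred
                    .ThenBy(p => p.Length)                    // fewer groups beat more groups
                    .ThenByDescending(p => p.Min())           // prefer balanced splits (3+3 over 4+2)
                    .ToArray();
            }

            static void Main()
            {
                foreach (int[] grouping in PreferredSeatings(6))
                    Console.WriteLine(string.Join("+", grouping));
            }
        }

    For 6 people this prints 6, 3+3, 4+2, 2+2+2, 5+1, 3+2+1, 4+1+1, 2+2+1+1, 3+1+1+1, 2+1+1+1+1, 1+1+1+1+1+1.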

    Read the article

  • Improving Javascript Load Times - Concatenation vs Many + Cache

    - by El Yobo
    I'm wondering which of the following will result in better performance for a page which loads a large amount of JavaScript (jQuery + jQuery UI + various other JavaScript files). I have gone through most of the YSlow and Google Page Speed material, but am left wondering about a particular detail. A key thing for me here is that the site I'm working on is not on the public net; it's a business-to-business platform where almost all users are repeat visitors (and therefore have caches of the data, which is something YSlow assumes will not be the case for a large number of visitors). First up, the standard approach recommended by tools such as YSlow is to concatenate the JavaScript, compress it, and serve it up in a single file loaded at the end of the page. This approach sounds reasonably effective, but I think a key part of the reasoning here is to improve performance for users without cached data. The system I currently have is something like this:

        All JavaScript files are compressed and loaded at the bottom of the page.
        All JavaScript files have far-future cache expiration dates, so they will remain (for most users) in the cache for a long time.
        Pages only load the JavaScript files they require, rather than loading one monolithic file, most of which will not be required.

    Now, my understanding is that, if the cache expiration date for a JavaScript file has not been reached, the cached version is used immediately; no HTTP request is sent to the server at all. If this is correct, I would assume that having multiple script tags does not cause any performance penalty, as I'm still not making any additional requests on most pages (recalling from above that almost all users have populated caches). In addition to this, not loading the unneeded JS means that the browser doesn't have to interpret or execute all the additional code it isn't going to need; as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine. Another benefit is that, when code changes, only the affected files need to be fetched again, rather than the whole set (granted, it would only need to be fetched once either way, so this is not so much of a benefit). I'm also looking at using LabJS to allow for parallel loading of the JS when it's not cached. So, what do people think is the better approach? In a similar vein, what do you think about taking the same approach with CSS - or is monolithic better there?

    Read the article

  • Objects instead of global variables in Perl

    - by Gaurav Dadhania
    I don't know if this is the right thing to do, but I'm looking for tutorials/articles on using objects instead of global variables to store state. For example:

        package Something;
        # some code here that generates errors and uses
        # Something::Errors to track errors...

        package Something::Errors;

        sub new {
            my ($class) = @_;
            my $this = {};
            return bless $this, $class;
        }

        sub setErrors {
            my ($this, @errors) = @_;
            $this->{errors} = \@errors;
        }

        sub getErrors {
            my ($this) = @_;
            return $this->{errors};
        }

    Is this better than using global variables? Any downsides to this? Any approach which might be better? Thanks.

    Read the article

  • Objective-C: fastest way to show a sequence of images in UIImageView

    - by Almas Adilbek
    I have hundreds of images, which are the frames of one animation (24 images per second), and each image is 1024x690. My problem is that I need to make a smooth animation iterating over each image frame in a UIImageView. I know I can use the animationImages property of UIImageView, but it crashes because of memory problems. I can also use imageView.image = [UIImage imageNamed:@""], which caches each image so that the next run of the animation is smooth; but caching that many images crashes the app. Now I use imageView.image = [UIImage imageWithContentsOfFile:@""], which does not crash the app but doesn't make the animation smooth. Maybe there is a better way to animate frame images? Maybe I need to do some preparation to somehow achieve a better result. I need your advice. Thank you!

    Read the article

  • Statistics Question: Kernel Smoothing in R

    - by James Thompson
    I have data of this form:

         x     y
         1  0.19
         2  0.26
         3  0.40
         4  0.58
         5  0.59
         6  1.24
         7  0.68
         8  0.60
         9  1.12
        10  0.80
        11  1.20
        12  1.17
        13  0.39

    I'm currently plotting a kernel-smoothed estimate of x versus y using this code:

        smoothed = ksmooth( d$resi, d$score, bandwidth = 6 )
        plot( smoothed )

    I simply want a plot of the x versus smoothed(y) values. However, the documentation for ksmooth suggests that this isn't the best kernel smoother available:

        This function is implemented purely for compatibility with S,
        although it is nowhere near as slow as the S function. Better
        kernel smoothers are available in other packages.

    What other kernel smoothers are better, and where can these smoothers be found?

    Read the article

  • What FIX implementation do you recommend for use with .NET

    - by Ajaxx
    I am reviewing implementation choices for FIX when using .NET. A few obvious choices come to mind, but I want to know if there are other options, better choices, or whether we've made the same decision as a lot of you:

        QuickFIX - stable C++ implementation, so you've got unmanaged code to interop with
        FIX4NET - C# implementation; seems to have some gaps in its implementation
        DIY - chime in here if you've made your own FIX engine

    Let me throw in some caveats here. I'm not looking for sub-100-microsecond processing. Performance is a requirement, but not so much that it's driving my decisions. A solid product that is stable, performs well, and is flexible enough to deal with vendor-specific dialects is the sweet spot. The more we can do in .NET, the better.

    Read the article

  • How can my team avoid frequent errors after refactoring?

    - by SDD64
    To give you a little background: I work for a company with roughly twelve Ruby on Rails developers (+/- interns), and remote work is common. Our product is made of two parts: a rather fat core, and customer projects, ranging from thin to big, built upon it. Customer projects usually expand the core; overwriting of key features does not happen. I might add that the core has some rather bad parts that are in urgent need of refactoring. There are specs, but mostly for the customer projects; the worst parts of the core are untested (as it should be...). The developers are split into two teams, working with one or two POs for each sprint. Usually, one customer project is strictly associated with one of the teams and POs. Now our problem: rather frequently, we break each other's stuff. Someone from team A expands or refactors core feature Y, causing unexpected errors in one of team B's customer projects. Mostly, the changes are not announced across the teams, so the bugs almost always hit unexpectedly. Team B, including the PO, assumed feature Y was stable and did not test it before releasing, unaware of the changes. How can we get rid of these problems? What kind of 'announcement technique' can you recommend?

    Read the article
