Search Results

Search found 23661 results on 947 pages for 'worse is better'.


  • In .NET, which loop runs faster: for or foreach?

    - by Binoj Antony
    In C#/VB.NET/.NET, which loop runs faster: for or foreach? Ever since I read, a long time ago, that a for loop works faster than a foreach loop, I have assumed it held true for all collections: generic collections, arrays, and so on. I scoured Google and found a few articles, but most of them are inconclusive (read the comments on the articles) and open-ended. What would be ideal is to have each scenario listed along with the best solution for it, for example (just an illustration of the format):
    - for iterating an array of 1000+ strings: for is better than foreach
    - for iterating over an IList of (non-generic) strings: foreach is better than for
    A few references found on the web for the same:
    - the original grand old article by Emmanuel Schanzer
    - CodeProject: FOREACH Vs. FOR
    - Blog: To foreach or not to foreach, that is the question
    - ASP.NET forum: .NET 1.1 C# for vs foreach
    [Edit] Apart from the readability aspect, I am really interested in facts and figures; there are applications where the last mile of squeezed-out performance optimization does matter.
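
    A quick way to get the "facts and figures" for one scenario is a Stopwatch micro-benchmark along these lines (a minimal sketch, not from any of the referenced articles; the array size and run count are arbitrary assumptions, and results vary by collection type, runtime and JIT):

      using System;
      using System.Diagnostics;

      class ForVsForeach
      {
          static void Main()
          {
              string[] items = new string[1000];
              for (int i = 0; i < items.Length; i++)
                  items[i] = "item" + i;

              const int runs = 10000;
              long total = 0;

              var sw = Stopwatch.StartNew();
              for (int r = 0; r < runs; r++)
                  for (int i = 0; i < items.Length; i++)
                      total += items[i].Length;        // indexed access
              sw.Stop();
              Console.WriteLine("for:     {0} ms", sw.ElapsedMilliseconds);

              sw = Stopwatch.StartNew();
              for (int r = 0; r < runs; r++)
                  foreach (string s in items)
                      total += s.Length;               // foreach (compiled to indexed access for arrays)
              sw.Stop();
              Console.WriteLine("foreach: {0} ms", sw.ElapsedMilliseconds);

              Console.WriteLine(total);                // keeps the loops from being optimized away
          }
      }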

    Read the article

  • How to structure a Visual Studio project for the data access layer

    - by Akk
    I currently have a project that uses various DB access technologies, mainly for showcasing or for demos. Currently we have:
    - Namespace App.Data (App.Data.dll)
      - Folder NHibernate
      - Folder EntityFramework
      - Folder LinqToSql
    The above structure is OK while we only use SQL Server as the DB, but going forward we will be including Oracle, MySQL, etc. So what would be a better structure with this in mind? I thought about:
    - Namespace App.Data.SqlServer (App.Data.SqlServer.dll)
      - Folder NHibernate
      - Folder EntityFramework
      - Folder LinqToSql
    Or would it just be better to have separate assemblies for each database and access technology?
    - Namespace App.Data.SqlServer.NHibernate (App.Data.SqlServer.NHibernate.dll)
    - Namespace App.Data.SqlServer.EntityFramework (App.Data.SqlServer.EntityFramework.dll)
    - Namespace App.Data.Oracle.NHibernate (App.Data.Oracle.NHibernate.dll)
    - Namespace App.Data.MySql.NHibernate (App.Data.MySql.NHibernate.dll)
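
    Whichever packaging is chosen, one way to keep callers independent of both the database and the access technology is to put a shared contract in a core assembly and implement it once per provider assembly. A minimal sketch, with hypothetical type names (not from the original question):

      // Shared contract in a core assembly.
      namespace App.Data.Core
      {
          public class Customer
          {
              public int Id { get; set; }
              public string Name { get; set; }
          }

          public interface ICustomerRepository
          {
              Customer GetById(int id);
              void Save(Customer customer);
          }
      }

      // One implementation per provider/technology assembly, e.g. App.Data.SqlServer.NHibernate.dll.
      namespace App.Data.SqlServer.NHibernate
      {
          using App.Data.Core;

          public class CustomerRepository : ICustomerRepository
          {
              public Customer GetById(int id)
              {
                  // NHibernate session lookup against SQL Server would go here.
                  throw new System.NotImplementedException();
              }

              public void Save(Customer customer)
              {
                  // NHibernate save/update against SQL Server would go here.
                  throw new System.NotImplementedException();
              }
          }
      }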

    Read the article

  • relating data stored in NoSQL DB to data stored in SQL DB

    - by seanbrant
    What's the best way to use a SQL DB alongside a NoSQL DB? I want to keep my users and other data in Postgres, but I have some data that would be better suited to a NoSQL DB like Redis. I see a lot of talk about switching to NoSQL, but little talk about integrating it with existing systems. I think it would be foolish to throw the baby out with the bathwater and ditch SQL altogether, unless it makes things easier to maintain and develop. I'm wondering what the best approach is for relating data stored in SQL to my data in Redis. I was thinking of something along these lines:
    - User object stored in SQL
    - Book object in Redis; the key is a SHA1 hash of the value, the value is a JSON string
    - Relations stored in Redis; the key is User.pk:books, the value is a Redis set of SHA1s
    Anyone have experience, tips, or better ways?
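
    A minimal sketch of the proposed layout, here written in C# with the StackExchange.Redis client purely for illustration (the question doesn't name a client or language; the key names follow the scheme above):

      // Sketch only: books and the user->books relation live in Redis, users stay in SQL.
      using System;
      using System.Security.Cryptography;
      using System.Text;
      using StackExchange.Redis;

      class BookStore
      {
          readonly IDatabase _redis = ConnectionMultiplexer.Connect("localhost").GetDatabase();

          public void AddBookForUser(int userPk, string bookJson)
          {
              // Key the book by the SHA1 of its JSON payload, as proposed above.
              string sha1;
              using (var sha = SHA1.Create())
                  sha1 = BitConverter.ToString(sha.ComputeHash(Encoding.UTF8.GetBytes(bookJson)))
                                     .Replace("-", "").ToLowerInvariant();

              _redis.StringSet("book:" + sha1, bookJson);         // the book object
              _redis.SetAdd("user:" + userPk + ":books", sha1);   // relation set keyed by the SQL primary key
          }

          public RedisValue[] GetBookIdsForUser(int userPk)
          {
              return _redis.SetMembers("user:" + userPk + ":books");
          }
      }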

    Read the article

  • Ubuntu 12.04 very slow, especially with Android Studio

    - by Dew
    I have an old laptop with the following specification: Memory: 485 MiB, Processor: Genuine Intel CPU T2300 @ 1.66 GHz ×2, OS type: 32-bit, Disk: 78.1 GB. I installed Ubuntu 12.04 LTS on it and noticed that the overall system is very slow to respond. I searched the internet and found some articles about how to make Ubuntu 12.04 LTS run faster; I applied everything they suggested, including installing the LXDE desktop environment, and saw no difference in response time. Then I needed to develop some Android applications, so I downloaded Android Studio (Beta) 0.8.6, and the problem became even worse: whenever I try to open Android Studio the screen freezes for several minutes, it takes a long time to load the projects and initialize the workspace, and the cursor moves very slowly. When I tried to run my first application on the AVD, it took three hours and still had not run. I have deleted Android Studio and reinstalled it several times trying to solve the problem, but nothing changes. If you have any suggestions that may help me make my laptop and Android Studio work faster, I would appreciate it. Thank you in advance.

    Read the article

  • dynamic searchable fields, best practice?

    - by boblu
    I have a Lexicon model, and I want users to be able to add dynamic features to every lexicon. I also have a complicated search interface that lets users search on every single feature (including the dynamic ones) belonging to the Lexicon model. I could have used a serialized text field to store all the dynamic information if it were not meant for searching. Since I do want to let users search on all fields, I have created a DynamicField model to hold all dynamically created features. But imagine I have 1,000,000,000 lexicons: if someone creates a dynamic feature for every lexicon, that results in 1,000,000,000 rows in the DynamicField model, so the SQL search will become quite inefficient once a lot of dynamic features have been created. Is there a better solution for this situation? Which way should I take?
    - Search for a better DB design for dynamic fields
    - Try tuning MySQL (add cache fields, add indexes, ...) with the current DB design

    Read the article

  • Multiple Table Joins to Improve Performance?

    - by EdenMachine
    If I have a table structure like this:
    - Transaction [TransID, ...]
    - Document [DocID, TransID, ...]
    - Signer [SignerID, ...]
    - Signature [SigID, DocID, SignerID, ...]
    And the business logic is like this:
    - Transactions can have multiple documents
    - Documents can have multiple signatures
    - The same signer can have multiple signatures on multiple documents within the same transaction
    Now, to my actual question: if I wanted to find all the documents a particular signer has signed in a particular transaction, would it be better, performance-wise, to also store the TransID and the DocID in the Signer table so I have smaller joins? Otherwise, I'd have to join through Signature, Document, and Transaction to get all the documents in the transaction for that signer. I think it's really messy to have that many relationships in the Signer table, though, and it doesn't seem "correct" to do it that way (it also looks like an update nightmare), but I can see that direct joins might perform better. Thoughts? TIA!
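
    For reference, the normalized join path described above looks like this as a LINQ sketch over minimal stand-in types (not the poster's schema code; with indexes on Signature(SignerID, DocID) and Document(TransID), the equivalent SQL join is usually cheap without denormalizing Signer):

      using System.Collections.Generic;
      using System.Linq;

      // Minimal stand-ins for the tables above; only the join path matters here.
      class Document  { public int DocID;  public int TransID; }
      class Signature { public int SigID;  public int DocID;  public int SignerID; }

      static class SignerQueries
      {
          // All documents a given signer signed within a given transaction:
          // Signature -> Document, with Document already carrying TransID.
          public static IEnumerable<Document> DocumentsForSignerInTransaction(
              IEnumerable<Signature> signatures, IEnumerable<Document> documents,
              int signerId, int transId)
          {
              return from sig in signatures
                     where sig.SignerID == signerId
                     join doc in documents on sig.DocID equals doc.DocID
                     where doc.TransID == transId
                     select doc;
          }
      }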

    Read the article

  • What is the difference between Zend Framework and WordPress as a framework?

    - by justjoe
    I only know WordPress and have started to look for an alternative framework: Zend. I've heard hearsay that Zend is better than other frameworks, and that if you're "a serious coder", or trying to act like one, you need to use it to build your web app. Some say Zend is better, but that's subjective; it's supposedly fast and secure, but nobody tells me why, or at least compares it with WordPress. Ultimate question: does Zend have themes or plugins just like WordPress? Any hint will be helpful.

    Read the article

  • PHP Framework Benefits / Downfalls

    - by Lizard
    I have been a PHP developer for about 10 years now, and until about a month ago I had never used a framework. The framework I am now using, due to an existing codebase, is CakePHP 1.2. I can see certain benefits of frameworks in the basic helpers like default layouts, and I can definitely see the benefits of MVC keeping the logic separate, etc. But the query building just seems bloated. Is this expected? Am I likely to be able to build better queries than the framework can? I just feel I could get my apps running better without a framework. What are your thoughts?

    Read the article

  • Javascript code in ASP.NET MVC Partial Views (ASCX) or not?

    - by Alex
    Is there a "best practice" for placing Javascript code when you have many partial views and JS code that's specific to them? I feel like I'm creating a maintenance nightmare by having many partial views and then a bunch of independent Javascript files for them which need to be synced up when there is a partial view change. It appears, for maintenance purposes, better to me to put the JS code with the partial view. But then I'm violating generally accepted practices that all JS code should be at the bottom of the page and not mixed in, and also I'd end up with multiple references to the same JS file (as I'd include a reference in each ASCX for intellisense purposes). Does anyone have a better idea? Thank you!

    Read the article

  • Deploying InfoPath forms – idiosyncrasies

    - by PointsToShare
    Well, I have written a sophisticated PowerShell script to expedite the deployment of InfoPath forms (.XSN files). Along the way, by way of trial and error (mostly error and error), I discovered a few little things. Here they are.
    • Regardless of how the install command is run – PowerShell or the GUI in Central Admin – SharePoint wraps the XSN inside a solution (WSP), then installs and deploys the solution.
    • The solution is named by concatenating "form-" with the first 16 characters of the file name (or fewer, if the file name is shorter than 16) and the required .wsp extension at the end. So if the form name was MyInfopathForm.xsn, the solution name will be form-MyInfopathForm.wsp, but for WithdrawalOfRequestsForRefund.xsn it will be named form-WithdrawalOfRequ.wsp.
    • It only gets worse! Had there already been a solution file with the same name, Microsoft appends a three-digit number to the name, like MyInfopathForm-123.wsp. Remember, a digit is a finger – I suspect a middle finger – so when you deploy the same form many times (many versions of it, or, as it was in my case, testing a script time and again), you'll end up with many such digit-appended solutions, all un-deployed except the last one. This is not a bug. It's a feature!
    Well, there are ways around it. When working by hand, remove the solution from the solution store before deploying the form again; in the script I do the same thing. And finally, an important caveat: make sure that all your form names are unique in the first 16 characters. If you also have a form with the name WithdrawalOfRequestForRelief.xsn, you're in trouble! That's all folks!

    Read the article

  • assignment vs std::swap, and merging while keeping duplicates in a separate object

    - by rubenvb
    Say I have two std::set<std::string>s. The first one, old_options, needs to be merged with additional options contained in new_options. I can't just use std::merge (well, I do, but not only that) because I also check for duplicates and warn the user about them accordingly. To this effect, I have:
      void merge_options( set<string> &old_options, const set<string> &new_options )
      {
          // find duplicates and create merged_options, a set<string> containing the merged options
          // handle duplicates the way I want to
          // ...
          old_options = merged_options;
      }
    Is it better to use std::swap( merged_options, old_options ); or the assignment I have? Is there a better way to filter duplicates and return the merged set than consecutive calls to std::set_intersection and std::set_union to detect dupes and merge the sets? I know that's slower than doing both in one traversal, but these sets are small (performance is not critical) and I trust the Standard more than I trust myself.

    Read the article

  • Show friendly message on ASP.NET Ajax error

    - by balexandre
    You all know how annoying this is: I have a log system and the exact error is spelled out there, but I want to give a better message to the user. I keep trying several ways, but I'm using Telerik components as well as jQuery, so I ended up using both ASP.NET Ajax methods and jQuery. I use:
      function pageLoad() {
          try {
              var manager = Sys.WebForms.PageRequestManager.getInstance();
              manager.add_endRequest(endRequest);
              manager.add_beginRequest(OnBeginRequest);
          } catch (err) {
              alert(err);
          }
      }
    as well as:
      $(document).ready(function() { ... });
    but that alert(err) is never fired, even on OnClick events. What's the best approach to avoid these error messages and provide a cleaner way? All of this happens inside an <asp:UpdatePanel>, which I used back when I didn't know better (3 years ago!), and I really don't want to mess things up and rebuild everything from scratch :( Any help is greatly appreciated. Updated with more error windows after volpav's solution.

    Read the article

  • Is this a correct iText design?

    - by Lucas
    I'm making some PDF reports to be used in a web app, and I wonder if the approach I'm taking for the designs is appropriate. This is a screenshot of the way I'm doing things. As you can see, I'm using tables to position everything in the document; I think this design is pretty similar to HTML. But I want to know if there is a better way to get the same result. This is the document without cell borders. I could post the code if necessary. By the way, why should I spend long hours programming this kind of stuff with iText when I could do things faster, and maybe better looking, with iReport? I like iText; it's just a question. Sorry for my English, and thanks!
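
    For comparison, a table-driven layout of the kind described reads roughly like this in iTextSharp 5.x, the C# port of iText (an assumption; the poster's actual code and iText version aren't shown, and the cell contents are made up):

      // A minimal table-driven page: one "layout table" with invisible borders, HTML-style.
      using System.IO;
      using iTextSharp.text;
      using iTextSharp.text.pdf;

      class ReportSketch
      {
          static void Main()
          {
              var doc = new Document(PageSize.A4);
              using (var fs = new FileStream("report.pdf", FileMode.Create))
              {
                  PdfWriter.GetInstance(doc, fs);
                  doc.Open();

                  var table = new PdfPTable(2) { WidthPercentage = 100 };
                  table.AddCell(new PdfPCell(new Phrase("Customer: ACME Corp")) { Border = Rectangle.NO_BORDER });
                  table.AddCell(new PdfPCell(new Phrase("Date: 2010-05-01")) { Border = Rectangle.NO_BORDER });

                  doc.Add(table);
                  doc.Close();
              }
          }
      }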

    Read the article

  • Check if a string substitution rule will ever generate another string.

    - by Mgccl
    Given two strings S and T of the same length, and given a set of replacement rules, each of which finds a substring A in S and replaces it with a string B (A and B have the same length): is there a sequence of rule applications that turns string S into string T? I believe there is no better way to answer this than to try every rule in every reachable state, which would take exponential time, but I don't know whether better solutions exist.
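
    The brute-force approach described amounts to a breadth-first search over the strings reachable from S. A sketch (the visited set avoids re-exploring states, though the worst case is still exponential; the Rule type is just a stand-in for the rule set):

      using System;
      using System.Collections.Generic;

      static class RuleReachability
      {
          public struct Rule { public string A; public string B; }   // A and B have equal length

          public static bool CanTransform(string s, string t, IList<Rule> rules)
          {
              var visited = new HashSet<string> { s };
              var queue = new Queue<string>();
              queue.Enqueue(s);

              while (queue.Count > 0)
              {
                  string current = queue.Dequeue();
                  if (current == t) return true;

                  foreach (var rule in rules)
                  {
                      // Apply the rule at every position where A occurs; the string length never changes.
                      int idx = current.IndexOf(rule.A, StringComparison.Ordinal);
                      while (idx >= 0)
                      {
                          string next = current.Substring(0, idx) + rule.B + current.Substring(idx + rule.A.Length);
                          if (visited.Add(next)) queue.Enqueue(next);
                          idx = current.IndexOf(rule.A, idx + 1, StringComparison.Ordinal);
                      }
                  }
              }
              return false;
          }
      }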

    Read the article

  • How do I create a point system in a Rails app that assigns points to users and non-authenticated users?

    - by codyvbrown
    I'm building a question-and-answer application on top of Twitter, and I'm hitting some snags because I'm inevitably dealing with two classes of users: authenticated and non-authenticated. The site enables users to give points to other users, who may or may not be authenticated, and I want to create a site-wide point system where the application stores and displays this information on their profile. I want to save this point data on the user because that would be faster and more efficient, but non-authenticated users aren't in our system; we only have their Twitter handle. So instead we display the points in our system like this:
      @points = Point.all(
        :select     => "tag, count(*) AS count",  # return tag and count
        :group      => 'tag',                     # group by the tag
        :order      => "2 desc",
        :conditions => { :twitter_handle => params[:username] })
    Is there a better way to do this? Is there a better way to associate data with non-authenticated users?

    Read the article

  • What's the Best Practice for a Search SQL Query?

    - by Marc V
    I have a SQL Server 2008 Express database with the following tables:
      CREATE TABLE Videos (
        VideoID     bigint NOT NULL,
        Title       varchar(100) NULL,
        Description varchar(MAX) NULL,
        isActive    bit NULL
      )
      CREATE TABLE Tags (
        TagID bigint NOT NULL,
        Tag   varchar(100) NULL
      )
      CREATE TABLE VideoTags (
        VideoID bigint NOT NULL,
        TagID   bigint NOT NULL
      )
    Now I need a SQL query to search for a phrase (e.g. "Beyonce Halo Music Video") against these tables. A video scores:
    - 0.5 points if the title contains the exact phrase
    - 0.4 points if the description contains the exact phrase
    - 0.3 points if the tags contain the exact phrase
    - 0.2 points if the title contains all of the words
    - 0.2 points if the description contains all of the words
    - 0.1 points if the title contains one or more of the words
    - 0.1 points if the description contains one or more of the words
    And I will rank the videos by these points. What would the SQL query for this be? A LINQ query would be even better. If you know a better way to achieve this, please help.
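
    Since a LINQ version was asked for, here is a LINQ-to-objects sketch of the weighting above, with minimal stand-in types (an illustration only, not a full answer; for real data volumes a SQL full-text index or a LIKE-based scoring query would do the matching inside the database):

      using System;
      using System.Collections.Generic;
      using System.Linq;

      // Minimal stand-in for the Videos table; tagsByVideo maps VideoID -> tag strings.
      class Video
      {
          public long VideoID;
          public string Title;
          public string Description;
          public bool IsActive;
      }

      static class VideoSearch
      {
          public static IEnumerable<Video> Search(
              IEnumerable<Video> videos,
              IDictionary<long, List<string>> tagsByVideo,
              string phrase)
          {
              string[] words = phrase.Split(' ');

              Func<string, bool> hasPhrase = s => s != null && s.IndexOf(phrase, StringComparison.OrdinalIgnoreCase) >= 0;
              Func<string, string, bool> hasWord = (s, w) => s != null && s.IndexOf(w, StringComparison.OrdinalIgnoreCase) >= 0;

              return videos
                  .Where(v => v.IsActive)
                  .Select(v => new
                  {
                      Video = v,
                      Score =                                           // weights follow the list above
                          (hasPhrase(v.Title) ? 0.5 : 0.0) +
                          (hasPhrase(v.Description) ? 0.4 : 0.0) +
                          (tagsByVideo.ContainsKey(v.VideoID) && tagsByVideo[v.VideoID].Any(hasPhrase) ? 0.3 : 0.0) +
                          (words.All(w => hasWord(v.Title, w)) ? 0.2 : 0.0) +
                          (words.All(w => hasWord(v.Description, w)) ? 0.2 : 0.0) +
                          (words.Any(w => hasWord(v.Title, w)) ? 0.1 : 0.0) +
                          (words.Any(w => hasWord(v.Description, w)) ? 0.1 : 0.0)
                  })
                  .Where(x => x.Score > 0)
                  .OrderByDescending(x => x.Score)
                  .Select(x => x.Video);
          }
      }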

    Read the article

  • Network printer problem with 12.04

    - by G. He
    I had an HP LaserJet 3030 printer connected to an Ubuntu box. It worked fine with 11.10: I was able to print from Ubuntu as well as from Windows and Mac machines on the home network. About a month or so ago I upgraded 11.10 to 12.04, and then things started falling apart: my Windows 7 laptop couldn't print to the printer any more. Today I installed many updates on 12.04, hoping that would fix the printing problem. Unfortunately, it made the situation much worse: now not only does my Windows 7 laptop not print, my XP desktop won't print either. Every time I print something from a Windows computer, the Ubuntu box logs an error message in /var/log/samba/log.'machineName':
      _spoolss_OpenPrinterEx: Cannot open a printer handle for printer \\server
    It is interesting that it uses the server name as the printer, not \\server\xyzprinter as the printer name. Has anyone had a similar problem? Is there any way to work around it?

    Read the article

  • What to return when making an Ajax request

    - by Russell
    When we return data from an Ajax call, is it better to return a document containing HTML to display on the page, or to return XML/JSON data that can be processed? I know different circumstances may determine what 'better' means, but I really want to know which is more appropriate in which circumstances. I am working on the framework for a large ASP.NET application, using jQuery Ajax (forms plugin). My initial thought was to return the data as XML and then process it accordingly, but that increases the processing required in JavaScript to populate the page. I am trying to balance flexibility, clarity, and simplicity. Thanks in advance for your knowledge and information.
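
    For illustration, the two options side by side as ASP.NET MVC controller actions (an assumption about the stack, since the question only says ASP.NET; the action names and the stand-in data are hypothetical):

      using System.Web.Mvc;

      public class OrdersController : Controller
      {
          // Option 1: return ready-made HTML; the jQuery success handler just injects it.
          public ActionResult RowHtml(int id)
          {
              var order = new { Id = id, Customer = "ACME", Total = 42.50m };   // stand-in data
              return PartialView("_OrderRow", order);                           // server renders the markup
          }

          // Option 2: return JSON; the client-side JavaScript builds the markup itself.
          public ActionResult RowData(int id)
          {
              var order = new { Id = id, Customer = "ACME", Total = 42.50m };   // stand-in data
              return Json(order, JsonRequestBehavior.AllowGet);
          }
      }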

    Read the article

  • .NET Performance: Deep Recursion vs Queue

    - by JeffN825
    I'm writing a component that needs to walk large object graphs, sometimes 20-30 levels deep. What is the most performant way of walking the graph?
    A. Enqueueing "steps" so as to avoid deep recursion, or
    B. A DFS (depth-first search), which may step many levels deep and produce a "deep" stack trace at times.
    I guess the question I'm asking is: is there a performance hit in .NET for doing a DFS that causes a "deep" stack trace? If so, what is the hit? And would I be better off with a BFS that queues up the steps that would have been handled recursively in a DFS? Sorry if I'm being unclear. Thanks.
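
    A sketch of the two traversal shapes over a hypothetical Node type; it shows only the control-flow difference, not a benchmark (at 20-30 levels a recursive DFS is nowhere near the default 1 MB thread stack, so the explicit queue mainly buys safety for much deeper or cyclic graphs):

      using System.Collections.Generic;

      class Node
      {
          public string Name;
          public List<Node> Children = new List<Node>();
      }

      static class GraphWalker
      {
          // B: plain recursive DFS - each level adds a stack frame.
          public static void VisitRecursive(Node node, HashSet<Node> seen)
          {
              if (node == null || !seen.Add(node)) return;
              // ... process node ...
              foreach (var child in node.Children)
                  VisitRecursive(child, seen);
          }

          // A: "enqueued steps" - iterative BFS; no call-stack growth regardless of depth.
          public static void VisitQueued(Node root)
          {
              var seen = new HashSet<Node>();
              var queue = new Queue<Node>();
              if (root != null) queue.Enqueue(root);

              while (queue.Count > 0)
              {
                  var node = queue.Dequeue();
                  if (!seen.Add(node)) continue;
                  // ... process node ...
                  foreach (var child in node.Children)
                      queue.Enqueue(child);
              }
          }
      }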

    Read the article

  • A better way to represent the same value given multiple values (C# 3.0)

    - by Newbie
    I have a situation for which I am looking for a more elegant solution. Consider the cases below:
    - "BKP", "bkp", "book-to-price" should represent BOOK-TO-PRICE
    - "aop", "aspect oriented program" should represent ASPECT-ORIENTED-PROGRAM
    i.e. if the user enters BKP, bkp, or book-to-price, the program should treat it as BOOK-TO-PRICE. The same holds for the second example (ASPECT-ORIENTED-PROGRAM). I have the solution below:
      if (str == "BKP" || str == "bkp" || str == "book-to-price")
          return "BOOK-TO-PRICE";
    But I think there can be many better solutions. Could you please give some suggestions (an example would be even better)? I am using C# 3.0 and .NET Framework 3.5.
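
    One common alternative (a sketch, not from the question) is a case-insensitive alias-to-canonical dictionary, which works on C# 3.0 / .NET 3.5; the alias list here just mirrors the two examples above:

      using System;
      using System.Collections.Generic;

      static class CanonicalNames
      {
          static readonly Dictionary<string, string> Aliases =
              new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
              {
                  { "BKP", "BOOK-TO-PRICE" },                        // also covers "bkp"
                  { "book-to-price", "BOOK-TO-PRICE" },
                  { "aop", "ASPECT-ORIENTED-PROGRAM" },
                  { "aspect oriented program", "ASPECT-ORIENTED-PROGRAM" },
              };

          public static string Resolve(string input)
          {
              string canonical;
              return Aliases.TryGetValue(input, out canonical) ? canonical : input;
          }
      }

      // Usage: CanonicalNames.Resolve("bkp") and CanonicalNames.Resolve("BKP") both return
      // "BOOK-TO-PRICE"; unknown inputs come back unchanged.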

    Read the article

  • Differentiating between user script input formats

    - by KChaloux
    I have a .NET project at work that provides a couple of (Iron)Python scripts to the customers, to allow them to customize the output of the program. The application generates code for certain machines and supports a couple of different formats. Until recently we only provided a script for one format; we're expanding on that to include support for the others. If the user is using a script, they select their input script before generating the output code. A script designed for Format1 output is going to cause errors if they're trying to generate Format2 output, and I need to deal with this. One option would be to just let the customers use common sense: if they load the wrong script it will simply fail or, worse, produce inaccurate data. I'm inclined to provide a little more protection than that. At the moment I'm considering putting a shebang-style comment line at the top of the script, a la:
      # OUTPUT - Format1
    If the user tries to run a Format2 process with a Format1 script, it will warn them. Alternatively, I could create different file extensions for the input scripts, varying by type. The file-type comment approach helps prevent the script from actually loading improperly, at the cost of failing to warn the user until they've already selected it via a dialog box. Using different file extensions would allow me to cut down on visual clutter when presenting a file dialog, but doesn't actually stop them from loading the wrong script. So I'm really not sure whether the right approach is to just leave it alone, or to provide some safeguards.
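
    A sketch of what the host-side check for the marker comment could look like (the post describes the idea but shows no code; the method names and exact header format are assumptions):

      using System;
      using System.IO;

      static class ScriptFormatCheck
      {
          // Reads the first non-blank line and extracts the declared format from a
          // "# OUTPUT - <FormatName>" header, or returns null if no header is present.
          public static string GetDeclaredFormat(string scriptPath)
          {
              foreach (var line in File.ReadLines(scriptPath))
              {
                  var trimmed = line.Trim();
                  if (trimmed.Length == 0) continue;
                  const string prefix = "# OUTPUT -";
                  return trimmed.StartsWith(prefix, StringComparison.OrdinalIgnoreCase)
                      ? trimmed.Substring(prefix.Length).Trim()
                      : null;
              }
              return null;
          }

          // Returns false (and warns) when the script declares a different format
          // than the one the user is about to generate.
          public static bool WarnIfMismatched(string scriptPath, string selectedFormat)
          {
              var declared = GetDeclaredFormat(scriptPath);
              if (declared != null && !declared.Equals(selectedFormat, StringComparison.OrdinalIgnoreCase))
              {
                  Console.WriteLine("Warning: script declares '{0}' but '{1}' output was requested.",
                                    declared, selectedFormat);
                  return false;
              }
              return true;   // no header, or a matching header - proceed
          }
      }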

    Read the article

  • Improving Javascript Load Times - Concatenation vs Many + Cache

    - by El Yobo
    I'm wondering which of the following is going to result in better performance for a page which loads a large amount of JavaScript (jQuery + jQuery UI + various other JavaScript files). I have gone through most of the YSlow and Google Page Speed material, but am left wondering about a particular detail. A key thing for me here is that the site I'm working on is not on the public net; it's a business-to-business platform where almost all users are repeat visitors (and therefore have caches of the data, which is something YSlow assumes will not be the case for a large number of visitors). First up, the standard approach recommended by tools such as YSlow is to concatenate it, compress it, and serve it up in a single file loaded at the end of your page. This approach sounds reasonably effective, but I think a key part of the reasoning here is to improve performance for users without cached data. The system I currently have is something like this:
    * All JavaScript files are compressed and loaded at the bottom of the page
    * All JavaScript files have far-future cache expiration dates, so they will remain (for most users) in the cache for a long time
    * Pages only load the JavaScript files that they require, rather than loading one monolithic file, most of which will not be required
    Now, my understanding is that, if the cache expiration date for a JavaScript file has not been reached, then the cached version is used immediately; there is no HTTP request sent to the server at all. If this is correct, I would assume that having multiple script tags is not causing any performance penalty, as I'm still not issuing any additional requests on most pages (recalling from above that almost all users have populated caches). In addition to this, not loading the JS means that the browser doesn't have to interpret or execute all this additional code which it isn't going to need; as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine. Another benefit is that, when code changes, only the affected files need to be fetched again, rather than the whole set (granted, it would only need to be fetched once, so this is not much of a benefit). I'm also looking at using LabJS to allow for parallel loading of the JS when it's not cached. So, what do people think is the better approach? In a similar vein, what do you think about a similar approach to CSS - is monolithic better?

    Read the article

  • How to verify the code that could take a substantial time to compile? [on hold]

    - by user18404
    As a follow-up to my previous question, "What is the best approach for coding in a slow compilation environment": to recap, I am stuck with a large software system where a TDD ideology of "test often" does not work, and to make it even worse, features like pre-compiled headers, multi-threaded compilation, incremental linking, etc. are not available to me. Hence I think the best way out would be to add extensive logging to the system and to start "coding in large chunks", which I understand as coding for two to three hours first (as opposed to 15-20 minutes in TDD), thoroughly eyeballing the code for 15 minutes, and only after all that compiling and running the tests. As I have been doing TDD for quite a while, my code-eyeballing / code-verification skills have got rusty (you don't really need them that much if you can quickly verify what you've done in 5 seconds by running a test or two), so I am after recommendations on how to learn these source-code verification and error-spotting skills again. I know I was able to do that easily some 5-10 years ago, when I didn't have much support from the compiler and unit-testing tools I have had until recently, so there should be a way to get back to the basics.

    Read the article

  • FlockDB - What is it? And what are the best use cases for it?

    - by Guru
    I just came across the FlockDB graph database. Details are at github/flockDB. Twitter claims it uses FlockDB for the following: "Twitter runs FlockDB on a large cluster of machines. We use it to store social graphs (who follows whom, who blocks whom) and secondary indices at Twitter." At first glance, setting it up and trying it doesn't look straightforward. Has anyone already used it or set it up? If so, please answer the following general queries:
    - What kind of applications is it better suited for? (Twitter claims it is simple and very rough; it remains to be seen what that means, though.)
    - How is FlockDB better than other graph DBs / NoSQL DBs?
    - Have you set up FlockDB and used it for an application? Any early advice?
    Note: I am evaluating FlockDB and other graph databases mainly to learn them. Perhaps I will build an application with one.

    Read the article
