Search Results

Search found 9381 results on 376 pages for 'vs macros'.

  • Rank Source Control Options-VSS vs CVS vs none vs your own hell

    - by Roman A. Taycher
    It seems like a lot of people here and on many programmer wikis/blogs/etc. elsewhere really dislike VSS. A lot of people also have a serious dislike for CVS. In many places I have heard differing opinions on whether using VSS or CVS is better or worse than using no source control at all. Please rank the worst and explain why you ranked them that way. Feel free to throw your own horrible system into the rankings. If you feel it depends on the circumstances, try to explain some of the different scenarios that lead to different rankings. (Note: I see a lot of discussion of what is better, but little of what is worse.) Second note: while both answers are nice, I'm looking less for good replacements and more for a comparison of which is worse and, more importantly, why.

    Read the article

  • Performance - FunctionCall vs Event vs Action vs Delegate

    - by hwcverwe
    Currently I am using the Microsoft Sync Framework to synchronize databases. I need to gather information per record that is inserted/updated/deleted by the Microsoft Sync Framework and do something with this information. The sync speed can go over 50,000 records per minute, so my additional code needs to be very lightweight, otherwise it will be a huge performance penalty. The Microsoft Sync Framework raises a SyncProgress event for each record. I am subscribed to that event like this: // Assembly1 SyncProvider.SyncProgress += OnSyncProgress; // .... private void OnSyncProgress(object sender, DbSyncProgressEventArgs args) { switch (args.Stage) { case DbSyncStage.ApplyingInserts: // MethodCall/Delegate/Action<>/EventHandler<> => HandleInsertedRecordInformation // Do something with inserted record info break; case DbSyncStage.ApplyingUpdates: // MethodCall/Delegate/Action<>/EventHandler<> => HandleUpdatedRecordInformation // Do something with updated record info break; case DbSyncStage.ApplyingDeletes: // MethodCall/Delegate/Action<>/EventHandler<> => HandleDeletedRecordInformation // Do something with deleted record info break; } } Somewhere else, in another assembly, I have three methods: // Assembly2 public class SyncInformation { public void HandleInsertedRecordInformation(...) {...} public void HandleUpdatedRecordInformation(...) {...} public void HandleDeletedRecordInformation(...) {...} } Assembly2 has a reference to Assembly1, so Assembly1 does not know anything about the existence of the SyncInformation class which needs to handle the gathered information. So I have the following options to trigger this code: 1. use events and subscribe to them in Assembly2: 1.1. EventHandler<> 1.2. Action<> 1.3. Delegates 2. use dependency injection: public class Assembly2.SyncInformation : Assembly1.ISyncInformation 3. Other? I know the performance depends on: the switch in OnSyncProgress; whether I use a method call, delegate, Action<> or EventHandler<>; and the implementation of the SyncInformation class. I currently don't care about the implementation of the SyncInformation class. I am mainly focused on the OnSyncProgress method and how to call the SyncInformation methods. So my questions are: What is the most efficient approach? What is the least efficient approach? Is there a better way than using a switch in OnSyncProgress?
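    Below is a minimal micro-benchmark sketch of the three dispatch options discussed above (direct method call, Action<>, and EventHandler), using a hypothetical HandleInsert counter in place of the real SyncInformation methods; the absolute numbers will vary by machine, but it shows how the candidates can be measured in isolation:

        using System;
        using System.Diagnostics;

        class DispatchBenchmark
        {
            static long counter;

            // Hypothetical stand-in for a handler such as HandleInsertedRecordInformation.
            static void HandleInsert(int recordId) { counter++; }

            // Event-based variant of the same handler.
            static event EventHandler InsertApplied;

            static void Main()
            {
                const int iterations = 5000000;
                Action<int> insertAction = HandleInsert;     // the Action<> option
                InsertApplied += (sender, e) => counter++;   // the EventHandler option

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++) HandleInsert(i);               // direct method call
                Console.WriteLine("Direct call:  " + sw.ElapsedMilliseconds + " ms");

                sw.Restart();
                for (int i = 0; i < iterations; i++) insertAction(i);               // Action<int>
                Console.WriteLine("Action<int>:  " + sw.ElapsedMilliseconds + " ms");

                sw.Restart();
                for (int i = 0; i < iterations; i++) InsertApplied(null, EventArgs.Empty); // event
                Console.WriteLine("EventHandler: " + sw.ElapsedMilliseconds + " ms");

                Console.WriteLine("Handled " + counter + " notifications in total.");
            }
        }

    In practice all three mechanisms land within the same order of magnitude; the work done inside the handler body usually dwarfs the cost of the dispatch itself.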

    Read the article

  • Visual Studio macro to navigate to T4MVC link

    - by shannon
    I use T4MVC and I'm happy with it and want to keep it - it keeps down run-time defects. Unfortunately, it makes it harder to navigate to views and content (a.k.a. Views and Links in T4MVC). Even using ReSharper, I can't navigate to the referenced item: T4MVC and ReSharper Navigation. Can I get a hand building a macro to do this? Never having built a VS IDE macro before, I don't have a grasp on how to get at some things, like the internal results of the "Go To Definition" process, if that's even possible. If you aren't familiar with T4MVC, here's generally what the macro might do to help: 1. Given the token Links.Content.Scripts.jQuery_js in the file MyView.cshtml, '(F12) Go To Definition'. This behaves properly. 2. Having arrived at the related assignment public readonly string jQuery_js = "~/Content/Scripts/jQuery.js"; in a file generated by T4MVC (which is very nice, thank you David, but we really don't ever need to see), capture the string assigned and close the file. 3. Navigate in Solution Explorer to the PhysicalPath represented by the captured string. This process would also work for views/layouts/master-pages/partials, etc. If you provide a macro or a link to a macro to do this, or have another solution, wonderful. Otherwise, hints on how to do step 3 simply in a VS macro would be especially appreciated and will receive an upvote from me. I'd post the macro back here as an answer when done. Thanks!
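    A rough sketch of step 3, written here in C# against the EnvDTE automation API rather than as a finished VB macro; GoToPhysicalPath is a hypothetical helper, and whether FindProjectItem resolves the trimmed relative path in your solution layout is an assumption to verify:

        // Sketch only: map a T4MVC virtual path such as "~/Content/Scripts/jQuery.js"
        // to a project item and reveal it in Solution Explorer.
        using EnvDTE;

        static class T4MvcNavigation
        {
            public static void GoToPhysicalPath(DTE dte, string virtualPath)
            {
                // Assumption: the "~/" prefix maps onto the project-relative path on disk.
                string relativePath = virtualPath.TrimStart('~', '/').Replace('/', '\\');

                // FindProjectItem accepts a file name or path; passing the relative
                // form is the part most likely to need adjusting.
                ProjectItem item = dte.Solution.FindProjectItem(relativePath);
                if (item == null) return;

                item.ExpandView();                           // expand/reveal the item in Solution Explorer
                dte.ExecuteCommand("View.SolutionExplorer"); // give Solution Explorer focus
            }
        }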

    Read the article

  • What is the worst real-world macros/pre-processor abuse you've ever come across?

    - by Trevor Boyd Smith
    What is the worst real-world macro/pre-processor abuse you've ever come across (please, no contrived IOCCC answers *haha*)? Please add a short snippet or story if it is really entertaining. The goal is to teach something instead of always telling people "never use macros". p.s.: I've used macros before... but usually I get rid of them eventually when I have a "real" solution (even if the real solution is inlined so it becomes similar to a macro). Bonus: Give an example where the macro really was better than a non-macro solution. Related question: When are C++ macros beneficial?

    Read the article

  • Adobe Photoshop Vs Lightroom Vs Aperture

    - by Aditi
    Adobe Photoshop is the standard choice for photographers, graphic artists and Web designers. Adobe Photoshop Lightroom and Apple's Aperture are in the same league, but their usage is vastly different. Although Photoshop is the most popular and widely used tool among photographers, in many ways it is less relevant to photographers than ever before, because Lightroom and Aperture are aimed squarely at photographers for photo processing. With this write-up we are going to help you choose what is right for you and why. Adobe Photoshop: Adobe Photoshop is the best-liked tool for detailed photo editing and design work. Photoshop provides great features for rollovers and image slicing, and it includes comprehensive optimization features for producing the highest quality Web graphics with the smallest possible file sizes. You can also create startling animations with it. Designers and editors know how important precise masking is, and Photoshop lets you do that with various detailing tools. The art history brush, contact sheets, and history palette are some of the smart features which add to its viability. Whether you're producing printed pages or moving images, you can work more efficiently and produce better results because of its smooth integration with other Adobe applications. By supporting layer effects, it allows you to quickly add drop shadows, inner and outer glows, bevels, and embossing to layers. It also provides a seamless Web graphics workflow. Photoshop is hands-down the best for editing. Photoshop Cons: • Slower, less precise editing features in Bridge • Processing lots of images requires actions and can be slower than exporting images from Lightroom • Much slower when editing and processing a large number of images. Aperture: Apple Aperture is aimed at the professional photographer who shoots predominantly raw files. It helps them manage their workflow and perform their initial raw conversion in a better way. Aperture provides adjustment tools such as the histogram to modify color and white balance, but most photo editing is left to Photoshop. It gives users the option of seeing their photographs laid out like slides or negatives on a light table, and it boasts stars, color-coding and easy techniques for filtering and picking images. Aperture has moved a few steps ahead of Photoshop here, but most of the editing work is still left to Photoshop, and it features seamless Photoshop integration. Aperture Pros: Aperture is a step up from the iPhoto software that comes with every Mac, and fairly easy to learn. Adjustments are made in a logical order from top to bottom of the menu. You can store the images in a library or any folder you choose. Aperture also works really well with Canon files directly. It is just $79 if you buy it through Apple's App Store. Moving forward, it will run on the iPad, and possibly the iPhone – Adobe products like Lightroom and Photoshop may never offer these options. It has a much nicer and simpler user interface. Lightroom: Lightroom does a smashing job of basic fixing and editing. It is a more advanced tool for photographers, who can use it to achieve startling photographic effects. Lightroom has many advanced features which make it one of the best tools for photographers and put it far ahead of the other two: Nondestructive editing – nothing is actually changed in an image until the photo is exported. Better controls for organizing your photos; Lightroom helps to gather a group of photos to use in a slideshow.
    Lightroom has larger Compare and Survey views of images. A quickly customizable interface: simple keystrokes allow you to perform different tasks, and all Lightroom controls are kept available in panels right next to the photos. An always-available History palette that doesn't go away when you close Lightroom. You gain more colors to work with compared to Photoshop, and with more precise control. Local control, or adjusting small parts of a photo without affecting anything else, has long been an important part of photography; in Lightroom 2, you can darken, lighten, and affect color and change sharpness and other aspects of specific areas in the photo simply by brushing your cursor across the areas. Photoshop has far more power in its Cloning and Healing Brush tools than Lightroom, but Lightroom offers simple cloning and healing that's nondestructive. Lightroom supports the RAW formats of more cameras than Aperture. Lightroom provides the option of storing images outside the application in the file system. It costs less than Photoshop. Why is Photoshop more advanced than Lightroom? There are countless image-processing plug-ins on the market for doing specialized processing in Photoshop. For example, if your image needs sophisticated noise reduction, you can use the Noiseware plug-in with Photoshop to do a much better job of noise removal than Lightroom can. Lightroom's advantages over Aperture 3: It will always have better integration with Photoshop. Lightroom is backed by a bigger and more active user community (so tutorials and the like are abundantly available). It has a better noise-reduction tool, and the lens-distortion correction tool is perfect, especially for photographers. Lightroom Cons: • Have to import images to work on them • Slows down with over 10,000 images in the catalog • For processing just one or two images this is a slower workflow. Photoshop Pros: • ACR has the same RAW processing controls as Lightroom • The ACR histogram is specialized to the chosen color space (Lightroom is locked into the ProPhoto RGB color space with an sRGB tone curve) • Don't have to import images to open them in Bridge or ACR • Ability to customize processing of RAW images with Photoshop Actions. Pricing and Availability: The latest version of Photoshop can be purchased from the Adobe store and Adobe authorized resellers and costs US$999. The latest version of Aperture can be bought for US$199 from the Apple Online Store or the Mac App Store. You can buy the latest version of Lightroom from the Adobe Store or an Adobe authorized reseller for US$299. Related posts: Adobe Photoshop CS5 vs Photoshop CS5 Extended; Web-based Alternatives to Photoshop; 10 Free Alternatives for Adobe Photoshop Software

    Read the article

  • Using the RSSBus Salesforce Excel Add-In From Excel Macros (VBA)

    - by dataintegration
    The RSSBus Salesforce Excel Add-In makes it easy to retrieve and update data from Salesforce from within Microsoft Excel. In addition to the built-in wizards that make data manipulation possible without code, the full functionality of the RSSBus Excel Add-Ins is available programmatically with Excel Macros (VBA) and Excel Functions. This article shows how to write an Excel macro that can be used to perform bulk inserts into Salesforce. Although this article uses the Salesforce Excel Add-In as an example, the same process can be applied to any of the Excel Add-Ins available on our website. Step 1: Download and install the RSSBus Excel Add-In available on our website. Step 2: Open Excel and create placeholder cells for the connection details that are needed from the macro. In this article, a spreadsheet will be created for batch inserts, and these cells will store the connection details and will be used to report the job Id, the batch Id, and the batch status. Step 3: Switch to the Developer tab in Excel. Add a new button on the spreadsheet, and create a new macro associated with it. This macro will contain the code needed to insert a batch of rows into Salesforce. Step 4: Add a reference to the Excel Add-In by selecting Tools --> References --> RSSBus Excel Add-In. The macro functions of the Excel Add-In will be available once the reference has been added. The following code shows how to call a Stored Procedure. In this example, a job is created to insert Leads by calling the CreateJob stored procedure. CreateJob returns a jobId that can be used to upload a large number of Leads in one transaction. Note the use of cells B1, B2, B3, and B4 that were created in Step 2 to read the connection settings from the Excel spreadsheet and to write out the status of the procedure. methodName = "CreateJob" module.SetProviderName ("Salesforce") nameArray = Array("ObjectName", "Action", "ConcurrencyMode") valueArray = Array("Lead", "insert", "Serial") user = Range("B1").value pass = Range("B2").value atoken = Range("B3").value If (Not user = "" And Not pass = "" And Not atoken = "") Then module.SetConnectionString ("User=" + user + ";Password=" + pass + ";Access Token=" + atoken + ";") If module.CallSP(methodName, nameArray, valueArray) Then Dim ColumnCount As Integer ColumnCount = module.GetColumnCount Dim idIndex As Integer For Count = 0 To ColumnCount - 1 Dim colName As String colName = module.GetColumnName(Count) If module.GetColumnName(Count) = "id" Then idIndex = Count End If Next While (Not module.EOF) Range("B4").value = module.GetValue(idIndex) module.MoveNext Wend Else MsgBox "The CreateJob query failed." End If Exit Sub Else MsgBox "Please specify the connection details." Exit Sub End If Error: MsgBox "ERROR: " & Err.Description Step 5: Add the code to your macro. If you use the code above, you can check the results at Salesforce.com. They can be seen at Administration Setup -> Monitoring -> Bulk Data Load Jobs. Download the attached sample file for a more complete demo. Distributing an Excel File With Macros: An Excel file with macros is saved using the .xlsm extension. The code for the macro remains in the Excel file, and you can distribute your Excel file to any machine where the RSSBus Salesforce Excel Add-In is already installed. Macro Sample File: Please download the fully functional sample Excel file that includes the code referenced here. You will also need the RSSBus Excel Add-In to make the connection. You can download a free trial here.
Note: You may get an error message stating: "Can't find project or library." in Excel 2007, since this example is made using Excel 2010. To resolve this, navigate to Tools -> References and uncheck the "MISSING: RSSBus Excel Add-In", then scroll down and check the "RSSBus Excel Add-In" listed below it.

    Read the article

  • "Mac" vs "OS X" vs "Mac OS X"

    - by Brian Campbell
    I am writing server software that gives administrators options that apply only to Mac OS X clients. When naming those options, I need to decide how I am going to refer to such clients. I can see three choices: "Mac", "OS X", and "Mac OS X". In context, I feel that "Mac OS X" might be a little clumsy: "Default protocol for Mac OS X clients"; "Change Mac OS X protocol". In discussing this with my boss, he suggests "OS X", as that's the name for the OS itself, while I think that "Mac" is more recognizable. While "OS X" is technically correct and is what Apple recommends, I feel that it has a lot less name recognition than "Mac"; in particular, administrators who work in a Windows-only environment may not even recognize "OS X" and may wonder what the option is about, while I think everyone knows what "Mac" refers to. In looking at what several popular pieces of software choose, I see that Microsoft has "Office for Mac". Adobe calls it "Macintosh" (which sounds very outdated; I believe Apple stopped using "Macintosh" years ago). Firefox uses "Mac OS X". Google has "Google Software Downloads for Mac". I don't see many popular pieces of software that refer to it solely as "OS X"; it seems that either "Mac OS X" or "Mac" is used most often. Apple does refer to it as OS X, but I think the fact that it's coming from Apple provides the disambiguation that you need, so it's not confusing coming from them, while it may be in another context. Is there any good solution for this? Should I just use the somewhat clumsy "Mac OS X" everywhere (or at least the first time I refer to it on any given screen)? Obviously, my boss has the final say, but I'd like to be able to provide a coherent argument.

    Read the article

  • Returning Identity Value in SQL Server: @@IDENTITY Vs SCOPE_IDENTITY Vs IDENT_CURRENT

    - by Arefin Ali
    We have some common misconceptions about returning the last inserted identity value from tables. To return the last inserted identity value we have the option to use the @@IDENTITY, SCOPE_IDENTITY or IDENT_CURRENT function depending on the requirement, but it will be a real mess if anybody uses any one of these functions without knowing its exact purpose. So here I want to share my thoughts on this. @@IDENTITY, SCOPE_IDENTITY and IDENT_CURRENT are similar functions in that they all return values that were inserted into an identity column. Earlier, in SQL Server 7, we used to use @@IDENTITY to return the last inserted identity value, because in those days we didn't have functions like SCOPE_IDENTITY or IDENT_CURRENT, but now we have all three. So let's check out which one is responsible for what. IDENT_CURRENT returns the last inserted identity value in a particular table. It never depends on a connection or the scope of the insert statement. The IDENT_CURRENT function takes a table name as a parameter. Here is the syntax to get the last inserted identity value in a particular table using the IDENT_CURRENT function: SELECT IDENT_CURRENT('Employee') Both @@IDENTITY and SCOPE_IDENTITY return the last identity value created in any table in the current session. But there is a small difference between the two: SCOPE_IDENTITY returns the value inserted only within the current scope, whereas @@IDENTITY is not limited to any particular scope. Here are the syntaxes to get the last inserted identity value using these functions: SELECT @@IDENTITY SELECT SCOPE_IDENTITY() Now let's have a look at the following example. Suppose I have two tables called Employee and EmployeeLog. CREATE TABLE Employee ( EmpId NUMERIC(18, 0) IDENTITY(1,1) NOT NULL, EmpName VARCHAR(100) NOT NULL, EmpSal FLOAT NOT NULL, DateOfJoining DATETIME NOT NULL DEFAULT(GETDATE()) ) CREATE TABLE EmployeeLog ( EmpId NUMERIC(18, 0) IDENTITY(1,1) NOT NULL, EmpName VARCHAR(100) NOT NULL, EmpSal FLOAT NOT NULL, DateOfJoining DATETIME NOT NULL DEFAULT(GETDATE()) ) I have an insert trigger defined on the table Employee which inserts a new record into EmployeeLog whenever a record is inserted into the Employee table. So suppose I insert a new record into the Employee table using the following statement: INSERT INTO Employee (EmpName,EmpSal) VALUES ('Arefin','1') The trigger will be fired automatically and insert a record into EmployeeLog. Here the scope of the insert statement and the trigger are different. In this situation, if I retrieve the last inserted identity value using @@IDENTITY, it will simply return the identity value from EmployeeLog, because @@IDENTITY is not limited to a particular scope. If I want to get the Employee table's identity value, then I need to use SCOPE_IDENTITY in this scenario. So the moral is: always use SCOPE_IDENTITY to return the identity value of a recently created record in a SQL statement or stored procedure. It's safe and ensures bug-free code.
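    For completeness, here is a minimal ADO.NET sketch (my own illustration, using the article's Employee table and an assumed connection string) showing how the SCOPE_IDENTITY value is typically read back from client code:

        using System;
        using System.Data.SqlClient;

        class InsertEmployee
        {
            static void Main()
            {
                // Assumed connection string; adjust for your environment.
                const string connStr = "Server=.;Database=TestDb;Integrated Security=true";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(
                    // Insert and return the identity generated in this scope, so the
                    // trigger inserting into EmployeeLog cannot skew the result.
                    "INSERT INTO Employee (EmpName, EmpSal) VALUES (@name, @sal); " +
                    "SELECT CAST(SCOPE_IDENTITY() AS numeric(18,0));", conn))
                {
                    cmd.Parameters.AddWithValue("@name", "Arefin");
                    cmd.Parameters.AddWithValue("@sal", 1.0);

                    conn.Open();
                    var newEmpId = (decimal)cmd.ExecuteScalar();
                    Console.WriteLine("New EmpId: " + newEmpId);
                }
            }
        }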

    Read the article

  • SEO: Nested List vs List, Split Over Divs vs Definition List

    - by Jon P
    From an SEO perspective which, if any, is better: Option 1: Nested lists with h2 tags <ul id="mainpoints"> <li><h2>Powerful Analysis</h2> <ul> <li>Charting and indicators</li> <li>Daily trading signals</li> <li>Company health checks</li> </ul> </li> <li><h2>World Market Data</h2> <ul> [List Items removed for brevity] </ul> </li> <li><h2>Daily Market Data</h2> <ul> [List Items removed for brevity] </ul> </li> </ul> Option 2: Divs with h2 and lists <div id="mainpoints"> <div> <h2>Powerful Analysis</h2> <ul> <li>Charting and indicators</li> <li>Daily trading signals</li> <li>Company health checks</li> </ul> </div> <div> <h2>World Market Data</h2> <ul> [List Items removed for brevity] </ul> </div> <div> <h2>Daily Market Data</h2> <ul> [List Items removed for brevity] </ul> </div> </div> Option 3: Definition List <dl id="mainpoints"> <dt>Powerful Analysis</dt> <dd>- Charting and indicators</dd> <dd>- Daily trading signals</dd> <dd>- Company health checks</dd> <dt>World Market Data</dt> [List Items removed for brevity] <dt>Daily Market Data</dt> [List Items removed for brevity] </dl> My instincts tell me that semantically the pure list options (1 & 3) are the best, and that h2 may be more SEO-friendly (1 & 2), which would point to option 1 as the best option. I do love the lean markup of the definition list, but will I take an SEO hit by losing the h2 tags? Before anyone asks, h2 is not valid markup in a dt tag. Are my instincts right that a nested list is the way to go?

    Read the article

  • C#: String Concatenation vs Format vs StringBuilder

    - by James Michael Hare
    I was looking through my group's C# coding standards the other day and there were a couple of legacy items in there that caught my eye. They had been passed down from committee to committee so many times that no one even thought to second-guess them or try them for a long time. It's yet another example of how micro-optimizations can often get the best of us and cause us to write code that is not as maintainable as it could be, for the sake of squeezing an extra ounce of performance out of our software. So the two standards in question were these, in paraphrase: Prefer StringBuilder or string.Format() to string concatenation. Prefer string.Equals() with the case-insensitive option to string.ToUpper().Equals(). Now some of you may already know what my results are going to show, as these items have been compared before on many blogs, but I think it's always worth repeating and trying these yourself. So let's dig in. The first test was a pretty standard one. When concatenating strings, what is the best choice: StringBuilder, string concatenation, or string.Format()? So before we begin, I read in a number of iterations from the console and a length for each string to generate. Then I generate that many random strings of the given length and an array to hold the results. Why am I so keen to keep the results? Because I want to be able to snapshot the memory and don't want garbage collection to collect the strings, hence the array to keep hold of them. I also didn't want the random strings to be part of the allocation, so I pre-allocate them and the array up front before the snapshot. So in the code snippets below: num – Number of iterations. strings – Array of randomly generated strings. results – Array to hold the results of the concatenation tests. timer – A System.Diagnostics.Stopwatch() instance to time code execution. start – Beginning memory size. stop – Ending memory size. after – Memory size after final GC. So first, let's look at the concatenation loop: 1: // build num strings using concatenation. 2: for (int i = 0; i < num; i++) 3: { 4: results[i] = "This is test #" + i + " with a result of " + strings[i]; 5: } Pretty standard, right? Next for string.Format(): 1: // build strings using string.Format() 2: for (int i = 0; i < num; i++) 3: { 4: results[i] = string.Format("This is test #{0} with a result of {1}", i, strings[i]); 5: } Finally, StringBuilder: 1: // build strings using StringBuilder 2: for (int i = 0; i < num; i++) 3: { 4: var builder = new StringBuilder(); 5: builder.Append("This is test #"); 6: builder.Append(i); 7: builder.Append(" with a result of "); 8: builder.Append(strings[i]); 9: results[i] = builder.ToString(); 10: } So I take each of these loops and time them by using a block like this: 1: // get the total amount of memory used, true tells it to run GC first. 2: start = System.GC.GetTotalMemory(true); 3: 4: // restart the timer 5: timer.Reset(); 6: timer.Start(); 7: 8: // *** code to time and measure goes here. *** 9: 10: // get the current amount of memory, stop the timer, then get memory after GC. 11: stop = System.GC.GetTotalMemory(false); 12: timer.Stop(); 13: after = System.GC.GetTotalMemory(true); So let's look at what happens when I run each of these blocks through the timer and memory check at 500,000 iterations: 1: Operator + - Time: 547, Memory: 56104540/55595960 - 500000 2: string.Format() - Time: 749, Memory: 57295812/55595960 - 500000 3: StringBuilder - Time: 608, Memory: 55312888/55595960 - 500000 Egad!
    string.Format brings up the rear and + triumphs, well, at least in terms of speed. The concat burns more memory than StringBuilder but less than string.Format(). This shows two main things: StringBuilder is not always the panacea many think it is, and the difference between any of the three is minuscule! The second point is extremely important! You will often hear people who will grasp at results and say, "look, operator + is 10% faster than StringBuilder so always use +." Statements like this are a disservice and often misleading. For example, if I had a good guess at what the size of the string would be, I could have preallocated my StringBuilder like so: 1: for (int i = 0; i < num; i++) 2: { 3: // pre-declare StringBuilder to have 100 char buffer. 4: var builder = new StringBuilder(100); 5: builder.Append("This is test #"); 6: builder.Append(i); 7: builder.Append(" with a result of "); 8: builder.Append(strings[i]); 9: results[i] = builder.ToString(); 10: } Now let's look at the times: 1: Operator + - Time: 551, Memory: 56104412/55595960 - 500000 2: string.Format() - Time: 753, Memory: 57296484/55595960 - 500000 3: StringBuilder - Time: 525, Memory: 59779156/55595960 - 500000 Whoa! All of a sudden StringBuilder is back on top again! But notice, it takes more memory now. This makes perfect sense if you examine the IL behind the scenes. Whenever you do a string concat (+) in your code, it examines the lengths of the arguments and creates a StringBuilder behind the scenes of the appropriate size for you. But even IF we know the approximate size of our StringBuilder, look how much less readable it is! That's why I feel you should always take into account both readability and performance. After all, consider that all these timings are over 500,000 iterations. That's at most a 0.0004 ms difference per call, which is negligible. The key is to pick the best tool for the job. What do I mean? Consider these awesome words of wisdom: Concatenate (+) is best at concatenating. StringBuilder is best at building. Format is best at formatting. Totally Earth-shattering, right! But if you consider it carefully, it actually has a lot of beauty in its simplicity. Remember, there is no magic bullet. If one of these always beat the others we'd only have one and not three choices. The fact is, the concatenation operator (+) has been optimized for speed and looks the cleanest for joining together a known set of strings in the simplest manner possible. StringBuilder, on the other hand, excels when you need to build a string of indeterminate length. Use it in those times when you are looping till you hit a stop condition and building a result, and it won't steer you wrong. string.Format seems to be the loser from the stats, but consider which of these is more readable. Yes, ignore the fact that you could do this with ToString() on a DateTime. 1: // build a date via concatenation 2: var date1 = (month < 10 ? "0" : string.Empty) + month + '/' 3: + (day < 10 ? "0" : string.Empty) + day + '/' + year; 4: 5: // build a date via StringBuilder 6: var builder = new StringBuilder(10); 7: if (month < 10) builder.Append('0'); 8: builder.Append(month); 9: builder.Append('/'); 10: if (day < 10) builder.Append('0'); 11: builder.Append(day); 12: builder.Append('/'); 13: builder.Append(year); 14: var date2 = builder.ToString(); 15: 16: // build a date via string.Format 17: var date3 = string.Format("{0:00}/{1:00}/{2:0000}", month, day, year); 18:
    So the strength of string.Format is that it makes constructing a formatted string easy to read. Yes, it's slower, but look at how much more elegant it is to do zero-padding and anything else string.Format does. So my lesson is, don't look for the silver bullet! Choose the best tool. Micro-optimization almost always bites you in the end, because you're sacrificing readability for performance, which is almost exactly the wrong choice 90% of the time. I love the rules of optimization. They've been stated before in many forms, but here's how I always remember them: For beginners: Do not optimize. For experts: Do not optimize yet. It's so true. Most of the time on today's modern hardware, a micro-second optimization at the expense of readability will net you nothing because it won't be your bottleneck. Code for readability, choose the best tool for the job, which will usually be the most readable and maintainable as well. Then, and only then, if you need that extra performance boost after profiling your code and exhausting all other options… then you can start to think about optimizing.

    Read the article

  • Podcasting vs Stack Overflow vs Geekswithblogs

    - by MarkPearl
    For a few years now I have been looking for effective ways to be involved in the "community". While there are a few community programming events in my area (Johannesburg), there isn't too much face-to-face stuff - which has caused me to turn to the internet. My internet attempts have been varied - at first I took the passive approach of listening to tech podcasts. This was great for a while, but soon the content became semi-repetitive and a little boring. It seemed that the podcasts I was listening to all went round the same themes and speakers, and while I am still a keen listener to several tech podcasts, it didn't quench my thirst. So I began to be a bit more active - starting with Stack Overflow, where I would scan the site for questions that were in the realm of my ability to answer. It worked for a while, but soon it began to be discouraging - there seem to be so many people that know so much more than me and are quicker at typing, so I felt fairly ineffective. So while I still use Stack Overflow when I am in a pickle and need some help, it feels more like me taking from the community than giving anything. Which brought me to Geeks with Blogs. Till I found GWB I hadn't felt like I was an active part of a community. I had blogged before on Blogspot and Wordpress but hadn't felt associated with the community. Now when I get a comment from someone on one of my GWB posts, either thanking me or adding a bit more or correcting me, it makes me feel like I am contributing to a community. So well done GWB. Thanks for making a spot that makes me feel at home!

    Read the article

  • What are the benefits and drawbacks of documentation vs tutorials vs video tutorials [closed]

    - by Cat
    Which types of learning resources do you find the most helpful, for which kinds of learning and/or perhaps at specific times? Some examples of types of learning you could consider: when starting to integrate a new SDK inside an existing codebase; when learning a new framework without having to integrate legacy code; when digging deeper into an already-used SDK that you may not know very well yet. For example, (video) tutorials are usually very easy to follow and tell a story from beginning to end to get results, but will nearly always assume you are starting from scratch or from a previous tutorial. Therefore such a resource is useful for quick learning if you don't have legacy code around, but less so if you have to search for the best fit for the code you already have. SDK documentation, on the other hand, is well structured but does not tell a story. It is more difficult to get to a specific, larger result with documentation alone, but it is a better fit when you do have legacy code around and are searching for perhaps non-obvious ways of employing the SDK or library. Are there other forms of resources that you find useful, such as interactive training?

    Read the article

  • LSP vs OCP / Liskov Substitution VS Open Close

    - by Kolyunya
    I am trying to understand the SOLID principles of OOP and I've come to the conclusion that LSP and OCP have some similarities (if not to say more). The open/closed principle states that "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification". LSP, in simple words, states that any instance of Foo can be replaced with any instance of Bar which is derived from Foo, and the program will work the very same way. I'm not a pro OOP programmer, but it seems to me that LSP is only possible if Bar, derived from Foo, does not change anything in it but only extends it. That means that in a particular program LSP is true only when OCP is true, and OCP is true only if LSP is true, which means that they are equal. Correct me if I'm wrong. I really want to understand these ideas. Many thanks for an answer.
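    A small illustrative sketch (my own hypothetical Rectangle/Square example, not from the question) of why the two principles are not equivalent: the derived class below only extends the base class, so the hierarchy is "open for extension, closed for modification" in the OCP sense, yet it still breaks LSP because code written against the base class no longer behaves the same:

        using System;

        // Base class: callers assume width and height vary independently.
        class Rectangle
        {
            public virtual int Width { get; set; }
            public virtual int Height { get; set; }
            public int Area() { return Width * Height; }
        }

        // Pure extension -- Rectangle itself was never modified -- but the derived
        // type changes observable behaviour, which is an LSP violation.
        class Square : Rectangle
        {
            public override int Width
            {
                get { return base.Width; }
                set { base.Width = value; base.Height = value; }
            }
            public override int Height
            {
                get { return base.Height; }
                set { base.Width = value; base.Height = value; }
            }
        }

        class Program
        {
            // Written against Rectangle; its implicit contract is Area == width * height.
            static void Check(Rectangle r)
            {
                r.Width = 4;
                r.Height = 5;
                Console.WriteLine(r.Area()); // Rectangle prints 20, Square prints 25
            }

            static void Main()
            {
                Check(new Rectangle());
                Check(new Square());
            }
        }

    Extending without modifying (OCP) says nothing by itself about whether the extension preserves the base class's observable contract; LSP is the extra constraint that it must, which is one way to see that the two principles are related but not identical.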

    Read the article

  • Ubuntu 12.04.3 - Graphics Driver: Default vs Nvidia 319-recommended vs Nvidia 319-updated

    - by Navraj
    Background: I switched from the default driver to Nvidia-319-recommended. I am guessing that this update has caused issues with keyboard shortcuts, the battery status icon disappearing, as well as power management issues, as speculated by others. Closing the laptop lid no longer suspends the laptop - it has to be done manually by clicking 'Suspend' before closing the lid. Question: How do you restore the original/default graphics driver? Thanks for your help. Regards

    Read the article

  • Naming your website longname.com vs shortcatchy.net vs shortcatchy.info

    - by jskye
    I'm designing a website that will basically be a social network for sharing information. I have the domain $$$$d.net and the same domain $$$$d.info, where $$$$ is a word (that runs into the d) pertaining to the purpose of the site. The .com of this domain was already taken, but they've got nothing showing - they only have a 'not reached' Google error showing, i.e. they don't seem to be trying to sell it either. I also have the long name of the site, $$$$------&&&&&&&&&.com, where the words $$$$ and &&&&&&&&& would contribute relevant SEO to the site. In fact the word $$$$------ would also, if a one-letter spelling mistake is recognised at all by Google, which I doubt but am unsure about. But as a brand name the $$$$------ word still works relevantly. Which do you think is the better choice to use? The short catchy name with the .info, for relevance to information; the .net, which is more familiar than .info but maybe slightly less relevant (but I think net as in network still works because, as I said, it will be a social networking site); or the long .com domain, which has more SEO plus a pun, albeit on a spelling mistake. I know it's kind of a subjective question and also hard to answer without knowing the name (which I've obfuscated because I'm only in the initial design stage), but nevertheless I'm interested in what some of you guys think.

    Read the article

  • H1 vs H2 vs Other for website title/logo and SEO

    - by Ilian Iliev
    It is common practice for front-end developers to put the website title or logo in an H1 tag and the page title in H2. But most of the time the title of the page/article is more important because it carries the content value. So my question is: what is the best approach from a semantic and SEO viewpoint? Examples: logo - H1, title - H1; logo - H1, title - H2; logo - H2, title - H1; logo - other tag, title - H1. Provide other variants if you think they will have a bigger effect.

    Read the article

  • CVS vs SVN vs GIT

    - by user3215
    CVS is being used in my workplace and I don't have much knowledge of CVS other than installing it and creating CVS users, and I heard developers share their projects with Eclipse or something like that. I have been asked to check for the best repositories offering advanced features, with SVN and Git given as hints. If anyone is using these repositories, please shortlist their features, if possible with links to good installation guides and a bit of information on what Eclipse has to do with these repositories. Thank you!

    Read the article

  • Visual Studio 2012 first impressions...no Macros!

    - by bconlon
    Yesterday I installed Microsoft Visual Studio 2012 for the first time (all 8.5GB) and, after 20 years of (mostly) happy times using VS, they have removed Macros, one of the most handy features. The first thing I wanted to do when I upgraded my VS2010 project was to add a #elseif block to each file. This would usually be a simple case of Find in Files for the previous #elseif and then Ctrl+Shift+R to record a macro which would be: F8 (to select the next file from the find list), F3 (to find the correct position in the file), Ctrl+V to paste the new code. Then all I would need to do is keep Ctrl+Shift+P (Play Macro) pressed until all the files were processed. But alas, Ctrl+Shift+R does nothing! I won't say that I use Macros every day, but it was a very useful feature. To continue my moaning a little more, I also don't like the bland interface. This has been well documented by others, but now that I have used it myself, I find it difficult to tell one grey area of the screen from another, and the lack of colour makes the icons unclear. I also don't see why the menus now need to SHOUT in capital letters. On the plus side, they have now added the ability to see WPF properties in the debugger... a bit of an oversight in Visual Studio 2010. Oh, but you still can't edit and continue on files that contain templated code. Whilst Visual Studio 2012 is not a complete disaster like Windows 8 (why develop a desktop OS to be the same as a smart-device OS?), it does not float my boat. Rant over.

    Read the article

  • Next steps for Asp.Net C# developer (RoR vs Python Django vs PhP Drupal)

    - by ProfessorB
    The majority of my web development experience has been on the .NET stack (mainly ASP.NET C#). I am looking to learn something new in my spare time, for use in personal projects and possibly professionally (as an ISV). I know some Python; I've done some scripting with it in the past, though nothing on the web. PHP has been around for a long time and RoR has gained a lot of popularity. Are there any developers from the .NET world who have migrated over to one or more of the other platforms? If so, which do you prefer and why? Which would you suggest and why?

    Read the article

  • Nails vs Screws (C# List vs Dictionary)

    - by MarkPearl
    General: This may sound like a typical noob statement, but I'm finding out in a very real way that just because you have a solution to a problem doesn't necessarily mean it is the best solution. This was reiterated to me when a friend of mine suggested I look at using Dictionaries instead of Lists for a particular problem – he was right, and I had always just assumed that because lists solved my problem I did not need to look elsewhere. So my new manifesto to counter this ageless problem is as follows: Look for a solution that will logically work. Once you have a solution, look for possible alternatives. Decide why your current solution is the best approach compared to the alternatives. If it is, use it till something better comes along; if it isn't, change. What's the difference between Lists and Dictionaries? Both lists and dictionaries are used to store collections of data. Assume we had the following declarations… var dic = new Dictionary<string, long>(); var lst = new List<long>(); long data; With a list, you simply add the item and it is appended to the end of the list: lst.Add(data); With a dictionary, you need to specify some sort of key along with the data you want to add, so that it can be uniquely identified: dic.Add(uniquekey, data); Because with a dictionary you now have a unique identifier, in the background it provides all sorts of optimized algorithms to find your associated data. What this means is that if you want to access your data, it is a lot faster than a list. So when is it appropriate to use either class? For me, if I can guarantee that each item in my collection will have a unique identifier, then I will use Dictionaries instead of Lists, as there is a considerable performance benefit when accessing each data item. If I cannot make this sort of guarantee, then by default I will use a list. I know this is all really basic, and I hope I haven't missed some fundamental principle… If anyone would like to add their 2 cents, please feel free to do so…
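    A minimal sketch of the lookup difference described above, assuming string keys; the list has to scan for a match while the dictionary hashes straight to it:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        class ListVsDictionaryLookup
        {
            static void Main()
            {
                const int count = 100000;
                var lst = new List<long>();
                var dic = new Dictionary<string, long>();

                // Populate both collections with the same data.
                for (long i = 0; i < count; i++)
                {
                    lst.Add(i);
                    dic.Add("key" + i, i);
                }

                var sw = Stopwatch.StartNew();
                long fromList = lst.Find(x => x == count - 1);       // linear scan, O(n)
                Console.WriteLine("List.Find:              " + fromList + " in " + sw.ElapsedTicks + " ticks");

                sw.Restart();
                long fromDic;
                dic.TryGetValue("key" + (count - 1), out fromDic);   // hash lookup, O(1) on average
                Console.WriteLine("Dictionary.TryGetValue: " + fromDic + " in " + sw.ElapsedTicks + " ticks");
            }
        }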

    Read the article
