Search Results

Search found 52968 results on 2119 pages for 'lucene net'.


  • Code Measuring and Metrics Tools?

    - by David
    I'm in the process of setting up a build server for personal projects. This server will handle all the normal CI stuff, including running large suites of tests (unit, integration, automated UI). While I'm working out the kinks for including code coverage output with MSTest, it occurs to me that there may be lots of tools out there which give me additional metrics other than just code coverage. FxCop comes to mind as an example, though I'm sure there are others. Anything that can generate useful, reportable data and metrics would be good, whether it's class dependency charts (looking for Law of Demeter violations, for example), analyses of how classes/functions are used (looking for a function that isn't used anywhere in the system other than the tests, for example), and so on. I'm not sure of the right way to formulate the question, since polling questions or "What's your favorite code analysis tool?" aren't very good. But I'm essentially just looking for recommendations on what metrics to gather and the tools that can gather them. The eventual vision for something like this is to have the CI server run a bunch of automated tests and analysis tools and track performance metrics over time. Imagine a dashboard full of graphs plotting these metrics over time. The lines should all stay at a rough equilibrium, and if one starts to stray toward the negative then it's an early indication of problems with the code. In the age-old struggle to quantify code quality with management, this sounds like a potentially helpful means of doing just that.

    Read the article

  • Default Parameters vs Method Overloading

    - by João Angelo
    With default parameters introduced in C# 4.0, one might be tempted to abandon the old approach of providing method overloads to simulate default parameters. However, you must take into consideration that the two techniques are not interchangeable, since they behave differently in certain scenarios. For me the most relevant difference is that default parameters are a compile-time feature while method overloading is a runtime feature. To illustrate these concepts, let's take a look at a complete, although somewhat long, example. What you need to retain from the example is that the static method Foo uses method overloading while the static method Bar uses C# 4.0 default parameters.

        static void CreateCallerAssembly(string name)
        {
            // Caller class - Invokes Example.Foo() and Example.Bar()
            string callerCode = String.Concat(
                "using System;",
                "public class Caller",
                "{",
                " public void Print()",
                " {",
                " Console.WriteLine(Example.Foo());",
                " Console.WriteLine(Example.Bar());",
                " }",
                "}");

            var parameters = new CompilerParameters(new[] { "system.dll", "Common.dll" }, name);
            new CSharpCodeProvider().CompileAssemblyFromSource(parameters, callerCode);
        }

        static void Main()
        {
            // Example class - Foo uses overloading while Bar uses C# 4.0 default parameters
            string exampleCode = String.Concat(
                "using System;",
                "public class Example",
                "{{",
                " public static string Foo() {{ return Foo(\"{0}\"); }}",
                " public static string Foo(string key) {{ return \"FOO-\" + key; }}",
                " public static string Bar(string key = \"{0}\") {{ return \"BAR-\" + key; }}",
                "}}");

            var compiler = new CSharpCodeProvider();
            var parameters = new CompilerParameters(new[] { "system.dll" }, "Common.dll");

            // Build Common.dll with a default value of "V1"
            compiler.CompileAssemblyFromSource(parameters, String.Format(exampleCode, "V1"));
            // Caller1 is built against the Common.dll that uses a default of "V1"
            CreateCallerAssembly("Caller1.dll");

            // Rebuild Common.dll with a default value of "V2"
            compiler.CompileAssemblyFromSource(parameters, String.Format(exampleCode, "V2"));
            // Caller2 is built against the Common.dll that uses a default of "V2"
            CreateCallerAssembly("Caller2.dll");

            dynamic caller1 = Assembly.LoadFrom("Caller1.dll").CreateInstance("Caller");
            dynamic caller2 = Assembly.LoadFrom("Caller2.dll").CreateInstance("Caller");

            Console.WriteLine("Caller1.dll:");
            caller1.Print();
            Console.WriteLine("Caller2.dll:");
            caller2.Print();
        }

    And if you run this code you will get the following output:

        // Caller1.dll:
        // FOO-V2
        // BAR-V1
        // Caller2.dll:
        // FOO-V2
        // BAR-V2

    You can see that even though Caller1.dll runs against the current Common.dll assembly, where method Bar defines a default value of "V2", the output shows the default value that was defined at the time Caller1.dll was compiled against the first version of Common.dll. This happens because the compiler copies the current default value into each call site, much in the same way a constant value (const keyword) is copied into a calling assembly, and changes to its value are only reflected if you rebuild the calling assembly. The use of default parameters in public APIs is also discouraged by Microsoft, as stated in the code analysis rule CA1026: Default parameters should not be used.
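
    The mechanism called out above can be condensed into a small sketch (names taken from the example): because the default value lives in metadata and is copied by the compiler, a parameterless call to Bar in a caller assembly is compiled as if the literal had been typed at the call site, while the parameterless Foo overload remains an ordinary method call resolved in Common.dll at runtime.

        // What the caller writes:
        Console.WriteLine(Example.Bar());
        // What the compiler emits against the Common.dll it was built with:
        Console.WriteLine(Example.Bar("V1"));   // "V1" is baked into Caller1.dll until it is recompiled

        // Foo() has no default to copy; the call binds to the overload inside Common.dll,
        // so it always reflects whatever version of Common.dll is loaded at runtime.
        Console.WriteLine(Example.Foo());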

    Read the article

  • Context Sensitive History. Part 1 of 2

    A desktop and Silverlight user action management system with undo, redo, and repeat. It allows actions to be monitored, grouped according to a context (such as a UI control), executed sequentially or in parallel, and even rolled back on failure.

    Read the article

  • Application shortcut reappears on restart

    - by Nathan Friesen
    I have an application that I have built a .msi installer for through Microsoft Visual Studio 2010. I recently made some updates, including changing the version number, and rebuilt the installer with these updates. The installer includes shortcuts on both the desktop and in the Start menu. Running the installer appears to work fine, and both of these shortcuts work. After restarting my computer I've found that the shortcuts are changed to have a Target type of Application (Installs on first use) and the Start In: field is changed to a location that doesn't exist. Once this happens, every time you use that shortcut it tries to install the application again and fails. I have also changed the name of the shortcut that the installer creates. This appears to work, and the shortcut still works after a restart. After the restart, though, the shortcut with the old name that doesn't work also appears on the desktop and in the Start menu. Does anyone have any ideas about what I may have set up wrong, or what I need to change to get the shortcuts to behave properly?

    Read the article

  • IIS - HTTP Redirect all requests for one virtual directory to another

    - by nekno
    How do I set up an HTTP Redirect rule to redirect all requests for a virtual directory to another virtual directory, when I don't know the hostname or complete URL, and cannot use the URL Rewrite module? The following redirects should work: http://host1/app/oldvdir -> http://host1/app/newvdir http://host1/app/oldvdir/ -> http://host1/app/newvdir/ http://host1/app/oldvdir/login.aspx -> http://host1/app/newvdir/login.aspx http://host2/app/oldvdir/login.aspx -> http://host2/app/newvdir/login.aspx I would like to place the redirect rule in the app's root web.config. I have attempted the following rules, but the end result is simply that the redirected vdir gets duplicated on the end of the original vdir until reaching the max URL length, e.g., http://host/oldvdir/login.aspx -> http://host/oldvdir/newvdir/newvdir/newvdir/... Rules in root web.config (I also have tried all sorts of combinations of settings with and without leading and trailing slashes, etc): <location path="oldvdir"> <system.webServer> <httpRedirect enabled="true" exactDestination="false" httpResponseStatus="Permanent"> <add wildcard="*/oldvdir/*" destination="/newvdir/"/> </httpRedirect> </system.webServer> </location> <location path="oldvdir/"> <system.webServer> <httpRedirect enabled="true" exactDestination="false" destination="/newvdir" httpResponseStatus="Permanent"/> </system.webServer> </location>

    Read the article

  • Business Logic Layer in MVC Application

    - by Subin Jacob
    In my ASP.NET MVC application I decided to add a separate business layer and to keep only properties in the model. All other functionality, such as saving to and reading from the database, is done in this new business layer. So now the controller will call this business layer and the model for various operations. Is this a good design approach? I decided not to use the model for this purpose because I would need a number of models for different actions (for example, one for edit and another for create).
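
    A minimal sketch of the shape being described, with hypothetical names (Order, OrderService, OrderController); the per-action view models mentioned at the end are left out for brevity:

        // Model: properties only.
        public class Order
        {
            public int Id { get; set; }
            public decimal Total { get; set; }
        }

        // Business layer: persistence and rules live here, not in the model.
        public class OrderService
        {
            public Order GetById(int id) { /* read from the database */ return new Order { Id = id }; }
            public void Save(Order order) { /* write to the database */ }
        }

        // Controller: delegates to the business layer.
        public class OrderController : Controller
        {
            private readonly OrderService _orders = new OrderService();

            public ActionResult Edit(int id)
            {
                return View(_orders.GetById(id));
            }
        }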

    Read the article

  • What are the best practices for phasing out obsolete code?

    - by P.Brian.Mackey
    I need to phase out an obsolete method. I am aware of the [Obsolete] attribute. Does Microsoft have a recommended best-practice guide for doing this? Here's my current plan: A. I do not want to create a new assembly, because developers would have to add a new reference to their projects and I expect to get a lot of grief from my boss and co-workers if they must do this. We also do not maintain multiple assembly versions; we only use the latest version. Changing this practice would require changing our deployment process, which is a big issue (we would have to teach people how to do things with TFS instead of FinalBuilder and get them to give up FinalBuilder). B. Mark the old method obsolete. C. Because the implementation is changing (not the method signature), I need to rename the method rather than create an overload. So, to make users aware of the proper method, I plan to add a message to the [Obsolete] attribute. This part bothers me, because the only change I'm making is decoupling the method from the connection string. But, because I'm not adding a new assembly, I see no way around this. Result:

        /// <summary>
        ///
        /// </summary>
        /// <param name="settingName"></param>
        /// <returns></returns>
        [Obsolete("Please don't use this anymore because it does not implement IMyDbProvider. Use XXX instead.")]
        public static Dictionary<string, Setting> ReadSettings(string settingName)
        {
            return ReadSettings(settingName, SomeGeneralClass.ConnectionString);
        }

        public Dictionary<string, Setting> ReadSettings2(string settingName)
        {
            // IMyDbProvider.ConnectionString private member added to class.
            // Probably have to make this an instance method.
            return ReadSettings(settingName);
        }
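
    One hedged refinement to step C, using only what the ObsoleteAttribute already supports: stage the removal across releases, first as a compiler warning and later as a compile error, before the method is finally deleted.

        // Release N: callers get a compiler warning but still build.
        [Obsolete("ReadSettings is being decoupled from the connection string. Use ReadSettings2 instead.")]
        public static Dictionary<string, Setting> ReadSettings(string settingName)
        {
            return ReadSettings(settingName, SomeGeneralClass.ConnectionString);
        }

        // Release N+1: the same attribute with error: true turns every remaining call into a build error.
        // (Shown here only for illustration; the two declarations belong to different releases.)
        [Obsolete("ReadSettings is being decoupled from the connection string. Use ReadSettings2 instead.", error: true)]
        public static Dictionary<string, Setting> ReadSettings(string settingName)
        {
            return ReadSettings(settingName, SomeGeneralClass.ConnectionString);
        }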

    Read the article

  • Enumerable Interleave Extension Method

    - by João Angelo
    A recent stackoverflow question, which I didn’t bookmark and now I’m unable to find, inspired me to implement an extension method for Enumerable that allows to insert a constant element between each pair of elements in a sequence. Kind of what String.Join does for strings, but maintaining an enumerable as the return value. Having done the single element part I got a bit carried away and ended up expanding it adding overloads to support interleaving elements of another sequence and support for a predicate to control when interleaving takes place. I have to confess that I did this for fun and now I can’t think of any real usage scenario, nonetheless, it may prove useful for someone. First a simple example: var target = new string[] { "(", ")", "(", ")" }; var result = target.Interleave(".", (f, s) => f == "("); // Prints: (.)(.) Console.WriteLine(String.Join(string.Empty, result)); And now the untested but documented implementation: using System; using System.Collections; using System.Collections.Generic; using System.Linq; public static class EnumerableExtensions { /// <summary> /// Iterates infinitely over a constant element. /// </summary> /// <typeparam name="T"> /// The type of element in the sequence. /// </typeparam> private class InfiniteSequence<T> : IEnumerable<T>, IEnumerator<T> { public InfiniteSequence(T element) { this.Element = element; } public T Element { get; private set; } public IEnumerator<T> GetEnumerator() { return this; } IEnumerator IEnumerable.GetEnumerator() { return this; } T IEnumerator<T>.Current { get { return this.Element; } } void IDisposable.Dispose() { } object IEnumerator.Current { get { return this.Element; } } bool IEnumerator.MoveNext() { return true; } void IEnumerator.Reset() { } } /// <summary> /// Interleaves the specified <paramref name="element"/> between each pair of elements in the <paramref name="target"/> sequence. /// </summary> /// <typeparam name="T"> /// The type of elements in the sequence. /// </typeparam> /// <param name="target"> /// The target sequence to be interleaved. /// </param> /// <param name="element"> /// The element used to perform the interleave operation. /// </param> /// <exception cref="ArgumentNullException"> /// <paramref name="target"/> or <paramref name="element"/> is a null reference. /// </exception> /// <returns> /// The <paramref name="target"/> sequence interleaved with the specified <paramref name="element"/>. /// </returns> public static IEnumerable<T> Interleave<T>( this IEnumerable<T> target, T element) { if (target == null) throw new ArgumentNullException("target"); if (element == null) throw new ArgumentNullException("element"); return InterleaveInternal(target, new InfiniteSequence<T>(element), (f, s) => true); } /// <summary> /// Interleaves the specified <paramref name="element"/> between each pair of elements in the <paramref name="target"/> sequence. /// </summary> /// <remarks> /// The interleave operation is interrupted as soon as the <paramref name="target"/> sequence is exhausted; If the number of <paramref name="elements"/> to be interleaved are not enough to completely interleave the <paramref name="target"/> sequence then the remainder of the sequence is returned without being interleaved. /// </remarks> /// <typeparam name="T"> /// The type of elements in the sequence. /// </typeparam> /// <param name="target"> /// The target sequence to be interleaved. /// </param> /// <param name="elements"> /// The elements used to perform the interleave operation. 
/// </param> /// <exception cref="ArgumentNullException"> /// <paramref name="target"/> or <paramref name="element"/> is a null reference. /// </exception> /// <returns> /// The <paramref name="target"/> sequence interleaved with the specified <paramref name="elements"/>. /// </returns> public static IEnumerable<T> Interleave<T>( this IEnumerable<T> target, IEnumerable<T> elements) { if (target == null) throw new ArgumentNullException("target"); if (elements == null) throw new ArgumentNullException("elements"); return InterleaveInternal(target, elements, (f, s) => true); } /// <summary> /// Interleaves the specified <paramref name="element"/> between each pair of elements in the <paramref name="target"/> sequence that satisfy <paramref name="predicate"/>. /// </summary> /// <typeparam name="T"> /// The type of elements in the sequence. /// </typeparam> /// <param name="target"> /// The target sequence to be interleaved. /// </param> /// <param name="element"> /// The element used to perform the interleave operation. /// </param> /// <param name="predicate"> /// A predicate used to assert if interleaving should occur between two target elements. /// </param> /// <exception cref="ArgumentNullException"> /// <paramref name="target"/> or <paramref name="element"/> or <paramref name="predicate"/> is a null reference. /// </exception> /// <returns> /// The <paramref name="target"/> sequence interleaved with the specified <paramref name="element"/>. /// </returns> public static IEnumerable<T> Interleave<T>( this IEnumerable<T> target, T element, Func<T, T, bool> predicate) { if (target == null) throw new ArgumentNullException("target"); if (element == null) throw new ArgumentNullException("element"); if (predicate == null) throw new ArgumentNullException("predicate"); return InterleaveInternal(target, new InfiniteSequence<T>(element), predicate); } /// <summary> /// Interleaves the specified <paramref name="element"/> between each pair of elements in the <paramref name="target"/> sequence that satisfy <paramref name="predicate"/>. /// </summary> /// <remarks> /// The interleave operation is interrupted as soon as the <paramref name="target"/> sequence is exhausted; If the number of <paramref name="elements"/> to be interleaved are not enough to completely interleave the <paramref name="target"/> sequence then the remainder of the sequence is returned without being interleaved. /// </remarks> /// <typeparam name="T"> /// The type of elements in the sequence. /// </typeparam> /// <param name="target"> /// The target sequence to be interleaved. /// </param> /// <param name="elements"> /// The elements used to perform the interleave operation. /// </param> /// <param name="predicate"> /// A predicate used to assert if interleaving should occur between two target elements. /// </param> /// <exception cref="ArgumentNullException"> /// <paramref name="target"/> or <paramref name="element"/> or <paramref name="predicate"/> is a null reference. /// </exception> /// <returns> /// The <paramref name="target"/> sequence interleaved with the specified <paramref name="elements"/>. 
/// </returns> public static IEnumerable<T> Interleave<T>( this IEnumerable<T> target, IEnumerable<T> elements, Func<T, T, bool> predicate) { if (target == null) throw new ArgumentNullException("target"); if (elements == null) throw new ArgumentNullException("elements"); if (predicate == null) throw new ArgumentNullException("predicate"); return InterleaveInternal(target, elements, predicate); } private static IEnumerable<T> InterleaveInternal<T>( this IEnumerable<T> target, IEnumerable<T> elements, Func<T, T, bool> predicate) { var targetEnumerator = target.GetEnumerator(); if (targetEnumerator.MoveNext()) { var elementsEnumerator = elements.GetEnumerator(); while (true) { T first = targetEnumerator.Current; yield return first; if (!targetEnumerator.MoveNext()) yield break; T second = targetEnumerator.Current; bool interleave = true && predicate(first, second) && elementsEnumerator.MoveNext(); if (interleave) yield return elementsEnumerator.Current; } } } }
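
    A second usage sketch for the sequence overload above (made-up data), showing the documented behaviour when the interleaving elements run out before the target sequence does:

        var letters = new[] { "a", "b", "c", "d" };
        var separators = new[] { "-", "+" };

        var mixed = letters.Interleave(separators);

        // Prints: a-b+cd (the last pair is left un-interleaved once separators is exhausted)
        Console.WriteLine(String.Join(string.Empty, mixed));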

    Read the article

  • Unit Testing with NUnit and Moles Redux

    - by João Angelo
    Almost two years ago, when Moles was still being packaged alongside Pex, I wrote a post on how to run NUnit tests supporting moled types. A lot has changed since then: Moles is now distributed independently of Pex, while maintaining support for integration with NUnit and other testing frameworks. For NUnit the support is provided by an addin class library (Microsoft.Moles.NUnit.dll) that you need to reference in your test project so that you can decorate your tests with the MoledAttribute. The addin DLL must also be placed in the addins folder inside the NUnit installation directory. There is, however, a downside: since Moles and NUnit follow different release cycles and the addin DLL must be built against a specific NUnit version, you may find that the release included with the latest version of Moles does not work with your version of NUnit. Fortunately, the code for building the NUnit addin is supplied in the archive (moles.samples.zip) that you can find in the Documentation folder inside the Moles installation directory. By rebuilding the addin against your specific version of NUnit you are able to support any version. Note also that in Moles 0.94.51023.0 the addin code did not support the use of TestCaseAttribute in your moled tests. If you need this support, you only need to make a couple of changes. Change the ITestDecorator.Decorate method in the MolesAddin class: Test ITestDecorator.Decorate(Test test, MemberInfo member) { SafeDebug.AssumeNotNull(test, "test"); SafeDebug.AssumeNotNull(member, "member"); bool isTestFixture = true; isTestFixture &= test.IsSuite; isTestFixture &= test.FixtureType != null; bool hasMoledAttribute = true; hasMoledAttribute &= !SafeArray.IsNullOrEmpty( member.GetCustomAttributes(typeof(MoledAttribute), false)); if (!isTestFixture && hasMoledAttribute) { return new MoledTest(test); } return test; } Change the Tests property in the MoledTest class: public override System.Collections.IList Tests { get { if (this.test.Tests == null) { return null; } var moled = new List<Test>(this.test.Tests.Count); foreach (var test in this.test.Tests) { moled.Add(new MoledTest((Test)test)); } return moled; } } Disclaimer: I only tested this implementation against NUnit version 2.5.10.11092. Finally, you just need to run the NUnit console runner through the Moles runner. A quick example follows: moles.runner.exe [Tests.dll] /r:nunit-console.exe /x86 /args:[NUnitArgument1] /args:[NUnitArgument2]

    Read the article

  • ANTS Profiler Saves Me From A Sordid Fate

    A bit of string concatenation never hurt anybody, right? Think again. Carl Niedner has been designing software since 1983, and was shocked to find his latest and greatest creation suddenly plagued with long loading times. After trying ANTS Profiler, he discovered one tiny line of forgotten concept code was causing his pain.

    Read the article

  • ReSharper C# Live Template for Dependency Property and Property Change Routed Event Boilerplate Code

    - by Bart Read
    I don't know about you but it took me about 5 seconds to get royally fed up of typing the boilerplate code necessary for creating WPF (and Silverlight) dependency properties and, if you want them, their associated property change routed events. Being a ReSharper user, I wondered if there was any live template for doing this. It turns out there's nothing built in, but there are many examples of templates for creating dependency properties out there on the web, such as this excellent one from Roy...(read more)
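
    For reference, this is the kind of boilerplate the template is meant to generate; the control and property names below are made up:

        public class MyControl : Control
        {
            public static readonly DependencyProperty TitleProperty =
                DependencyProperty.Register(
                    "Title",
                    typeof(string),
                    typeof(MyControl),
                    new PropertyMetadata(string.Empty, OnTitleChanged));

            public string Title
            {
                get { return (string)GetValue(TitleProperty); }
                set { SetValue(TitleProperty, value); }
            }

            private static void OnTitleChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                // Raise the associated property change routed event, update visuals, etc.
            }
        }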

    Read the article

  • Is there something better than a StringBuilder for big blocks of SQL in the code

    - by Eduardo Molteni
    I'm just tired of making a big SQL statement, testing it, and then pasting the SQL into the code and adding all the sqlstmt.append(" at the beginning and the ") at the end. It's 2011; isn't there a better way to handle a big chunk of strings inside code? Please: don't suggest stored procedures or ORMs. Edit: Found the answer using XML literals and CData. Thanks to all the people who actually tried to answer the question without questioning me for not using ORMs or SPs and for using VB. Edit 2: the question leaves me thinking that languages could make a better effort at supporting inline SQL with syntax coloring, etc. It would be cheaper than developing Linq2SQL. Just something like: dim sql = <sql> SELECT * ... </sql>
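
    The trick mentioned in the edit is VB-specific; in C#, a verbatim string literal gives much the same effect and avoids the per-line appends entirely. A minimal sketch, with made-up table and parameter names:

        string sql = @"
            SELECT o.Id, o.Total
            FROM   Orders o
            WHERE  o.CustomerId = @CustomerId
            ORDER BY o.Id";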

    Read the article

  • how to assign javascript variable value to the google analytics script? [migrated]

    - by Vinoth Prakash
    I have assigned two values in the two hidden variables at server Side and accessed those values at client side using script. I have written the google analytics code. I have set two custom variable. I need to pass two values which is stored in the javascript variables to the "value" of custom variable. I have assigned the varibales but values not displaying. please telll what error i made in the script. My aspx code <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <div> <br /> Total Pirce&nbsp; &nbsp;: <asp:Label ID="Label1" runat="server" Text="10"></asp:Label><br /> &nbsp;Ship Price&nbsp; &nbsp; : <asp:Label ID="Label2" runat="server" Text="5"></asp:Label> <br /> ------------------<br /> Grand Total : <asp:Label ID="Label3" runat="server" Text="15"></asp:Label><br /> ------------------</div> <asp:HiddenField ID="HiddenField1" runat="server" /> <asp:HiddenField ID="HiddenField2" runat="server" /> </form> <script type="text/javascript"> var serverhid1 = document.getElementById('HiddenField1').value; var serverhid2 = document.getElementById('HiddenField2').value; alert(serverhid1) alert(serverhid2) var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-35156990-1']); //Set Custom Variable _gaq.push(['_setCustomVar', 1, 'TotalPirce', serverhid1 , 3]); _gaq.push(['_setCustomVar', 2, 'Shipping','yes', 3]); _gaq.push(['_setCustomVar', 3, 'GrandTotal',check(), 3]); _gaq.push(['_setDomainName', 'none']); _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); </script> </body> </html> cs Code protected void Page_Load(object sender, EventArgs e) { HiddenField1.Value = Label1.Text; HiddenField2.Value = Label2.Text; }

    Read the article

  • In which cases build artifacts will be different in different environments

    - by Sundeep
    We are working on automating deployment using Jenkins. We have different environments: DEV, UAT, and PROD. In SVN, we tag each release and place the same binaries in DEV, UAT, and PROD. The artifacts already contain config files for each environment, so I do not understand why we are storing the binaries in an environment folder again. Are there any scenarios where deployment might differ between environments?

    Read the article

  • How to fix “Unit Test Runner failed to load test assembly”

    - by ybbest
    I encountered this issue a couple of times during my recent project, and every time I forgot what actually caused it. Therefore, I decided to write a quick blog post to make sure I can identify the issue quickly. Problem: Running unit tests using a test runner produced a "Unit Test Runner failed to load test assembly" exception. Analysis: Basically, I had changed some code and started the test runner to run the tests. The same DLL had already been deployed to the GAC, so the test runner was actually trying to use the old version of the assembly and thus could not load it. Solution: Deploy the current version of the DLL to the GAC and re-run your tests; it works like a charm.
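
    A minimal sketch of the redeploy step, assuming the Windows SDK's gacutil tool is on the path and using a made-up assembly name (/i installs the assembly and /f forces it to replace the copy already in the GAC):

        gacutil /if MySharePointProject.dll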

    Read the article

  • WCF Keep Alive: Whether to disable keepAliveEnabled

    - by Lijo
    I have a WCF web service hosted in a load-balanced environment. I do not need any WCF session-related functionality in the service. Question: In which scenarios will performance be best with keepAliveEnabled = false, and in which with keepAliveEnabled = true? Reference, from "Load Balancing": By default, the BasicHttpBinding sends a connection HTTP header in messages with a Keep-Alive value, which enables clients to establish persistent connections to the services that support them. This configuration offers enhanced throughput because previously established connections can be reused to send subsequent messages to the same server. However, connection reuse may cause clients to become strongly associated to a specific server within the load-balanced farm, which reduces the effectiveness of round-robin load balancing. If this behavior is undesirable, HTTP Keep-Alive can be disabled on the server using the KeepAliveEnabled property with a CustomBinding or user-defined Binding.
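
    A minimal sketch of the server-side option mentioned in the quoted guidance: a CustomBinding that mirrors BasicHttpBinding (SOAP 1.1 text encoding over HTTP) but with Keep-Alive turned off. The service and contract names here are hypothetical.

        var encoding = new TextMessageEncodingBindingElement { MessageVersion = MessageVersion.Soap11 };
        var transport = new HttpTransportBindingElement { KeepAliveEnabled = false };
        var binding = new CustomBinding(encoding, transport);

        // Host the (hypothetical) service with the Keep-Alive-free binding.
        var host = new ServiceHost(typeof(MyService), new Uri("http://localhost:8080/MyService"));
        host.AddServiceEndpoint(typeof(IMyService), binding, string.Empty);
        host.Open();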

    Read the article

  • RemoveAll Dictionary Extension Method

    - by João Angelo
    Removing from a dictionary all the elements where the keys satisfy a set of conditions is something I needed to do more than once so I implemented it as an extension method to the IDictionary<TKey, TValue> interface. Here’s the code: public static class DictionaryExtensions { /// <summary> /// Removes all the elements where the key match the conditions defined by the specified predicate. /// </summary> /// <typeparam name="TKey"> /// The type of the dictionary key. /// </typeparam> /// <typeparam name="TValue"> /// The type of the dictionary value. /// </typeparam> /// <param name="dictionary"> /// A dictionary from which to remove the matched keys. /// </param> /// <param name="match"> /// The <see cref="Predicate{T}"/> delegate that defines the conditions of the keys to remove. /// </param> /// <exception cref="ArgumentNullException"> /// dictionary is null /// <br />-or-<br /> /// match is null. /// </exception> /// <returns> /// The number of elements removed from the <see cref="IDictionary{TKey, TValue}"/>. /// </returns> public static int RemoveAll<TKey, TValue>( this IDictionary<TKey, TValue> dictionary, Predicate<TKey> match) { if (dictionary == null) throw new ArgumentNullException("dictionary"); if (match == null) throw new ArgumentNullException("match"); var keysToRemove = dictionary.Keys.Where(k => match(k)).ToList(); if (keysToRemove.Count == 0) return 0; foreach (var key in keysToRemove) { dictionary.Remove(key); } return keysToRemove.Count; } }
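
    A quick usage sketch for the extension above, with made-up data:

        var sessions = new Dictionary<string, DateTime>
        {
            { "user:1", DateTime.UtcNow },
            { "user:2", DateTime.UtcNow },
            { "job:7",  DateTime.UtcNow }
        };

        // Remove every entry whose key identifies a user session.
        int removed = sessions.RemoveAll(key => key.StartsWith("user:"));

        // removed == 2; only "job:7" is left in the dictionary.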

    Read the article

  • Are there well-known PowerShell coding conventions?

    - by Tahir Hassan
    Are there any well-defined conventions for programming in PowerShell? For example, in scripts that are to be maintained long-term: do we use the real cmdlet name or an alias? Do we specify the cmdlet parameter name in full or only partially (dir -Recurse versus dir -r)? When specifying string arguments for cmdlets, do we enclose them in quotes (New-Object 'System.Int32' versus New-Object System.Int32)? When writing functions and filters, do we specify the types of parameters? Do we write cmdlets in the (official) correct case? For keywords like BEGIN...PROCESS...END, do we write them in uppercase only? It seems that MSDN lacks a coding conventions document for PowerShell, while such a document exists for C#, for example.

    Read the article

  • Should I create my own Assert class based on these reasons?

    - by Mike
    The main reason I don't like Debug.Assert is the fact that these assertions are disabled in Release. I know that there's a performance reason for that, but at least in my situation I believe the gains would outweigh the cost. (By the way, I'm guessing this is the situation in most cases). And yes, I know that you can use Trace.Assert instead. But even though that would work, I find the name Trace distracting, since I don't see this as tracing. The other reason to create my own class is laziness I guess, since I could write methods for the most usual cases like Assert.IsNotNull, Assert.Equals and so forth. The second part of my question has to do with using Environment.FailFast in this class. Would that be a good idea? I do like the ideas put forth in this document. That's pretty much where I got the idea from. One last point. Does creating a design like this imply having an untestable code path, as described in this answer by Eric Lippert on a different (but related) question?
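
    A minimal sketch of such a class, assuming the fail-fast behaviour described in the linked document is what's wanted. The class name Verify is made up to avoid colliding with the test frameworks' Assert types.

        public static class Verify
        {
            public static void IsNotNull(object value, string name)
            {
                That(value != null, name + " must not be null.");
            }

            public static void AreEqual<T>(T expected, T actual)
            {
                That(object.Equals(expected, actual), "Expected '" + expected + "' but got '" + actual + "'.");
            }

            // Unlike Debug.Assert, this is not compiled away in Release builds,
            // and Environment.FailFast stops the process immediately.
            public static void That(bool condition, string message)
            {
                if (!condition)
                    Environment.FailFast("Assertion failed: " + message);
            }
        }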

    Read the article

  • Profiling SharePoint with ANTS Performance Profiler 5.2

    Using ANTS Performance Profiler with SharePoint has, previously, been possible, but not easy. Version 5.2 of ANTS Performance Profiler changes all that, and Chris Allen has put together a straight-forward guide to profiling SharePoint, demonstrating just how much easier it has become.

    Read the article

  • Handling Deployment to Multiple Environments

    - by JayGee
    How should I handle deploying web applications to multiple servers? Constraints: I have a dev, test and prod environment. No build server is available. Developers can't deploy to prod. The people who do deploy to prod copy files from test to prod; they don't have VS installed. Currently: The way it's handled is with web.config transforms. However, deploying to prod involves putting prod code on the test server, from where it is copied over. Problem: Sometimes simple mistakes are made, such as forgetting to change test back to the right environment after deployment, or the test config gets moved to prod instead of the prod config. Solution: So the question is, what is the best way to prevent these mistakes from happening? My first thought is to let the app determine which server it's on at runtime and use the appropriate settings/connection strings/etc. (see the sketch below). However, the server names could change in the not too distant future, so if multiple apps are hard-coded, that would mean updating all of them. The easiest way to handle that situation would be to place a DLL in the GAC that determines the environment. Are there any drawbacks or possible complications that this would cause? Or is there a better solution to the problem than this?
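
    A sketch of the runtime-detection idea weighed up above, with made-up server names; whether this belongs in a GAC-deployed DLL is exactly the open question.

        public static class DeploymentEnvironment
        {
            // Maps the machine the code is running on to an environment name.
            public static string Current
            {
                get
                {
                    switch (Environment.MachineName.ToUpperInvariant())
                    {
                        case "DEVWEB01":  return "Dev";
                        case "TESTWEB01": return "Test";
                        case "PRODWEB01": return "Prod";
                        default:
                            throw new InvalidOperationException(
                                "Unrecognised server: " + Environment.MachineName);
                    }
                }
            }
        }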

    Read the article

  • How to Create Features for Windows SharePoint Services 3.0

    To customise a SharePoint (WSS 3.0) site, you'll need to understand 'Features'. The 'Feature' framework has become the most important method of customising a SharePoint site, because it is now defined by a list of Features, a layout page and a master page. One templated site can be turned into another by toggling Features and maybe switching the layout page or master page. Charles Lee explains.

    Read the article

  • Wikipedia A* pathfinding algorithm takes a lot of time

    - by Vee
    I've successfully implemented A* pathfinding in C# but it is very slow, and I don't understand why. I even tried not sorting the openNodes list but it's still the same. The map is 80x80, and there are 10-11 nodes. I took the pseudocode from here Wikipedia And this is my implementation: public static List<PGNode> Pathfind(PGMap mMap, PGNode mStart, PGNode mEnd) { mMap.ClearNodes(); mMap.GetTile(mStart.X, mStart.Y).Value = 0; mMap.GetTile(mEnd.X, mEnd.Y).Value = 0; List<PGNode> openNodes = new List<PGNode>(); List<PGNode> closedNodes = new List<PGNode>(); List<PGNode> solutionNodes = new List<PGNode>(); mStart.G = 0; mStart.H = GetManhattanHeuristic(mStart, mEnd); solutionNodes.Add(mStart); solutionNodes.Add(mEnd); openNodes.Add(mStart); // 1) Add the starting square (or node) to the open list. while (openNodes.Count > 0) // 2) Repeat the following: { openNodes.Sort((p1, p2) => p1.F.CompareTo(p2.F)); PGNode current = openNodes[0]; // a) We refer to this as the current square.) if (current == mEnd) { while (current != null) { solutionNodes.Add(current); current = current.Parent; } return solutionNodes; } openNodes.Remove(current); closedNodes.Add(current); // b) Switch it to the closed list. List<PGNode> neighborNodes = current.GetNeighborNodes(); double cost = 0; bool isCostBetter = false; for (int i = 0; i < neighborNodes.Count; i++) { PGNode neighbor = neighborNodes[i]; cost = current.G + 10; isCostBetter = false; if (neighbor.Passable == false || closedNodes.Contains(neighbor)) continue; // If it is not walkable or if it is on the closed list, ignore it. if (openNodes.Contains(neighbor) == false) { openNodes.Add(neighbor); // If it isn’t on the open list, add it to the open list. isCostBetter = true; } else if (cost < neighbor.G) { isCostBetter = true; } if (isCostBetter) { neighbor.Parent = current; // Make the current square the parent of this square. neighbor.G = cost; neighbor.H = GetManhattanHeuristic(current, neighbor); } } } return null; } Here's the heuristic I'm using: private static double GetManhattanHeuristic(PGNode mStart, PGNode mEnd) { return Math.Abs(mStart.X - mEnd.X) + Math.Abs(mStart.Y - mEnd.Y); } What am I doing wrong? It's an entire day I keep looking at the same code.
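
    Without pre-empting the answers, one hedged observation on the code above: openNodes is fully re-sorted on every pass through the loop, which by itself makes each iteration O(n log n). A minimal sketch of a cheaper selection step, using only the PGNode.F member already present in the question, is a single linear scan (a binary heap or priority queue would be the usual next step):

        // Pick the open node with the lowest F without sorting the whole list.
        PGNode current = openNodes[0];
        foreach (PGNode candidate in openNodes)
        {
            if (candidate.F < current.F)
                current = candidate;
        }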

    Read the article

  • Which is more maintainable -- boolean assignment via if/else or boolean expression?

    - by Bret Walker
    Which would be considered more maintainable? if (a == b) c = true; else c = false; or c = (a == b); I've tried looking in Code Complete, but can't find an answer. I think the first is more readable (you can literally read it out loud), which I also think makes it more maintainable. The second one certainly makes more sense and reduces code, but I'm not sure it's as maintainable for C# developers (I'd expect to see this idiom more in, for example, Python).

    Read the article

  • Installing an asp application on a dnn server

    - by Cody Henrichsen
    I created a registration database/web application in C# for some workshops. The requesting organization is hosted on a DotNetNuke server. What changes do I need to make to the web.config so it can run under the site? Currently, when I try to go to a page I get an error: Server Error in '/' Application. Configuration Error. Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately. Parser Error Message: It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS.

    Read the article
