Search Results

Search found 27 results on 2 pages for 'brickner'.

  • FxCop CA2227 warning and ReadOnlyCollection<T>

    - by brickner
    In my VS2008 SP1, .NET 3.5 SP1 project, I have different classes that contain different properties. I use C# 3.0 auto properties a lot. Some of these properties need to be collections, and since I want to keep things simple, I use ReadOnlyCollection<T> for these properties. I don't want to use IEnumerable<T> since I want random access to the elements. I use Code Analysis (FxCop rules) and I get the CA2227 warning. I don't understand why a ReadOnlyCollection<T> property shouldn't have a set method, since the collection itself can't be changed... The set method can only do exactly what the property can do. Example: using System.Collections.ObjectModel; namespace CA2227 { public class MyClass { public ReadOnlyCollection<int> SomeNumbers { get; set; } } } CA2227 : Microsoft.Usage : Change 'MyClass.SomeNumbers' to be read-only by removing the property setter. C:\Users...\Visual Studio 2008\Projects\CA2227\MyClass.cs 7 CA2227
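
    One way to satisfy CA2227 is to remove the setter and back the property with a readonly field instead - a minimal sketch reusing the names from the example above (the constructor is an assumption about how the collection gets in):

      using System.Collections.ObjectModel;

      namespace CA2227
      {
          public class MyClass
          {
              private readonly ReadOnlyCollection<int> _someNumbers;

              public MyClass(ReadOnlyCollection<int> someNumbers)
              {
                  _someNumbers = someNumbers;
              }

              // No setter, so CA2227 no longer applies; callers still get random access.
              public ReadOnlyCollection<int> SomeNumbers
              {
                  get { return _someNumbers; }
              }
          }
      }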

    Read the article

  • Should I suppress CA1062: Validate arguments of public methods?

    - by brickner
    I've recently upgraded my project to Visual Studio 2010 from Visual Studio 2008. In Visual Studio 2008, this Code Analysis rule doesn't exist. Now I'm not sure whether I should use this rule or not. I'm building an open source library, so it seems important to keep people safe from making mistakes. However, if all I'm going to do is throw ArgumentNullException when the parameter is null, it seems like writing useless code, since ArgumentNullException will be thrown even if I don't write that code. Should I remove the rule or fix the violations?
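
    For comparison, fixing (rather than suppressing) a CA1062 violation usually amounts to an explicit null check at the top of each externally visible method - a minimal sketch; the class and method names are made up for illustration:

      using System;

      public static class ChecksumCalculator   // hypothetical library type
      {
          public static int Sum(byte[] buffer)
          {
              if (buffer == null)
                  throw new ArgumentNullException("buffer");   // the explicit validation CA1062 asks for

              int total = 0;
              foreach (byte b in buffer)
                  total += b;
              return total;
          }
      }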

    Read the article

  • Do Precompiled headers help with rebuilds?

    - by brickner
    I've read some of the questions about precompiled headers but couldn't find a direct answer to this. I usually rebuild my entire Visual Studio 2010 solution. One of the projects in the solution is a C++/CLI project, and I thought that using precompiled headers in that project would speed up compilation. After some experiments, it seems that using precompiled headers only slows down the rebuild process. Do precompiled headers only help with builds that don't start from completely cleaned files?

    Read the article

  • How do I upgrade ReSharper 4.5 settings to ReSharper 5.0 settings?

    - by brickner
    I've recently upgraded my code from Visual Studio 2008 / .NET 3.5 to Visual Studio 2010 / .NET 4.0. I used ReSharper 4.5 and now I'm using ReSharper 5.0. I want my ReSharper 4.5 settings (the ones in the .resharper file) to be upgraded to ReSharper 5.0 settings. Is there an automatic way to do it, or should I do it manually? Would simply overwriting the .5.0.ReSharper file with the .4.5.resharper file do the trick?

    Read the article

  • Using dlls compiled in Visual Studio 2010 targeting .NET Framework 4.0 in Visual Studio 2008

    - by brickner
    I know this is close to Can I use .NET 4.0 beta in Visual Studio 2008?, but my question is a bit different. I have a project that now uses .NET 4.0 (target .NET Framework 4.0) in Visual Studio 2010. Is it possible to use the project's compiled dlls in Visual Studio 2008? How? I don't want to use .NET 4.0 directly in Visual Studio 2008, only the compiled dlls that target .NET Framework 4.0 (this is how my question differs from what has been asked so far). I know that I was able to use .NET 3.5 in Visual Studio 2005, so why not .NET 4.0 in Visual Studio 2008?

    Read the article

  • How do I merge cells of the same column in LyX?

    - by brickner
    I have 3 subfigures that I want to arrange so that one is on the left and two are on the right (one above the other): row 1: Figure 1 | Figure 2; row 2: Figure 1 | Figure 3. Figure 1 should, of course, appear only once, spanning the entire left column. I thought I should use a 2x2 table to arrange them, but I can't find a way to merge the two cells of the same column into one cell in order to put Figure 1 there. How can I merge the two cells of the same column?
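
    For what it's worth, the same arrangement can also be written directly in LaTeX (for example via ERT in LyX) by placing the left figure in one minipage and stacking the other two in a second minipage, instead of merging table cells - a rough sketch assuming the subfig and graphicx packages and placeholder image names:

      \begin{figure}
        \centering
        % Left column: Figure 1 spans the full height.
        \begin{minipage}{0.48\textwidth}
          \subfloat[Figure 1]{\includegraphics[width=\linewidth]{fig1}}
        \end{minipage}
        % Right column: Figures 2 and 3 stacked.
        \begin{minipage}{0.48\textwidth}
          \subfloat[Figure 2]{\includegraphics[width=\linewidth]{fig2}}\\
          \subfloat[Figure 3]{\includegraphics[width=\linewidth]{fig3}}
        \end{minipage}
        \caption{One subfigure on the left, two stacked on the right.}
      \end{figure}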

    Read the article

  • Calculating the square of BigInteger

    - by brickner
    Hi, I'm using the .NET 4 System.Numerics.BigInteger structure. I need to calculate the square (x^2) of very large numbers. If x is a BigInteger, what is the time complexity of x*x or of BigInteger.Pow(x, 2)? If it's worse than O(n^2), do you have a better implementation? Maybe something like the Schönhage–Strassen algorithm?
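
    Whatever the answer turns out to be, the two forms are easy to compare empirically (this measures wall-clock time, not asymptotic complexity) - a rough sketch using Stopwatch on an ~800,000-bit random value:

      using System;
      using System.Diagnostics;
      using System.Numerics;

      static class SquareTiming
      {
          static void Main()
          {
              var random = new Random(42);
              byte[] bytes = new byte[100000];
              random.NextBytes(bytes);
              bytes[bytes.Length - 1] &= 0x7F;      // clear the sign bit so x is non-negative

              BigInteger x = new BigInteger(bytes);

              var sw = Stopwatch.StartNew();
              BigInteger byMultiply = x * x;
              Console.WriteLine("x * x:                {0} ms", sw.ElapsedMilliseconds);

              sw.Restart();
              BigInteger byPow = BigInteger.Pow(x, 2);
              Console.WriteLine("BigInteger.Pow(x, 2): {0} ms", sw.ElapsedMilliseconds);

              Console.WriteLine("Results equal: {0}", byMultiply == byPow);
          }
      }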

    Read the article

  • Benchmark for a .NET WinPcap wrapper

    - by brickner
    I'm developing a .NET wrapper for WinPcap called Pcap.Net. I'm trying to make sure this wrapper has high performance, and I want to compare it to WinPcap itself and to other .NET wrappers for WinPcap. The features I want to profile are: WinPcap native features (sending packets in different ways, receiving packets in different ways...), interpreting packets that Pcap.Net knows how to interpret (like Ethernet, IPv4, UDP, TCP, ICMP, ...), and building packets that Pcap.Net knows how to build (the same types it knows how to interpret). I also want to be able to profile the benchmark using the Visual Studio 2010 Ultimate profiling tools. My question is: what should my benchmark do, exactly, to cover these issues, and how would you suggest building it?
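
    Whatever scenarios make the list, a small harness that warms up each scenario and then times it over many iterations is usually enough for the Visual Studio profiler to attribute the cost meaningfully - a minimal sketch; the scenario bodies are placeholders, not actual Pcap.Net calls:

      using System;
      using System.Diagnostics;

      static class Benchmark
      {
          // Times 'action' over 'iterations' runs and prints the throughput.
          static void Run(string name, int iterations, Action action)
          {
              action();                              // warm-up: JIT, caches
              var sw = Stopwatch.StartNew();
              for (int i = 0; i < iterations; ++i)
                  action();
              sw.Stop();
              Console.WriteLine("{0}: {1:N0} ops/s", name, iterations / sw.Elapsed.TotalSeconds);
          }

          static void Main()
          {
              Run("build packet", 1000000, () => { /* build an Ethernet/IPv4/UDP packet here */ });
              Run("interpret packet", 1000000, () => { /* parse a captured packet here */ });
          }
      }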

    Read the article

  • Documenting using Sandcastle: Referring to an enum value using <see>

    - by brickner
    I'm using Sandcastle 2.4.10520 and Sandcastle Help File Builder 1.8.0 to generate a .chm help file. In my documentation, I'm using <see> tags. If I refer to an enum like <see cref="NumberStyles"/>, it works perfectly. If I refer to an enum value like <see cref="NumberStyles.AllowTrailingWhite"/>, I get a link in the documentation file, but the link leads me to an MSDN "Page not found". I don't get any warnings - my XML documentation is correct. I've noticed that MSDN pages that refer to an enum value also have a "Page not found" link. For example: UInt64.Parse Method (String, NumberStyles, IFormatProvider) refers to NumberStyles.AllowHexSpecifier, and this leads to another MSDN "Page not found". Should I refer to the enum instead of the enum value? What should I do to refer to an enum value? Is it even possible?
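
    For context, the two forms in question look like this in the XML documentation comments; the member names are simply the ones from the example above, and whether the enum-value link resolves is up to the documentation tool, not the syntax:

      using System.Globalization;

      public static class Docs
      {
          /// <summary>
          /// Link to the enum type: <see cref="NumberStyles"/>.
          /// Link to a specific enum value: <see cref="NumberStyles.AllowTrailingWhite"/>.
          /// </summary>
          public static int ParseHex(string text)
          {
              return int.Parse(text, NumberStyles.HexNumber, CultureInfo.InvariantCulture);
          }
      }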

    Read the article

  • Why must fixed size buffers (arrays) be unsafe?

    - by brickner
    Let's say I want to have a value type of 7 bytes (or 3 or 777). I can define it like this: public struct Buffer71 { public byte b0; public byte b1; public byte b2; public byte b3; public byte b4; public byte b5; public byte b6; } A simpler way to define it is using a fixed buffer: public struct Buffer72 { public unsafe fixed byte bs[7]; } Of course the second definition is simpler. The problem lies with the unsafe keyword that must be used for fixed buffers. I understand that this is implemented using pointers and is hence unsafe. My question is: why does it have to be unsafe? Why can't C# provide arbitrary constant-length arrays and keep them as a value type, instead of forcing a choice between a reference-type array and an unsafe buffer?
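
    For completeness, a safe (if verbose) middle ground is to keep the explicit byte fields and expose them through an indexer, so the type stays a value type without any unsafe code - a sketch of that idea:

      using System;

      public struct Buffer7
      {
          private byte _b0, _b1, _b2, _b3, _b4, _b5, _b6;

          public byte this[int index]
          {
              get
              {
                  switch (index)
                  {
                      case 0: return _b0;
                      case 1: return _b1;
                      case 2: return _b2;
                      case 3: return _b3;
                      case 4: return _b4;
                      case 5: return _b5;
                      case 6: return _b6;
                      default: throw new ArgumentOutOfRangeException("index");
                  }
              }
              set
              {
                  switch (index)
                  {
                      case 0: _b0 = value; break;
                      case 1: _b1 = value; break;
                      case 2: _b2 = value; break;
                      case 3: _b3 = value; break;
                      case 4: _b4 = value; break;
                      case 5: _b5 = value; break;
                      case 6: _b6 = value; break;
                      default: throw new ArgumentOutOfRangeException("index");
                  }
              }
          }
      }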

    Read the article

  • Why do I get Code Analysis CA1062 on an out parameter in this code?

    - by brickner
    I have a very simple piece of code (simplified from the original code - so I know it's not very clever code) that, when I compile it in Visual Studio 2010 with Code Analysis, gives me warning CA1062: Validate arguments of public methods. public class Foo { protected static void Bar(out int[] x) { x = new int[1]; for (int i = 0; i != 1; ++i) x[i] = 1; } } The warning I get: CA1062 : Microsoft.Design : In externally visible method 'Foo.Bar(out int[])', validate local variable '(*x)', which was reassigned from parameter 'x', before using it. I don't understand why I get this warning or how I can resolve it without suppressing it. Can new return null? Is this a Visual Studio 2010 bug?
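
    If it does turn out to be a false positive, the warning can be silenced in place with SuppressMessage rather than in a global suppression file - a sketch (the justification text is mine):

      using System.Diagnostics.CodeAnalysis;

      public class Foo
      {
          [SuppressMessage("Microsoft.Design", "CA1062:ValidateArgumentsOfPublicMethods",
              Justification = "x is assigned by 'new' before any use, so it cannot be null.")]
          protected static void Bar(out int[] x)
          {
              x = new int[1];
              for (int i = 0; i != 1; ++i)
                  x[i] = 1;
          }
      }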

    Read the article

  • Ignoring build number when referencing dll

    - by brickner
    I have one solution with a .NET 4.0 project (C#) that produces a delay-signed dll, which I Dotfuscate and then sign. EDIT: This is how I version the dll: [assembly: AssemblyVersion("0.7.0.*")] [assembly: AssemblyFileVersion("0.7.0.0")] I have another solution with a .NET 4.0 project (C++/CLI) that references the signed dll and produces a signed dll (actually, delay-signed and then signed in a post-build step because of a flaw in the C++ build system). The problem is that the reference to the dll contains a specific version number, which includes even the build number (and I do want to have a build number). Every time I build the referenced dll, I have to change the project settings file (.vcxproj) so that it references the new version of the dll. Since I work with source control, this is very inconvenient (different computers might have different build numbers, since each computer builds its own copy of the referenced dll - the referenced dll is not in source control). If I don't change the reference, I get a warning: warning MSB3245: Could not resolve this reference. Could not locate the assembly... And many errors like this: error C3083: 'Foo': the symbol to the left of a '::' must be a type These errors are resolved once I change the reference. How do I make the reference ignore the build number, or even the entire version number?
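
    One common workaround (not specific to C++/CLI) is to keep AssemblyVersion fixed - so the assembly identity that references bind to never changes - and let AssemblyFileVersion carry the changing build number; a sketch, with a made-up build number:

      using System.Reflection;

      // Part of the assembly identity: keep it fixed so referencing projects
      // don't have to be updated on every build.
      [assembly: AssemblyVersion("0.7.0.0")]

      // The Win32 file version can carry the real build number (for example,
      // stamped by a build script) without affecting assembly references.
      [assembly: AssemblyFileVersion("0.7.123.0")]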

    Read the article

  • .NET Regular expressions on bytes instead of chars

    - by brickner
    Hi, I'm trying to do some parsing that will be easier using regular expressions. The input is an array (or enumeration) of bytes. I don't want to convert the bytes to chars, for the following reasons: computation efficiency, memory consumption efficiency, and the fact that some non-printable bytes might be complex to convert to chars - not all the bytes are printable. So I can't use Regex. The only solution I know of is using Boost.Regex (which works on bytes - C chars), but that is a C++ library, and wrapping it using C++/CLI would take considerable work. How can I use regular expressions on bytes in .NET directly, without working with .NET strings and chars? Thank you.
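
    If a conversion ever becomes acceptable after all, one lossless trick is to decode the bytes with the Latin-1 (ISO-8859-1) encoding, which maps every byte 0x00-0xFF to the char with the same value, and then run Regex on the resulting string - a sketch (it still copies the data, so it doesn't address the efficiency concerns above):

      using System;
      using System.Text;
      using System.Text.RegularExpressions;

      static class ByteRegex
      {
          static void Main()
          {
              byte[] data = { 0x47, 0x45, 0x54, 0x20, 0x00, 0xFF, 0x0D, 0x0A };   // "GET ", NUL, 0xFF, CRLF

              // ISO-8859-1 maps bytes one-to-one onto chars U+0000-U+00FF, so nothing is lost.
              string text = Encoding.GetEncoding("ISO-8859-1").GetString(data);

              // \xHH escapes let the pattern refer to non-printable byte values.
              Match m = Regex.Match(text, @"GET .*?\x0D\x0A", RegexOptions.Singleline);
              Console.WriteLine(m.Success);   // True
          }
      }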

    Read the article

  • Learn Obj-C Memory Management

    - by Joshua Brickner
    I come from a web development background. I'm good at XHTML, CSS, JavaScript, PHP and MySQL, because I use all of those technologies at my day job. Recently I've been tinkering with Obj-C in Xcode in the evenings and on weekends. I've written code for both the iPhone and Mac OS X, but I can't wrap my head around the practicalities of memory management. I understand the high-level concepts, but I'm unclear on how they play out in implementation. Web developers typically don't have to worry about these sorts of things, so it's pretty new to me. I've tried adding memory management to my projects, but things usually end up crashing. How should I go about learning this? Any suggestions are appreciated.

    Read the article

  • Excluding standard directories from code coverage results with C++/CLI

    - by brickner
    I have a Visual Studio 2010 .NET 4 solution with C# projects and a C++/CLI project. I use Visual Studio's built-in unit tests and code coverage. Apart from the fact that the Visual Studio 2010 coverage tool seems to be much weaker for C++/CLI projects than the Visual Studio 2008 coverage tool, I get weird results. For example, I get uncovered code in this file: c:\program files (x86)\microsoft visual studio 10.0\vc\include\xstring and in some other files in that directory. I want to exclude this code from the coverage results. Is there a way to put some exclusion attributes on that code? If not, is there a different automatic way to exclude that code from coverage? If not, is there a way to use the EXCLUDE option to exclude it? Can it be done automatically within Visual Studio, without running the coverage tool from a command prompt? Any other solutions?

    Read the article

  • Why do I get CA1806 when I catch an exception in C++/CLI?

    - by brickner
    I've recently upgraded my project from Visual Studio 2008 to Visual Studio 2010. With Code Analysis enabled and compiling in Release, I get warning CA1806: Do not ignore method results. I've managed to reduce the code that produces the warning to this: .h file: public ref class Foo { public: void Bar(); }; .cpp file: void Foo::Bar() { try { } catch (const std::exception&) // here I get the warning { } } The warning: CA1806 : Microsoft.Usage : 'Foo::Bar(void)' calls 'Global::__CxxRegisterExceptionObject(void*, void*)' but does not use the HRESULT or error code that the method returns. This could lead to unexpected behavior in error conditions or low-resource situations. Use the result in a conditional statement, assign the result to a variable, or pass it as an argument to another method. If I try to use the exception value, or use catch(...), the warning still appears. If I catch managed exceptions instead, or compile in Debug, I don't get the warning. Why do I get this warning? UPDATE: I've decided to open a bug report on Microsoft Connect.

    Read the article

  • What do you choose, protected or internal?

    - by brickner
    Suppose I have a class with a method I want to be both protected and internal - I want only derived classes within the same assembly to be able to call it. Since protected internal means protected or internal, you have to make a choice. What do you choose in this case - protected or internal?
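
    To make the trade-off concrete: there is no accessibility in this version of C# that means protected AND internal, so each option admits callers you didn't intend - as the small example below shows.

      public class Base
      {
          // Option 1: protected - callable from derived classes in ANY assembly.
          protected void FrobProtected() { }

          // Option 2: internal - callable from ANY type in THIS assembly, derived or not.
          internal void FrobInternal() { }

          // protected internal - callable when EITHER condition holds (the union, not the intersection).
          protected internal void FrobEither() { }
      }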

    Read the article

  • Parsing every part of an HTTP header field-value

    - by brickner
    Hi all. I'm parsing HTTP data directly from packets (either TCP-reconstructed or not; you can assume they are). I'm looking for the best way to parse HTTP as accurately as possible. The main issue here is the HTTP header. Looking at the basic HTTP/1.1 RFC, it seems that HTTP header parsing would be complex: the RFC describes very complex grammar rules for the different parts of the header. Should I write regular expressions to parse the different parts of the header? The basic parsing I've written so far handles the generic HTTP header: message-header = field-name ":" [ field-value ] I've also included replacing inner LWS with SP and combining repeated headers with the same field-name into comma-separated values, as described in section 4.2. However, looking at section 14.9 (Cache-Control), for example, shows that in order to parse the different parts of the field-value I need a much more complex parsing scheme. How do you suggest I handle the complex parts of HTTP parsing (specifically the field-value), assuming I want to give the parser's users the full capabilities of HTTP and to parse every part of HTTP? Design suggestions would also be appreciated. Thanks.
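
    As one possible design (a sketch, not a full RFC 2616 grammar): handle the generic structure that most field-values share - comma-separated elements, ';'-separated parameters, token or quoted-string values - with a small shared tokenizer, and build typed header classes (Cache-Control, Content-Type, ...) on top of it. For example, a splitter that respects quoted-strings (quoted-pair escaping is omitted for brevity):

      using System.Collections.Generic;
      using System.Text;

      static class FieldValueParser
      {
          // Splits a field-value on a separator (',' or ';'), ignoring separators
          // that appear inside quoted-strings.
          public static IEnumerable<string> Split(string fieldValue, char separator)
          {
              var current = new StringBuilder();
              bool inQuotes = false;
              foreach (char c in fieldValue)
              {
                  if (c == '"')
                      inQuotes = !inQuotes;
                  if (c == separator && !inQuotes)
                  {
                      yield return current.ToString().Trim();
                      current.Length = 0;
                  }
                  else
                  {
                      current.Append(c);
                  }
              }
              yield return current.ToString().Trim();
          }
      }

    For instance, Split("max-age=3600, no-cache", ',') yields "max-age=3600" and "no-cache", and each element can then be split again on ';' or '=' as the specific header requires.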

    Read the article

  • How to call memcmp() on two parts of byte[] (with offset)?

    - by brickner
    Hi, I want to compare parts of byte[] buffers efficiently, so I understand memcmp() should be used. I know I can use P/Invoke to call memcmp() - http://stackoverflow.com/questions/43289/comparing-two-byte-arrays-in-net But I want to compare only parts of the byte[] buffers - using offsets - and there is no memcmp() overload that takes offsets, since it works on pointers. int CompareBuffers(byte[] buffer1, int offset1, byte[] buffer2, int offset2, int count) { // Somehow call memcmp(&buffer1+offset1, &buffer2+offset2, count) } Should I use C++/CLI to do that? Should I use P/Invoke with IntPtr? How? Thank you.
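
    One way that avoids C++/CLI entirely is to pin the arrays with fixed and pass offset pointers to memcmp through P/Invoke - a sketch (requires compiling with /unsafe; argument validation is omitted):

      using System;
      using System.Runtime.InteropServices;

      static class NativeCompare
      {
          [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
          private static extern int memcmp(IntPtr b1, IntPtr b2, UIntPtr count);

          public static unsafe int CompareBuffers(byte[] buffer1, int offset1,
                                                  byte[] buffer2, int offset2, int count)
          {
              // Pin both arrays and point directly at the requested offsets.
              fixed (byte* p1 = &buffer1[offset1])
              fixed (byte* p2 = &buffer2[offset2])
              {
                  return memcmp((IntPtr)p1, (IntPtr)p2, new UIntPtr((uint)count));
              }
          }
      }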

    Read the article

  • Adding Runtime Intelligence Application Analytics for a library and not an application

    - by brickner
    I want to add usage statistics for a .NET 4.0 library I'm writing on CodePlex. I'm trying to follow the steps described here, but my problem lies with the fact that what I'm writing is a library and not an application. One of the steps is to add the Setup and Teardown attributes. I thought about putting the Setup attribute on a static constructor, or in some other place that will run once per use of the library. My problem lies with the Teardown attribute, which should be placed on code that ends the usage - I don't know where to put this attribute. Is it possible to get usage statistics for a library? Maybe I can register for an event that will fire when the application unloads the dll?
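
    Independently of where the analytics attributes end up, the runtime does expose events a library can hook to run code when usage ends - a sketch of that idea only (the class is hypothetical and is not tied to the Runtime Intelligence attributes themselves):

      using System;

      internal static class UsageTracker
      {
          // Runs once, the first time the library touches this class.
          static UsageTracker()
          {
              // "Setup"-style work could go here.
              AppDomain.CurrentDomain.ProcessExit += OnShutdown;    // normal process exit
              AppDomain.CurrentDomain.DomainUnload += OnShutdown;   // non-default AppDomain unloading
          }

          // Call this from the library's entry points to force the static constructor to run.
          internal static void Touch() { }

          private static void OnShutdown(object sender, EventArgs e)
          {
              // "Teardown"-style work could go here.
          }
      }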

    Read the article

  • Why do I get CA1811 when I call a private method from a public method in C++/CLI?

    - by brickner
    I've recently upgraded my project from Visual Studio 2008 to Visual Studio 2010. With Code Analysis enabled and building in Release, I get warning CA1811: Avoid uncalled private code. I've managed to reduce the code to this: .h file: public ref class Foo { public: virtual System::String^ ToString() override; private: static System::String^ Bar(); }; .cpp file: String^ Foo::ToString() { return Bar(); } String^ Foo::Bar() { return "abc"; } The warning I get: CA1811 : Microsoft.Performance : 'Foo::Bar(void)' appears to have no upstream public or protected callers. It doesn't matter whether Bar() is static or not. I've tried to reproduce it in C#, but I can't - I can only reproduce it in C++/CLI. Why do I get this warning? Is this a Visual Studio 2010 bug?

    Read the article

  • BigInteger.Parse() on a hexadecimal number gives negative numbers.

    - by brickner
    I've started using the .NET 4 System.Numerics.BigInteger structure and I've encountered a problem. I'm trying to parse a string that contains a hexadecimal number with no sign (i.e. positive), and I'm getting a negative number. For example, I do the following two asserts: Assert.IsTrue(System.Int64.Parse("8", NumberStyles.HexNumber, CultureInfo.InvariantCulture) > 0, "Int64"); Assert.IsTrue(System.Numerics.BigInteger.Parse("8", NumberStyles.HexNumber, CultureInfo.InvariantCulture) > 0, "BigInteger"); The first assert succeeds, the second assert fails - I actually get -8 instead of 8 in the BigInteger. The problem seems to occur when the hexadecimal number starts with a 1 bit rather than a 0 bit (a first digit between 8 and F inclusive). If I add a leading 0, everything works perfectly. Is that bad usage on my part? Is it a bug in BigInteger?
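
    For reference, the hexadecimal form is interpreted as a two's-complement value, so a leading hex digit of 8-F sets the sign bit; prepending a zero digit is the usual way to force a positive result:

      using System;
      using System.Globalization;
      using System.Numerics;

      static class HexSign
      {
          static void Main()
          {
              // "8" has its high bit set, so it parses as a negative two's-complement value.
              Console.WriteLine(BigInteger.Parse("8", NumberStyles.HexNumber));    // -8

              // A leading zero keeps the sign bit clear, giving the positive value.
              Console.WriteLine(BigInteger.Parse("08", NumberStyles.HexNumber));   // 8
          }
      }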

    Read the article

  • BigInteger.ToString() returns more than 50 decimal digits.

    - by brickner
    I'm using the .NET 4 System.Numerics.BigInteger structure and I'm getting results that differ from the documentation. The documentation of the BigInteger.ToString() method says: The ToString() method supports 50 decimal digits of precision. That is, if the BigInteger value has more than 50 digits, only the 50 most significant digits are preserved in the output string; all other digits are replaced with zeros. I have some code that takes a 60-decimal-digit BigInteger and converts it to a string, and the resulting string didn't lose any significant digits: const string vString = "123456789012345678901234567890123456789012345678901234567890"; Assert.AreEqual(60, vString.Length); BigInteger v = BigInteger.Parse(vString); Assert.AreEqual(60, v.ToString().Length); Assert.AreEqual('9', v.ToString()[58]); Assert.AreEqual('1', v.ToString()[0]); Assert.AreEqual(vString, v.ToString()); All the asserts pass. What exactly does the quoted part of the documentation mean?

    Read the article
