Search Results

Search found 204 results on 9 pages for 'sarah vessels'.

Page 2/9 | < Previous Page | 1 2 3 4 5 6 7 8 9  | Next Page >

  • Redirect pages to fix crawl errors

    - by sarah
    Google is giving me a crawl error for pages that I have removed, like www.mysite.com/mypage.html. I want to redirect these pages to the new page www.mysite.com/mysite/mypage. I tried to do that using .htaccess, but instead of fixing the problem, the number of crawl errors increased and a new error appeared for www.mysite.com/www.mysite.com. This is my .htaccess file: <IfModule mod_rewrite.c> RewriteEngine On RewriteBase /sitename/ RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /sitename/index.php [L] </IfModule> # END WordPress Should I add this after the rewrite rules, or should I do something else? RewriteRule ^pagename\.html$ http://www.sitename.com/pagename [R=301]

    Read the article

  • http requests, using sprites and file sizes -

    - by crazy sarah
    Hi all, I'm in the process of finding out all about sprites and how they can speed up your pages. I've used SpriteMe to create an overall sprite image which is 130kb; this is made up of 14 images with a combined total size of about 65kb. So is it better to have one HTTP request and a file size of 130kb, or 14 requests for a total of 65kb? Also, there is a detailed image which has been put into the sprite and which caused its size to go up by about 60kb; this used to be a separate jpg image which was only 30kb. Would I be better off keeping it separate and suffering the additional request?

    Read the article

  • Problem with grails web app running in production: "No such property: save for class: JsecRole"

    - by Sarah Boyd
    I've got a Grails 1.1 web app running great in development, but when I try to run it in production with a SQL Server database it crashes in a weird way. The relevant part of my DataSource.groovy is as follows: environments { development { dataSource { dbCreate = "create-drop" // one of 'create', 'create-drop','update' url = "jdbc:hsqldb:mem:devDB" } } test { dataSource { dbCreate = "update" url = "jdbc:hsqldb:mem:testDb" } } production { dataSource { dbCreate = "update" driverClassName = "com.microsoft.sqlserver.jdbc.SQLServerDriver" username = "sa" password = "pw4db" url = "jdbc:sqlserver://localhost:1433;databaseName=ReleasePlanner;selectMethod=cursor" The error message I receive is: Message: No such property: save for class: JsecRole Caused by: groovy.lang.MissingPropertyException: No such property: save for class: JsecRole Class: ProjectController At Line: [28] Code Snippet: 27: println "###about to create project roles" 28: userManagerService.createProjectRoles(project) 29: userManagerService.addUserToProject(session.user.id.toString(), project, 'owner') } } } The stacktrace is as follows: org.codehaus.groovy.runtime.InvokerInvocationException: groovy.lang.MissingPropertyException: No such property: save for class: JsecRole at org.jsecurity.web.servlet.JSecurityFilter.doFilterInternal(JSecurityFilter.java:382) at org.jsecurity.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:180) Caused by: groovy.lang.MissingPropertyException: No such property: save for class: JsecRole at UserManagerService.createProjectRoles(UserManagerService.groovy:9) at UserManagerService$$FastClassByCGLIB$$6fa73713.invoke(<generated>) at net.sf.cglib.proxy.MethodProxy.invoke(MethodProxy.java:149) at UserManagerService$$EnhancerByCGLIB$$fcf60984.createProjectRoles(<generated>) at UserManagerService$createProjectRoles.call(Unknown Source) at ProjectController$_closure4.doCall(ProjectController.groovy:28) at ProjectController$_closure4.doCall(ProjectController.groovy) ... 2 more Any help is appreciated. Thanks Sarah

    Read the article

  • Method return type

    - by sarah xia
    Hi, in my company, a system is designed to have 3 layers. Layer1 is responsible for business logic handling. Layer3 calls back-end systems. Layer2 sits between the two layers so that layer1 doesn't need to know about the back-end systems. To relay information from layer3, layer2 needs to define an interface for layer1. For example, layer1 wants to check if a PIN from the user is correct. It calls layer2's checkPin() method, and then layer2 calls the relevant back-end system. The checkPin() results could be: correctPin, incorrectPin and internalError. At the moment, we defined the return type as 'int'. So if layer2 returns 0, it means correctPin; if 1 is returned, it means incorrectPin; if 9 is returned, it means internalError. It works. However, I feel a bit uneasy about this approach. Are there better ways to do it? For example, define an enum CheckPinResult{CORRECT_PIN,INCORRECT_PIN,INTERNAL_ERROR} and return the CheckPinResult type? Thanks, Sarah
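
    A minimal sketch of the enum approach proposed at the end of the question (CheckPinResult and checkPin come from the post; the surrounding Layer2 class and the back-end call are hypothetical):

        using System;

        public enum CheckPinResult
        {
            CorrectPin,
            IncorrectPin,
            InternalError
        }

        // Hypothetical layer2 class returning the enum instead of a magic int.
        public class Layer2
        {
            public CheckPinResult CheckPin(string pin)
            {
                try
                {
                    // Placeholder for the real back-end system call.
                    bool isValid = pin == "1234";
                    return isValid ? CheckPinResult.CorrectPin : CheckPinResult.IncorrectPin;
                }
                catch (Exception)
                {
                    return CheckPinResult.InternalError;
                }
            }
        }

    Layer1 can then switch on the returned value instead of having to remember that 9 means an internal error.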

    Read the article

  • Facebook Connect: Permissions Error [200] using "stream.publish" with PHP

    - by Sarah
    Hi all, I've been implementing Facebook Connect into a site and am using both the PHP API, to allow me to automatically post data to a user's wall, and the JS API, for manual posting, permissions dialogs, etc. When the user uses the manual method it works 100%: the popups are displayed correctly, and the data gets posted to their wall properly. However, when I try to use the PHP API I am getting inconsistencies. When I try posting automatically with the PHP API using one account, it works perfectly, every time. But for some other accounts it never works, always returning "Permissions error." The error code is 200, and I've checked the Facebook API documentation and it's pretty vague, saying only "Permissions error. The application does not have permission to perform this action." But that's not true, since it works on some accounts and doesn't work on others. First, I've made sure that the users in question have enabled the extended permission "publish_stream" and that the manual method using the JS API works, so it doesn't seem to be a problem with those specific permissions. There are no apparent differences between the Facebook accounts I've used. So my question is: has anyone run into this problem and found a solution to it? Is there some sort of other permission setting that users must enable for this to work? I've been searching Google and these forums but have not found any solution. The request I am sending is: (Note: the content/image URL/link URL are not the actual data I use) $attachment = array( 'caption' => '{*actor*} commented on <title> "<comment>"', 'media' => array( array( 'type' => 'image', 'src' => 'http://www.test.com/image.jpg', 'href' => 'http://www.test.com' ) ) ); $Facebook->api_client->stream_publish('', $attachment); Thanks, Sarah

    Read the article

  • using ILMerge with .NET 4 libraries

    - by Sarah Vessels
    I'm having trouble using ILMerge in my post-build after upgrading from .NET 3.5/Visual Studio 2008 to .NET 4/Visual Studio 2010. I have a Solution with several projects whose target framework is set to ".NET Framework 4". I use the following ILMerge command to merge the individual project DLLs into a single DLL: if not $(ConfigurationName) == Debug if exist "C:\Program Files (x86)\Microsoft\ILMerge\ILMerge.exe" "C:\Program Files (x86)\Microsoft\ILMerge\ILMerge.exe" /lib:"C:\Windows\Microsoft.NET\Framework64\v4.0.30319" /lib:"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies" /keyfile:"$(SolutionDir)$(SolutionName).snk" /targetplatform:v4 /out:"$(SolutionDir)bin\development\$(SolutionName).dll" "$(SolutionDir)Connection\$(OutDir)Connection.dll" ...other project DLLs... /xmldocs If I leave off specifying the location of the .NET 4 framework directory, I get an "Unresolved assembly reference not allowed: System" error from ILMerge. If I leave off specifying the location of the MSTest directory, I get an "Unresolved assembly reference not allowed: Microsoft.VisualStudio.QualityTools.UnitTestFramework" error. The ILMerge command above works and produces a DLL. When I reference that DLL in another .NET 4 C# project, however, and try to use code within it, I get the following warning: The primary reference "MyILMergedDLL" could not be resolved because it has an indirect dependency on the .NET Framework assembly "mscorlib, Version=4.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" which has a higher version "4.0.65535.65535" than the version "4.0.0.0" in the current target framework. If I then remove the /targetplatform:v4 flag and try to use MyILMergedDLL.dll, I get the following error: The type 'System.Xml.Serialization.IXmlSerializable' is defined in an assembly that is not referenced. You must add a reference to assembly 'System.Xml, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'. It doesn't seem like I should have to do that. Whoever uses my MyILMergedDLL.dll API should not have to add references to whatever libraries it references. How can I get around this?

    Read the article

  • C# Func delegate with params type

    - by Sarah Vessels
    How, in C#, do I have a Func parameter representing a method with this signature? XmlNode createSection(XmlDocument doc, params XmlNode[] childNodes) I tried having a parameter of type Func<XmlDocument, params XmlNode[], XmlNode> but, ooh, ReSharper/Visual Studio 2008 go crazy highlighting that in red.
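
    Func<> type parameters cannot carry modifiers such as params, ref or out, so the usual workaround is to declare a custom delegate type for that signature; a sketch under that assumption (the helper class and element names are illustrative):

        using System.Xml;

        // Unlike Func<...>, a delegate declaration may use params in its signature.
        public delegate XmlNode CreateSection(XmlDocument doc, params XmlNode[] childNodes);

        public static class SectionHelper
        {
            // Accepts any method matching the createSection signature from the question.
            public static XmlNode BuildSample(CreateSection createSection, XmlDocument doc)
            {
                XmlNode first = doc.CreateElement("first");
                XmlNode second = doc.CreateElement("second");
                return createSection(doc, first, second);
            }
        }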

    Read the article

  • how to exclude a Web Reference from Code Coverage in VS 2008 Team System

    - by Sarah Vessels
    When I run my MSTest tests in Visual Studio 2008 Team System and get code coverage results, I always see a particular web service included. I don't care how well this web service is tested, I'm intentionally only using a small part of it. How can I exclude the Web Reference from showing up in my Code Coverage results? I see that someone asked this very question over on Microsoft Connect and it's marked as postponed, but I was hoping someone knew of a workaround.

    Read the article

  • How do I get the member to which my custom attribute was applied?

    - by Sarah Vessels
    I'm creating a custom attribute in C# and I want to do different things based on whether the attribute is applied to a method versus a property. At first I was going to do new StackTrace().GetFrame(1).GetMethod() in my custom attribute constructor to see what method called the attribute constructor, but now I'm unsure what that will give me. What if the attribute was applied to a property? Would GetMethod() return a MethodBase instance for that property? Is there a different way of getting the member to which an attribute was applied in C#? [AttributeUsage(AttributeTargets.Method | AttributeTargets.Property, AllowMultiple = true)] public class MyCustomAttribute : Attribute Update: okay, I might have been asking the wrong question. From within a custom attribute class, how do I get the member (or the class containing the member) to which my custom attribute was applied? Aaronaught suggested against walking up the stack to find the class member to which my attribute was applied, but how else would I get this information from within the constructor of my attribute?
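
    An attribute instance has no built-in reference to the member it decorates; the member is known only to the reflection code that asks for the attribute. A sketch of that outside-in pattern (the Apply method and AttributeScanner class are hypothetical, not from the thread):

        using System;
        using System.Reflection;

        [AttributeUsage(AttributeTargets.Method | AttributeTargets.Property, AllowMultiple = true)]
        public class MyCustomAttribute : Attribute
        {
            // Called by the scanning code below, not by the attribute constructor.
            public void Apply(MemberInfo target)
            {
                if (target is PropertyInfo)
                {
                    Console.WriteLine("Applied to property " + target.Name);
                }
                else if (target is MethodInfo)
                {
                    Console.WriteLine("Applied to method " + target.Name);
                }
            }
        }

        public static class AttributeScanner
        {
            // Finds every member of a type that carries MyCustomAttribute.
            public static void Scan(Type type)
            {
                foreach (MemberInfo member in type.GetMembers(
                    BindingFlags.Public | BindingFlags.NonPublic |
                    BindingFlags.Instance | BindingFlags.Static))
                {
                    foreach (MyCustomAttribute attribute in
                        member.GetCustomAttributes(typeof(MyCustomAttribute), false))
                    {
                        attribute.Apply(member);
                    }
                }
            }
        }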

    Read the article

  • organize using directives, re-run tests?

    - by Sarah Vessels
    Before making a commit, I prefer to run all hundred-something unit tests in my C# Solution, since they only take a couple minutes to run. However, if I've already run them all, all is well, and then I decide to organize the using directives in my Solution, is it really necessary to re-run the unit tests? I have a macro that goes through all files in the Solution and runs Visual Studio's 'Remove and Sort' command on each. In my understanding, so long as all projects still build after using directives are changed around, things should be fine at runtime, too. Is this correct thinking?

    Read the article

  • Getting data from array of DataSet objects returned from web service

    - by Sarah Vessels
    I have a web service that I want to access when it is added as a web reference to my C# project. A particular method in the web service takes a SQL query string and returns the results of the query as a custom type. When I add the web service reference, the method shows up as returning DataSet[] instead of the custom type. This is fine provided I can still somehow access the data returned from the query within those DataSet objects. I ran a particular query that should return 6 rows; I got back a DataSet[] array with 6 elements. However, when I iterate over those DataSet objects, none of them has any tables (via the Tables property on the DataSet). What gives? Where is my data? The web service is tested and works when I use it as a data source in a Report Builder 2.0 report. I am able to send an XML SOAP query to the web service and get back XML results containing my data.

    Read the article

  • synchronizing XML nodes between class and file using C#

    - by Sarah Vessels
    I'm trying to write an IXmlSerializable class that stays synced with an XML file. The XML file has the following format: <?xml version="1.0" encoding="utf-8" ?> <configuration> <logging> <logLevel>Error</logLevel> </logging> ...potentially other sections... </configuration> I have a DllConfig class for the whole XML file and a LoggingSection class for representing <logging> and its contents, i.e., <logLevel>. DllConfig has this property: [XmlElement(ElementName = LOGGING_TAG_NAME, DataType = "LoggingSection")] public LoggingSection Logging { get; protected set; } What I want is for the backing XML file to be updated (i.e., rewritten) when a property is set. I already have DllConfig do this when Logging is set. However, how should I go about doing this when Logging.LogLevel is set? Here's an example of what I mean: var config = new DllConfig("path_to_backing_file.xml"); config.Logging.LogLevel = LogLevel.Information; // not using Logging setter, but a // setter on LoggingSection, so how // does path_to_backing_file.xml // have its contents updated? My current solution is to have a SyncedLoggingSection class that inherits from LoggingSection and also takes a DllConfig instance in the constructor. It declares a new LogLevel that, when set, updates the LogLevel in the base class and also uses the given DllConfig to write the entire DllConfig out to the backing XML file. Is this a good technique? I don't think I can just serialize SyncedLoggingSection by itself to the backing XML file, because not all of the contents will be written, just the <logging> node. Then I'd end up with an XML file containing only the <logging> section with its updated <logLevel>, instead of the entire config file with <logLevel> updated. Hence, I need to pass an instance of DllConfig to SyncedLoggingSection. It seems almost like I want an event handler, one in DllConfig that would notice when particular properties (i.e., LogLevel) in its properties (i.e., Logging) were set. Is such a thing possible?
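
    An event along these lines is certainly possible; a rough sketch of the idea (the Changed event and saveToFile method are hypothetical, LogLevel is the enum from the question), in which the section notifies its owner and the owner rewrites the backing file:

        using System;

        public class LoggingSection
        {
            private LogLevel _logLevel;

            // Raised whenever a property of this section is set.
            public event EventHandler Changed;

            public LogLevel LogLevel
            {
                get { return _logLevel; }
                set
                {
                    _logLevel = value;
                    var handler = Changed;
                    if (handler != null) { handler(this, EventArgs.Empty); }
                }
            }
        }

        public class DllConfig
        {
            private readonly string _path;

            public LoggingSection Logging { get; protected set; }

            public DllConfig(string path)
            {
                _path = path;
                Logging = new LoggingSection();
                // Rewrite the whole backing file whenever the logging section changes.
                Logging.Changed += (sender, e) => saveToFile(_path);
            }

            private void saveToFile(string path)
            {
                // Placeholder: serialize this instance out to the backing XML file.
            }
        }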

    Read the article

  • ObjectDisposedException when outputting to console

    - by Sarah Vessels
    If I have the following code, I have no runtime or compilation problems: if (ConsoleAppBase.NORMAL_EXIT_CODE == code) { StdOut.WriteLine(msg); } else { StdErr.WriteLine(msg); } However, in trying to make this more concise, I switched to the following code: (ConsoleAppBase.NORMAL_EXIT_CODE == code ? StdOut : StdErr ).WriteLine(msg); When I have this code, I get the following exception at runtime: System.ObjectDisposedException: Cannot write to a closed TextWriter Can you explain why this happens? Can I avoid it and have more concise code like I wanted?

    Read the article

  • convincing C# compiler that execution will stop after a member returns

    - by Sarah Vessels
    I don't think this is currently possible, and I'm not sure it's even a good idea, but it's something I was thinking about just now. I use MSTest for unit testing my C# project. In one of my tests, I do the following: MyClass instance; try { instance = getValue(); } catch (MyException ex) { Assert.Fail("Caught MyException"); } instance.doStuff(); // Use of unassigned local variable 'instance' To make this code compile, I have to assign a value to instance either at its declaration or in the catch block. However, Assert.Fail will never, to the best of my knowledge, allow execution to proceed past it, hence instance will never be used without a value. Why is it then that I must assign a value to it? If I change the Assert.Fail to something like throw ex, the code compiles fine, I assume because it knows that exception will disallow execution to proceed to a point where instance would be used uninitialized. So is it a case of runtime versus compile-time knowledge about where execution will be allowed to proceed? Would it ever be reasonable for C# to have some way of saying that a member, in this case Assert.Fail, will never allow execution after it returns? Maybe that could be in the form of a method attribute. Would this be useful or an unnecessary complexity for the compiler?
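
    Definite assignment is purely a compile-time analysis, and the compiler has no way to know that Assert.Fail always throws, so the conventional workaround is simply to give the local an initial value; a sketch reusing the question's own names (MyClass, getValue and MyException come from the post):

        MyClass instance = null;   // satisfies definite assignment
        try
        {
            instance = getValue();
        }
        catch (MyException ex)
        {
            Assert.Fail("Caught MyException: " + ex.Message);
        }
        instance.doStuff();        // never reached with a null instance if Assert.Fail throws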

    Read the article

  • MSTest unit test passes by itself, fails when other tests are run

    - by Sarah Vessels
    I'm having trouble with some MSTest unit tests that pass when I run them individually but fail when I run the entire unit test class. The tests test some code that SLaks helped me with earlier, and he warned me what I was doing wasn't thread-safe. However, now my code is more complicated and I don't know how to go about making it thread-safe. Here's what I have: public static class DLLConfig { private static string _domain; public static string Domain { get { return _domain = AlwaysReadFromFile ? readCredentialFromFile(DOMAIN_TAG) : _domain ?? readCredentialFromFile(DOMAIN_TAG); } } } And my test is simple: string expected = "the value I know exists in the file"; string actual = DLLConfig.Domain; Assert.AreEqual(expected, actual); When I run this test by itself, it passes. When I run it alongside all the other tests in the test class (which perform similar checks on different properties), actual is null and the test fails. I note this is not a problem with a property whose type is a custom Enum type; maybe I'm having this problem with the Domain property because it is a string? Or maybe it's a multi-threaded issue with how MSTest works?
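
    If the failures really do come from the shared static state being read and written by several tests, one way to make the lazy read safe (a sketch, not the thread's accepted answer; the tag value and placeholder file read are illustrative) is to guard the field with a lock:

        public static class DLLConfig
        {
            private static readonly object _syncRoot = new object();
            private static string _domain;

            public static bool AlwaysReadFromFile { get; set; }

            public static string Domain
            {
                get
                {
                    lock (_syncRoot)
                    {
                        if (AlwaysReadFromFile || _domain == null)
                        {
                            _domain = readCredentialFromFile("domain");
                        }
                        return _domain;
                    }
                }
            }

            private static string readCredentialFromFile(string tag)
            {
                // Placeholder for the real credential-file read.
                return "example-" + tag;
            }
        }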

    Read the article

  • C# naming convention for extension methods for interface

    - by Sarah Vessels
    I typically name my C# interfaces as IThing. I'm creating an extension method class for IThing, but I don't know what to name it. On one hand, calling it ThingExtensions seems to imply it is an extension class to some Thing class instead of to the IThing interface. It also makes the extension class be sorted away from the interface it extends, when viewing files alphabetically. On the other hand, naming it IThingExtensions makes it look like it is an interface itself, instead of an extension class for an interface. What would you suggest?

    Read the article

  • help me "dry" out this .net XML serialization code

    - by Sarah Vessels
    I have a base collection class and a child collection class, each of which are serializable. In a test, I discovered that simply having the child class's ReadXml method call base.ReadXml resulted in an InvalidCastException later on. First, here's the class structure: Base Class // Collection of Row objects [Serializable] [XmlRoot("Rows")] public class Rows : IList<Row>, ICollection<Row>, IEnumerable<Row>, IEquatable<Rows>, IXmlSerializable { public Collection<Row> Collection { get; protected set; } public void ReadXml(XmlReader reader) { reader.ReadToFollowing(XmlNodeName); do { using (XmlReader rowReader = reader.ReadSubtree()) { var row = new Row(); row.ReadXml(rowReader); Collection.Add(row); } } while (reader.ReadToNextSibling(XmlNodeName)); } } Derived Class // Acts as a collection of SpecificRow objects, which inherit from Row. Uses the same // Collection<Row> that Rows defines which is fine since SpecificRow : Row. [Serializable] [XmlRoot("MySpecificRowList")] public class SpecificRows : Rows, IXmlSerializable { public new void ReadXml(XmlReader reader) { // Trying to just do base.ReadXml(reader) causes a cast exception later reader.ReadToFollowing(XmlNodeName); do { using (XmlReader rowReader = reader.ReadSubtree()) { var row = new SpecificRow(); row.ReadXml(rowReader); Collection.Add(row); } } while (reader.ReadToNextSibling(XmlNodeName)); } public new Row this[int index] { // The cast in this getter is what causes InvalidCastException if I try // to call base.ReadXml from this class's ReadXml get { return (Row)Collection[index]; } set { Collection[index] = value; } } } And here's the code that causes a runtime InvalidCastException if I do not use the version of ReadXml shown in SpecificRows above (i.e., I get the exception if I just call base.ReadXml from within SpecificRows.ReadXml): TextReader reader = new StringReader(serializedResultStr); SpecificRows deserializedResults = (SpecificRows)xs.Deserialize(reader); SpecificRow = deserializedResults[0]; // this throws InvalidCastException So, the code above all compiles and runs exception-free, but it bugs me that Rows.ReadXml and SpecificRows.ReadXml are essentially the same code. The value of XmlNodeName and the new Row()/new SpecificRow() are the differences. How would you suggest I extract out all the common functionality of both versions of ReadXml? Would it be silly to create some generic class just for one method? Sorry for the lengthy code samples, I just wanted to provide the reason I can't simply call base.ReadXml from within SpecificRows.
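
    One way to factor out the duplicated loop (a sketch, assuming XmlNodeName and Collection are visible to the base class) is a single generic helper in Rows, parameterized on the concrete row type:

        // Inside the Rows base class; both ReadXml implementations delegate to this.
        protected void ReadRows<TRow>(XmlReader reader) where TRow : Row, new()
        {
            reader.ReadToFollowing(XmlNodeName);
            do
            {
                using (XmlReader rowReader = reader.ReadSubtree())
                {
                    var row = new TRow();
                    row.ReadXml(rowReader);
                    Collection.Add(row);
                }
            } while (reader.ReadToNextSibling(XmlNodeName));
        }

        // Rows.ReadXml then becomes:          ReadRows<Row>(reader);
        // SpecificRows.ReadXml then becomes:  ReadRows<SpecificRow>(reader);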

    Read the article

  • help translate this week query from Oracle PL/SQL to SQL Server 2008

    - by Sarah Vessels
    I have the following query that runs in my Oracle database and I want to have the equivalent for a SQL Server 2008 database: SELECT TRUNC( /* Midnight Sunday */ NEXT_DAY(SYSDATE, 'SUN') - (7*LEVEL) ) AS week_start, TRUNC( /* 23:59:59 Saturday */ NEXT_DAY(NEXT_DAY(SYSDATE, 'SUN') - (7*LEVEL), 'SAT') + 1 ) - (1/(60*24)) + (59/(60*60*24)) AS week_end FROM DUAL CONNECT BY LEVEL <= 4 /* Get the past 4 weeks */ What the query does is get the start of the week and the end of the week for the last 4 weeks. It generates data like the following: WEEK_START WEEK_END 2010-03-07 00:00:00 2010-03-13 23:59:59 2010-02-28 00:00:00 2010-03-06 23:59:59 ...

    Read the article

  • custom attribute changes in .NET 4

    - by Sarah Vessels
    I recently upgraded a C# project from .NET 3.5 to .NET 4. I have a method that extracts all MSTest test methods from a given list of MethodBase instances. Its body looks like this: return null == methods || methods.Count() == 0 ? null : from method in methods let testAttribute = Attribute.GetCustomAttribute(method, typeof(TestMethodAttribute)) where null != testAttribute select method; This worked in .NET 3.5, but since upgrading my projects to .NET 4, this code always returns an empty list, even when given a list of methods containing a method that is marked with [TestMethod]. Did something change with custom attributes in .NET 4? Debugging, I found that the results of GetCustomAttributesData() on the test method gives a list of two CustomAttributeData which are described in Visual Studio 2010's 'Locals' window as: Microsoft.VisualStudio.TestTools.UnitTesting.DeploymentItemAttribute("myDLL.dll") Microsoft.VisualStudio.TestTools.UnitTesting.TestMethodAttribute() -- this is what I'm looking for When I call GetType() on that second CustomAttributeData instance, however, I get {Name = "CustomAttributeData" FullName = "System.Reflection.CustomAttributeData"} System.Type {System.RuntimeType}. How can I get TestMethodAttribute out of the CustomAttributeData, so that I can extract test methods from a list of MethodBases?
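
    One workaround when the materialized attribute cannot be matched by type identity (a sketch, not necessarily the root cause here) is to filter on the attribute metadata instead, comparing the attribute's full name so the exact assembly version of the test framework does not matter:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Reflection;

        public static class TestMethodFinder
        {
            // Filters MethodBase instances to those decorated with an attribute whose
            // full name matches MSTest's TestMethodAttribute.
            public static IEnumerable<MethodBase> FindTestMethods(IEnumerable<MethodBase> methods)
            {
                if (methods == null)
                {
                    return Enumerable.Empty<MethodBase>();
                }
                return methods.Where(method =>
                    CustomAttributeData.GetCustomAttributes(method).Any(data =>
                        data.Constructor.DeclaringType.FullName ==
                        "Microsoft.VisualStudio.TestTools.UnitTesting.TestMethodAttribute"));
            }
        }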

    Read the article

  • DRYing out implementation of ICloneable in several classes

    - by Sarah Vessels
    I have several different classes that I want to be cloneable: GenericRow, GenericRows, ParticularRow, and ParticularRows. There is the following class hierarchy: GenericRow is the parent of ParticularRow, and GenericRows is the parent of ParticularRows. Each class implements ICloneable because I want to be able to create deep copies of instances of each. I find myself writing the exact same code for Clone() in each class: object ICloneable.Clone() { object clone; using (var stream = new MemoryStream()) { var formatter = new BinaryFormatter(); // Serialize this object formatter.Serialize(stream, this); stream.Position = 0; // Deserialize to another object clone = formatter.Deserialize(stream); } return clone; } I then provide a convenience wrapper method, for example in GenericRows: public GenericRows Clone() { return (GenericRows)((ICloneable)this).Clone(); } I am fine with the convenience wrapper methods looking about the same in each class because it's very little code and it does differ from class to class by return type, cast, etc. However, ICloneable.Clone() is identical in all four classes. Can I abstract this somehow so it is only defined in one place? My concern was that if I made some utility class/object extension method, it would not correctly make a deep copy of the particular instance I want copied. Is this a good idea anyway?
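
    The serialization round-trip can live in one place as a generic extension method, so each class's Clone implementation becomes a one-liner; a sketch (DeepClone is a hypothetical helper name, and it still requires every class in the graph to be [Serializable]):

        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        public static class CloneExtensions
        {
            // Serializes the object graph to a memory stream and back to produce a deep copy.
            public static T DeepClone<T>(this T source)
            {
                using (var stream = new MemoryStream())
                {
                    var formatter = new BinaryFormatter();
                    formatter.Serialize(stream, source);
                    stream.Position = 0;
                    return (T)formatter.Deserialize(stream);
                }
            }
        }

        // Usage inside GenericRows, for example:
        //   object ICloneable.Clone() { return this.DeepClone(); }
        //   public GenericRows Clone() { return this.DeepClone(); }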

    Read the article

  • Help me clean up this crazy lambda with the out keyword

    - by Sarah Vessels
    My code looks ugly, and I know there's got to be a better way of doing what I'm doing: private delegate string doStuff( PasswordEncrypter encrypter, RSAPublicKey publicKey, string privateKey, out string salt ); private bool tryEncryptPassword( doStuff encryptPassword, out string errorMessage ) { ...get some variables... string encryptedPassword = encryptPassword(encrypter, publicKey, privateKey, out salt); ... } This stuff so far doesn't bother me. It's how I'm calling tryEncryptPassword that looks so ugly, and has duplication because I call it from two methods: public bool method1(out string errorMessage) { string rawPassword = "foo"; return tryEncryptPassword( (PasswordEncrypter encrypter, RSAPublicKey publicKey, string privateKey, out string salt) => encrypter.EncryptPasswordAndDoStuff( // Overload 1 rawPassword, publicKey, privateKey, out salt ), out errorMessage ); } public bool method2(SecureString unencryptedPassword, out string errorMessage) { return tryEncryptPassword( (PasswordEncrypter encrypter, RSAPublicKey publicKey, string privateKey, out string salt) => encrypter.EncryptPasswordAndDoStuff( // Overload 2 unencryptedPassword, publicKey, privateKey, out salt ), out errorMessage ); } Two parts to the ugliness: I have to explicitly list all the parameter types in the lambda expression because of the single out parameter. The two overloads of EncryptPasswordAndDoStuff take all the same parameters except for the first parameter, which can either be a string or a SecureString. So method1 and method2 are pretty much identical, they just call different overloads of EncryptPasswordAndDoStuff. Any suggestions? Edit: if I apply Jeff's suggestions, I do the following call in method1: return tryEncryptPassword( (encrypter, publicKey, privateKey) => { var result = new EncryptionResult(); string salt; result.EncryptedValue = encrypter.EncryptPasswordAndDoStuff( rawPassword, publicKey, privateKey, out salt ); result.Salt = salt; return result; }, out errorMessage ); Much the same call is made in method2, just with a different first value to EncryptPasswordAndDoStuff. This is an improvement, but it still seems like a lot of duplicated code.

    Read the article

  • should I include VB macros in source control with my project?

    - by Sarah Vessels
    For a C# project, I make use of several Visual Basic macros in Visual Studio. I was just considering that these would be of use to other developers that work on the C# project. The macros so far include removing trailing whitespace on save, organizing using directives and removing unnecessary ones, and an override for Ctrl-M Ctrl-O that expands regions. Would it be reasonable for me to include this macro code with my C# project in Subversion? I don't know if it's even possible for macros to be made available/work in Visual Studio just because you open a particular Solution file, and that might be too invasive since some of the macros override existing VS behavior.

    Read the article

  • nullable type and a ReSharper warning

    - by Sarah Vessels
    I have the following code: private static LogLevel? _logLevel = null; public static LogLevel LogLevel { get { if (!_logLevel.HasValue) { _logLevel = readLogLevelFromFile(); } return _logLevel.Value; } } private static LogLevel readLogLevelFromFile() { ... } I get a ReSharper warning on the return statement about a possible System.InvalidOperationException and it suggests I check _logLevel to see if it is null first. However, readLogLevelFromFile returns LogLevel, not LogLevel?, so there is no way the return statement could be reached when _logLevel is null. Is this just an oversight by ReSharper, or am I missing something?
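
    ReSharper treats mutable fields conservatively: between the HasValue check and the .Value read, the static field could in principle be set back to null (for example by another thread), so the analysis does not carry the check forward. Copying the field into a local first typically removes the warning; a sketch using the question's members:

        public static LogLevel LogLevel
        {
            get
            {
                LogLevel? level = _logLevel;          // snapshot the field into a local
                if (!level.HasValue)
                {
                    level = readLogLevelFromFile();   // non-nullable return, so level now has a value
                    _logLevel = level;
                }
                return level.Value;                   // the local is provably non-null here
            }
        }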

    Read the article

  • using yield in C# like I would in Ruby

    - by Sarah Vessels
    Besides just using yield for iterators in Ruby, I also use it to pass control briefly back to the caller before resuming control in the called method. What I want to do in C# is similar. In a test class, I want to get a connection instance, create another variable instance that uses that connection, then pass the variable to the calling method so it can be fiddled with. I then want control to return to the called method so that the connection can be disposed. I guess I'm wanting a block/closure like in Ruby. Here's the general idea: private static MyThing getThing() { using (var connection = new Connection()) { yield return new MyThing(connection); } } [TestMethod] public void MyTest1() { // call getThing(), use yielded MyThing, control returns to getThing() // for disposal } [TestMethod] public void MyTest2() { // call getThing(), use yielded MyThing, control returns to getThing() // for disposal } ... This doesn't work in C#; ReSharper tells me that the body of getThing cannot be an iterator block because MyThing is not an iterator interface type. That's definitely true, but I don't want to iterate through some list. I'm guessing I shouldn't use yield if I'm not working with iterators. Any idea how I can achieve this block/closure thing in C# so I don't have to wrap my code in MyTest1, MyTest2, ... with the code in getThing()'s body?
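
    C#'s yield only builds iterators, so the closest analogue of a Ruby block here is to pass the test body in as a delegate and let the helper own the using block; a sketch that mirrors the question's names (Connection and MyThing are the question's types):

        // The helper owns the connection's lifetime; callers supply the "block".
        private static void withThing(Action<MyThing> block)
        {
            using (var connection = new Connection())
            {
                block(new MyThing(connection));
            }
        }

        [TestMethod]
        public void MyTest1()
        {
            withThing(thing =>
            {
                // Use the MyThing here; the connection is disposed when the lambda returns.
                Assert.IsNotNull(thing);
            });
        }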

    Read the article

  • representing an XML config file with an IXmlSerializable class

    - by Sarah Vessels
    I'm writing in C# and trying to represent an XML config file through an IXmlSerializable class. I'm unsure how to represent the nested elements in my config file, though, such as logLevel: <?xml version="1.0" encoding="utf-8" ?> <configuration> <logging> <logLevel>Error</logLevel> </logging> <credentials> <user>user123</user> <host>localhost</host> <password>pass123</password> </credentials> <credentials> <user>user456</user> <host>my.other.domain.com</host> <password>pass456</password> </credentials> </configuration> There is an enum called LogLevel that represents all the possible values for the logLevel tag. The tags within credentials should all come out as strings. In my class, called DLLConfigFile, I had the following: [XmlElement(ElementName="logLevel", DataType="LogLevel")] public LogLevel LogLevel; However, this isn't going to work because <logLevel> isn't within the root node of the XML file, it's one node deeper in <logging>. How do I go about doing this? As for the <credentials> nodes, my guess is I will need a second class, say CredentialsSection, and have a property such as the following: [XmlElement(ElementName="credentials", DataType="CredentialsSection")] public CredentialsSection[] AllCredentials; Edit: okay, I tried Robert Love's suggestion and created a LoggingSection class. However, my test fails: var xs = new XmlSerializer(typeof(DLLConfigFile)); using (var stream = new FileStream(_configPath, FileMode.Open, FileAccess.Read, FileShare.Read)) { using (var streamReader = new StreamReader(stream)) { XmlReader reader = new XmlTextReader(streamReader); var file = (DLLConfigFile)xs.Deserialize(reader); Assert.IsNotNull(file); LoggingSection logging = file.Logging; Assert.IsNotNull(logging); // fails here LogLevel logLevel = logging.LogLevel; Assert.IsNotNull(logLevel); Assert.AreEqual(EXPECTED_LOG_LEVEL, logLevel); } } The config file I'm testing with definitely has <logging>. Here's what the classes look like: [Serializable] [XmlRoot("logging")] public class LoggingSection : IXmlSerializable { public XmlSchema GetSchema() { return null; } [XmlElement(ElementName="logLevel", DataType="LogLevel")] public LogLevel LogLevel; public void ReadXml(XmlReader reader) { LogLevel = (LogLevel)Enum.Parse(typeof(LogLevel), reader.ReadString()); } public void WriteXml(XmlWriter writer) { writer.WriteString(Enum.GetName(typeof(LogLevel), LogLevel)); } } [Serializable] [XmlRoot("configuration")] public class DLLConfigFile : IXmlSerializable { [XmlElement(ElementName="logging", DataType="LoggingSection")] public LoggingSection Logging; }

    Read the article
