Search Results

Search found 1030 results on 42 pages for 'refactoring'.

Page 13 of 42

  • Optional Member Objects

    - by David Relihan
    Okay, so you have a load of methods sprinkled around your system's main class. So you do the right thing and refactor by creating a new class and moving those methods into it. The new class has a single responsibility and all is right with the world again:

        class Feature
        {
        public:
            Feature() {}
            void doSomething();
            void doSomething1();
            void doSomething2();
        };

    So now your original class has a member variable of that type:

        Feature _feature;

    which you will call from the main class. Now if you do this many times, you will have many member objects in your main class. These features may or may not be required based on configuration, so in a way it's costly holding all these objects that may not be needed. Can anyone suggest a way of improving this? At the moment I plan to test in the newly created class whether the feature is enabled - so when a call is made to a method, it will return immediately if the feature is not enabled. I could hold a pointer to the object and only call new if the feature is enabled - but then I would have to test the pointer before every call, which would be potentially dangerous and not very readable. Would holding an auto_ptr to the object improve things:

        auto_ptr<Feature> feature;

    or am I still paying the cost of object construction even though the object may or may not be required? BTW - I don't think this is premature optimisation - I just want to consider the possibilities.

    Read the article

  • How can this verbose, unpythonic routine be improved?

    - by fmark
    Is there a more pythonic way of doing this? I am trying to find the eight neighbours of an integer coordinate lying within an extent. I am interested in reducing its verbosity without sacrificing execution speed.

        def fringe8((px, py), (x1, y1, x2, y2)):
            f = [(px - 1, py - 1), (px - 1, py), (px - 1, py + 1),
                 (px, py - 1), (px, py + 1),
                 (px + 1, py - 1), (px + 1, py), (px + 1, py + 1)]
            f_inrange = []
            for fx, fy in f:
                if fx < x1:
                    continue
                if fx >= x2:
                    continue
                if fy < y1:
                    continue
                if fy >= y2:
                    continue
                f_inrange.append((fx, fy))
            return f_inrange

    Read the article

  • How could I refactor this into more manageable code?

    - by ChaosPandion
        private static JsonStructure Parse(string jsonText, bool throwException)
        {
            var result = default(JsonStructure);
            var structureStack = new Stack<JsonStructure>();
            var keyStack = new Stack<string>();
            var current = default(JsonStructure);
            var currentState = ParserState.Begin;
            var invalidToken = false;
            var key = default(string);
            var value = default(object);

            foreach (var token in Lexer.Tokenize(jsonText))
            {
                switch (currentState)
                {
                    case ParserState.Begin:
                        switch (token.Type)
                        {
                            case TokenType.OpenBrace:
                                currentState = ParserState.ObjectKey;
                                current = result = new JsonObject();
                                break;
                            case TokenType.OpenBracket:
                                currentState = ParserState.ArrayValue;
                                current = result = new JsonArray();
                                break;
                            default:
                                invalidToken = true;
                                break;
                        }
                        break;
                    case ParserState.ObjectKey:
                        switch (token.Type)
                        {
                            case TokenType.StringLiteral:
                                currentState = ParserState.ColonSeperator;
                                key = (string)token.Value;
                                break;
                            default:
                                invalidToken = true;
                                break;
                        }
                        break;
                    case ParserState.ColonSeperator:
                        switch (token.Type)
                        {
                            case TokenType.Colon:
                                currentState = ParserState.ObjectValue;
                                break;
                            default:
                                invalidToken = true;
                                break;
                        }
                        break;
                    case ParserState.ObjectValue:
                    case ParserState.ArrayValue:
                        switch (token.Type)
                        {
                            case TokenType.NumberLiteral:
                            case TokenType.StringLiteral:
                            case TokenType.BooleanLiteral:
                            case TokenType.NullLiteral:
                                currentState = ParserState.ItemEnd;
                                value = token.Value;
                                break;
                            case TokenType.OpenBrace:
                                structureStack.Push(current);
                                keyStack.Push(key);
                                currentState = ParserState.ObjectKey;
                                current = new JsonObject();
                                break;
                            case TokenType.OpenBracket:
                                structureStack.Push(current);
                                currentState = ParserState.ArrayValue;
                                current = new JsonArray();
                                break;
                            default:
                                invalidToken = true;
                                break;
                        }
                        break;
                    case ParserState.ItemEnd:
                        var jsonObject = (current as JsonObject);
                        if (jsonObject != null)
                        {
                            jsonObject.Add(key, value);
                            currentState = ParserState.ObjectKey;
                        }
                        var jsonArray = (current as JsonArray);
                        if (jsonArray != null)
                        {
                            jsonArray.Add(value);
                            currentState = ParserState.ArrayValue;
                        }
                        switch (token.Type)
                        {
                            case TokenType.CloseBrace:
                            case TokenType.CloseBracket:
                                currentState = ParserState.End;
                                break;
                            case TokenType.Comma:
                                break;
                            default:
                                invalidToken = true;
                                break;
                        }
                        break;
                    case ParserState.End:
                        switch (token.Type)
                        {
                            case TokenType.CloseBrace:
                            case TokenType.CloseBracket:
                            case TokenType.Comma:
                                var previous = structureStack.Pop();
                                var previousJsonObject = (previous as JsonObject);
                                if (previousJsonObject != null)
                                {
                                    currentState = ParserState.ObjectKey;
                                    previousJsonObject.Add(keyStack.Pop(), current);
                                }
                                var previousJsonArray = (previous as JsonArray);
                                if (previousJsonArray != null)
                                {
                                    currentState = ParserState.ArrayValue;
                                    previousJsonArray.Add(current);
                                }
                                current = previous;
                                if (token.Type != TokenType.Comma)
                                {
                                    currentState = ParserState.End;
                                }
                                break;
                            default:
                                invalidToken = true;
                                break;
                        }
                        break;
                    default:
                        break;
                }

                if (invalidToken)
                {
                    if (throwException)
                    {
                        throw new JsonException(token);
                    }
                    return null;
                }
            }
            return result;
        }
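    A rough sketch of one possible direction (not the author's code): give each ParserState its own small handler method and dispatch through a table, so the outer switch disappears. The Token type name, the ParserContext holder and its PushNew/CurrentKey members are invented for illustration; ParserState, TokenType, JsonObject, JsonArray and JsonException are taken from the code above, and the throwException flag is omitted for brevity.

        // Sketch: table-driven dispatch, one small handler per parser state.
        private delegate ParserState StateHandler(Token token, ParserContext context);

        private static readonly Dictionary<ParserState, StateHandler> Handlers =
            new Dictionary<ParserState, StateHandler>
            {
                { ParserState.Begin,     HandleBegin },
                { ParserState.ObjectKey, HandleObjectKey },
                // ...one entry per remaining state
            };

        private static ParserState HandleBegin(Token token, ParserContext context)
        {
            switch (token.Type)
            {
                case TokenType.OpenBrace:
                    context.PushNew(new JsonObject());  // hypothetical helper: tracks result/current/stacks
                    return ParserState.ObjectKey;
                case TokenType.OpenBracket:
                    context.PushNew(new JsonArray());
                    return ParserState.ArrayValue;
                default:
                    throw new JsonException(token);
            }
        }

        private static ParserState HandleObjectKey(Token token, ParserContext context)
        {
            if (token.Type != TokenType.StringLiteral)
                throw new JsonException(token);
            context.CurrentKey = (string)token.Value;   // hypothetical property on the context holder
            return ParserState.ColonSeperator;
        }

        // The main loop then shrinks to:
        // foreach (var token in Lexer.Tokenize(jsonText))
        //     currentState = Handlers[currentState](token, context);

    Each handler stays a few lines long, and adding a new state no longer grows a single very long method.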

    Read the article

  • How to Elegantly convert switch+enum with polymorphism

    - by Kyle
    I'm trying to replace simple enums with type classes - that is, one class derived from a base for each type. So for example, instead of:

        enum E_BASE { EB_ALPHA, EB_BRAVO };

        E_BASE message = someMessage();
        switch (message)
        {
        case EB_ALPHA: applyAlpha(); break;
        case EB_BRAVO: applyBravo(); break;
        }

    I want to do this:

        Base* message = someMessage();
        message->apply(this); // use polymorphism to determine what function to call

    I have seen many ways to do this, which all seem less elegant even than the basic switch statement: using dynamic_pointer_cast, inheriting from a messageHandler class that needs to be updated every time a new message is added, using a container of function pointers - all seem to defeat the purpose of making code easier to maintain by replacing switches with polymorphism. This is as close as I can get (I use templates to avoid inheriting from an all-knowing handler interface):

        class Base {
        public:
            template<typename T> virtual void apply(T* sandbox) = 0;
        };

        class Alpha : public Base {
        public:
            template<typename T> virtual void apply(T* sandbox) { sandbox->applyAlpha(); }
        };

        class Bravo : public Base {
        public:
            template<typename T> virtual void apply(T* sandbox) { sandbox->applyBravo(); }
        };

        class Sandbox {
        public:
            void run()
            {
                Base* alpha = new Alpha;
                Base* bravo = new Bravo;
                alpha->apply(this);
                bravo->apply(this);
                delete alpha;
                delete bravo;
            }
            void applyAlpha()
            {
                // cout << "Applying alpha\n";
            }
            void applyBravo()
            {
                // cout << "Applying bravo\n";
            }
        };

    Obviously, this doesn't compile (member function templates can't be virtual), but I'm hoping it gets my problem across.

    Read the article

  • can you simplify and generalize this useful jQuery function?

    - by user199368
    Hi, I'm doing an e-shop with goods displayed as "tiles" in a grid, as usual. I just want to use various sizes of tiles and make sure (via jQuery) there are no free spaces. In the basic situation, I have a 960px wrapper and want to use 240x180px (class .grid_4) tiles and 480x360px (class .grid_8) tiles. See the image (imagine no margins/paddings there). Problems without jQuery:

    - when the CMS provides the big tile as 6th, there would be a free space under the 5th one
    - when the CMS provides the big tile as 7th, there would be a free space under the 5th and 6th
    - when the CMS provides the big tile as 8th, it would shift to the next line, leaving position no. 8 free

    My solution so far looks like this:

        $(".grid_8").each(function(){
            //console.log("BIG on position "+($(this).index()+1)+" which is "+(($(this).index()+1)%2?"ODD":"EVEN"));
            switch (($(this).index()+1)%4) {
                case 1:
                    // nothing needed
                    //console.log("case 1");
                    break;
                case 2:
                    // need to shift one position and wrap into 240px div
                    //console.log("case 2");
                    $(this).insertAfter($(this).next()); // swaps this with next
                    $(this).prevAll(":nth(0), :nth(1)").wrapAll("<div class=\"grid_4\" />");
                    break;
                case 3:
                    // need to shift two positions and wrap into 480px div
                    //console.log("case 3");
                    $(this).prevAll(":nth(0), :nth(1)").wrapAll("<div class=\"grid_4\" />"); // wraps previous two - forcing them into column
                    $(this).nextAll(":nth(0), :nth(1)").wrapAll("<div class=\"grid_4\" />"); // wraps next two - forcing them into column
                    $(this).insertAfter($(this).next()); // moves behind the second column
                    break;
                case 0:
                    // need to shift one position
                    //console.log("case 4");
                    $(this).insertAfter($(this).next());
                    //console.log("shifted to next line");
                    break;
            }
        });

    It should be obvious from the comments how it works - generally it always makes sure that the big tile is on an odd position (the count of preceding small tiles is even) by shifting one position back if needed. Also, small tiles to the left of the big one need to be wrapped in another div so that they appear in a column rather than a row. Now finally the questions:

    - How can I generalize the function so that I can use even more tile dimensions, like 720x360 (3x2), 480x540 (2x3), etc.?
    - Is there a way to simplify the function? I need to make sure that the big tile counts as a multiple of small tiles when checking the actual position, because using index() on the tile at position 12 (the last tile in the 3rd row) would now return 7 (position 8): tiles at positions 5 and 9 are wrapped together in one column, and the big tile is also just a single div, but spans 2x2 positions. Any clean way to ensure this?

    Thank you very much for any hints. Feel free to reuse the code, I think it can be useful. Josef

    Read the article

  • Benefits of migrating my work to a new web development framework?

    - by John
    When I first started programming with PHP, I was ignorant of other PHP frameworks (like CodeIgniter, CakePHP, etc.). So I fell into the trap of re-inventing wheels, which had the benefit of being "fun" and "educational". Over time, I discovered other open source products that I found useful, like the Smarty templating engine, the jQuery library, the TCPDF library, FDF, etc., so I started bundling these technologies along with things I've built over the years into a LAMP development framework to make life easier for myself. This past year, I've been having fun developing on the CodeIgniter framework. It does many of the things I do in my framework. Coding in CI feels natural because its MVC and ORM feel similar to the MVC and ORM of my framework. So now I'm contemplating migrating a lot of the plugins in my framework over to CI. The pros and cons I can think of for such a project are:

    Pros:
    - benefit from the vast community of CI developers
    - lots of other developers will be familiar with it
    - better documentation

    Cons:
    - I've built a lot of useful plugins against my own framework, and it will take a lot of time to move even just the essential ones
    - at the moment, I still work faster against my own framework than CI, just because I'm more familiar with it
    - even if I did migrate to CI, there will always be newer and better frameworks in the near future, and I'll be contemplating this scenario again

    So my question is the following: perhaps I should leave my old framework as is, and for each new project I receive, make a decision on whether the requirements are best served by developing with CI or with my own framework. Is this the right approach?

    Read the article

  • remove duplicate code in java

    - by Anantha Kumaran
        class A extends ApiClass {
            public void duplicateMethod() {
            }
        }

        class B extends AnotherApiClass {
            public void duplicateMethod() {
            }
        }

    I have two classes which extend different API classes. The two classes have some duplicate methods (the same method repeated in both classes). How do I remove this duplication?

    Edit: The ApiClass is not under my control.

    Read the article

  • What ReSharper 4.0 live templates for C# do you use?

    - by Rinat Abdullin
    What ReSharper 4.0 templates for C# do you use? Let's share these in the following format:

        [Title]
        Optional description
        Shortcut: shortcut
        Available in: [AvailabilitySetting]

        // Resharper template code snippet
        // comes here

        Macros properties (if present):
        Macro1 - Value - EditableOccurence
        Macro2 - Value - EditableOccurence

    One macro per answer, please! Here are some samples for an NUnit test fixture and a standalone NUnit test case that describe live templates in the suggested format.

    Read the article

  • Moqing a static method call to a C# library class

    - by Joe
    This seems like an easy enough issue, but I can't seem to find the right keywords for my searches. I'm trying to unit test by mocking out all objects within this method call. I am able to do so for all of my own creations except for this one:

        public void MyFunc(MyVarClass myVar)
        {
            Image picture;
            ...
            picture = Image.FromStream(new MemoryStream(myVar.ImageStream));
            ...
        }

    FromStream is a static call on the Image class (part of the .NET framework). So how can I refactor my code to mock this out, because I really don't want to provide an image stream to the unit test?
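    A common way to make a static framework call like this testable, sketched below rather than taken from the article, is to hide it behind a small interface that the class under test depends on: the production implementation is the only code that touches Image.FromStream, and the unit test substitutes a mock. The IImageLoader, GdiImageLoader and MyClass names are made up for the sketch; MyVarClass.ImageStream is assumed to be a byte array, as the original call to new MemoryStream(...) suggests.

        using System.Drawing;
        using System.IO;

        // Sketch: wrap the static Image.FromStream call so tests can mock it away.
        public interface IImageLoader
        {
            Image FromBytes(byte[] imageBytes);
        }

        public class GdiImageLoader : IImageLoader
        {
            public Image FromBytes(byte[] imageBytes)
            {
                // The only place that touches the static framework API.
                return Image.FromStream(new MemoryStream(imageBytes));
            }
        }

        public class MyClass
        {
            private readonly IImageLoader _imageLoader;

            public MyClass(IImageLoader imageLoader)
            {
                _imageLoader = imageLoader;
            }

            public void MyFunc(MyVarClass myVar)
            {
                // In a test, _imageLoader is a mock, so no real image data is needed.
                Image picture = _imageLoader.FromBytes(myVar.ImageStream);
                // ...
            }
        }

    With Moq, the test would pass a Mock<IImageLoader> and never build a real image stream.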

    Read the article

  • Why Does This Maintainability Index Increase?

    - by Timothy
    I would be appreciative if someone could explain to me the difference between the following two pieces of code in terms of Visual Studio's Code Metrics rules. Why does the Maintainability Index increase slightly if I don't encapsulate everything within using ( )?

    Sample 1 (MI score of 71)

        public static String Sha1(String plainText)
        {
            using (SHA1Managed sha1 = new SHA1Managed())
            {
                Byte[] text = Encoding.Unicode.GetBytes(plainText);
                Byte[] hashBytes = sha1.ComputeHash(text);
                return Convert.ToBase64String(hashBytes);
            }
        }

    Sample 2 (MI score of 73)

        public static String Sha1(String plainText)
        {
            Byte[] text, hashBytes;
            using (SHA1Managed sha1 = new SHA1Managed())
            {
                text = Encoding.Unicode.GetBytes(plainText);
                hashBytes = sha1.ComputeHash(text);
            }
            return Convert.ToBase64String(hashBytes);
        }

    I understand metrics are meaningless outside of a broader context and understanding, and programmers should exercise discretion. While I could boost the score up to 76 with return Convert.ToBase64String(sha1.ComputeHash(Encoding.Unicode.GetBytes(plainText))), I shouldn't. I would clearly be just playing with numbers and it isn't truly any more readable or maintainable at that point. I am curious though as to what the logic might be behind the increase in this case. It's obviously not line count.

    Read the article

  • vestal_versions and htmldiff question of reversion...

    - by holden
    I'm guessing there's probably an easier way to do what I'm doing so that the code is less unwieldy. I had trouble understanding how to use the revert_to method... I wanted something where I could call up two different versions at the same time, but this doesn't seem to be the way that vestal_versions works. This code works, but I'm wondering if I'm making something harder than it needs to be, and I'd like to find out before I delve deeper.

        @article = Article.find(params[:id])
        if params[:versions]
          v = params[:versions].split(',')
          @article.revert_to(v.first.to_i)
          @content1 = @article.content
          @article.revert_to(v.last.to_i)
          @content2 = @article.content
        end

    In case you're wondering, I'm using this in conjunction with HTMLDIFF to get the version changes.

        <div id="content">
          <% if params[:versions] %>
            <%= Article.diff(@content1, @content2) %>
          <% else %>
            <%= @article.content %>
          <% end %>
        </div>

    Read the article

  • are projects with a high developer turnover rate really a bad thing?

    - by John
    I've inherited a lot of web projects that experienced high developer turnover rates. Sometimes these web projects are a horrible patchwork of band-aid solutions. Other times they can be somewhat maintainable mosaics of half-done features, each built with a different architectural style. Every time I inherit these projects, I wish the previous developers could explain to me why things got so bad. What puzzles me is the reaction of the owners (either a manager, a middle-man company, or a client). They seem to think, "Well, if you leave, I'll just find another developer." Or they think, "Oh, it costs that much money to refactor the system? I know another developer who can do it at half the price. I'll hire him if I can't afford you." I'm guessing that the high developer turnover rate is related to the owner's mentality of "If you think it's a bad idea to build this, I'll just find another (possibly cheaper) developer to do what I want." For the owners, the approach seems to work because their business is thriving. Unfortunately, it's no fun for the developers who go AWOL 3-4 months after working with poor code, strict timelines, and little feedback. So my question is the following: are the following symptoms of a project really such a bad thing for business?

    - high developer turnover rate
    - poorly built technology - often a patchwork of different and inappropriately used architectural styles
    - owners without a clear roadmap for their web project, who request features on a whim

    I've seen numerous businesses prosper while experiencing the symptoms above. So as a programmer, even though my instincts tell me the above points are terrible, I'm forced to take a step back and ask, "are things really that bad in the grand scheme of things?" If not, I will re-evaluate my approach to these projects.

    Read the article

  • Why does .NET Boolean have both TrueLiteral and TrueString?

    - by user309937
    Why does the Boolean type have two fields with the same value?

        internal const int True = 1;
        internal const int False = 0;
        internal const string TrueLiteral = "True";
        internal const string FalseLiteral = "False";

    and

        public static readonly string TrueString;
        public static readonly string FalseString;

        static Boolean()
        {
            TrueString = "True";
            FalseString = "False";
        }

    In the Reflector-generated code, the methods don't use those strings but:

        public string ToString(IFormatProvider provider)
        {
            if (!this)
            {
                return "False";
            }
            return "True";
        }

    Wouldn't it be better to use those const values?
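    For what it's worth, only TrueString and FalseString are part of the public surface; a trivial standalone check (not from the question) confirms they match what ToString() hard-codes:

        using System;

        class TrueStringCheck
        {
            static void Main()
            {
                // Both lines print "True": the public readonly field and the literal ToString() returns.
                Console.WriteLine(bool.TrueString);
                Console.WriteLine(true.ToString());
            }
        }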

    Read the article

  • Proactively using 'lines of code' (LOC) metric in your software-development process?

    - by manuel aldana
    Hi there, I find the LOC (lines of code) metric a simple but nice metric for getting an overview of software codebase complexity (see also the blog entry 'implications of lines-of-code'). I wondered how many of you out there are using this metric as a central part of retrospectives (for removing unused functionality or dead code). I think creating awareness that more lines of code mean more complexity in maintenance and extension is valuable.

    Read the article

  • help me "dry" out this .net XML serialization code

    - by Sarah Vessels
    I have a base collection class and a child collection class, each of which is serializable. In a test, I discovered that simply having the child class's ReadXml method call base.ReadXml resulted in an InvalidCastException later on. First, here's the class structure:

    Base Class

        // Collection of Row objects
        [Serializable]
        [XmlRoot("Rows")]
        public class Rows : IList<Row>, ICollection<Row>, IEnumerable<Row>, IEquatable<Rows>, IXmlSerializable
        {
            public Collection<Row> Collection { get; protected set; }

            public void ReadXml(XmlReader reader)
            {
                reader.ReadToFollowing(XmlNodeName);
                do
                {
                    using (XmlReader rowReader = reader.ReadSubtree())
                    {
                        var row = new Row();
                        row.ReadXml(rowReader);
                        Collection.Add(row);
                    }
                } while (reader.ReadToNextSibling(XmlNodeName));
            }
        }

    Derived Class

        // Acts as a collection of SpecificRow objects, which inherit from Row. Uses the same
        // Collection<Row> that Rows defines, which is fine since SpecificRow : Row.
        [Serializable]
        [XmlRoot("MySpecificRowList")]
        public class SpecificRows : Rows, IXmlSerializable
        {
            public new void ReadXml(XmlReader reader)
            {
                // Trying to just do base.ReadXml(reader) causes a cast exception later
                reader.ReadToFollowing(XmlNodeName);
                do
                {
                    using (XmlReader rowReader = reader.ReadSubtree())
                    {
                        var row = new SpecificRow();
                        row.ReadXml(rowReader);
                        Collection.Add(row);
                    }
                } while (reader.ReadToNextSibling(XmlNodeName));
            }

            public new SpecificRow this[int index]
            {
                // The cast in this getter is what causes InvalidCastException if I try
                // to call base.ReadXml from this class's ReadXml
                get { return (SpecificRow)Collection[index]; }
                set { Collection[index] = value; }
            }
        }

    And here's the code that causes a runtime InvalidCastException if I do not use the version of ReadXml shown in SpecificRows above (i.e., I get the exception if I just call base.ReadXml from within SpecificRows.ReadXml):

        TextReader reader = new StringReader(serializedResultStr);
        SpecificRows deserializedResults = (SpecificRows)xs.Deserialize(reader);
        SpecificRow firstRow = deserializedResults[0]; // this throws InvalidCastException

    So, the code above all compiles and runs exception-free, but it bugs me that Rows.ReadXml and SpecificRows.ReadXml are essentially the same code. The value of XmlNodeName and the new Row()/new SpecificRow() are the differences. How would you suggest I extract out all the common functionality of both versions of ReadXml? Would it be silly to create some generic class just for one method? Sorry for the lengthy code samples, I just wanted to provide the reason I can't simply call base.ReadXml from within SpecificRows.
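    One way to collapse the two ReadXml methods, offered here only as a sketch, is to move the shared loop into a protected helper on Rows that takes a factory for the concrete row type. It assumes XmlNodeName is a member both classes already provide (both versions reference it) and that calling ReadXml through the Row base type is sufficient for both row types:

        // Sketch: the loop lives once in the base class; each collection supplies its row factory.
        protected void ReadRows<TRow>(XmlReader reader, Func<TRow> createRow) where TRow : Row
        {
            reader.ReadToFollowing(XmlNodeName);
            do
            {
                using (XmlReader rowReader = reader.ReadSubtree())
                {
                    TRow row = createRow();
                    row.ReadXml(rowReader);
                    Collection.Add(row);
                }
            } while (reader.ReadToNextSibling(XmlNodeName));
        }

        // Rows.ReadXml becomes:
        public void ReadXml(XmlReader reader)
        {
            ReadRows(reader, () => new Row());
        }

        // SpecificRows.ReadXml becomes:
        public new void ReadXml(XmlReader reader)
        {
            ReadRows(reader, () => new SpecificRow());
        }

    No extra generic class is needed; only the element type being constructed differs between the two call sites.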

    Read the article

  • Automate refactor import/using directives, using ReSharper and Visual Studio 2010

    - by Mendy
    I want to automate Visual Studio 2010 / ReSharper 5 so that, when import directives are auto-inserted, my internal namespaces are placed inside the namespace block. Like this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using StructureMap;
        using MyProject.Core;        // <--- Move inside.
        using MyProject.Core.Common; // <--- Move inside.

        namespace MyProject.DependencyResolution
        {
            using Core;
            using Core.Common;       // <--- My internal namespaces to be here!

            public class DependencyRegistrar
            {
                ...........
            }
        }

    Read the article

  • Safest way to change variable names in a project

    - by kamziro
    So I've been working on a relatively large project by myself, and I've come to realise that some of the variable names earlier on were... less than ideal. But how does one change variable names in a project easily? Is there such a tool that can go through a project directory, parse all the files, and then replace the variable names with the desired ones? It has to be smart enough to understand the language, I imagine. I was thinking of using regexp tools (sed/awk on Linux?) to just replace the variable name, but there were many times where my particular variable is also included as part of strings. There's also the issue of changing things in a C++ namespace, because there are actually two classes in my project that share the same name but are in different namespaces. I remember Visual Studio being able to do this, but what's the safest and most elegant way to do it on Linux?

    Read the article

  • Java: Best practices for turning foreign horror-code into clean API...?

    - by java.is.for.desktop
    Hello, everyone! I have a project (related to graph algorithms). It was written by someone else. The code is horrible:

    - public fields, no getters/setters
    - huge methods, all public
    - some classes have over 20 fields
    - some classes have over 5 constructors (which are also huge)
    - some of those constructors just leave many fields null (so I can't make some fields final, because then every second constructor signals errors)
    - methods and classes rely on each other in both directions

    I have to rewrite this into a clean and understandable API. The problem is: I myself don't understand anything in this code. Please give me hints on analyzing and understanding such code. I was thinking that perhaps there are tools which perform static code analysis and give me call graphs and things like that.

    Read the article

  • Refactored App Engine project - now Eclipse is infinitely building the project without end

    - by Yog
    I renamed some files in my App Engine project and refactored code, changing references to variables. Everything seemed fine until I changed the references in the web.xml for the project. Then I got a complaint about some error with the DataNucleus enhancer and now the project build process is stuck at 22%. I tried stopping Eclipse and restarting but the build process keeps hanging. Any tips on how to clean out whatever it's getting stuck on?

    Read the article

  • What is a faster way of merging the values of this Python structure into a single dictionary?

    - by jcoon
    I've refactored how the merged dictionary (all_classes) below is created, but I'm wondering if it can be more efficient. I have a dictionary of dictionaries, like this:

        groups_and_classes = {'group_1': {'class_A': [1, 2, 3],
                                          'class_B': [1, 3, 5, 7],
                                          'class_C': [1, 2],
                                          # ...many more items like this
                                         },
                              'group_2': {'class_A': [11, 12, 13],
                                          'class_C': [5, 6, 7, 8, 9]
                                         },
                              # ...and many more items like this
                             }

    A function creates a new object from groups_and_classes like this (and it is called often):

        all_classes = {'class_A': [1, 2, 3, 11, 12, 13],
                       'class_B': [1, 3, 5, 7, 9],
                       'class_C': [1, 2, 5, 6, 7, 8, 9]
                      }

    Right now, there is a loop that does this:

        all_classes = {}
        for group in groups_and_classes.values():
            for c, vals in group.iteritems():
                for v in vals:
                    if all_classes.has_key(c):
                        if v not in all_classes[c]:
                            all_classes[c].append(v)
                    else:
                        all_classes[c] = [v]

    So far, I changed the code to use a set instead of a list, since the order of the list doesn't matter and the values need to be unique:

        all_classes = {}
        for group in groups_and_classes.values():
            for c, vals in group.iteritems():
                try:
                    all_classes[c].update(set(vals))
                except KeyError:
                    all_classes[c] = set(vals)

    This is a little nicer, and I didn't have to convert the sets to lists because of how all_classes is used in the code. Question: is there a more efficient way of creating all_classes (aside from building it at the same time groups_and_classes is built, and changing everywhere this function is called)?

    Read the article

  • Diagnosing the .NET Legacy

    - by Xencor
    Assume you are taking over a legacy .NET app. [pre - 3.0] What are the top 5 diagnostic measures, profiling or otherwise that you would employ to assess the health of the application?

    Read the article

  • Isn't it better to use a single try catch instead of tons of TryParsing and other error handling sometimes?

    - by Ryan Peschel
    I know people say it's bad to use exceptions for flow control and to only use exceptions for exceptional situations, but sometimes isn't it just cleaner and more elegant to wrap the entire block in a try-catch? For example, let's say I have a dialog window with a TextBox where the user can type input to be parsed in a key-value sort of manner. This situation is not as contrived as you might think, because I've inherited code that has to handle this exact situation (albeit not with farm animals). Consider this wall of code:

        class Animals
        {
            public int catA, catB;
            public float dogA, dogB;
            public int mouseA, mouseB, mouseC;
            public double cow;
        }

        class Program
        {
            static void Main(string[] args)
            {
                string input = "Sets all the farm animals CAT 3 5 DOG 21.3 5.23 MOUSE 1 0 1 COW 12.25";
                string[] splitInput = input.Split(' ');
                string[] animals = { "CAT", "DOG", "MOUSE", "COW", "CHICKEN", "GOOSE", "HEN", "BUNNY" };
                Animals animal = new Animals();

                for (int i = 0; i < splitInput.Length; i++)
                {
                    string token = splitInput[i];
                    if (animals.Contains(token))
                    {
                        switch (token)
                        {
                            case "CAT":
                                animal.catA = int.Parse(splitInput[i + 1]);
                                animal.catB = int.Parse(splitInput[i + 2]);
                                break;
                            case "DOG":
                                animal.dogA = float.Parse(splitInput[i + 1]);
                                animal.dogB = float.Parse(splitInput[i + 2]);
                                break;
                            case "MOUSE":
                                animal.mouseA = int.Parse(splitInput[i + 1]);
                                animal.mouseB = int.Parse(splitInput[i + 2]);
                                animal.mouseC = int.Parse(splitInput[i + 3]);
                                break;
                            case "COW":
                                animal.cow = double.Parse(splitInput[i + 1]);
                                break;
                        }
                    }
                }
            }
        }

    In actuality there are a lot more farm animals and more handling than that. A lot of things can go wrong, though. The user could enter the wrong number of parameters. The user could enter the input in an incorrect format. The user could specify numbers too large or too small for the data type to handle. All these different errors could be handled without exceptions through the use of TryParse - checking how many parameters the user tried to use for a specific animal, checking whether the parameter is too large or too small for the data type (because TryParse just returns 0) - but every one should result in the same thing: a MessageBox appearing, telling the user that the inputted data is invalid and to fix it. My boss doesn't want different message boxes for different errors. So instead of doing all that, why not just wrap the block in a try-catch and in the catch statement just display that error message box and let the user try again? Maybe this isn't the best example, but think of any other scenario where there would otherwise be tons of error handling that could be substituted for a single try-catch. Is that not the better solution?
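    For what it's worth, the single-catch idea can be kept reasonably tight by wrapping only the parsing step and catching just the exception types the Parse calls and array indexing can actually throw, instead of a blanket catch. This is a sketch, not the article's answer; ParseAnimals stands in for the switch-based loop above and ShowInvalidInputMessage for the single MessageBox the application already shows.

        using System;

        static class AnimalInput
        {
            // Sketch: one try/catch around the whole parse, mapping every parse failure to one message.
            public static Animals ParseAnimalsOrNull(string input)
            {
                try
                {
                    return ParseAnimals(input); // the existing switch-based parsing from the question
                }
                catch (FormatException) { }          // a value was not in the expected numeric format
                catch (OverflowException) { }        // a number was too large or too small for its type
                catch (IndexOutOfRangeException) { } // too few values followed an animal keyword

                ShowInvalidInputMessage();
                return null;
            }

            // Hypothetical stand-ins for the question's parsing loop and its single error dialog.
            static Animals ParseAnimals(string input) { /* the loop from the question */ return new Animals(); }
            static void ShowInvalidInputMessage() { Console.WriteLine("The inputted data is invalid."); }
        }

    Catching specific exception types keeps genuinely unexpected failures (for example a NullReferenceException) from being swallowed by the "invalid input" message.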

    Read the article

  • Organizing Eager Queries in an ObjectContext

    - by Nix
    I am messing around with Entity Framework 3.5 SP1 and I am trying to find a cleaner way to do the below. I have an EF model and I am adding some eager-loaded entities, and I want them all to reside under an "Eager" property on the context. We originally were just changing the entity set name, but it seems a lot cleaner to just use a property and keep the entity set name intact. Example:

        Context
        - EntityType
        - AnotherType
        - Eager (all of these would have .Includes to pull in all assoc. tables)
          - EntityType
          - AnotherType

    Currently I am using composition, but I feel like there is an easier way to do what I want.

        namespace Entities
        {
            public partial class TestObjectContext
            {
                EagerExtensions Eager { get; set; }

                public TestObjectContext()
                {
                    Eager = new EagerExtensions(this);
                }
            }

            public partial class EagerExtensions
            {
                TestObjectContext context;

                public EagerExtensions(TestObjectContext _context)
                {
                    context = _context;
                }

                public IQueryable<TestEntity> TestEntity
                {
                    get
                    {
                        return context.TestEntity
                            .Include("TestEntityType")
                            .Include("Test.Attached.AttachedType")
                            .AsQueryable();
                    }
                }
            }
        }

        public class Tester
        {
            public void ShowHowIWantIt()
            {
                TestObjectContext context = new TestObjectContext();
                var query = from a in context.Eager.TestEntity
                            select a;
            }
        }
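    An alternative that skips the wrapper object entirely, sketched here as one option rather than the recommended answer, is to expose the eager variants as extension methods on the context; the entity set names stay untouched and call sites read much the same. The method name TestEntityEager is invented, while TestObjectContext, TestEntity and the Include paths come from the code above.

        using System.Linq;

        public static class EagerQueries
        {
            // Sketch: eager-loading variant of the TestEntity set as an extension method.
            public static IQueryable<TestEntity> TestEntityEager(this TestObjectContext context)
            {
                return context.TestEntity
                    .Include("TestEntityType")
                    .Include("Test.Attached.AttachedType");
            }
        }

        // Usage stays close to the original intent:
        // var query = from a in context.TestEntityEager() select a;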

    Read the article
