Search Results

Search found 12934 results on 518 pages for 'magic methods'.

Page 111/518

  • Cocoa NSTextField Drag & Drop Requires Subclass... Really?

    - by ipmcc
    Until today, I've never had occasion to use anything other than an NSWindow itself as an NSDraggingDestination. When using a window as a one-size-fits-all drag destination, the NSWindow will pass those messages on to its delegate, allowing you to handle drops without subclassing NSWindow. The docs say: Although NSDraggingDestination is declared as an informal protocol, the NSWindow and NSView subclasses you create to adopt the protocol need only implement those methods that are pertinent. (The NSWindow and NSView classes provide private implementations for all of the methods.) Either a window object or its delegate may implement these methods; however, the delegate’s implementation takes precedence if there are implementations in both places. Today, I had a window with two NSTextFields on it, and I wanted them to have different drop behaviors, and I did not want to allow drops anywhere else in the window. The way I interpret the docs, it seems that I either have to subclass NSTextField, or make some giant spaghetti-conditional drop handlers on the window's delegate that hit-checks the draggingLocation against each view in order to select the different drop-area behaviors for each field. The centralized NSWindow-delegate-based drop handler approach seems like it would be onerous in any case where you had more than a small handful of drop destination views. Likewise, the subclassing approach seems onerous regardless of the case, because now the drop handling code lives in a view class, so once you accept the drop you've got to come up with some way to marshal the dropped data back to the model. The bindings docs warn you off of trying to drive bindings by setting the UI value programmatically. So now you're stuck working your way back around that too. So my question is: "Really!? Are those the only readily available options? Or am I missing something straightforward here?" Thanks.

    Read the article

  • How to manage lifecycle in a ViewGroup-derived class?

    - by Scott Smith
    I had a bunch of code in an activity that displays a running graph of some external data. As the activity code was getting kind of cluttered, I decided to extract this code and create a GraphView class:

        public class GraphView extends LinearLayout {
            public GraphView(Context context, AttributeSet attrs) {
                super(context, attrs);
                LayoutInflater inflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
                inflater.inflate(R.layout.graph_view, this, true);
            }

            public void start() {
                // Perform initialization (bindings, timers, etc) here
            }

            public void stop() {
                // Unbind, destroy timers, yadda yadda
            }

            . . .
        }

    Moving stuff into this new LinearLayout-derived class was simple. But there was some lifecycle management code associated with creating and destroying timers and event listeners used by this graph (I didn't want this thing polling in the background if the activity was paused, for example). Coming from an MS Windows background, I kind of expected to find overridable onCreate() and onDestroy() methods or something similar, but I haven't found anything of the sort in LinearLayout (or any of its inherited members). Having to leave all of this initialization code in the Activity, and then having to pass it into the view, seemed like it defeated the original purpose of encapsulating all of this code into a reusable view. I ended up adding two additional public methods to my view: start() and stop(). I make these calls from the activity's onResume() and onPause() methods respectively. This seems to work, but it feels like I'm using duct tape here. Does anyone know how this is typically done? I feel like I'm missing something...
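
    One possibility, sketched below (not part of the original question): android.view.View provides attach/detach callbacks that can host this kind of setup and teardown inside the view itself. They fire when the view joins or leaves a window rather than on pause/resume, so the explicit start()/stop() calls from onResume()/onPause() may still be needed to stop background polling while the activity is paused.

        import android.content.Context;
        import android.util.AttributeSet;
        import android.widget.LinearLayout;

        public class GraphView extends LinearLayout {

            public GraphView(Context context, AttributeSet attrs) {
                super(context, attrs);
            }

            @Override
            protected void onAttachedToWindow() {
                super.onAttachedToWindow();
                start();   // begin timers/polling once the view is attached to a window
            }

            @Override
            protected void onDetachedFromWindow() {
                stop();    // release timers and listeners when the view is detached
                super.onDetachedFromWindow();
            }

            private void start() { /* bindings, timers, etc. */ }
            private void stop()  { /* unbind, cancel timers, etc. */ }
        }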

    Read the article

  • I want to tell the VC++ compiler to compile all code. Can it be done?

    - by KGB
    I am using VS2005 VC++ for unmanaged C++. I have VSTS and am trying to use the code coverage tool to accomplish two things with regards to unit tests:

    1. See how much of my referenced code under test is getting executed
    2. See how many methods of my code under test (if any) are not unit tested at all

    Setting up the VSTS code coverage tool (see the link text) and accomplishing task #1 was straightforward. However, #2 has been a surprising challenge for me. Here is my test code.

        class CodeCoverageTarget
        {
        public:
            std::string ThisMethodRuns() { return "Running"; }
            std::string ThisMethodDoesNotRun() { return "Not Running"; }
        };

        #include <iostream>
        #include "CodeCoverageTarget.h"
        using namespace std;

        int main()
        {
            CodeCoverageTarget cct;
            cout << cct.ThisMethodRuns() << endl;
        }

    When both methods are defined within the class as above, the compiler automatically eliminates ThisMethodDoesNotRun() from the obj file. If I move its definition outside the class, then it is included in the obj file and the code coverage tool shows it has not been exercised at all. Under most circumstances I want the compiler to do this elimination for me, but for the code coverage tool it defeats a significant portion of the value (e.g. finding untested methods). I have tried a number of things to tell the compiler to stop being smart for me and compile everything, but I am stumped. It would be nice if the code coverage tool compensated for this (I suppose by scanning the source and matching it up with the linker output), but I didn't find anything to suggest it has a special mode to be turned on. Am I totally missing something simple here, or is this not possible with the VC++ compiler + VSTS code coverage tool? Thanks in advance, KGB
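
    A hedged workaround, sketched below (not from the original question): member functions defined inside the class body are implicitly inline and only emitted when referenced, so declaring them in the header and defining them out of line in a .cpp makes them real functions in the object file that the coverage tool can report as unexercised. This assumes the linker is not discarding unreferenced functions (e.g. /OPT:NOREF rather than /OPT:REF), and it trades away the inlining the question would prefer to keep.

        // CodeCoverageTarget.h
        #include <string>

        class CodeCoverageTarget
        {
        public:
            std::string ThisMethodRuns();
            std::string ThisMethodDoesNotRun();
        };

        // CodeCoverageTarget.cpp -- out-of-line definitions are always emitted into the
        // object file, even when nothing in the test binary calls them.
        std::string CodeCoverageTarget::ThisMethodRuns()       { return "Running"; }
        std::string CodeCoverageTarget::ThisMethodDoesNotRun() { return "Not Running"; }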

    Read the article

  • Can I write a .NETCF Partial Class to extend System.Windows.Forms.UserControl?

    - by eidylon
    Okay... I'm writing a .NET CF (VBNET 2008 3.5 SP1) application, which has one master form, and it dynamically loads specific UserControls based on menu click, in a sort of framework idea. There are certain methods and properties these controls all need to work within the app. Right now I am doing this as an Interface, but this is aggravating as all get up, because some of the methods are optional, and yet I MUST implement them by the nature of interfaces. I would prefer to use inheritance, so that I can have certain code be inherited with overridability, but if I write a class which inherits System.Windows.Forms.UserControl and then inherit my control from that, it squiggles, and tells me that UserControls MUST inherit directly from System.Windows.Forms.UserControl. (Talk about a design flaw!) So next I thought, well, let me use a partial class to extend System.Windows.Forms.UserControl, but when I do that, even though it all seems to compile fine, none of my new properties/methods show up on my controls. Is there any way I can use partial classes to 'extend' System.Windows.Forms.UserControl? For example, can anyone give me a code sample of a partial class which simply adds a MyCount As Integer readonly property to the System.Windows.Forms.UserControl class? If I can just see how to get this going, I can take it from there and add the rest of my functionality. Thanks in advance! I've been searching google, but can't find anything that seems to work for UserControl extension on .NET CF. And the Interface method is driving me crazy as even a small change means updating ALL the controls whether they need to 'override' the method or not.
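
    One hedged alternative, sketched below (not from the original question): the parts of a partial class have to live in the same project/assembly as the original class, so UserControl itself can't be extended that way. On .NET CF 3.5 an extension method in a shared module is about the closest substitute, with the caveat that it can only add methods, not a ReadOnly property, and the body shown (counting child controls) is only a guess at what MyCount should return.

        Imports System.Runtime.CompilerServices

        Public Module UserControlExtensions

            ' Callable as someUserControl.MyCount() on any UserControl.
            <Extension()> _
            Public Function MyCount(ByVal ctl As System.Windows.Forms.UserControl) As Integer
                Return ctl.Controls.Count   ' assumption: MyCount means the number of child controls
            End Function

        End Module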

    Read the article

  • How can I design a custom control in Javascript (possibly using jQuery)

    - by Mathieu Pagé
    I'd like to create a custom control in JavaScript. The goal is to create a building block that has methods, properties and events and renders into a div. An example of one such control would be a calendar. It would render into a div, it would have properties that define how it's displayed and what date is selected or highlighted, it would have methods to change the current month or to select some date, and it would raise events when a day is clicked or the current month is changed by user input.

    I can think of lots of ways to implement this and I'm not sure which is best. I seem to remember that it's a bad thing to augment DOM elements with properties and methods, so I ruled that out. A jQuery plugin seems like a good idea; however, I'm not sure if it's appropriate to create a plugin for each and every method my control would have so I could then use it like: $('#control').method1(); $('#control').method2(); And if I do use jQuery, where do I store the private data of my control? Another idea I had was to create a new kind of object that would have a reference to a div in which it could render its elements.

    So what is the preferred way to do this? If I can, I would like to do this as a jQuery plugin, but I'd need guidelines on how to create methods and where to store private data. I've looked at Plugins/Authoring on the jQuery website and it did not help that much in this regard.
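
    One common shape for this, sketched below (not from the original question): a single plugin name, sub-methods dispatched by a string argument, per-element private state kept with .data(), and user-facing events raised with .trigger(). The 'calendar' and 'monthchanged' names are made up for illustration.

        (function ($) {
            var methods = {
                init: function (options) {
                    return this.each(function () {
                        // private state lives in jQuery's data store, not on the DOM element itself
                        $(this).data('calendar', { month: (options && options.month) || 0 });
                    });
                },
                nextMonth: function () {
                    return this.each(function () {
                        var state = $(this).data('calendar');
                        state.month += 1;
                        $(this).trigger('monthchanged', [state.month]);  // raise a custom event
                    });
                }
            };

            $.fn.calendar = function (method) {
                if (methods[method]) {
                    return methods[method].apply(this, Array.prototype.slice.call(arguments, 1));
                }
                return methods.init.apply(this, arguments);
            };
        }(jQuery));

        // Usage:
        //   $('#control').calendar({ month: 3 });
        //   $('#control').calendar('nextMonth');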

    Read the article

  • No operations allowed after statement closed issue

    - by Washu
    I have the following methods in my singleton to execute the JDBC connections:

        public void openDB() throws ClassNotFoundException, IllegalAccessException, InstantiationException, SQLException {
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            String url = "jdbc:mysql://localhost/mbpe_peru"; //mydb
            conn = DriverManager.getConnection(url, "root", "admin");
            st = conn.createStatement();
        }

        public void sendQuery(String query) throws SQLException {
            st.executeUpdate(query);
        }

        public void closeDB() throws SQLException {
            st.close();
            conn.close();
        }

    And I'm having a problem in a void method where I have to call this twice:

        private void jButton1ActionPerformed(ActionEvent evt) {
            Main.getInstance().openDB();
            Main.getInstance().sendQuery("call insertEntry('"+EntryID()+"','"+SupplierID()+"');");
            Main.getInstance().closeDB();

            Main.getInstance().openDB();
            for (int i = 0; i < dataBox.length; i++) {
                Main.getInstance().sendQuery("call insertCount('"+EntryID()+"','"+SupplierID()+"','"+BoxID()+"');");
                Main.getInstance().closeDB();
            }
        }

    I have already tried to keep the connection open, send the two queries and close it after that, and it didn't work... The only way it worked was to not use the methods, declare the commands for the connection, and use different variables for the connection and the statement. I thought that if I close the Connection and the Statement I could use the variables once again, since it is a method, but I'm not able to. Is there any way to solve this using my methods for the JDBC connection?
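
    One way to make the helpers robust, sketched below (not from the original post; it assumes conn is the singleton's open Connection field): give every query its own short-lived Statement so no call can ever reuse a Statement that an earlier call closed, and let openDB()/closeDB() manage only the Connection.

        // inside the same singleton class; conn is the java.sql.Connection opened in openDB()
        public void sendQuery(String query) throws SQLException {
            Statement st = conn.createStatement();   // fresh Statement for this call only
            try {
                st.executeUpdate(query);
            } finally {
                st.close();                          // the Connection itself stays open
            }
        }

    With this shape the button handler can call openDB() once, run the insertEntry call and the whole insertCount loop, and call closeDB() once at the end.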

    Read the article

  • In Java, is there a performance gain in using interfaces for complex models?

    - by Gnoupi
    The title is hardly understandable, but I'm not sure how to summarize it another way. Any edit to clarify is welcome.

    I have been told, and it has been recommended, that I use interfaces to improve performance, even in a case which doesn't especially call for the regular "interface" role. In this case, the objects are big models (in the MVC sense), with many methods and fields. The "good use" that has been recommended to me is to create an interface with its unique implementation. There won't be any other class implementing this interface, for sure. I have been told that this is better to do because it "exposes less" (or something close) to the other classes which will use methods from this class, as these objects refer to the object through its interface (all public methods from the implementation being reproduced in the interface). This seems quite strange to me, as it seems like a C++ practice (with header files). There I see the point, but in Java? Is there really a point in making an interface for such a unique implementation?

    I would really appreciate some clarification on the topic, so I could justify not following this kind of practice, and the hassle it creates by duplicating all declarations.

    Edit: Plenty of valid points in most answers. I'm wondering if I shouldn't switch this question to a community wiki, so we can regroup these points in more structured answers.
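
    For reference, the shape being recommended to the poster looks roughly like this (a sketch, not from the original question): one interface, exactly one implementation, and callers that hold only the interface type, which is where the duplicated declarations come from.

        // The pattern under discussion: a model exposed only through an interface
        // that has a single implementation.
        public interface OrderModel {
            long getId();
            void addLine(String sku, int quantity);
        }

        class DefaultOrderModel implements OrderModel {
            private final long id;

            DefaultOrderModel(long id) { this.id = id; }

            public long getId() { return id; }
            public void addLine(String sku, int quantity) { /* ... */ }
        }

        // Callers see only the interface:
        // OrderModel order = new DefaultOrderModel(42L);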

    Read the article

  • Java Inheritance doubt in parameterised collection

    - by Gala101
    It's obvious that a parent class's object can hold a reference to a child, but does this not hold true in the case of a parameterised collection? e.g. Car is the parent class of Sedan. So

        public void doSomething(Car c) {
            ...
        }

        public void caller() {
            Sedan s = new Sedan();
            doSomething(s);
        }

    is obviously valid. But

        public void doSomething(Collection<Car> c) {
            ...
        }

        public void caller() {
            Collection<Sedan> s = new ArrayList<Sedan>();
            doSomething(s);
        }

    fails to compile. Can someone please point out why? And also, how do I implement a scenario where a function needs to iterate through a Collection of parent objects, modifying only the fields present in the parent class and using parent class methods, but the calling methods (say 3 different methods) pass collections of three different subtypes? Of course it compiles fine if I do as below:

        public void doSomething(Collection<Car> c) {
            ...
        }

        public void caller() {
            Collection s = new ArrayList<Sedan>();
            doSomething(s);
        }
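
    A sketch of the usual way to express this (not from the original question): a bounded wildcard means "a collection of Car or any subtype", so one method accepts Collection<Sedan>, Collection<Coupe>, and so on, while still only touching Car's members. The setInspected() call is a made-up Car method used for illustration.

        import java.util.Collection;

        public void doSomething(Collection<? extends Car> cars) {
            for (Car c : cars) {
                c.setInspected(true);   // hypothetical method defined on the parent class Car
            }
            // Note: cars.add(new Sedan()) would not compile here; with "? extends Car"
            // the compiler cannot prove what element type the collection actually allows.
        }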

    Read the article

  • Hibernate Session flush behaviour [ and Spring @Transactional ]

    - by EugeneP
    I use Spring and Hibernate in a web app; SessionFactory is injected into a DAO bean, and this DAO is then used in a servlet through the web service context. DAO methods are transactional; inside one of the methods I use ... getCurrentSession().save(myObject); One servlet calls this method with an object passed in. The update does not seem to be flushed at once; it takes about 5 seconds to see the changes in the database. The servlet method from which that DAO update method is called takes a fraction of a second to complete. So after the @Transactional DAO method completes, flushing may NOT happen? It does not seem to be a rule [I already see that]. Then the question is this: what should I do to force the session to flush after every DAO method? It may not be a good thing to do, but speaking of a service layer, some methods must end with an immediate flush, and the Hibernate Session behaviour is not predictable. So what can I do to guarantee that my @Transactional method persists all the changes after the last line of that method's code? Is getCurrentSession().flush() the only solution? P.S. I read somewhere that @Transactional IS associated with a DB transaction: when the method returns, the transaction must be committed. I do not see this happening.
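
    A minimal sketch of the explicit-flush option mentioned at the end (not from the original post, assuming a sessionFactory field injected by Spring): flushing at the end of the transactional method pushes the SQL to the database before the method returns, while the surrounding Spring transaction still controls when the commit itself happens.

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.springframework.transaction.annotation.Transactional;

        public class EntryDao {

            private SessionFactory sessionFactory;   // injected by Spring

            @Transactional
            public void save(Object entity) {
                Session session = sessionFactory.getCurrentSession();
                session.save(entity);
                session.flush();   // issue the INSERT/UPDATE now rather than waiting for commit
            }
        }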

    Read the article

  • C#: at design time, how can I reliably determine the type of a variable that is declared using var?

    - by Cheeso
    I'm working on a completion (intellisense) facility for C# in emacs. The idea is, if a user types a fragment, then asks for completion via a particular keystroke combination, the completion facility will use .NET reflection to determine the possible completions. Doing this requires that the type of the thing being completed be known. If it's a string, it has a set of known methods; if it's an Int32, it has a separate set of methods, and so on. Using semantic, a code lexer/parser package available in emacs, I can locate the variable declarations and their types. Given that, it's straightforward to use reflection to get the methods and properties on the type, and then present the list of options to the user.

    The problem arrives when the code uses var in the declaration. How can I reliably determine the actual type used, when the variable is declared with the var keyword? Just to be clear, I don't need to determine it at runtime. I want to determine it at "design time". So far the best idea I have is:

    1. extract the declaration statement, e.g. var foo = "a string value";
    2. concatenate a statement foo.GetType();
    3. dynamically compile the resulting C# fragment into a new assembly
    4. load the assembly into a new AppDomain, run the fragment and get the return type
    5. unload and discard the assembly

    This sounds awfully heavyweight, for each completion request in the editor. Any better ideas out there?

    Read the article

  • How to Implement an Interface that Requires Duplicate Member Names in C#?

    - by Will Marcouiller
    I often have to implement some interfaces such as IEnumerable<T> in my code. Each time, when implementing automatically, I encounter the following:

        public IEnumerator<T> GetEnumerator()
        {
            // Code here...
        }

        public IEnumerator GetEnumerator1()
        {
            // Code here...
        }

    Though I have to implement both GetEnumerator() methods, they can't possibly have the same name, even if we understand that they do the same thing, somehow. The compiler can't treat one as an overload of the other, because only the return type differs. When doing so, I manage to set the GetEnumerator1() accessor to private. This way, the compiler doesn't complain about not implementing the interface member, and I simply throw a NotImplementedException within the method's body. However, I wonder whether this is a good practice, or if I should proceed differently, perhaps with a method alias or something like that. What is the best approach when implementing an interface such as IEnumerable<T> that requires the implementation of two different methods with the same name?
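
    For comparison, a sketch of explicit interface implementation (not from the original post), which is the usual way to provide both members without inventing a second name: the non-generic GetEnumerator() is only reachable through an IEnumerable reference and simply delegates to the generic one, so nothing has to throw NotImplementedException.

        using System.Collections;
        using System.Collections.Generic;

        public class MyCollection<T> : IEnumerable<T>
        {
            private readonly List<T> items = new List<T>();

            public IEnumerator<T> GetEnumerator()
            {
                return items.GetEnumerator();
            }

            // Explicit implementation: no name clash, and it stays out of the public surface.
            IEnumerator IEnumerable.GetEnumerator()
            {
                return GetEnumerator();
            }
        }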

    Read the article

  • Trouble accessing fields of a serialized object in Java

    - by typoknig
    I have instantiated a class that implements Serializable and I am trying to stream that object like this:

        try {
            Socket socket = new Socket("localhost", 8000);
            ObjectOutputStream toServer = new ObjectOutputStream(socket.getOutputStream());
            toServer.writeObject(myObject);
        } catch (IOException ex) {
            System.err.println(ex);
        }

    All good so far, right? Then I am trying to read the fields of that object like this:

        //This is an inner class
        class HandleClient implements Runnable {

            private ObjectInputStream fromClient;
            private Socket socket; // This socket was established earlier

            try {
                fromClient = new ObjectInputStream(socket.getInputStream());
                GetField inputObjectFields = fromClient.readFields();
                double myFristVariable = inputObjectFields.get("myFirstVariable", 0);
                int mySecondVariable = inputObjectFields.get("mySecondVariable", 0);
                //do stuff
            } catch (IOException ex) {
                System.err.println(ex);
            } catch (ClassNotFoundException ex) {
                System.err.println(ex);
            } finally {
                try {
                    fromClient.close();
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }
        }

    But I always get the error: java.io.NotActiveException: not in call to readObject. This is my first time streaming objects instead of primitive data types; what am I doing wrong?

    BONUS: When I do get this working correctly, is the ENTIRE CLASS passed with the serialized object (i.e. will I have access to the methods of the object's class)? My reading suggests that the entire class is passed with the object, but I have been unable to use the object's methods thus far. How exactly do I call the object's methods? In addition to my code above, I also experimented with the readObject method, but I was probably using it wrong too because I couldn't get it to work. Please enlighten me.
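
    A sketch of the readObject() route mentioned in the question (not from the original post, assuming MyObject is the Serializable class the client wrote with writeObject, and that it has ordinary getters): readFields() is meant to be used inside a class's own private readObject method, which is where the NotActiveException comes from, whereas calling readObject() on the stream returns the reconstructed instance directly. Only the object's state travels over the wire; the methods come from the receiver's own copy of MyObject.class, which must be on its classpath.

        import java.io.IOException;
        import java.io.ObjectInputStream;
        import java.net.Socket;

        class HandleClient implements Runnable {

            private final Socket socket;

            HandleClient(Socket socket) {
                this.socket = socket;
            }

            public void run() {
                try {
                    ObjectInputStream fromClient = new ObjectInputStream(socket.getInputStream());
                    try {
                        MyObject received = (MyObject) fromClient.readObject();
                        double first = received.getMyFirstVariable();   // hypothetical getters on MyObject
                        int second = received.getMySecondVariable();
                        // do stuff with the reconstructed object
                    } finally {
                        fromClient.close();
                    }
                } catch (IOException ex) {
                    System.err.println(ex);
                } catch (ClassNotFoundException ex) {
                    System.err.println(ex);
                }
            }
        }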

    Read the article

  • Should I put actors in the Domain-Model/Class-Diagram?

    - by devoured elysium
    When designing both the domain model and class diagrams I am having some trouble understanding what to put in them. I'll give an example of what I mean: I am doing a vacation scheduler program that has an Administrator and End-Users. The Administrator does a couple of things like registering End-Users in the program, changing their privileges, etc. The End-User can choose his vacation days, etc.

    I initially defined Administrator and End-User as concepts in the domain model, and later as classes in the class diagram. In the class diagram, both classes ended up having a couple of methods like Administrator.RegisterNewUser(); Administrator.UnregisterUser(int id); etc. Only after some time did I realise that actually both Administrator and End-User are actors, and maybe I got this design totally wrong. Instead of filling the Administrator and End-User classes with methods to do what my Use-Cases request, I could define other classes from the domain to do them, and have controllers handle the Use-Cases (actually, I decided to do one for each Use-Case). I could have UserDatabase.RegisterNewUser() and UserDatabase.UnregisterUser(int id);, for example, instead of having those methods on the Administrator class. The idea would be to try to think of the whole vacation scheduler as a "closed program" that has a set of features and doesn't bother with things such as authentication, which should be internal/protected, the only public things I'd let the outside world see being its controllers.

    Is this the right approach? Or am I getting this totally wrong? Is it generally a bad idea to put actors in the domain model / class diagrams? What are good rules of thumb for this? My lecturer is following Applying UML and Patterns, which I find awful, so I'd like to know where I could look up more info on this actor-modelling situation. I'm still a bit confused about all of this, as this new approach is radically different from anything I've done before.

    Read the article

  • Whether to put method code in a VB.Net data storage class, or put it in a separate class?

    - by Alan K
    TLDR summary: (a) Should I include (lengthy) method code in classes which may spawn multiple objects at runtime, (b) does doing so cause memory usage bloat, (c) if so should I "outsource" the code to a class that is loaded only once and have the class methods call that, or alternatively (d) does the code get loaded only once with the object definition anyway and I'm worrying about nothing? ........ I don't know whether there's a good answer to this but if there is I haven't found it yet by searching in the usual places. In my VB.Net (2010 if it matters) WinForms project I have about a dozen or so class objects in an object model. Some of these are pretty simple and do little more than act as data storage repositories. The ones further up the object model, however, have an increasing number of methods. There can be a significant number of higher level objects in use though the exact number will be runtime dependent so I can't be more precise than that. As I was writing the method code for one of the top level ones I noticed that it was starting to get quite lengthy. Memory optimisation is something of a lost art given how much memory the average PC has these days but I don't want to make my application a resource hog. So my questions for anyone who knows .Net way better than I do (of which there will be many) are: Is the code loaded into memory with each instance of the class that's created? Alternatively is it loaded only once with the definition of the class, and all derived objects just refer to that definition? (I'm not really sure how that could be possible given that, for example, event handlers can be assigned dynamically, but no harm asking.) If the answer to the first one is yes, would it be more efficient to write the code in a "utility" object which is loaded only once and called from the real class' methods? Any thoughts appreciated.

    Read the article

  • Python class structure ... prep() method?

    - by Adam Nelson
    We have a metaclass, a class, and a child class for an alert system:

        class AlertMeta(type):
            """
            Metaclass for all alerts

            Reads attrs and organizes AlertMessageType data
            """
            def __new__(cls, base, name, attrs):
                new_class = super(AlertMeta, cls).__new__(cls, base, name, attrs)
                # do stuff to new_class
                return new_class


        class BaseAlert(object):
            """
            BaseAlert objects should be instantiated in order to create new AlertItems.

            Alert objects have classmethods for dequeue (to batch AlertItems) and register
            (for associating a user to an AlertType and AlertMessageType).

            If the __init__ function receives 'dequeue=True' as a kwarg, then all other
            arguments will be ignored and the Alert will check for messages to send.
            """
            __metaclass__ = AlertMeta

            def __init__(self, **kwargs):
                dequeue = kwargs.pop('dequeue', None)
                if kwargs:
                    raise ValueError('Unexpected keyword arguments: %s' % kwargs)
                if dequeue:
                    self.dequeue()
                else:
                    # Do normal init stuff

            def dequeue(self):
                """
                Pop batched AlertItems
                """
                # Dequeue from a custom queue


        class CustomAlert(BaseAlert):
            def __init__(self, **kwargs):
                # prepare custom init data
                super(BaseAlert, self).__init__(**kwargs)

    We would like to be able to make child classes of BaseAlert (CustomAlert) that allow us to run dequeue and are able to run their own __init__ code. We think there are three ways to do this:

    1. Add a prep() method that returns True in BaseAlert and is called by __init__. Child classes could define their own prep() methods.
    2. Make dequeue() a class method - however, a lot of what dequeue() does requires non-class methods - so we'd have to make those class methods as well.
    3. Create a new class for dealing with the queue. Would this class extend BaseAlert?

    Is there a standard way of handling this type of situation?
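
    A sketch of option 1 (not from the original post): __init__ keeps the shared dequeue/validation logic and delegates the rest to a prep() hook that child classes override, so subclasses never have to re-implement the dequeue branch. The custom_data field is made up for illustration.

        class BaseAlert(object):
            def __init__(self, **kwargs):
                if kwargs.pop('dequeue', None):
                    self.dequeue()          # shared batching path, identical for every subclass
                    return
                self.prep(**kwargs)         # subclass-specific initialisation hook

            def prep(self, **kwargs):
                """Default initialisation; child classes override this instead of __init__."""

            def dequeue(self):
                pass                        # pop batched AlertItems from the custom queue


        class CustomAlert(BaseAlert):
            def prep(self, **kwargs):
                self.custom_data = kwargs.get('custom_data')   # hypothetical subclass field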

    Read the article

  • C# properties: How are they instantiated?

    - by Pedery
    Hi! This might be a pretty straightforward question, but I'm trying to understand some of the internal workings of the compilation. Very simply put, imagine an arbitrary object being instantiated. This object is then allocated on the heap. The object has a property of type PointF (which is value type), with a get and a set method. Imagine the get and the set method containing a few calculations for doing their work. How and where (stack/heap) and when is this code instantiated? This is the background for this question: I'm writing get and set methods for an object and these methods need to be accessed very frequently. The get and set code in itself is rather massive so I feared that in a worst case scenario the methods would be instantiated as an object or a value type with all internal code for every access of the property. On the other hand the code is probably instantiated when the main object is created and the CPU is simply told to jmp to the property code start. Anyway, this is what I want to have clarified.

    Read the article

  • Possible to make CodeIgniter work with another framework?

    - by ajsie
    The situation is this: my client (who is also a programmer) asked me to develop an address book (with a MySQL database) with a lot of functions. He can then interact with some class methods I provide for him, kind of like an API.

    The address book application is getting bigger and bigger, and I feel it's much better to use CodeIgniter and code it with MVC. I wonder if I can use CodeIgniter and then in some way give him access to the controller methods, e.g. in a controller there are some functions you can call with the web browser:

        public function create_contact($information) {..}
        public function delete_contact($id) {..}
        public function get_contact($id) {..}

    However, these are only callable from the web browser. How can I let my client have access to these functions like an API? Then in his own application he could use:

        $result = $address_book->create_contact($information);

        if ($result) {
            echo "Success";
        }

        $contact = $address_book->get_contact($id);

    Is this possible? Because I only know how to access the controller methods with the web browser, and I guess it's not an option for him to use header(location) to access them. All suggestions to make this possible are welcome! Thanks

    Read the article

  • Exposing a C++ API to C#

    - by Siyfion
    So what I have is a C++ API contained within a *.dll and I want to use a C# application to call methods within the API. So far I have created a C++/CLR project that includes the native C++ API and managed to create a "bridge" class that looks a bit like the following:

    ManagedBridge.h

        namespace ManagedAPIWrapper
        {
            public ref class Bridge
            {
            public:
                int bridge_test(void);
                int bridge_test2(api_struct* temp);
            };
        }

    ManagedBridge.cpp

        int Bridge::bridge_test(void)
        {
            return test();
        }

        int Bridge::bridge_test2(api_struct* temp)
        {
            return test2(temp);
        }

    I also have a C# application that has a reference to the C++/CLR "Bridge.dll" and then uses the methods contained within. I have a number of problems with this:

    1. I can't figure out how to call bridge_test2 within the C# program, as it has no knowledge of what an api_struct actually is. I know that I need to marshal the object somewhere, but do I do it in the C# program or the C++/CLR bridge?
    2. This seems like a very long-winded way of exposing all of the methods in the API; is there not an easier way that I'm missing? (That doesn't use P/Invoke!)
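
    One common arrangement, sketched below (not from the original code, and with a made-up field name since api_struct's layout is unknown): keep the native struct entirely on the C++/CLI side and expose only a managed mirror type to C#, copying the fields inside the bridge.

        // ManagedBridge.h (sketch)
        namespace ManagedAPIWrapper
        {
            // Managed mirror of the native api_struct; 'count' is an assumed field name.
            public value struct BridgeStruct
            {
                int count;
            };

            public ref class Bridge
            {
            public:
                int bridge_test2(BridgeStruct input)
                {
                    api_struct native;           // the native type never crosses into C#
                    native.count = input.count;  // copy managed fields into the native struct
                    return test2(&native);       // call the unmanaged API as before
                }
            };
        }

    On the C# side the call then only involves managed types, e.g. bridge.bridge_test2(new BridgeStruct { count = 42 }), with no knowledge of the native layout.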

    Read the article

  • Minimal framework in Scala for collections with inheriting return type

    - by Rex Kerr
    Suppose one wants to build a novel generic class, Novel[A]. This class will contain lots of useful methods--perhaps it is a type of collection--and therefore you want to subclass it. But you want the methods to return the type of the subclass, not the original type. In Scala 2.8, what is the minimal amount of work one has to do so that methods of that class will return the relevant subclass, not the original? For example,

        class Novel[A] /* What goes here? */ {
          /* Must you have stuff here? */
          def reverse/* What goes here instead of :Novel[A]? */ = //...
          def revrev/*?*/ = reverse.reverse
        }

        class ShortStory[A] extends Novel[A] /* What goes here? */ {
          override def reverse: /*?*/ = //...
        }

        val ss = new ShortStory[String]
        val ss2 = ss.revrev // Type had better be ShortStory[String], not Novel[String]

    Does this minimal amount change if you want Novel to be covariant? (The 2.8 collections do this among other things, but they also play with return types in more fancy (and useful) ways--the question is how little framework one can get away with if one only wants this subtypes-always-return-subtypes feature.)
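
    A sketch of one minimal approach (not from the original question): thread the concrete subtype through as an extra "representation" type parameter with a self-type, so reverse and revrev are typed as the subclass. The method bodies below just return this as placeholders.

        // Repr is the concrete subclass; the self-type ties `this` to it.
        class Novel[A, Repr <: Novel[A, Repr]] { self: Repr =>
          def reverse: Repr = this            // a real implementation would build a new Repr
          def revrev: Repr = reverse.reverse  // stays typed as the subclass
        }

        class ShortStory[A] extends Novel[A, ShortStory[A]] {
          override def reverse: ShortStory[A] = this
        }

        val ss = new ShortStory[String]
        val ss2: ShortStory[String] = ss.revrev   // compiles: the subtype is preserved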

    Read the article

  • Action on each method's return value

    - by RobGlynn
    What I'd like to do is take some action using the value returned by every method in a class. So for instance, if I have a class Order which has a method:

        public Customer GetCustomer()
        {
            Customer CustomerInstance = // get customer
            return CustomerInstance;
        }

    Let's say I want to log the creation of these - Log(CustomerInstance); My options (AFAIK) are:

    1. Call Log() in each of these methods before returning the object. I'm not a fan of this because it gets unwieldy if used on a lot of classes with a lot of methods. It also is not an intrinsic part of the method's purpose.

    2. Use composition or inheritance to layer the log call on the Order class, similar to:

        public Customer GetCustomer()
        {
            Customer CustomerInstance = this.originalCustomer.GetCustomer();
            Log(CustomerInstance);
            return CustomerInstance;
        }

       I don't think this buys me anything over #1.

    3. Create extension methods on each of the returned types: Customer CustomerInstance = Order.GetCustomer().Log(); which has just as many downsides.

    I'm looking to do this for every (or almost every) object returned, automatically if possible, without having to write double the amount of code. I feel like I'm either trying to bend the language into doing something it's not supposed to, or failing to recognize some language feature that would enable this. Possible solutions would be greatly appreciated.

    Read the article

  • Should Service Depend on Many Repositories, or Break Them Up?

    - by Josh Pollard
    I'm using a repository pattern for my data access. So I basically have a repository per table/class. My UI currently uses service classes to actually get things done, and these service classes wrap, and therefore depend on repositories. In many cases my services are only dependent upon one or two repositories, so things aren't too crazy. Unfortunately, one of my forms in the UI expects the user to enter data that will span five different tables. For this form I made a single service class that depends upon five repositories. Then the methods within the service for saving and loading the data call the appropriate methods on all of the corresponding repositories. As you can imagine, the save and load methods in this service are really big. Also, unit testing this service is getting really difficult because I have to setup so many fake repositories. Would it have been a better choice to break this single service apart into a few smaller services? It would put more code at the UI layer, but would make the services smaller and more testable.

    Read the article

  • Is it a good idea to make a wrapper specifically for a DateTime that represents Now?

    - by Dirk Boer
    I have been noticing lately that it is really nice to use a DateTime representing 'now' as an input parameter for your methods, for mocking and testing purposes. Instead of every method calling DateTime.UtcNow itself, I do it once in the upper methods and forward it on to the lower ones. So a lot of methods that need a 'now' have an input parameter DateTime now. (I'm using MVC, and try to detect a parameter called now and model-bind DateTime.UtcNow to it.) So instead of:

        public bool IsStarted
        {
            get { return StartTime >= DateTime.UtcNow; }
        }

    I usually have:

        public bool IsStarted(DateTime now)
        {
            return StartTime >= now;
        }

    So my convention at the moment is: if a method has a DateTime parameter called now, you have to feed it the current time. Of course this comes down to convention, and someone else can easily just throw some other DateTime in there as a parameter. To make it more solid and statically typed, I am thinking about wrapping DateTime in a new object, i.e. DateTimeNow. So in one of the uppermost layers I will convert the DateTime to a DateTimeNow, and we will get compile errors when someone tries to fiddle in a normal DateTime. Of course you can still work around this, but at least it feels more like you are doing something wrong at that point. Did anyone else ever go down this path? Are there any good or bad results in the long term that I am not thinking about?
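
    For comparison, a sketch of the other common way to make 'now' mockable (not from the original post): inject a small clock abstraction once, instead of threading a DateTime (or a DateTimeNow wrapper) through every call. The names IClock, SystemClock and Event are made up for illustration.

        using System;

        public interface IClock
        {
            DateTime UtcNow { get; }
        }

        public class SystemClock : IClock
        {
            public DateTime UtcNow { get { return DateTime.UtcNow; } }
        }

        public class Event
        {
            private readonly IClock _clock;

            public Event(IClock clock) { _clock = clock; }

            public DateTime StartTime { get; set; }

            // No DateTime parameter needed; tests pass a fake IClock with a fixed UtcNow.
            public bool IsStarted
            {
                get { return StartTime >= _clock.UtcNow; }
            }
        }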

    Read the article

  • Convert repeating code to a method

    - by Mr_Green
    In my project, I am adding ComboBox, Text and Link Label columns to my DataGridView dgvMain. I have created different methods for the different cell templates, as shown below (the code below is working):

        void gridLnklbl(string headerName)
        {
            DataGridViewLinkColumn col = new DataGridViewLinkColumn();
            col.HeaderText = headerName;    //
            col.Name = "col" + headerName;  // same code repeating in all the methods
            dgvMain.Columns.Add(col);       //
        }

        void gridCmb(string headerName)
        {
            DataGridViewComboBoxColumn col = new DataGridViewComboBoxColumn();
            col.HeaderText = headerName;
            col.Name = "col" + headerName;
            dgvMain.Columns.Add(col);
        }

        void gridText(string headerName)
        {
            DataGridViewTextBoxColumn col = new DataGridViewTextBoxColumn();
            col.HeaderText = headerName;
            col.Name = "col" + headerName;
            dgvMain.Columns.Add(col);
        }

    As you can see, except for the declaration of the objects, the code in every method is the same. Just curious to know: can the repeating code be converted to a single method? I don't know how to do that. It's not just about these three lines of code; I have written many more lines that could be made common to these methods.
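
    A sketch of one way to factor this out (not from the original post): the three column types share the base class DataGridViewColumn, so the repeated lines can move into a single helper that accepts any column, and each caller only constructs the specific type. The header names in the usage comments are invented.

        private void AddColumn(DataGridViewColumn col, string headerName)
        {
            col.HeaderText = headerName;
            col.Name = "col" + headerName;
            dgvMain.Columns.Add(col);
        }

        // Usage:
        //   AddColumn(new DataGridViewLinkColumn(),     "Status");
        //   AddColumn(new DataGridViewComboBoxColumn(), "Category");
        //   AddColumn(new DataGridViewTextBoxColumn(),  "Notes");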

    Read the article

  • WMI Rights required to read root\MicrosoftIISv2 in IIS7 with IIS6 compatibility mode

    - by JoeBilly
    I need to manage my IIS7 (Windows Server 2008) remotely with a WMI IIS6 API, so I added the IIS6 WMI Compatibility and IIS6 Metabase Compatibility roles to access the root\MicrosoftIIsv2 namespace. I have a domain account which is not an administrator on the remote machine; with administrator rights, everything is OK.

    I configured these rights for my domain account to access the root\MicrosoftIIsv2 WMI namespace remotely; note that these rights work perfectly on IIS6 and Windows Server 2003:

    - DCOM: account in Distributed COM Users, remote & local access to DCOM
    - WMI, Root\CIMV2 (I need access here too): Execute Methods, Enable Account, Remote Enable
    - WMI, Root\Default (I need access here too): Execute Methods, Enable Account, Remote Enable
    - WMI, Root\MicrosoftIISv2: Execute Methods, Enable Account, Provider Write, Remote Enable
    - IIS Metabase (Metabase Explorer): LM Full Control (W3SVC inherits these permissions)

    I tried to give some access on C:\Windows\System32\inetsrv too; I don't know if that is needed.

    My issue is: I can't list the IIS web sites (\root\MicrosoftIISv2:IIsWebServerSetting.Name="W3SVC/*"). I don't get an 'access denied', but nothing is returned.

    - My API and PowerShell tests can connect and execute queries in the root\MicrosoftIISv2 namespace.
    - I can read the IIsComputer class, e.g.:
      Get-WmiObject IIsComputer -namespace "ROOT\MicrosoftIISv2" -authentication PacketPrivacy | SELECT *
    - I can't read IIsWebServerSetting, IIsWebServer, ... to list the web sites: the query returns an empty collection, e.g.:
      Get-WmiObject IIsWebServerSetting -namespace "ROOT\MicrosoftIISv2" -authentication PacketPrivacy | SELECT ServerComment
    - All queries work perfectly if the account is an administrator, as already said.
    - I am using PacketPrivacy authentication.

    FYI: I get a Warning Event 5605 whether I use the administrator account or not, which does not seem to have an impact: "The root\MicrosoftIISv2 namespace is marked with the RequiresEncryption flag. Access to this namespace might be denied if the script or application does not have the appropriate authentication level. Change the authentication level to Pkt_Privacy and run the script or application again."

    I have some more information: when I use IIS 6 Metabase Explorer with my administrator account, I can see that the rights are correctly inherited for my non-administrator account. But when I try to connect using my non-administrator account, I can list the LM node, but get an "access denied, failed to get a key's data" when I try to browse the child nodes. I'll check further. I tried to trace the WMI activity, and everything seems OK; this tends to confirm that the problem lies in IIS rights.

    Read the article

  • Mac OS X Server Configure DHCP Options 66 and 67

    - by Paul Adams
    I need to configure Mountain Lion (10.8.2) OS X Server BOOTP to provide DHCP options 66 and 67 for PXE booting of the PCs on my network. I have tried following the bootpd man pages, but they are not specific enough. I have also read conflicting information on the net, but nothing definitive for Mountain Lion DHCP. From the bootpd man page:

        bootpd has a built-in type conversion table for many more options, mostly those specified in RFC 2132, and will try to convert from whatever type the option appears in the property list to the binary, packet format. For example, if bootpd knows that the type of the option is an IP address or list of IP addresses, it converts from the string form of the IP address to the binary, network byte order numeric value. If the type of the option is a numeric value, it converts from string, integer, or boolean, to the proper sized, network byte-order numeric value. Regardless of whether bootpd knows the type of the option or not, you can always specify the DHCP option using the data property list type:

            <key>dhcp_option_128</key>
            <data>
            AAqV1Tzo
            </data>

    My TFTP server is 172.16.152.20 and the bootfile is pxelinux.0. I have edited /etc/bootpd.plist and added the following to the subnet dict:

        <key>dhcp_option_66</key>
        <data>
        LW4gLWUgrBCYFAo=
        </data>
        <key>dhcp_option_67</key>
        <data>
        LW4gLWUgcHhlbGludXguMAo=
        </data>

    According to the man page, the data elements are supposed to be Base64 encoded, but no matter what I try, I cannot get PXE clients to boot. I have tried encoding 172.16.152.20 using various methods:

    - echo "172.16.152.20" | openssl enc -base64 returns MTcyLjE2LjE1Mi4yMAo=
    - DHCP Option Code Utility (http://mac.softpedia.com/get/Internet-Utilities/DHCP-Option-Code-Utility.shtml), generating a string from 172.16.152.20, yields: LW4gLWUgMTcyLjE2LjE1Mi4yMAo= (used in the above example)
    - DHCP Option Code Utility, generating an IP Address from 172.16.152.20, yields: LW4gLWUgrBCYFAo=

    Encoding pxelinux.0 with the above methods likewise yields different encodings. I have tried using all three methods of encoding the data elements, but nothing seems to work, i.e. my PXE boot clients do not get directed to my TFTP server. Can anyone help?

    Regards, Paul Adams.
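
    One thing worth checking, sketched below (not from the original post): both data values pasted into bootpd.plist above decode to byte strings that begin with "-n -e " and end with a newline, which suggests the echo flags and a trailing newline were captured in the encoding. Encoding just the raw option values, with no flags and no newline, would look like this (expected outputs shown as comments; option 66 is conventionally the TFTP server name as a string):

        # printf, unlike echo, adds no trailing newline to the data being encoded
        printf '%s' '172.16.152.20' | openssl base64     # expected: MTcyLjE2LjE1Mi4yMA==
        printf '%s' 'pxelinux.0'    | openssl base64     # expected: cHhlbGludXguMA==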

    Read the article
