Search Results

Search found 5121 results on 205 pages for 'the all foo'.


  • Why unhandled exceptions are useful

    - by Simon Cooper
    It’s the bane of most programmers’ lives – an unhandled exception causes your application or webapp to crash, an ugly dialog gets displayed to the user, and they come complaining to you. Then, somehow, you need to figure out what went wrong. Hopefully, you’ve got a log file, or some other way of reporting unhandled exceptions (obligatory employer plug: SmartAssembly reports an application’s unhandled exceptions straight to you, along with the entire state of the stack and variables at that point). If not, you have to try and replicate it yourself, or do some psychic debugging to try and figure out what’s wrong. However, it’s good that the program crashed. Or, more precisely, it is correct behaviour. An unhandled exception in your application means that, somewhere in your code, there is an assumption that you made that is actually invalid. Coding assumptions Let me explain a bit more. Every method, every line of code you write, depends on implicit assumptions that you have made. Take this following simple method, that copies a collection to an array and includes an item if it isn’t in the collection already, using a supplied IEqualityComparer: public static T[] ToArrayWithItem( ICollection<T> coll, T obj, IEqualityComparer<T> comparer) { // check if the object is in collection already // using the supplied comparer foreach (var item in coll) { if (comparer.Equals(item, obj)) { // it's in the collection already // simply copy the collection to an array // and return it T[] array = new T[coll.Count]; coll.CopyTo(array, 0); return array; } } // not in the collection // copy coll to an array, and add obj to it // then return it T[] array = new T[coll.Count+1]; coll.CopyTo(array, 0); array[array.Length-1] = obj; return array; } What’s all the assumptions made by this fairly simple bit of code? coll is never null comparer is never null coll.CopyTo(array, 0) will copy all the items in the collection into the array, in the order defined for the collection, starting at the first item in the array. The enumerator for coll returns all the items in the collection, in the order defined for the collection comparer.Equals returns true if the items are equal (for whatever definition of ‘equal’ the comparer uses), false otherwise comparer.Equals, coll.CopyTo, and the coll enumerator will never throw an exception or hang for any possible input and any possible values of T coll will have less than 4 billion items in it (this is a built-in limit of the CLR) array won’t be more than 2GB, both on 32 and 64-bit systems, for any possible values of T (again, a limit of the CLR) There are no threads that will modify coll while this method is running and, more esoterically: The C# compiler will compile this code to IL according to the C# specification The CLR and JIT compiler will produce machine code to execute the IL on the user’s computer The computer will execute the machine code correctly That’s a lot of assumptions. Now, it could be that all these assumptions are valid for the situations this method is called. But if this does crash out with an exception, or crash later on, then that shows one of the assumptions has been invalidated somehow. An unhandled exception shows that your code is running in a situation which you did not anticipate, and there is something about how your code runs that you do not understand. Debugging the problem is the process of learning more about the new situation and how your code interacts with it. When you understand the problem, the solution is (usually) obvious. 
The solution may be a one-line fix, the rewrite of a method or class, or a large-scale refactoring of the codebase, but whatever it is, the fix for the crash will incorporate the new information you’ve gained about your own code, along with the modified assumptions. When code is running with an assumption or invariant it depended on broken, then the result is ‘undefined behaviour’. Anything can happen, up to and including formatting the entire disk or making the user’s computer sentient and start doing a good impression of Skynet. You might think that those can’t happen, but at Halting problem levels of generality, as soon as an assumption the code depended on is broken, the program can do anything. That is why it’s important to fail-fast and stop the program as soon as an invariant is broken, to minimise the damage that is done. What does this mean in practice? To start with, document and check your assumptions. As with most things, there is a level of judgement required. How you check and document your assumptions depends on how the code is used (that’s some more assumptions you’ve made), how likely it is a method will be passed invalid arguments or called in an invalid state, how likely it is the assumptions will be broken, how expensive it is to check the assumptions, and how bad things are likely to get if the assumptions are broken. Now, some assumptions you can assume unless proven otherwise. You can safely assume the C# compiler, CLR, and computer all run the method correctly, unless you have evidence of a compiler, CLR or processor bug. You can also assume that interface implementations work the way you expect them to; implementing an interface is more than simply declaring methods with certain signatures in your type. The behaviour of those methods, and how they work, is part of the interface contract as well. For example, for members of a public API, it is very important to document your assumptions and check your state before running the bulk of the method, throwing ArgumentException, ArgumentNullException, InvalidOperationException, or another exception type as appropriate if the input or state is wrong. For internal and private methods, it is less important. If a private method expects collection items in a certain order, then you don’t necessarily need to explicitly check it in code, but you can add comments or documentation specifying what state you expect the collection to be in at a certain point. That way, anyone debugging your code can immediately see what’s wrong if this does ever become an issue. You can also use DEBUG preprocessor blocks and Debug.Assert to document and check your assumptions without incurring a performance hit in release builds. On my coding soapbox… A few pet peeves of mine around assumptions. Firstly, catch-all try blocks: try { ... } catch { } A catch-all hides exceptions generated by broken assumptions, and lets the program carry on in an unknown state. Later, an exception is likely to be generated due to further broken assumptions due to the unknown state, causing difficulties when debugging as the catch-all has hidden the original problem. It’s much better to let the program crash straight away, so you know where the problem is. You should only use a catch-all if you are sure that any exception generated in the try block is safe to ignore. That’s a pretty big ask! Secondly, using as when you should be casting. Doing this: (obj as IFoo).Method(); or this: IFoo foo = obj as IFoo; ... 
foo.Method(); when you should be doing this: ((IFoo)obj).Method(); or this: IFoo foo = (IFoo)obj; ... foo.Method(); There’s an assumption here that obj will always implement IFoo. If it doesn’t, then by using as instead of a cast you’ve turned an obvious InvalidCastException at the point of the cast that will probably tell you what type obj actually is, into a non-obvious NullReferenceException at some later point that gives you no information at all. If you believe obj is always an IFoo, then say so in code! Let it fail-fast if not, then it’s far easier to figure out what’s wrong. Thirdly, document your assumptions. If an algorithm depends on a non-trivial relationship between several objects or variables, then say so. A single-line comment will do. Don’t leave it up to whoever’s debugging your code after you to figure it out. Conclusion It’s better to crash out and fail-fast when an assumption is broken. If it doesn’t, then there’s likely to be further crashes along the way that hide the original problem. Or, even worse, your program will be running in an undefined state, where anything can happen. Unhandled exceptions aren’t good per-se, but they give you some very useful information about your code that you didn’t know before. And that can only be a good thing.
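
    To make the soapbox points concrete, here is a minimal C# sketch (not from the original post) combining Debug.Assert with an explicit cast; the IFoo interface and the obj variable are the hypothetical ones used above.

        using System.Diagnostics;

        public static class AssumptionExample
        {
            public static void Process(object obj, int[] items)
            {
                // Document and check the assumption; the check disappears in release builds.
                Debug.Assert(items != null && items.Length > 0,
                    "items is expected to be non-empty at this point");

                // Fail fast: if obj is the wrong type, the InvalidCastException here
                // names the offending type, instead of a NullReferenceException later.
                IFoo foo = (IFoo)obj;
                foo.Method();
            }
        }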

    Read the article

  • Best practice with pyGTK and Builder XML files

    - by Phoenix87
    I usually design GUI with Glade, thus producing a series of Builder XML files (one such file for each application window). Now my idea is to define a class, e.g. MainWindow, that inherits from gtk.Window and that implements all the signal handlers for the application main window. The problem is that when I retrieve the main window from the containing XML file, it is returned as a gtk.Window instance. The solution I have adopted so far is the following: I have defined a class "Window" in the following way class Window(): def __init__(self, win_name): builder = gtk.Builder() self.builder = builder builder.add_from_file("%s.glade" % win_name) self.window = builder.get_object(win_name) builder.connect_signals(self) def run(self): return self.window.run() def show_all(self): return self.window.show_all() def destroy(self): return self.window.destroy() def child(self, name): return self.builder.get_object(name) In the actual application code I have then defined a new class, say MainWindow, that inherits frow Window, and that looks like class Main(Window): def __init__(self): Window.__init__(self, "main") ### Signal handlers ##################################################### def on_mnu_file_quit_activated(self, widget, data = None): ... The string "main" refers to the main window, called "main", which resides into the XML Builder file "main.glade" (this is a sort of convention I decided to adopt). So the question is: how can I inherit from gtk.Window directly, by defining, say, the class Foo(gtk.Window), and recast the return value of builder.get_object(win_name) to Foo?

    Read the article

  • Working with multiple interfaces on a single mock.

    - by mehfuzh
    Today, I will cover a very simple topic, which can be useful in cases where we want to mock different interfaces on our expected mock object. Our target interface is simple and it looks like: public interface IFoo : IDisposable {     void Do(); } As we can see, our target interface extends IDisposable; if we implement IFoo in a class, the language rules require us to implement IDisposable as well [no doubt about it]. Whether or not there are more complex cases, we want to ensure that, rather than needing an extra call (..As()) or other constructs to prepare the mock for us, we can do it in the simplest way possible. Therefore, keeping that in mind, first we create a mock of IFoo: var foo = Mock.Create<IFoo>(); Then, as we are interested in IDisposable, we simply do: var iDisposable = foo as IDisposable; Finally, we proceed with our existing mock code. In the current context, I will check whether the Dispose method invokes our mock code successfully. bool called = false; Mock.Arrange(() => iDisposable.Dispose()).DoInstead(() => called = true); iDisposable.Dispose(); Assert.True(called); Further, we assert our expectation as follows: Mock.Assert(() => iDisposable.Dispose(), Occurs.Once()); Hopefully that will help a bit and stay tuned. Enjoy!!
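
    Put together as a single test, the steps above look roughly like this; only the calls already shown in the post (Mock.Create, Mock.Arrange/DoInstead, Mock.Assert, Occurs.Once) are used, and the test method attribute is whatever your test framework expects.

        [Fact] // or [TestMethod], depending on the test framework in use
        public void Dispose_is_tracked_through_the_IDisposable_view_of_the_mock()
        {
            // one mock, two interfaces: IFoo is the target, IDisposable comes along with it
            var foo = Mock.Create<IFoo>();
            var disposable = foo as IDisposable;

            bool called = false;
            Mock.Arrange(() => disposable.Dispose()).DoInstead(() => called = true);

            disposable.Dispose();

            Assert.True(called);
            Mock.Assert(() => disposable.Dispose(), Occurs.Once());
        }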

    Read the article

  • Self-referencing anonymous closures: is JavaScript incomplete?

    - by Tom Auger
    Does the fact that anonymous self-referencing function closures are so prevalent in JavaScript suggest that JavaScript is an incomplete specification? We see so much of this: (function () { /* do cool stuff */ })(); and I suppose everything is a matter of taste, but does this not look like a kludge, when all you want is a private namespace? Couldn't JavaScript implement packages and proper classes? Compare to ActionScript 3, also based on ECMAScript, where you get package com.tomauger { import bar; class Foo { public function Foo(){ // etc... } public function show(){ // show stuff } public function hide(){ // hide stuff } // etc... } } Contrast this to the convolutions we perform in JavaScript (this, from the jQuery plugin authoring documentation): (function( $ ){ var methods = { init : function( options ) { // THIS }, show : function( ) { // IS }, hide : function( ) { // GOOD }, update : function( content ) { // !!! } }; $.fn.tooltip = function( method ) { // Method calling logic if ( methods[method] ) { return methods[ method ].apply( this, Array.prototype.slice.call( arguments, 1 )); } else if ( typeof method === 'object' || ! method ) { return methods.init.apply( this, arguments ); } else { $.error( 'Method ' + method + ' does not exist on jQuery.tooltip' ); } }; })( jQuery ); I appreciate that this question could easily degenerate into a rant about preferences and programming styles, but I'm actually very curious to hear how you seasoned programmers feel about this, and whether it feels natural, like learning the idiosyncrasies of a new language, or kludgy, like a workaround for some basic programming language components that are just not implemented?

    Read the article

  • .Net oracle parameter order

    - by jkrebsbach
    Using the ODAC (Oracle Data Access Components) downloaded from Oracle to talk to a handful of Oracle DBs - I was putting together my DAL to update the DB, and things weren't working as I hoped - UPDATE foo SET bar = :P_BAR WHERE bap = :P_BAP I assign my parameters - objCmd.Parameters.Add(objBap); objCmd.Parameters.Add(objBar); Execute the update command - int result = objCmd.ExecuteNonQuery() and result is zero! ... Is my filter incorrect? SELECT count(*) FROM foo WHERE bap = :P_BAP ...result is one... Is my new value incorrect? Am I using Char instead of Varchar somewhere and need an RTRIM? Is there a transaction getting involved? An error thrown and not caught? The answer: order of parameters. The order in which parameters are added to the OracleCommand object must match the order in which the parameters are referenced in the SQL statement. I was adding the parameters for the WHERE clause before adding the SET value parameters, and for that reason, although no error was being thrown, no value was updated either. Flip the parameter collection around to match the order of the parameters in the SQL statement, and ExecuteNonQuery() is back to returning the number of rows affected.
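
    A minimal sketch of the fix, assuming ODP.NET's Oracle.DataAccess.Client types and the hypothetical table from the post. Parameters are bound by position by default, so the Add order must mirror the SQL; ODP.NET's OracleCommand also exposes a BindByName property that sidesteps the ordering issue entirely.

        using Oracle.DataAccess.Client;

        public static class FooUpdater
        {
            public static int UpdateFoo(OracleConnection conn, string newBar, string bapFilter)
            {
                using (OracleCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "UPDATE foo SET bar = :P_BAR WHERE bap = :P_BAP";

                    // Add parameters in the same order they appear in the SQL:
                    // the SET value first, the WHERE filter second.
                    cmd.Parameters.Add(new OracleParameter("P_BAR", newBar));
                    cmd.Parameters.Add(new OracleParameter("P_BAP", bapFilter));

                    // Alternatively, opt out of positional binding:
                    // cmd.BindByName = true;

                    return cmd.ExecuteNonQuery();
                }
            }
        }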

    Read the article

  • Confused about javascript module pattern implementation

    - by Damon
    I have a class written on a project I'm working on that I've been told is using the module pattern, but it's doing things a little differently than the examples I've seen. It basically takes this form: (function ($, document, window, undefined) { var module = { foo : bar, aMethod : function (arg) { className.bMethod(arg); }, bMethod : function (arg) { console.log('spoons'); } }; window.ajaxTable = ajaxTable; })(jQuery, document, window); I get what's going on here. But I'm not sure how this relates to most of the definitions I've seen of the module (or revealing?) module pattern. like this one from briancray var module = (function () { // private variables and functions var foo = 'bar'; // constructor var module = function () { }; // prototype module.prototype = { constructor: module, something: function () { } }; // return module return module; })(); var my_module = new module(); Is the first example basically like the second except everything is in the constructor? I'm just wrapping my head around patterns and the little things at the beginnings and endings always make me not sure what I should be doing.

    Read the article

  • Keyring no longer prompts for password when SSH-ing

    - by Lie Ryan
    I remember that I used to be able to do ssh [email protected] and have a prompt asks me for a password to unlock the keyring for the whole GNOME session so subsequent ssh wouldn't need to enter the keyring password any longer (not quite sure if this is in Ubuntu or other distro). But nowadays doing ssh [email protected] would ask me, in the terminal, my keyring password every single time; which defeats the purpose of using SSH keys. I checked $ cat /etc/pam.d/lightdm | grep keyring auth optional pam_gnome_keyring.so session optional pam_gnome_keyring.so auto_start which looks fine, and $ pgrep keyring 1784 gnome-keyring-d so the keyring daemon is alive. I finally found that SSH_AUTH_SOCK variable (and GNOME_KEYRING_CONTROL and GPG_AGENT_INFO and GNOME_KEYRING_PID) are not being set properly. What is the proper way to set this variable and why aren't they being set in my environment (i.e. shouldn't they be set in default install)? I guess I can set it in .bashrc, but then the variables would only be defined in bash session, while that is fine for ssh, I believe the other environment variables are necessary for GUI apps to use keyring.

    Read the article

  • Commands don't have permission when using absolute path

    - by Markos
    I have folders set up this way: /srv/samba/video getfacl /srv/samba/video # file: srv/samba/video # owner: root # group: nogroup user::rwx group::--- group:sambaclients:rwx group:deluge:rwx mask::rwx other::--- default:user::rwx default:group::--- default:group:sambaclients:rwx default:group:deluge:rwx default:mask::rwx default:other::--- That means, user deluge has rwx to folder /srv/samba/video. However, when running command as user deluge, I am getting weird permission errors. When in folder /srv/samba/video: sudo -u deluge mkdir foo works flawlessly. But when using absolute path: sudo -u deluge mkdir /srv/samba/video/foo I am getting permission denied. When running sudo -u deluge id, I get output uid=113(deluge) gid=124(deluge) skupiny=124(deluge) which shows that user deluge is indeed in group deluge. Also, the behavior was the same when I gave the permissions also to user deluge not just group deluge. When executing as non-system user, it does work. The reason that I want to use absolute paths is that I am using automatically triggered post-download script which extracts some files into the folder. I have spent way too many hours to solve this problem myself. mkdir isn't the only command that fails, touch is doing the same thing, so I suspect that it's not mkdir's fault. If you need more info, I will try to put it in here, just ask. Thanx in advance.

    Read the article

  • Doubt related to PHP Cookies

    - by Richa
    Hey guys! I have a doubt, and I will appreciate it if you can clear it up. COOKIES What are cookies? When described as entities, which is how cookies are often referenced in conversation, you can be easily misled. Cookies are actually just an extension of the HTTP protocol. Specifically, there are two additional HTTP headers: Set-Cookie and Cookie. The operation of these cookies is best described by the following series of events: Client sends an HTTP request to server. Server sends an HTTP response with Set-Cookie: foo=bar to client. Client sends an HTTP request with Cookie: foo=bar to server. Server sends an HTTP response to client. Thus, the typical scenario involves two complete HTTP transactions. In step 2, the server is asking the client to return a particular cookie in future requests. In step 3, if the user's preferences are set to allow cookies, and if the cookie is valid for this particular request, the browser requests the resource again but includes the cookie. Now my question is: why can you not determine whether a user's preferences are set to allow cookies during the first request?

    Read the article

  • Setting up group disk quotas

    - by Ray
    I am hoping to get some advice in setting up disk quotas. So, I know about: Adding usrquota and grpquota on to /etc/fstab for the file systems that need to be managed. Using edquota to assign disk quotas to users. However, I need to do the last step for multiple users and edquota seems to be a bit troublesome. One solution that I have found is that I can do: sudo edquota -u foo -p bar. This will copy the disk quota of bar to user foo. I was wondering if this is the best solution? I tried setting up group disk quotas but they don't seem to be working. Are group quotas meant to help in the assignment of the same quota to multiple users? Or are they suppose to give a total limit to a set of users? For example, if users A, B, C are in group X then assigning a quota of 20 GB gives each user 20 GB or does it give 20 GB to the entire group X to divide up? I'm interested in doing the former, but not the latter. Right now, I've assigned group disk quotas and they aren't working. So, I guess it is due to my misunderstanding of group disk quotas... My problem is I want to easily give the same quota to multiple users; any suggestions on the best way to do this out of what I've tried above or anything else I may not have thought of? Thank you!

    Read the article

  • Design in "mixed" languages: object oriented design or functional programming?

    - by dema80
    In the past few years, the languages I like to use are becoming more and more "functional". I now use languages that are a sort of "hybrid": C#, F#, Scala. I like to design my application using classes that correspond to the domain objects, and use functional features where this makes coding easier, more concise and safer (especially when operating on collections or when passing functions). However, the two worlds "clash" when it comes to design patterns. The specific example I faced recently is the Observer pattern. I want a producer to notify some other code (the "consumers/observers", say a DB storage, a logger, and so on) when an item is created or changed. I initially did it "functionally" like this: producer.foo(item => { updateItemInDb(item); insertLog(item) }) // calls the function passed as argument as an item is processed But I'm now wondering if I should use a more "OO" approach: interface IItemObserver { onNotify(Item) } class DBObserver : IItemObserver ... class LogObserver: IItemObserver ... producer.addObserver(new DBObserver) producer.addObserver(new LogObserver) producer.foo() //calls observers in a loop What are the pros and cons of the two approaches? I once heard an FP guru say that design patterns are there only because of the limitations of the language, and that's why there are so few in functional languages. Maybe this could be an example of it? EDIT: In my particular scenario I don't need it, but... how would you implement removal and addition of "observers" in the functional way? (I.e. how would you implement all the functionality in the pattern?) Just by passing a new function, for example?
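
    For the EDIT question, a minimal C# sketch (an assumption, not code from the question) of what addition and removal look like in the functional style: observers are just delegates held in a list, so subscribing and unsubscribing are ordinary collection operations. The Item type and the handler names are the hypothetical ones used above.

        using System;
        using System.Collections.Generic;

        public class Item { /* the domain object from the question */ }

        public class Producer
        {
            private readonly List<Action<Item>> _observers = new List<Action<Item>>();

            // "Subscribing" is remembering a function...
            public void AddObserver(Action<Item> observer) { _observers.Add(observer); }

            // ...and "unsubscribing" is forgetting it again.
            public void RemoveObserver(Action<Item> observer) { _observers.Remove(observer); }

            public void Foo(Item item)
            {
                // process the item, then notify every registered function
                foreach (var observer in _observers)
                    observer(item);
            }
        }

        // Usage: a lambda and a method on a DBObserver/LogObserver class are both
        // just Action<Item>, so the OO and functional styles converge here.
        // producer.AddObserver(item => { updateItemInDb(item); insertLog(item); });
        // producer.AddObserver(dbObserver.OnNotify);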

    Read the article

  • Extension objects pattern

    - by voroninp
    In this MSDN Magazine article, Peter Vogel describes the Extension Objects pattern. What is not clear is whether extensions can later be implemented by client code residing in a separate assembly. And if so, how in that case can the extension get access to private members of the object being extended? I quite often need to set different access levels for different classes. Sometimes I really need descendants not to have access to a member while a separate class does (good old friend classes). Currently I solve this in C# by exposing callback properties in the interface of the external class and setting them with private methods. This also allows adjusting access: read-only or read|write, depending on the desired interface. class Parent { private int foo; public void AcceptExternal(IFoo external) { external.GetFooCallback = () => this.foo; } } interface IFoo { Func<int> GetFooCallback {get;set;} } Another way is to explicitly implement a particular interface. But I suspect more approaches exist.
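
    As an illustration of the callback approach, here is how an extension living in a separate client assembly might consume it; the FooLogger class is hypothetical and assumes only the Parent and IFoo types shown above.

        using System;

        // Hypothetical extension in a client assembly: it implements IFoo and
        // receives a read-only window onto Parent's private field via the callback.
        public class FooLogger : IFoo
        {
            public Func<int> GetFooCallback { get; set; }

            public void LogFoo()
            {
                // Reads Parent.foo without Parent ever exposing it publicly.
                Console.WriteLine("foo is currently {0}", GetFooCallback());
            }
        }

        // Wiring it up:
        // var parent = new Parent();
        // var logger = new FooLogger();
        // parent.AcceptExternal(logger);  // Parent hands out the private accessor
        // logger.LogFoo();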

    Read the article

  • Dynamic Query Generation : suggestion for better approaches

    - by Gaurav Parmar
    I am currently designing functionality in my web application where a verified user can execute queries he wishes from a predefined set of queries, with the WHERE clause varying as per the user's choice. For example, table ABC contains the following template query called SecretReport: "Select def as FOO, ghi as BAR from MNO where ". SecretReport can have parameters XYZ and ILP. XYZ can have the values 1 and 2, and ILP can have 3 and 4, so if the user chooses ILP=3, he will get the result of the following query on his screen: "Select def as FOO, ghi as BAR from MNO where ILP=3". The user is also allowed permutations of XYZ / ILP. My initial thought is that the user will be shown a list of report names, and each report will have parameters and corresponding values. But this approach, although technically simple, does not appear intuitive. I would like to extend this functionality to a more generic level, such that the user can choose a table and query based on his requirements. Of course we do not want the end user to take complete control of the DB, but only of the tables and fields that are relevant to him. At present we are defining what is relevant in the code, but I want the Admin to take over this functionality so that he can decide what is relevant and expose the same to the user. On the user's side it should be intuitive what is available to him and what queries he can form. Please share your thoughts on the most user-friendly way to provide this feature to the end user.
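
    One hedged sketch of the safety aspect: the admin-maintained metadata acts as a whitelist of parameters, and user-supplied values only ever reach the database as bind parameters. The names (SecretReport's base SQL, the XYZ/ILP parameters) are the hypothetical ones above, and SQL Server's ADO.NET provider is assumed purely for illustration.

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;

        public class ReportTemplate
        {
            public string BaseSql { get; set; }                     // "Select def as FOO, ghi as BAR from MNO where "
            public HashSet<string> AllowedParameters { get; set; }  // { "XYZ", "ILP" }
        }

        public static class ReportQueryBuilder
        {
            public static SqlCommand Build(ReportTemplate template,
                IDictionary<string, object> userChoices, SqlConnection conn)
            {
                var cmd = conn.CreateCommand();
                var clauses = new List<string>();

                foreach (var choice in userChoices)
                {
                    // Column names come from the admin's whitelist; values are bound.
                    if (!template.AllowedParameters.Contains(choice.Key))
                        throw new ArgumentException("Parameter not allowed: " + choice.Key);

                    clauses.Add(choice.Key + " = @" + choice.Key);
                    cmd.Parameters.AddWithValue("@" + choice.Key, choice.Value);
                }

                cmd.CommandText = template.BaseSql + string.Join(" AND ", clauses);
                return cmd;
            }
        }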

    Read the article

  • Installing packages into local directory?

    - by Gili
    I'd like to install software packages, similar to apt-get install <foo> but: Without sudo, and Into a local directory The purpose of this exercise is to isolate independent builds in my continuous integration server. I don't mind compiling from source, if that's what it takes, but obviously I'd prefer the simplest approach possible. I tried apt-get source --compile <foo> as mentioned here but I can't get it working for packages like autoconf. I get the following error: dpkg-checkbuilddeps: Unmet build dependencies: help2man I've got help2man compiled in a local directory, but I don't know how to inform apt-get of that. Any ideas? UPDATE: I found an answer that almost works at http://askubuntu.com/a/350/23678. The problem with chroot is that it requires sudo. The problem with apt-get source is that I don't know how to resolve dependencies. I must say, chroot looks very appealing. Is there an equivalent command that doesn't require sudo?

    Read the article

  • How to pass dynamic information between a form and a service? [closed]

    - by qminator
    I have a design problem and hopefully the brain trust that is Stack Exchange can help. I have a generic form which loads a dataset and displays it. It never has direct knowledge of what the dataset contains, but it can pass it to a service for manipulation (via an OnClick event, for example). However, the form might need to alter its behaviour based on the manipulation by the service. Example: the service realises this dataset requires the user to send an email, and needs to send an instruction to the form to open up a mail form. My idea is this: I'm thinking about passing back some type of key/name dictionary, filled with commands which the service requires. They could then be interpreted by the form without it needing to reference something specific. Example: if the service decides that the dataset needs to refresh, it would send back a key/name pair, and I might even be able to chain commands, refreshing the dataset and sending a mail: Refresh / "Foo" Mail / "[email protected]" The form would reference an action explicitly (Refresh or Mail) but not the instructions themselves. Is this a valid idea or am I wasting time?
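
    A minimal sketch of the key/name idea, with hypothetical Refresh and Mail actions: the service returns an ordered list of commands, and the form only switches on action names it already understands.

        using System.Collections.Generic;

        public class FormCommand
        {
            public string Action { get; set; }    // e.g. "Refresh", "Mail"
            public string Argument { get; set; }  // e.g. a dataset name or a mail address
        }

        public class GenericForm
        {
            public void ApplyCommands(IEnumerable<FormCommand> commands)
            {
                foreach (var command in commands)
                {
                    switch (command.Action)
                    {
                        case "Refresh":
                            // reload the dataset named in command.Argument
                            break;
                        case "Mail":
                            // open the mail form pre-addressed to command.Argument
                            break;
                        default:
                            // unknown instruction: ignore or log it; the form stays generic
                            break;
                    }
                }
            }
        }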

    Read the article

  • Is this JS code a good way for defining class with private methods?

    - by tigrou
    I was recently browsing an open source JavaScript project. The project is a straight port from another project in the C language. It mostly uses static methods, packed together in classes. Most classes are implemented using this pattern: Foo = (function () { var privateField = "bar"; var publicField = "bar"; function publicMethod() { console.log('this is public'); } function privateMethod() { console.log('this is private'); } return { publicMethod : publicMethod, publicField : publicField }; })(); This was the first time I saw private methods implemented that way. I perfectly understand how it works, using an anonymous function. Here is my question: is this pattern a good practice? What are the actual limitations or caveats? Usually I declare my JavaScript classes like this: Foo = new function () { var privateField = "test"; this.publicField = "test"; this.publicMethod = function() { console.log('this method is public'); privateMethod(); } function privateMethod() { console.log('this method is private'); } }; Other than syntax, is there any difference from the pattern shown above?

    Read the article

  • Does (should?) changing the URI scheme name change the semantics?

    - by Doug
    If we take: http://example.com/foo is it fair to say that: ftp://example.com/foo .. points to the same resource, just using a different mechanism for resolving it (and of course possibly a different representation, but perhaps not)? This came to light in a discussion we were having surrounding some internal tooling with Git. We have to process some Git repositories, and they come to use as "git@{authority}/{path}" , however the library we're using to interface with them doesn't support the git protocol. I suggested that we should make the service robust in of that it tries to use HTTP or SSH, in essence, discovering what protocols/schemes are supported for resolving the repository at {path} under each {authority}. This was met with some criticism: "We don't know if that's the same repository". My response was: "It had better be!" Looking at RFC 3986, I see this excerpt: URI "resolution" is the process of determining an access mechanism and the appropriate parameters necessary to dereference a URI; this resolution may require several iterations. To use that access mechanism to perform an action on the URI's resource is to "dereference" the URI. Which makes me think that the resolution process is permitted to try different protocols, because: Although many URI schemes are named after protocols, this does not imply that use of these URIs will result in access to the resource via the named protocol. The only concern I have, I guess, is that I only see reference to the notion of changing protocols when it comes to traversing relationships: it is possible for a single set of hypertext documents to be simultaneously accessible and traversable via each of the "file", "http", and "ftp" schemes if the documents refer to each other with relative references. I'm inclined to think I'm wrong in my initial beliefs, because the Normalization and Comparison section of said RFC doesn't mention any way of treating two URIs as equivalent if they use different schemes. It seems like schemes named/based on IP protocols ought to have this notion, at least?

    Read the article

  • Which order to define getters and setters in? [closed]

    - by N.N.
    Is there a best practice for the order to define getters and setters in? There seems to be two practices: getter/setter pairs first getters, then setters (or the other way around) To illuminate the difference here is a Java example of getter/setter pairs: public class Foo { private int var1, var2, var3; public int getVar1() { return var1; } public void setVar1(int var1) { this.var1 = var1; } public int getVar2() { return var2; } public void setVar2(int var2) { this.var2 = var2; } public int getVar3() { return var3; } public void setVar3(int var3) { this.var3 = var3; } } And here is a Java example of first getters, then setters: public class Foo { private int var1, var2, var3; public int getVar1() { return var1; } public int getVar2() { return var2; } public int getVar3() { return var3; } public void setVar1(int var1) { this.var1 = var1; } public void setVar2(int var2) { this.var2 = var2; } public void setVar3(int var3) { this.var3 = var3; } } I think the latter type of ordering is clearer both in code and in class diagrams but I do not know if that is enough to rule out the other type of ordering.

    Read the article

  • C# 4.0: Named And Optional Arguments

    - by Paulo Morgado
    As part of the co-evolution effort of C# and Visual Basic, C# 4.0 introduces Named and Optional Arguments. First of all, let’s clarify what are arguments and parameters: Method definition parameters are the input variables of the method. Method call arguments are the values provided to the method parameters. In fact, the C# Language Specification states the following on §7.5: The argument list (§7.5.1) of a function member invocation provides actual values or variable references for the parameters of the function member. Given the above definitions, we can state that: Parameters have always been named and still are. Parameters have never been optional and still aren’t. Named Arguments Until now, the way the C# compiler matched method call definition arguments with method parameters was by position. The first argument provides the value for the first parameter, the second argument provides the value for the second parameter, and so on and so on, regardless of the name of the parameters. If a parameter was missing a corresponding argument to provide its value, the compiler would emit a compilation error. For this call: Greeting("Mr.", "Morgado", 42); this method: public void Greeting(string title, string name, int age) will receive as parameters: title: “Mr.” name: “Morgado” age: 42 What this new feature allows is to use the names of the parameters to identify the corresponding arguments in the form: name:value Not all arguments in the argument list must be named. However, all named arguments must be at the end of the argument list. The matching between arguments (and the evaluation of its value) and parameters will be done first by name for the named arguments and than by position for the unnamed arguments. This means that, for this method definition: public static void Method(int first, int second, int third) this call declaration: int i = 0; Method(i, third: i++, second: ++i); will have this code generated by the compiler: int i = 0; int CS$0$0000 = i++; int CS$0$0001 = ++i; Method(i, CS$0$0001, CS$0$0000); which will give the method the following parameter values: first: 2 second: 2 third: 0 Notice the variable names. Although invalid being invalid C# identifiers, they are valid .NET identifiers and thus avoiding collision between user written and compiler generated code. Besides allowing to re-order of the argument list, this feature is very useful for auto-documenting the code, for example, when the argument list is very long or not clear, from the call site, what the arguments are. Optional Arguments Parameters can now have default values: public static void Method(int first, int second = 2, int third = 3) Parameters with default values must be the last in the parameter list and its value is used as the value of the parameter if the corresponding argument is missing from the method call declaration. For this call declaration: int i = 0; Method(i, third: ++i); will have this code generated by the compiler: int i = 0; int CS$0$0000 = ++i; Method(i, 2, CS$0$0000); which will give the method the following parameter values: first: 1 second: 2 third: 1 Because, when method parameters have default values, arguments can be omitted from the call declaration, this might seem like method overloading or a good replacement for it, but it isn’t. 
Methods like this: public static StreamReader OpenTextFile( string path, Encoding encoding = null, bool detectEncoding = true, int bufferSize = 1024) allow their calls to be written like this: OpenTextFile("foo.txt", Encoding.UTF8); OpenTextFile("foo.txt", Encoding.UTF8, bufferSize: 4096); OpenTextFile( bufferSize: 4096, path: "foo.txt", detectEncoding: false); However, the compiler handles default values like constant fields, taking the value and using it instead of a reference to the value. So, like constant fields, methods with parameters with default values are exposed publicly (and remember that internal members might be publicly accessible – InternalsVisibleToAttribute). If such methods are publicly accessible and used by another assembly, those values will be hard-coded in the calling code and, if the called assembly has its default values changed, they won’t be picked up by already compiled code. At first glance, I thought that using optional arguments for “badly” written code was great, but that the ability to write code like that was just pure evil. But then I realized that, since I use private constant fields, it’s OK to use default parameter values on privately accessed methods.
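
A small example of the versioning point (the method and values are hypothetical, not from the post): the caller bakes the default in at compile time, so changing it later in the called assembly has no effect until the caller is recompiled.

    // Library assembly, version 1:
    public static class Texts
    {
        public static string Prefix(string name, string title = "Mr.")
        {
            return title + " " + name;
        }
    }

    // Client assembly, compiled against version 1:
    //     Texts.Prefix("Morgado");
    // is compiled as if it were:
    //     Texts.Prefix("Morgado", "Mr.");
    //
    // If the library later ships version 2 with title = "Dr.", already compiled
    // clients keep passing "Mr." until they are themselves recompiled.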

    Read the article

  • A C# implementation of the CallStream pattern

    - by Bertrand Le Roy
    Dusan published this interesting post a couple of weeks ago about a novel JavaScript chaining pattern: http://dbj.org/dbj/?p=514 It’s similar to many existing patterns, but the syntax is extraordinarily terse and it provides a new form of friction-free, plugin-less extensibility mechanism. Here’s a JavaScript example from Dusan’s post: CallStream("#container") (find, "div") (attr, "A", 1) (css, "color", "#fff") (logger); The interesting thing here is that the functions that are being passed as the first argument are arbitrary, they don’t need to be declared as plug-ins. Compare that with a rough jQuery equivalent that could look something like this: $.fn.logger = function () { /* ... */ } $("selector") .find("div") .attr("A", 1) .css("color", "#fff") .logger(); There is also the “each” method in jQuery that achieves something similar, but its syntax is a little more verbose. Of course, that this pattern can be expressed so easily in JavaScript owes everything to the extraordinary way functions are treated in that language, something Douglas Crockford called “the very best part of JavaScript”. One of the first things I thought while reading Dusan’s post was how I could adapt that to C#. After all, with Lambdas and delegates, C# also has its first-class functions. And sure enough, it works really really well. After about ten minutes, I was able to write this: CallStreamFactory.CallStream (p => Console.WriteLine("Yay!")) (Dump, DateTime.Now) (DumpFooAndBar, new { Foo = 42, Bar = "the answer" }) (p => Console.ReadKey()); Where the Dump function is: public static void Dump(object options) { Console.WriteLine(options.ToString()); } And DumpFooAndBar is: public static void DumpFooAndBar(dynamic options) { Console.WriteLine("Foo is {0} and bar is {1}.", options.Foo, options.Bar); } So how does this work? Well, it really is very simple. And not. Let’s say it’s not a lot of code, but if you’re like me you might need an Advil after that. First, I defined the signature of the CallStream method as follows: public delegate CallStream CallStream (Action<object> action, object options = null); The delegate define a call stream as something that takes an action (a function of the options) and an optional options object and that returns a delegate of its own type. Tricky, but that actually works, a delegate can return its own type. Then I wrote an implementation of that delegate that calls the action and returns itself: public static CallStream CallStream (Action<object> action, object options = null) { action(options); return CallStream; } Pretty nice, eh? Well, yes and no. What we are doing here is to execute a sequence of actions using an interesting novel syntax. But for this to be actually useful, you’d need to build a more specialized call stream factory that comes with some sort of context (like Dusan did in JavaScript). 
For example, you could write the following alternate delegate signature that takes a string and returns itself: public delegate StringCallStream StringCallStream(string message); And then write the following call stream (notice the currying): public static StringCallStream CreateDumpCallStream(string dumpPath) { StringCallStream str = null; var dump = File.AppendText(dumpPath); dump.AutoFlush = true; str = s => { dump.WriteLine(s); return str; }; return str; } (I know, I’m not closing that stream; sure; bad, bad Bertrand) Finally, here’s how you use it: CallStreamFactory.CreateDumpCallStream(@".\dump.txt") ("Wow, this really works.") (DateTime.Now.ToLongTimeString()) ("And that is all."); Next step would be to combine this contextual implementation with the one that takes an action parameter and do some really fun stuff. I’m only scratching the surface here. This pattern could reveal itself to be nothing more than a gratuitous mind-bender or there could be applications that we hardly suspect at this point. In any case, it’s a fun new construct. Or is this nothing new? You tell me… Comments are open :)
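
As a guess at the “combine” step mentioned above (a sketch under that assumption, not the post’s code), a contextual call stream can close over its state while keeping the (action, options) shape of the original CallStream delegate:

    public static class ContextualCallStreams
    {
        public static CallStream Create(System.IO.TextWriter output)
        {
            CallStream stream = null;
            stream = (action, options) =>
            {
                output.WriteLine("before: {0}", options);
                action(options);           // run the caller's step
                output.WriteLine("after: {0}", options);
                return stream;             // keep the chain going
            };
            return stream;
        }
    }

    // Usage, reusing the Dump method from the post:
    // ContextualCallStreams.Create(Console.Out)
    //     (Dump, DateTime.Now)
    //     (p => Console.WriteLine("done"));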

    Read the article

  • Towards an F# .NET Reflector add-in

    - by CliveT
    When I had the opportunity to spent some time during Red Gate's recent "down tools" week on a project of my choice, the obvious project was an F# add-in for Reflector . To be honest, this was a bit of a misnomer as the amount of time in the designated week for coding was really less than three days, so it was always unlikely that very much progress would be made in such a small amount of time (and that certainly proved to be the case), but I did learn some things from the experiment. Like lots of problems, one useful technique is to take examples, get them to work, and then generalise to get something that works across the board. Unfortunately, I didn't have enough time to do the last stage. The obvious first step is to take a few function definitions, starting with the obvious hello world, moving on to a non-recursive function and finishing with the ubiquitous recursive Fibonacci function. let rec printMessage message  =     printfn  message let foo x  =    (x + 1) let rec fib x  =     if (x >= 2) then (fib (x - 1) + fib (x - 2)) else 1 The major problem in decompiling these simple functions is that Reflector has an in-memory object model that is designed to support object-oriented languages. In particular it has a return statement that allows function bodies to finish early. I used some of the in-built functionality to take the IL and produce an in-memory object model for the language, but then needed to write a transformer to push the return statements to the top of the tree to make it easy to render the code into a functional language. This tree transform works in some scenarios, but not in others where we simply regenerate code that looks more like CPS style. The next thing to get working was library level bindings of values where these values are calculated at runtime. let x = [1 ; 2 ; 3 ; 4] let y = List.map  (fun x -> foo x) x The way that this is translated into a set of classes for the underlying platform means that the code needs to follow references around, from the property exposing the calculated value to the class in which the code for generating the value is embedded. One of the strongest selling points of functional languages is the algebraic datatypes, which allow definitions via standard mathematical-style inductive definitions across the union cases. type Foo =     | Something of int     | Nothing type 'a Foo2 =     | Something2 of 'a     | Nothing2 Such a definition is compiled into a number of classes for the cases of the union, which all inherit from a class representing the type itself. It wasn't too hard to get such a de-compilation happening in the cases I tried. What did I learn from this? Firstly, that there are various bits of functionality inside Reflector that it would be useful for us to allow add-in writers to access. In particular, there are various implementations of the Visitor pattern which implement algorithms such as calculating the number of references for particular variables, and which perform various substitutions which could be more generally useful to add-in writers. I hope to do something about this at some point in the future. Secondly, when you transform a functional language into something that runs on top of an object-based platform, you lose some fidelity in the representation. The F# compiler leaves attributes in place so that tools can tell which classes represent classes from the source program and which are there for purposes of the implementation, allowing the decompiler to regenerate these constructs again. 
However, decompilation technology is a long way from being able to take unannotated IL and transform it into a program in a different language. For a simple function definition, like Fibonacci, I could write a simple static function and have it come out in F# as the same function, but it would be practically impossible to take a mass of class definitions and have a decompiler translate it automatically into an F# algebraic data type. What have we got out of this? Some data on the feasibility of implementing an F# decompiler inside Reflector, though it's hard at the moment to say how long this would take to do. The work we did is included the 6.5 EAP for Reflector that you can get from the EAP forum. All things considered though, it was a useful way to gain more familiarity with the process of writing an add-in and understand difficulties other add-in authors might experience. If you'd like to check out a video of Down Tools Week, click here.

    Read the article

  • Helping install mrcwa and solve problems with f2py in Ubuntu 14.04 LTS

    - by user288160
    I am sorry if this is the wrong section but I am starting to get desperate, please someone help me... I need to install the program mrcwa-20080820 (sourceforge.net/projects/mrcwa/) because a summer project that I am involved. I need to use it together with anaconda (store.continuum.io/cshop/anaconda/), I already installed Anaconda and apparently it is working. When I type: conda --version I got the expected answer. conda 3.5.2 If I tried to import numpy or scipy with python or simple type f2py there are no errors. So far so good. But when I tried to install this program sudo python setup.py install I got these errors: running install running build sh: 1: f2py: not found cp: cannot stat ‘mrcwaf.so’: No such file or directory running build_py running install_lib running install_egg_info Removing /usr/local/lib/python2.7/dist-packages/mrcwa-20080820.egg-info Writing /usr/local/lib/python2.7/dist-packages/mrcwa-20080820.egg-info Obs: I am trying to use intel fortran 64-bits and Ubuntu 14.04 LTS. So I was checking f2py and tried to execute the program hello world f2py -c -m hello hello.f from here: cens.ioc.ee/projects/f2py2e/index.html#usage and I had some problems too: running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building extension "hello" sources f2py options: [] f2py:> /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.c creating /tmp/tmpf8P4Y3/src.linux-x86_64-2.7 Reading fortran codes... Reading file 'hello.f' (format:fix,strict) Post-processing... Block: hello Block: foo Post-processing (stage 2)... Building modules... Building module "hello"... Constructing wrapper function "foo"... foo(a) Wrote C/API module "hello" to file "/tmp/tmpf8P4Y3/src.linux-x86_64-2.7 /hellomodule.c" adding '/tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.c' to sources. adding '/tmp/tmpf8P4Y3/src.linux-x86_64-2.7' to include_dirs. 
copying /home/felipe/.local/lib/python2.7/site-packages/numpy/f2py/src/fortranobject.c -> /tmp/tmpf8P4Y3/src.linux-x86_64-2.7 copying /home/felipe/.local/lib/python2.7/site-packages/numpy/f2py/src/fortranobject.h -> /tmp/tmpf8P4Y3/src.linux-x86_64-2.7 build_src: building npy-pkg config files running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize IntelFCompiler Found executable /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort customize LaheyFCompiler Could not locate executable lf95 customize PGroupFCompiler Could not locate executable pgfortran customize AbsoftFCompiler Could not locate executable f90 Could not locate executable f77 customize NAGFCompiler customize VastFCompiler customize CompaqFCompiler Could not locate executable fort customize IntelItaniumFCompiler customize IntelEM64TFCompiler customize IntelEM64TFCompiler customize IntelEM64TFCompiler using build_ext building 'hello' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC creating /tmp/tmpf8P4Y3/tmp creating /tmp/tmpf8P4Y3/tmp/tmpf8P4Y3 creating /tmp/tmpf8P4Y3/tmp/tmpf8P4Y3/src.linux-x86_64-2.7 compile options: '-I/tmp/tmpf8P4Y3/src.linux-x86_64-2.7 -I/home/felipe/.local/lib/python2.7/site-packages/numpy/core/include -I/home/felipe/anaconda/include/python2.7 -c' gcc: /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.c In file included from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1761:0, from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:17, from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4, from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.h:13, from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.c:17: /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp] #warning "Using deprecated NumPy API, disable it by " \ ^ gcc: /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.c In file included from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1761:0, from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:17, from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4, from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.h:13, from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.c:2: /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp] #warning "Using deprecated NumPy API, disable it by " \ ^ compiling Fortran sources Fortran f77 compiler: /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -FI -fPIC -xhost -openmp -fp-model strict Fortran f90 compiler: /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -FR -fPIC -xhost -openmp -fp-model strict Fortran fix compiler: /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -FI -fPIC -xhost -openmp -fp-model strict compile options: '-I/tmp/tmpf8P4Y3/src.linux-x86_64-2.7 -I/home/felipe/.local /lib/python2.7/site-packages/numpy/core/include 
-I/home/felipe/anaconda/include/python2.7 -c' ifort:f77: hello.f /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -shared -shared -nofor_main /tmp/tmpf8P4Y3/tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.o /tmp/tmpf8P4Y3 /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.o /tmp/tmpf8P4Y3/hello.o -L/home/felipe /anaconda/lib -lpython2.7 -o ./hello.so Removing build directory /tmp/tmpf8P4Y3 Please help me I am new in ubuntu and python. I really need this program, my advisor is waiting an answer. Thank you very much, Felipe Oliveira.

    Read the article

  • Basic WCF Unit Testing

    - by Brian
    Coming from someone who loves the KISS method, I was surprised to find that I was making something entirely too complicated. I know, shocker right? Now I'm no unit testing ninja, and not really a WCF ninja either, but had a desire to test service calls without a) going to a database, or b) making sure that the entire WCF infrastructure was tip top. Who does? It's not the environment I want to test, just the logic I’ve written to ensure there aren't any side effects. So, for the K.I.S.S. method: Assuming that you're using a WCF service library (you are using service libraries correct?), it's really as easy as referencing the service library, then building out some stubs for bunking up data. The service contract We’ll use a very basic service contract, just for getting and updating an entity. I’ve used the default “CompositeType” that is in the template, handy only for examples like this. I’ve added an Id property and overridden ToString and Equals. [ServiceContract] public interface IMyService { [OperationContract] CompositeType GetCompositeType(int id); [OperationContract] CompositeType SaveCompositeType(CompositeType item); [OperationContract] CompositeTypeCollection GetAllCompositeTypes(); } The implementation When I implement the service, I want to be able to send known data into it so I don’t have to fuss around with database access or the like. To do this, I first have to create an interface for my data access: public interface IMyServiceDataManager { CompositeType GetCompositeType(int id); CompositeType SaveCompositeType(CompositeType item); CompositeTypeCollection GetAllCompositeTypes(); } For the purposes of this we can ignore our implementation of the IMyServiceDataManager interface inside of the service. Pretend it uses LINQ to Entities to map its data, or maybe it goes old school and uses EntLib to talk to SQL. Maybe it talks to a tape spool on a mainframe on the third floor. It really doesn’t matter. That’s the point. So here’s what our service looks like in its most basic form: public CompositeType GetCompositeType(int id) { //sanity checks if (id == 0) throw new ArgumentException("id cannot be zero."); return _dataManager.GetCompositeType(id); } public CompositeType SaveCompositeType(CompositeType item) { return _dataManager.SaveCompositeType(item); } public CompositeTypeCollection GetAllCompositeTypes() { return _dataManager.GetAllCompositeTypes(); } But what about the datamanager? The constructor takes care of that. I don’t want to expose any testing ability in release (or the ability for someone to swap out my datamanager) so this is what we get: IMyServiceDataManager _dataManager; public MyService() { _dataManager = new MyServiceDataManager(); } #if DEBUG public MyService(IMyServiceDataManager dataManager) { _dataManager = dataManager; } #endif The Stub Now it’s time for the rubber to meet the road… Like most guys that ever talk about unit testing here’s a sample that is painting in *very* broad strokes. The important part however is that within the test project, I’ve created a bunk (unit testing purists would say stub I believe) object that implements my IMyServiceDataManager so that I can deal with known data. 
    Here it is: internal class FakeMyServiceDataManager : IMyServiceDataManager { internal FakeMyServiceDataManager() { Collection = new CompositeTypeCollection(); Collection.AddRange(new CompositeTypeCollection { new CompositeType { Id = 1, BoolValue = true, StringValue = "foo 1", }, new CompositeType { Id = 2, BoolValue = false, StringValue = "foo 2", }, new CompositeType { Id = 3, BoolValue = true, StringValue = "foo 3", }, }); } CompositeTypeCollection Collection { get; set; } #region IMyServiceDataManager Members public CompositeType GetCompositeType(int id) { if (id <= 0) return null; return Collection.SingleOrDefault(m => m.Id == id); } public CompositeType SaveCompositeType(CompositeType item) { var existing = Collection.SingleOrDefault(m => m.Id == item.Id); if (null != existing) { Collection.Remove(existing); } if (item.Id == 0) { item.Id = Collection.Count > 0 ? Collection.Max(m => m.Id) + 1 : 1; } Collection.Add(item); return item; } public CompositeTypeCollection GetAllCompositeTypes() { return Collection; } #endregion } So it’s tough to see in this example why any of this is necessary, but in a real-world application you would/should/could be applying much more logic within your service implementation. This all serves to ensure that, between refactorings etc., it doesn’t send sparking cogs all about or let the blue smoke out. Here’s a simple test that brings it all home (remember, broad strokes): [TestMethod] public void MyService_GetCompositeType_ExpectedValues() { FakeMyServiceDataManager fake = new FakeMyServiceDataManager(); MyService service = new MyService(fake); CompositeType expected = fake.GetCompositeType(1); CompositeType actual = service.GetCompositeType(1); Assert.AreEqual<CompositeType>(expected, actual, "Objects are not equal. Expected: {0}; Actual: {1};", expected, actual); } Summary That’s really all there is to it. You could use software x or framework y to do the exact same thing, but in my case I just didn’t really feel like it. This speaks volumes to my not-yet-ninja unit testing prowess.
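
    One more test in the same broad strokes, exercising the service's own sanity check (the id == 0 guard in GetCompositeType) rather than the pass-through to the data manager; ExpectedException is assumed to be the MSTest attribute here.

        [TestMethod]
        [ExpectedException(typeof(ArgumentException))]
        public void MyService_GetCompositeType_ZeroId_Throws()
        {
            // the fake is never reached: the service rejects the id before delegating
            FakeMyServiceDataManager fake = new FakeMyServiceDataManager();
            MyService service = new MyService(fake);

            service.GetCompositeType(0);
        }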

    Read the article

  • Rebuilding CoasterBuzz, Part IV: Dependency injection, it's what's for breakfast

    - by Jeff
    (Repost from my personal blog.) This is another post in a series about rebuilding one of my Web sites, which has been around for 12 years. I hope to relaunch soon. More: Part I: Evolution, and death to WCF Part II: Hot data objects Part III: The architecture using the "Web stack of love" If anything generally good for the craft has come out of the rise of ASP.NET MVC, it's that people are more likely to use dependency injection, and loosely couple the pieces parts of their applications. A lot of the emphasis on coding this way has been to facilitate unit testing, and that's awesome. Unit testing makes me feel a lot less like a hack, and a lot more confident in what I'm doing. Dependency injection is pretty straight forward. It says, "Given an instance of this class, I need instances of other classes, defined not by their concrete implementations, but their interfaces." Probably the first place a developer exercises this in when having a class talk to some kind of data repository. For a very simple example, pretend the FooService has to get some Foo. It looks like this: public class FooService {    public FooService(IFooRepository fooRepo)    {       _fooRepo = fooRepo;    }    private readonly IFooRepository _fooRepo;    public Foo GetMeFoo()    {       return _fooRepo.FooFromDatabase();    } } When we need the FooService, we ask the dependency container to get it for us. It says, "You'll need an IFooRepository in that, so let me see what that's mapped to, and put it in there for you." Why is this good for you? It's good because your FooService doesn't know or care about how you get some foo. You can stub out what the methods and properties on a fake IFooRepository might return, and test just the FooService. I don't want to get too far into unit testing, but it's the most commonly cited reason to use DI containers in MVC. What I wanted to mention is how there's another benefit in a project like mine, where I have to glue together a bunch of stuff. For example, when I have someone sign up for a new account on CoasterBuzz, I'm actually using POP Forums' new account mailer, which composes a bunch of text that includes a link to verify your account. The thing is, I want to use custom text and some other logic that's specific to CoasterBuzz. To accomplish this, I make a new class that inherits from the forum's NewAccountMailer, and override some stuff. Easy enough. Then I use Ninject, the DI container I'm using, to unbind the forum's implementation, and substitute my own. Ninject uses something called a NinjectModule to bind interfaces to concrete implementations. The forum has its own module, and then the CoasterBuzz module is loaded second. The CB module has two lines of code to swap out the mailer implementation: Unbind<PopForums.Email.INewAccountMailer>(); Bind<PopForums.Email.INewAccountMailer>().To<CbNewAccountMailer>(); Piece of cake! Now, when code asks the DI container for an INewAccountMailer, it gets my custom implementation instead. This is a lot easier to deal with than some of the alternatives. I could do some copy-paste, but then I'm not using well-tested code from the forum. I could write stuff from scratch, but then I'm throwing away a bunch of logic I've already written (in this case, stuff around e-mail, e-mail settings, mail delivery failures). There are other places where the DI container comes in handy. For example, CoasterBuzz does a number of custom things with user profiles, and special content for paid members. 
It uses the forum as the core piece to managing users, so I can ask the container to get me instances of classes that do user lookups, for example, and have zero care about how the forum handles database calls, configuration, etc. What a great world to live in, compared to ten years ago. Sure, the primary interest in DI is around the "separation of concerns" and facilitating unit testing, but as your library grows and you use more open source, it starts to be the glue that pulls everything together.
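
For context, the Unbind/Bind pair above normally lives inside a NinjectModule; a rough sketch of what the CoasterBuzz module might look like (the module class name is assumed, the mailer types are from the post):

    using Ninject.Modules;

    public class CoasterBuzzModule : NinjectModule
    {
        public override void Load()
        {
            // Swap the forum's mailer implementation for the site-specific one.
            Unbind<PopForums.Email.INewAccountMailer>();
            Bind<PopForums.Email.INewAccountMailer>().To<CbNewAccountMailer>();
        }
    }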

    Read the article

  • How can I Setup overloaded method invocations in Moq?

    - by arootbeer
    I'm trying to mock a mapping interface IMapper: public interface IMapper<TFoo, TBar> { TBar Map(TFoo foo); TFoo Map(TBar bar); } In my test, I'm setting the mock mapper up to expect an invocation of each (around an NHibernate update operation): //... _mapperMock.Setup(m => m.Map(fooMock.Object)).Returns(barMock.Object); _mapperMock.Setup(m => m.Map(barMock.Object)).Returns(fooMock.Object); //... However, when the second Map invocation is made, the mapper mock throws because it is only expecting a single invocation. Watching the mapper mock during setup at runtime, I can look see the Map(TFoo foo) overload get registered, and then see it get replaced when the Map(TBar bar) overload is set up. Is this a problem with the way Moq handles setup, or is there a different syntax I need to use in this case?

    Read the article
