Search Results

Search found 5129 results on 206 pages for 'regina foo'.

Page 105/206 | < Previous Page | 101 102 103 104 105 106 107 108 109 110 111 112  | Next Page >

  • Have you changed your coding style recently? It wasn't hard, was it?

    - by Ernelli
    I used to write code in C-like languages using the Allman style with regard to the position of braces: void foo(int bar) { if(bar) { //... } else return; //... } For the last two years I have been working mostly in JavaScript, and when we adopted jslint as part of our QA process, I had to adapt to the Crockford way of doing things. So I had to change my coding style to: function foo(bar) { if (bar) { //... } else { return; } //... } Now, apart from comparing a C/C++ example with JavaScript, I must say that my JavaScript-Crockford coding style has now spread into my C/C++/Java coding when I revise old projects and work on code in those languages, which for example have no problem with single-line statements or ambiguous newline insertion. I used to consider the latter format very awkward. I have never had any problems adapting my coding style to the one chosen by my predecessors, except for when I was a junior developer, mostly the sole developer on legacy projects, and the first thing I did was change the indentation style. But now, after a couple of months, I consider the Allman style a little too spacious and feel more comfortable with the K&R-like style. Have you changed your coding style during your career?

    Read the article

  • C IDE for Mac needed

    - by StasM
    I'm looking for a heavy-duty C/C++ IDE for Mac that would satisfy the following criteria: Work with big projects (~5000 files, some of them 100 KB) efficiently. Have good navigation, both file-based and symbol-based - i.e. "go to file", "go to function" etc. with autocompletion support. Support for "go to declaration/definition" for symbols - functions, structures, etc. Auto-adding new files in folders already in the project. Support for code completion for values, function names, etc. At least rudimentary CPP macro understanding - i.e. given #define foo bar, foo() should take me either to the #define or to the actual bar. I understand full CPP parsing may be hard, but I hope for at least the obvious cases. Support for displaying parameter names/types by function name, preferably integrated with the previous item, for functions defined in the project. Support for libc would be nice too :) (optional) Cross-project search support (I can manage with grep -r if everything else works) (optional) SVN support, at least to some extent (update, commit, mark updated) Is there such an editor around? Free would be nice, but I'm ready to part with some money if it's good enough. I'm using TextMate now but I'm not satisfied with it. I tried Xcode, but it doesn't seem to be able to handle a large project - it just crashed...

    Read the article

  • Commands don't have permission when using absolute path

    - by Markos
    I have folders set up this way: /srv/samba/video getfacl /srv/samba/video # file: srv/samba/video # owner: root # group: nogroup user::rwx group::--- group:sambaclients:rwx group:deluge:rwx mask::rwx other::--- default:user::rwx default:group::--- default:group:sambaclients:rwx default:group:deluge:rwx default:mask::rwx default:other::--- That means user deluge has rwx on folder /srv/samba/video. However, when running commands as user deluge, I am getting weird permission errors. When in folder /srv/samba/video, sudo -u deluge mkdir foo works flawlessly. But when using an absolute path, sudo -u deluge mkdir /srv/samba/video/foo gives permission denied. When running sudo -u deluge id, I get the output uid=113(deluge) gid=124(deluge) skupiny=124(deluge) which shows that user deluge is indeed in group deluge. Also, the behavior was the same when I gave the permissions to user deluge directly, not just to group deluge. When executing as a non-system user, it does work. The reason I want to use absolute paths is that I am using an automatically triggered post-download script which extracts some files into the folder. I have spent way too many hours trying to solve this problem myself. mkdir isn't the only command that fails; touch does the same thing, so I suspect it's not mkdir's fault. If you need more info, I will try to put it in here, just ask. Thanks in advance. Edit: It seems that the root of the problem is the acl set on the parent folder /srv/samba, which indeed does not grant permissions to deluge (but does not deny them either). getfacl /srv/samba # file: srv/samba # owner: root # group: nogroup user::rwx group::--- group:sambaclients:rwx mask::rwx other::--- default:user::rwx default:group::--- default:group:sambaclients:rwx default:mask::rwx default:other::--- If I grant the permission on this folder as well, it suddenly starts to work, so I believe the acl on /srv/samba is somehow denying permissions to deluge. So the question is: how do I set acls on both /srv/samba and /srv/samba/video so that sambaclients have access to the whole of /srv/samba and its subdirectories, and deluge has access only to /srv/samba/video and its subdirectories?

    Read the article

  • Is there a way to add unique items to an array without doing a ton of comparisons?

    - by hydroparadise
    Please bear with me; I want this to be as language-agnostic as possible because of the languages I am working with (one of which is a language called PowerOn). However, most languages support for loops and arrays. Say I have the following list in an array: 0x 0 Foo 1x 1 Bar 2x 0 Widget 3x 1 Whatsit 4x 0 Foo 5x 1 Bar Anything with a 1 should be uniquely added to another array, with the following result: 0x 1 Bar 1x 1 Whatsit Keep in mind this is a very elementary example. In reality, I am dealing with tens of thousands of elements in the old list. Here is what I have so far. Pseudocode: For each element in oldlist For each element in newlist Compare If value of oldlist.element equals newlist.element, break new list loop If reached end of newlist with nothing equal from oldlist, add value from old list to new list End End Is there a better way of doing this? Algorithmically, is there any room for improvement? And as a bonus question, what is the O notation for this type of algorithm (if there is one)?
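
    A quick note on the bonus question plus a sketch: the nested-loop approach is O(n*m), roughly O(n^2) when the unique list grows with the input, while a hash-based lookup makes each membership test O(1) on average, so the whole pass is about O(n). Below is a minimal illustration in C#; the tuple list and field names are hypothetical stand-ins for the asker's data, not PowerOn code.

        using System;
        using System.Collections.Generic;

        class UniqueFilterSketch
        {
            static void Main()
            {
                // Hypothetical stand-in for the old list: (flag, value) pairs.
                var oldList = new[]
                {
                    (Flag: 0, Value: "Foo"),
                    (Flag: 1, Value: "Bar"),
                    (Flag: 0, Value: "Widget"),
                    (Flag: 1, Value: "Whatsit"),
                    (Flag: 0, Value: "Foo"),
                    (Flag: 1, Value: "Bar")
                };

                var seen = new HashSet<string>();     // O(1) average membership test
                var newList = new List<string>();

                foreach (var item in oldList)
                {
                    // Only flag == 1 items are candidates; HashSet.Add returns false for duplicates.
                    if (item.Flag == 1 && seen.Add(item.Value))
                        newList.Add(item.Value);
                }

                newList.ForEach(Console.WriteLine);   // Bar, Whatsit
            }
        }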

    Read the article

  • Why unhandled exceptions are useful

    - by Simon Cooper
    It’s the bane of most programmers’ lives – an unhandled exception causes your application or webapp to crash, an ugly dialog gets displayed to the user, and they come complaining to you. Then, somehow, you need to figure out what went wrong. Hopefully, you’ve got a log file, or some other way of reporting unhandled exceptions (obligatory employer plug: SmartAssembly reports an application’s unhandled exceptions straight to you, along with the entire state of the stack and variables at that point). If not, you have to try and replicate it yourself, or do some psychic debugging to try and figure out what’s wrong. However, it’s good that the program crashed. Or, more precisely, it is correct behaviour. An unhandled exception in your application means that, somewhere in your code, there is an assumption that you made that is actually invalid. Coding assumptions Let me explain a bit more. Every method, every line of code you write, depends on implicit assumptions that you have made. Take this following simple method, that copies a collection to an array and includes an item if it isn’t in the collection already, using a supplied IEqualityComparer: public static T[] ToArrayWithItem( ICollection<T> coll, T obj, IEqualityComparer<T> comparer) { // check if the object is in collection already // using the supplied comparer foreach (var item in coll) { if (comparer.Equals(item, obj)) { // it's in the collection already // simply copy the collection to an array // and return it T[] array = new T[coll.Count]; coll.CopyTo(array, 0); return array; } } // not in the collection // copy coll to an array, and add obj to it // then return it T[] array = new T[coll.Count+1]; coll.CopyTo(array, 0); array[array.Length-1] = obj; return array; } What’s all the assumptions made by this fairly simple bit of code? coll is never null comparer is never null coll.CopyTo(array, 0) will copy all the items in the collection into the array, in the order defined for the collection, starting at the first item in the array. The enumerator for coll returns all the items in the collection, in the order defined for the collection comparer.Equals returns true if the items are equal (for whatever definition of ‘equal’ the comparer uses), false otherwise comparer.Equals, coll.CopyTo, and the coll enumerator will never throw an exception or hang for any possible input and any possible values of T coll will have less than 4 billion items in it (this is a built-in limit of the CLR) array won’t be more than 2GB, both on 32 and 64-bit systems, for any possible values of T (again, a limit of the CLR) There are no threads that will modify coll while this method is running and, more esoterically: The C# compiler will compile this code to IL according to the C# specification The CLR and JIT compiler will produce machine code to execute the IL on the user’s computer The computer will execute the machine code correctly That’s a lot of assumptions. Now, it could be that all these assumptions are valid for the situations this method is called. But if this does crash out with an exception, or crash later on, then that shows one of the assumptions has been invalidated somehow. An unhandled exception shows that your code is running in a situation which you did not anticipate, and there is something about how your code runs that you do not understand. Debugging the problem is the process of learning more about the new situation and how your code interacts with it. When you understand the problem, the solution is (usually) obvious. 
The solution may be a one-line fix, the rewrite of a method or class, or a large-scale refactoring of the codebase, but whatever it is, the fix for the crash will incorporate the new information you’ve gained about your own code, along with the modified assumptions. When code is running with an assumption or invariant it depended on broken, then the result is ‘undefined behaviour’. Anything can happen, up to and including formatting the entire disk or making the user’s computer sentient and start doing a good impression of Skynet. You might think that those can’t happen, but at Halting problem levels of generality, as soon as an assumption the code depended on is broken, the program can do anything. That is why it’s important to fail-fast and stop the program as soon as an invariant is broken, to minimise the damage that is done. What does this mean in practice? To start with, document and check your assumptions. As with most things, there is a level of judgement required. How you check and document your assumptions depends on how the code is used (that’s some more assumptions you’ve made), how likely it is a method will be passed invalid arguments or called in an invalid state, how likely it is the assumptions will be broken, how expensive it is to check the assumptions, and how bad things are likely to get if the assumptions are broken. Now, some assumptions you can assume unless proven otherwise. You can safely assume the C# compiler, CLR, and computer all run the method correctly, unless you have evidence of a compiler, CLR or processor bug. You can also assume that interface implementations work the way you expect them to; implementing an interface is more than simply declaring methods with certain signatures in your type. The behaviour of those methods, and how they work, is part of the interface contract as well. For example, for members of a public API, it is very important to document your assumptions and check your state before running the bulk of the method, throwing ArgumentException, ArgumentNullException, InvalidOperationException, or another exception type as appropriate if the input or state is wrong. For internal and private methods, it is less important. If a private method expects collection items in a certain order, then you don’t necessarily need to explicitly check it in code, but you can add comments or documentation specifying what state you expect the collection to be in at a certain point. That way, anyone debugging your code can immediately see what’s wrong if this does ever become an issue. You can also use DEBUG preprocessor blocks and Debug.Assert to document and check your assumptions without incurring a performance hit in release builds. On my coding soapbox… A few pet peeves of mine around assumptions. Firstly, catch-all try blocks: try { ... } catch { } A catch-all hides exceptions generated by broken assumptions, and lets the program carry on in an unknown state. Later, an exception is likely to be generated due to further broken assumptions due to the unknown state, causing difficulties when debugging as the catch-all has hidden the original problem. It’s much better to let the program crash straight away, so you know where the problem is. You should only use a catch-all if you are sure that any exception generated in the try block is safe to ignore. That’s a pretty big ask! Secondly, using as when you should be casting. Doing this: (obj as IFoo).Method(); or this: IFoo foo = obj as IFoo; ... 
    foo.Method(); when you should be doing this: ((IFoo)obj).Method(); or this: IFoo foo = (IFoo)obj; ... foo.Method(); There’s an assumption here that obj will always implement IFoo. If it doesn’t, then by using as instead of a cast you’ve turned an obvious InvalidCastException at the point of the cast that will probably tell you what type obj actually is, into a non-obvious NullReferenceException at some later point that gives you no information at all. If you believe obj is always an IFoo, then say so in code! Let it fail fast if not; then it’s far easier to figure out what’s wrong. Thirdly, document your assumptions. If an algorithm depends on a non-trivial relationship between several objects or variables, then say so. A single-line comment will do. Don’t leave it up to whoever’s debugging your code after you to figure it out. Conclusion It’s better to crash out and fail fast when an assumption is broken. If the program doesn’t crash immediately, there are likely to be further crashes along the way that hide the original problem. Or, even worse, your program will be running in an undefined state, where anything can happen. Unhandled exceptions aren’t good per se, but they give you some very useful information about your code that you didn’t know before. And that can only be a good thing.
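
    A minimal C# sketch of the advice above, reusing the post's ToArrayWithItem example (the split into a public wrapper and a private helper is my own illustration, not the author's code): public entry points validate their arguments and fail fast with descriptive exceptions, while internal assumptions are recorded with Debug.Assert so they cost nothing in release builds.

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        static class GuardSketch
        {
            public static T[] ToArrayWithItem<T>(ICollection<T> coll, T obj, IEqualityComparer<T> comparer)
            {
                // Public API: validate the documented assumptions and fail fast with a descriptive exception.
                if (coll == null) throw new ArgumentNullException(nameof(coll));
                if (comparer == null) throw new ArgumentNullException(nameof(comparer));
                return CopyWithItem(coll, obj, comparer);
            }

            private static T[] CopyWithItem<T>(ICollection<T> coll, T obj, IEqualityComparer<T> comparer)
            {
                // Private helper: callers have already validated the arguments; record that
                // assumption so a debug build still fails fast if it is ever broken.
                Debug.Assert(coll != null && comparer != null, "callers must validate arguments first");

                foreach (var item in coll)
                {
                    if (comparer.Equals(item, obj))
                    {
                        var copy = new T[coll.Count];   // already present: plain copy
                        coll.CopyTo(copy, 0);
                        return copy;
                    }
                }

                var array = new T[coll.Count + 1];      // not present: copy and append
                coll.CopyTo(array, 0);
                array[array.Length - 1] = obj;
                return array;
            }
        }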

    Read the article

  • Best practice with pyGTK and Builder XML files

    - by Phoenix87
    I usually design GUIs with Glade, thus producing a series of Builder XML files (one such file for each application window). Now my idea is to define a class, e.g. MainWindow, that inherits from gtk.Window and that implements all the signal handlers for the application main window. The problem is that when I retrieve the main window from the containing XML file, it is returned as a gtk.Window instance. The solution I have adopted so far is the following: I have defined a class "Window" in the following way class Window(): def __init__(self, win_name): builder = gtk.Builder() self.builder = builder builder.add_from_file("%s.glade" % win_name) self.window = builder.get_object(win_name) builder.connect_signals(self) def run(self): return self.window.run() def show_all(self): return self.window.show_all() def destroy(self): return self.window.destroy() def child(self, name): return self.builder.get_object(name) In the actual application code I have then defined a new class, say MainWindow, that inherits from Window, and that looks like class Main(Window): def __init__(self): Window.__init__(self, "main") ### Signal handlers ##################################################### def on_mnu_file_quit_activated(self, widget, data = None): ... The string "main" refers to the main window, called "main", which resides in the XML Builder file "main.glade" (this is a sort of convention I decided to adopt). So the question is: how can I inherit from gtk.Window directly, by defining, say, the class Foo(gtk.Window), and recast the return value of builder.get_object(win_name) to Foo?

    Read the article

  • Self-referencing anonymous closures: is JavaScript incomplete?

    - by Tom Auger
    Does the fact that anonymous self-referencing function closures are so prevalent in JavaScript suggest that JavaScript is an incomplete specification? We see so much of this: (function () { /* do cool stuff */ })(); and I suppose everything is a matter of taste, but does this not look like a kludge, when all you want is a private namespace? Couldn't JavaScript implement packages and proper classes? Compare to ActionScript 3, also based on ECMAScript, where you get package com.tomauger { import bar; class Foo { public function Foo(){ // etc... } public function show(){ // show stuff } public function hide(){ // hide stuff } // etc... } } Contrast that with the convolutions we perform in JavaScript (this, from the jQuery plugin authoring documentation): (function( $ ){ var methods = { init : function( options ) { // THIS }, show : function( ) { // IS }, hide : function( ) { // GOOD }, update : function( content ) { // !!! } }; $.fn.tooltip = function( method ) { // Method calling logic if ( methods[method] ) { return methods[ method ].apply( this, Array.prototype.slice.call( arguments, 1 )); } else if ( typeof method === 'object' || ! method ) { return methods.init.apply( this, arguments ); } else { $.error( 'Method ' + method + ' does not exist on jQuery.tooltip' ); } }; })( jQuery ); I appreciate that this question could easily degenerate into a rant about preferences and programming styles, but I'm actually very curious to hear how you seasoned programmers feel about this and whether it feels natural, like learning the different idiosyncrasies of a new language, or kludgy, like a workaround for some basic programming-language features that are just not implemented?

    Read the article

  • Working with multiple interfaces on a single mock.

    - by mehfuzh
    Today, I will cover a very simple topic, which can be useful when we want to mock different interfaces on a single mock object. Our target interface is simple and looks like: public interface IFoo : IDisposable { void Do(); } As we can see, our target interface extends IDisposable; if we implemented it in a class, the language rules would require us to implement that member as well. Whether or not there are more complex cases, we want to ensure that, rather than needing an extra call (..As()) or extra constructs to prepare it for us, we do it in the simplest way possible. Therefore, keeping that in mind, first we create a mock of IFoo: var foo = Mock.Create<IFooDispose>(); Then, as we are interested in IDisposable, we simply do: var iDisposable = foo as IDisposable; Finally, we proceed with our existing mock code. In the current context, I will check whether the Dispose method has invoked our mock code successfully: bool called = false; Mock.Arrange(() => iDisposable.Dispose()).DoInstead(() => called = true); iDisposable.Dispose(); Assert.True(called); Further, we assert our expectation as follows: Mock.Assert(() => iDisposable.Dispose(), Occurs.Once()); Hopefully that will help a bit; stay tuned. Enjoy!!
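
    For reference, here is the same flow assembled into one self-contained block. This is only a sketch: Mock, Assert and Occurs are assumed to come from the mocking framework used in the post, and IFoo is used directly where the post's snippet happens to say IFooDispose.

        using System;

        public interface IFoo : IDisposable
        {
            void Do();
        }

        public class MultipleInterfaceMockingSketch
        {
            public void DisposeGoesThroughTheSameMock()
            {
                var foo = Mock.Create<IFoo>();         // one mock object...
                var iDisposable = foo as IDisposable;  // ...viewed through its second interface

                bool called = false;
                Mock.Arrange(() => iDisposable.Dispose()).DoInstead(() => called = true);

                iDisposable.Dispose();

                Assert.True(called);                                       // the arranged body ran
                Mock.Assert(() => iDisposable.Dispose(), Occurs.Once());   // and exactly once
            }
        }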

    Read the article

  • .Net oracle parameter order

    - by jkrebsbach
    Using the ODAC (Oracle Data Access Components) downloaded from Oracle to talk to a handful of Oracle DBs - I was putting together my DAL to update the DB, and things weren't working as I hoped - UPDATE foo SET bar = :P_BAR WHERE bap = :P_BAP I assign my parameters - objCmd.Parameters.Add(objBap); objCmd.Parameters.Add(objBar);   Execute the update command - int result = objCmd.ExecuteNonQuery() and the result is zero! ...  Is my filter incorrect? SELECT count(*) FROM foo WHERE bap = :P_BAP ...result is one... Is my new value incorrect?  Am I using Char instead of Varchar somewhere and need an RTRIM?  Is there a transaction getting involved?  An error thrown and not caught? The answer: order of parameters. The order in which parameters are added to the OracleCommand object must match the order in which they are referenced in the SQL statement. I was adding the parameters for the WHERE clause before adding the SET value parameters, and for that reason, although no error was being thrown, no value was updated either. Flip the parameter collection around to match the order of the parameters in the SQL statement, and ExecuteNonQuery() is back to returning the number of rows affected.
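
    A hedged sketch of the fix using ODP.NET-style types (constructor overloads may differ by provider version, and BindByName is offered only as an alternative if your version exposes it): add the parameters in the same order the placeholders appear in the SQL text, SET value first, WHERE value second.

        using Oracle.DataAccess.Client;  // the ODAC / ODP.NET provider the post refers to

        static class UpdateSketch
        {
            public static int UpdateFoo(OracleConnection conn, string bar, string bap)
            {
                using (var cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "UPDATE foo SET bar = :P_BAR WHERE bap = :P_BAP";

                    // Parameters added in the order the placeholders appear in the SQL:
                    // the SET value first, the WHERE value second.
                    cmd.Parameters.Add(new OracleParameter("P_BAR", bar));
                    cmd.Parameters.Add(new OracleParameter("P_BAP", bap));

                    // Alternative, if your provider version exposes it: cmd.BindByName = true;
                    return cmd.ExecuteNonQuery();   // number of rows affected
                }
            }
        }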

    Read the article

  • Confused about javascript module pattern implementation

    - by Damon
    I have a class written on a project I'm working on that I've been told is using the module pattern, but it's doing things a little differently than the examples I've seen. It basically takes this form: (function ($, document, window, undefined) { var module = { foo : bar, aMethod : function (arg) { className.bMethod(arg); }, bMethod : function (arg) { console.log('spoons'); } }; window.ajaxTable = ajaxTable; })(jQuery, document, window); I get what's going on here. But I'm not sure how this relates to most of the definitions I've seen of the module (or revealing?) module pattern. like this one from briancray var module = (function () { // private variables and functions var foo = 'bar'; // constructor var module = function () { }; // prototype module.prototype = { constructor: module, something: function () { } }; // return module return module; })(); var my_module = new module(); Is the first example basically like the second except everything is in the constructor? I'm just wrapping my head around patterns and the little things at the beginnings and endings always make me not sure what I should be doing.

    Read the article

  • Keyring no longer prompts for password when SSH-ing

    - by Lie Ryan
    I remember that I used to be able to do ssh [email protected] and have a prompt asks me for a password to unlock the keyring for the whole GNOME session so subsequent ssh wouldn't need to enter the keyring password any longer (not quite sure if this is in Ubuntu or other distro). But nowadays doing ssh [email protected] would ask me, in the terminal, my keyring password every single time; which defeats the purpose of using SSH keys. I checked $ cat /etc/pam.d/lightdm | grep keyring auth optional pam_gnome_keyring.so session optional pam_gnome_keyring.so auto_start which looks fine, and $ pgrep keyring 1784 gnome-keyring-d so the keyring daemon is alive. I finally found that SSH_AUTH_SOCK variable (and GNOME_KEYRING_CONTROL and GPG_AGENT_INFO and GNOME_KEYRING_PID) are not being set properly. What is the proper way to set this variable and why aren't they being set in my environment (i.e. shouldn't they be set in default install)? I guess I can set it in .bashrc, but then the variables would only be defined in bash session, while that is fine for ssh, I believe the other environment variables are necessary for GUI apps to use keyring.

    Read the article

  • Doubt related to PHP Cookies

    - by Richa
    Hey guys! I have a doubt, and I would appreciate it if you could clear it up. Cookies: what are cookies? When described as entities, which is how cookies are often referenced in conversation, you can easily be misled. Cookies are actually just an extension of the HTTP protocol. Specifically, there are two additional HTTP headers: Set-Cookie and Cookie. The operation of these cookies is best described by the following series of events: the client sends an HTTP request to the server; the server sends an HTTP response with Set-Cookie: foo=bar to the client; the client sends an HTTP request with Cookie: foo=bar to the server; the server sends an HTTP response to the client. Thus, the typical scenario involves two complete HTTP transactions. In step 2, the server is asking the client to return a particular cookie in future requests. In step 3, if the user's preferences are set to allow cookies, and if the cookie is valid for this particular request, the browser requests the resource again but includes the cookie. Now my question is: why can't you determine whether a user's preferences are set to allow cookies during the first request?

    Read the article

  • Setting up group disk quotas

    - by Ray
    I am hoping to get some advice on setting up disk quotas. I know about: adding usrquota and grpquota to /etc/fstab for the file systems that need to be managed, and using edquota to assign disk quotas to users. However, I need to do the last step for multiple users, and edquota seems to be a bit troublesome. One solution I have found is that I can do sudo edquota -u foo -p bar, which copies the disk quota of bar to user foo. I was wondering whether this is the best solution. I tried setting up group disk quotas but they don't seem to be working. Are group quotas meant to help in assigning the same quota to multiple users? Or are they supposed to give a total limit to a set of users? For example, if users A, B, and C are in group X, does assigning a quota of 20 GB give each user 20 GB, or does it give 20 GB to the entire group X to divide up? I'm interested in the former, but not the latter. Right now, I've assigned group disk quotas and they aren't working, so I guess it is due to my misunderstanding of group disk quotas... My problem is that I want to easily give the same quota to multiple users; any suggestions on the best way to do this, out of what I've tried above or anything else I may not have thought of? Thank you!

    Read the article

  • Design in "mixed" languages: object oriented design or functional programming?

    - by dema80
    In the past few years, the languages I like to use have become more and more "functional". I now use languages that are a sort of "hybrid": C#, F#, Scala. I like to design my applications using classes that correspond to the domain objects, and use functional features where this makes coding easier, more concise and safer (especially when operating on collections or when passing functions). However, the two worlds "clash" when it comes to design patterns. The specific example I faced recently is the Observer pattern. I want a producer to notify some other code (the "consumers/observers", say a DB storage, a logger, and so on) when an item is created or changed. I initially did it "functionally" like this: producer.foo(item => { updateItemInDb(item); insertLog(item) }) // calls the function passed as argument as an item is processed But I'm now wondering if I should use a more "OO" approach: interface IItemObserver { onNotify(Item) } class DBObserver : IItemObserver ... class LogObserver: IItemObserver ... producer.addObserver(new DBObserver) producer.addObserver(new LogObserver) producer.foo() //calls observer in a loop What are the pros and cons of the two approaches? I once heard an FP guru say that design patterns are there only because of the limitations of the language, and that's why there are so few in functional languages. Maybe this could be an example of it? EDIT: In my particular scenario I don't need it, but... how would you implement removal and addition of "observers" in the functional way? (I.e. how would you implement all the functionality in the pattern?) Just passing a new function, for example?
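
    One hedged way to answer the edit's question in C# (a sketch, not the only reasonable design): keep the "observers" as a list of delegates, and have the subscribe call return an unsubscribe action, which gives you add and remove without declaring an IItemObserver interface.

        using System;
        using System.Collections.Generic;

        public class Item { public string Name; }

        public class Producer
        {
            private readonly List<Action<Item>> observers = new List<Action<Item>>();

            // "Adding an observer" is just storing a function; the returned delegate removes it again.
            public Action Subscribe(Action<Item> observer)
            {
                observers.Add(observer);
                return () => observers.Remove(observer);
            }

            public void Foo()
            {
                var item = new Item { Name = "example" };
                // Notify whoever is subscribed right now (copy to tolerate unsubscription during the loop).
                foreach (var notify in observers.ToArray())
                    notify(item);
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                var producer = new Producer();
                Action unsubscribeLog = producer.Subscribe(item => Console.WriteLine("log " + item.Name));
                producer.Subscribe(item => Console.WriteLine("save " + item.Name));

                producer.Foo();     // both observers run
                unsubscribeLog();   // removal is just invoking the returned action
                producer.Foo();     // only the "save" observer runs
            }
        }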

    Read the article

  • Dynamic Query Generation : suggestion for better approaches

    - by Gaurav Parmar
    I am currently designing a functionality in my Web Application where a verified user of the application can execute queries he wishes from a predefined set of queries, with the where clause varying as per the user's choice. For example, table ABC contains the following template query called SecretReport: "Select def as FOO, ghi as BAR from MNO where ". SecretReport can have the parameters XYZ and ILP. XYZ can have the values 1 and 2, and ILP can have 3 and 4, so if the user chooses ILP=3, he will get the result of the following query on his screen: "Select def as FOO, ghi as BAR from MNO where ILP=3". The user is also allowed permutations of XYZ / ILP. My initial thought is that the user will be shown a list of report names, and each report will have parameters and corresponding values. But this approach, although technically simple, does not appear intuitive. I would like to extend this functionality to a more generic level, such that the user can choose a table and query based on his requirements. Of course, we do not want the end user to take complete control of the DB, but only to see tables and fields that are relevant to him. At present we are defining what is relevant in the code, but I want the admin to take over this functionality so that he can decide what is relevant and expose the same to the user. On the user's side it should be intuitive what is available to him and what queries he can form. Please share your thoughts on the most user-friendly way to provide this feature to the end user.
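
    One hedged C# sketch of how the admin-defined part and the user-chosen part could stay separated (all class and property names are made up, and the "@" placeholder prefix depends on the ADO.NET provider): the admin's template whitelists the allowed parameter names, and the user's choices are bound as parameters rather than concatenated into the SQL.

        using System;
        using System.Collections.Generic;
        using System.Data.Common;

        // Admin-defined: a named template plus the parameters users may supply for it.
        public class ReportTemplate
        {
            public string Name;                        // e.g. "SecretReport"
            public string Sql;                         // e.g. "Select def as FOO, ghi as BAR from MNO where "
            public HashSet<string> AllowedParameters;  // e.g. { "XYZ", "ILP" }
        }

        public static class ReportRunner
        {
            public static DbCommand Build(DbConnection conn, ReportTemplate template,
                                          IDictionary<string, object> userChoices)
            {
                var cmd = conn.CreateCommand();
                var clauses = new List<string>();

                foreach (var choice in userChoices)
                {
                    // Only parameter names the admin whitelisted ever reach the SQL text.
                    if (!template.AllowedParameters.Contains(choice.Key))
                        throw new ArgumentException("Parameter not allowed: " + choice.Key);

                    clauses.Add(choice.Key + " = @" + choice.Key);

                    var p = cmd.CreateParameter();     // the user's value is bound, never concatenated
                    p.ParameterName = "@" + choice.Key;
                    p.Value = choice.Value;
                    cmd.Parameters.Add(p);
                }

                cmd.CommandText = template.Sql + string.Join(" AND ", clauses);
                return cmd;
            }
        }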

    Read the article

  • Extension objects pattern

    - by voroninp
    In this MSDN Magazine article Peter Vogel describes the Extension Objects pattern. What is not clear is whether extensions can be implemented later by client code residing in a separate assembly, and if so, how in this case an extension can get access to private members of the object being extended. I quite often need to set different access levels for different classes. Sometimes I really need descendants not to have access to a member while a separate class does (good old friend classes). Currently I solve this in C# by exposing callback properties in the interface of the external class and setting them with private methods. This also allows adjusting access: read-only or read|write, depending on the desired interface. class Parent { private int foo; public void AcceptExternal(IFoo external) { external.GetFooCallback = () => this.foo; } } interface IFoo { Func<int> GetFooCallback {get;set;} } Another way is to explicitly implement a particular interface. But I suspect more approaches exist.
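
    Expanding on the callback idea above, here is a small sketch of how an extension object compiled in another assembly could be given only the access Parent chooses to hand out. The type names are invented for illustration; this is not Vogel's exact pattern.

        using System;
        using System.Collections.Generic;

        // Contract that extensions (possibly in another assembly) build against.
        public interface IFooExtension
        {
            Func<int> GetFooCallback { get; set; }  // injected by Parent; drop the setter to make it read-only
            void Run();
        }

        public class Parent
        {
            private int foo = 42;
            private readonly List<IFooExtension> extensions = new List<IFooExtension>();

            public void AcceptExternal(IFooExtension external)
            {
                // The extension receives exactly the access Parent grants: read-only access to foo.
                external.GetFooCallback = () => this.foo;
                extensions.Add(external);
            }

            public void RunExtensions()
            {
                foreach (var ext in extensions)
                    ext.Run();
            }
        }

        // Could live in a different assembly; it still never touches Parent's private state directly.
        public class PrintingExtension : IFooExtension
        {
            public Func<int> GetFooCallback { get; set; }
            public void Run() { Console.WriteLine("foo is " + GetFooCallback()); }
        }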

    Read the article

  • Installing packages into local directory?

    - by Gili
    I'd like to install software packages, similar to apt-get install <foo> but: Without sudo, and Into a local directory The purpose of this exercise is to isolate independent builds in my continuous integration server. I don't mind compiling from source, if that's what it takes, but obviously I'd prefer the simplest approach possible. I tried apt-get source --compile <foo> as mentioned here but I can't get it working for packages like autoconf. I get the following error: dpkg-checkbuilddeps: Unmet build dependencies: help2man I've got help2man compiled in a local directory, but I don't know how to inform apt-get of that. Any ideas? UPDATE: I found an answer that almost works at http://askubuntu.com/a/350/23678. The problem with chroot is that it requires sudo. The problem with apt-get source is that I don't know how to resolve dependencies. I must say, chroot looks very appealing. Is there an equivalent command that doesn't require sudo?

    Read the article

  • How to pass dynamic information between a form and a service? [closed]

    - by qminator
    I have a design problem and hopefully the braintrust that is Stack Exchange can help. I have a generic form which loads a dataset and displays it. It never has direct knowledge of what the dataset contains, but it can pass it to a service for manipulation (via an OnClick event, for example). However, the form might need to alter its behaviour based on the manipulation by the service. Example: the service realises this dataset requires the user to send an email, and needs to send an instruction to the form to open up a mail form. My idea is this: I'm thinking about passing back some type of key/name dictionary, filled with commands which the service requires. They could then be interpreted by the form without it needing to reference something specific. Example: if the service decides that the dataset needs to refresh, it would send back a key/name pair, and I might even be able to chain commands, refreshing the dataset and sending a mail: Refresh / "Foo" Mail / "[email protected]" The form would reference an action explicitly (Refresh or Mail) but not the instructions themselves. Is this a valid idea or am I wasting time?
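
    For what it's worth, a hedged C# sketch of the key/name idea (all type and member names are hypothetical): the service answers with a list of commands, and the form knows only the action names, never the meaning of the dataset.

        using System;
        using System.Collections.Generic;

        // One instruction from the service back to the form.
        public class UiCommand
        {
            public string Name;      // e.g. "Refresh" or "Mail"
            public string Argument;  // e.g. a dataset name or an e-mail address
        }

        public interface IDatasetService
        {
            // The service inspects the dataset and answers with the commands the form should run.
            IList<UiCommand> Process(object dataset);
        }

        public class GenericForm
        {
            private readonly IDatasetService service;
            public GenericForm(IDatasetService service) { this.service = service; }

            public void OnClick(object dataset)
            {
                foreach (var command in service.Process(dataset))
                {
                    // The form references only the actions, never what the dataset means.
                    switch (command.Name)
                    {
                        case "Refresh": RefreshDataset(command.Argument); break;
                        case "Mail":    OpenMailForm(command.Argument);   break;
                        default:        throw new NotSupportedException(command.Name);
                    }
                }
            }

            private void RefreshDataset(string name)  { /* reload and rebind */ }
            private void OpenMailForm(string address) { /* show a mail dialog */ }
        }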

    Read the article

  • Is this JS code a good way for defining class with private methods?

    - by tigrou
    I was recently browsing an open source JavaScript project. The project is a straight port from another project in the C language. It mostly uses static methods, packed together in classes. Most classes are implemented using this pattern: Foo = (function () { var privateField = "bar"; var publicField = "bar"; function publicMethod() { console.log('this is public'); } function privateMethod() { console.log('this is private'); } return { publicMethod : publicMethod, publicField : publicField }; })(); This was the first time I saw private methods implemented that way. I perfectly understand how it works, using an anonymous function. Here is my question: is this pattern good practice? What are the actual limitations or caveats? Usually I declare my JavaScript classes like this: Foo = new function () { var privateField = "test"; this.publicField = "test"; this.publicMethod = function() { console.log('this method is public'); privateMethod(); } function privateMethod() { console.log('this method is private'); } }; Other than syntax, is there any difference from the pattern shown above?

    Read the article

  • Does (should?) changing the URI scheme name change the semantics?

    - by Doug
    If we take: http://example.com/foo is it fair to say that: ftp://example.com/foo .. points to the same resource, just using a different mechanism for resolving it (and of course possibly a different representation, but perhaps not)? This came to light in a discussion we were having surrounding some internal tooling with Git. We have to process some Git repositories, and they come to us as "git@{authority}/{path}", however the library we're using to interface with them doesn't support the git protocol. I suggested that we should make the service robust in that it tries to use HTTP or SSH, in essence discovering what protocols/schemes are supported for resolving the repository at {path} under each {authority}. This was met with some criticism: "We don't know if that's the same repository". My response was: "It had better be!" Looking at RFC 3986, I see this excerpt: URI "resolution" is the process of determining an access mechanism and the appropriate parameters necessary to dereference a URI; this resolution may require several iterations. To use that access mechanism to perform an action on the URI's resource is to "dereference" the URI. Which makes me think that the resolution process is permitted to try different protocols, because: Although many URI schemes are named after protocols, this does not imply that use of these URIs will result in access to the resource via the named protocol. The only concern I have, I guess, is that I only see reference to the notion of changing protocols when it comes to traversing relationships: it is possible for a single set of hypertext documents to be simultaneously accessible and traversable via each of the "file", "http", and "ftp" schemes if the documents refer to each other with relative references. I'm inclined to think I'm wrong in my initial beliefs, because the Normalization and Comparison section of said RFC doesn't mention any way of treating two URIs as equivalent if they use different schemes. It seems like schemes named/based on IP protocols ought to have this notion, at least?
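
    As a small aside, here is what the "same resource, different access mechanism" assumption looks like as a C# sketch: it deliberately compares only authority and path and ignores the scheme. Whether that equivalence is actually safe is exactly what the question is about.

        using System;

        static class SchemeAgnosticCompare
        {
            // True when the two URIs differ only by scheme (e.g. http vs ftp vs git).
            static bool SameAuthorityAndPath(Uri a, Uri b)
            {
                return Uri.Compare(a, b,
                                   UriComponents.Host | UriComponents.Path,
                                   UriFormat.Unescaped,
                                   StringComparison.OrdinalIgnoreCase) == 0;
            }

            static void Main()
            {
                var http = new Uri("http://example.com/foo");
                var ftp = new Uri("ftp://example.com/foo");
                Console.WriteLine(SameAuthorityAndPath(http, ftp));  // True - but only by convention
            }
        }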

    Read the article

  • Which order to define getters and setters in? [closed]

    - by N.N.
    Is there a best practice for the order to define getters and setters in? There seems to be two practices: getter/setter pairs first getters, then setters (or the other way around) To illuminate the difference here is a Java example of getter/setter pairs: public class Foo { private int var1, var2, var3; public int getVar1() { return var1; } public void setVar1(int var1) { this.var1 = var1; } public int getVar2() { return var2; } public void setVar2(int var2) { this.var2 = var2; } public int getVar3() { return var3; } public void setVar3(int var3) { this.var3 = var3; } } And here is a Java example of first getters, then setters: public class Foo { private int var1, var2, var3; public int getVar1() { return var1; } public int getVar2() { return var2; } public int getVar3() { return var3; } public void setVar1(int var1) { this.var1 = var1; } public void setVar2(int var2) { this.var2 = var2; } public void setVar3(int var3) { this.var3 = var3; } } I think the latter type of ordering is clearer both in code and in class diagrams but I do not know if that is enough to rule out the other type of ordering.

    Read the article

  • C# 4.0: Named And Optional Arguments

    - by Paulo Morgado
    As part of the co-evolution effort of C# and Visual Basic, C# 4.0 introduces Named and Optional Arguments. First of all, let’s clarify what are arguments and parameters: Method definition parameters are the input variables of the method. Method call arguments are the values provided to the method parameters. In fact, the C# Language Specification states the following on §7.5: The argument list (§7.5.1) of a function member invocation provides actual values or variable references for the parameters of the function member. Given the above definitions, we can state that: Parameters have always been named and still are. Parameters have never been optional and still aren’t. Named Arguments Until now, the way the C# compiler matched method call definition arguments with method parameters was by position. The first argument provides the value for the first parameter, the second argument provides the value for the second parameter, and so on and so on, regardless of the name of the parameters. If a parameter was missing a corresponding argument to provide its value, the compiler would emit a compilation error. For this call: Greeting("Mr.", "Morgado", 42); this method: public void Greeting(string title, string name, int age) will receive as parameters: title: “Mr.” name: “Morgado” age: 42 What this new feature allows is to use the names of the parameters to identify the corresponding arguments in the form: name:value Not all arguments in the argument list must be named. However, all named arguments must be at the end of the argument list. The matching between arguments (and the evaluation of its value) and parameters will be done first by name for the named arguments and than by position for the unnamed arguments. This means that, for this method definition: public static void Method(int first, int second, int third) this call declaration: int i = 0; Method(i, third: i++, second: ++i); will have this code generated by the compiler: int i = 0; int CS$0$0000 = i++; int CS$0$0001 = ++i; Method(i, CS$0$0001, CS$0$0000); which will give the method the following parameter values: first: 2 second: 2 third: 0 Notice the variable names. Although invalid being invalid C# identifiers, they are valid .NET identifiers and thus avoiding collision between user written and compiler generated code. Besides allowing to re-order of the argument list, this feature is very useful for auto-documenting the code, for example, when the argument list is very long or not clear, from the call site, what the arguments are. Optional Arguments Parameters can now have default values: public static void Method(int first, int second = 2, int third = 3) Parameters with default values must be the last in the parameter list and its value is used as the value of the parameter if the corresponding argument is missing from the method call declaration. For this call declaration: int i = 0; Method(i, third: ++i); will have this code generated by the compiler: int i = 0; int CS$0$0000 = ++i; Method(i, 2, CS$0$0000); which will give the method the following parameter values: first: 1 second: 2 third: 1 Because, when method parameters have default values, arguments can be omitted from the call declaration, this might seem like method overloading or a good replacement for it, but it isn’t. 
Although methods like this: public static StreamReader OpenTextFile( string path, Encoding encoding = null, bool detectEncoding = true, int bufferSize = 1024) allow their calls to be written like this: OpenTextFile("foo.txt", Encoding.UTF8); OpenTextFile("foo.txt", Encoding.UTF8, bufferSize: 4096); OpenTextFile( bufferSize: 4096, path: "foo.txt", detectEncoding: false); the compiler handles default values like constant fields, taking the value and using it instead of a reference to the value. So, like with constant fields, the default values of methods with optional parameters are exposed publicly (and remember that internal members might be publicly accessible – InternalsVisibleToAttribute). If such methods are publicly accessible and used by another assembly, those values will be hard-coded in the calling code and, if the called assembly has its default values changed, they won’t be picked up by already compiled code. At first glance, I thought that using optional arguments for “badly” written code was great, but the ability to write code like that was just pure evil. But then I realized that, since I use private constant fields, it’s OK to use default parameter values on privately accessed methods.
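
A short sketch of the versioning caveat described above (the assembly and method names are made up, and the two classes would live in separate assemblies in reality): the default value is copied into the caller's compiled IL, just like a constant field, so changing it later in the library does not affect callers that have not been recompiled.

        // "LibraryAssembly", version 1 (shown in one file here purely for illustration).
        public static class Paging
        {
            // If this default ever changes, already-compiled callers keep the old value.
            public static int Fetch(int page, int pageSize = 25)
            {
                return pageSize;
            }
        }

        // "CallerAssembly", compiled against version 1.
        public static class Caller
        {
            public static void Main()
            {
                // The compiler rewrites this call as Paging.Fetch(3, 25): the 25 is copied into
                // this assembly's IL, exactly like a constant field would be.
                System.Console.WriteLine(Paging.Fetch(3));   // prints 25
                // Shipping a new library version with pageSize = 100 will not change this output
                // until CallerAssembly itself is recompiled.
            }
        }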

    Read the article

  • A C# implementation of the CallStream pattern

    - by Bertrand Le Roy
    Dusan published this interesting post a couple of weeks ago about a novel JavaScript chaining pattern: http://dbj.org/dbj/?p=514 It’s similar to many existing patterns, but the syntax is extraordinarily terse and it provides a new form of friction-free, plugin-less extensibility mechanism. Here’s a JavaScript example from Dusan’s post: CallStream("#container") (find, "div") (attr, "A", 1) (css, "color", "#fff") (logger); The interesting thing here is that the functions that are being passed as the first argument are arbitrary, they don’t need to be declared as plug-ins. Compare that with a rough jQuery equivalent that could look something like this: $.fn.logger = function () { /* ... */ } $("selector") .find("div") .attr("A", 1) .css("color", "#fff") .logger(); There is also the “each” method in jQuery that achieves something similar, but its syntax is a little more verbose. Of course, that this pattern can be expressed so easily in JavaScript owes everything to the extraordinary way functions are treated in that language, something Douglas Crockford called “the very best part of JavaScript”. One of the first things I thought while reading Dusan’s post was how I could adapt that to C#. After all, with Lambdas and delegates, C# also has its first-class functions. And sure enough, it works really really well. After about ten minutes, I was able to write this: CallStreamFactory.CallStream (p => Console.WriteLine("Yay!")) (Dump, DateTime.Now) (DumpFooAndBar, new { Foo = 42, Bar = "the answer" }) (p => Console.ReadKey()); Where the Dump function is: public static void Dump(object options) { Console.WriteLine(options.ToString()); } And DumpFooAndBar is: public static void DumpFooAndBar(dynamic options) { Console.WriteLine("Foo is {0} and bar is {1}.", options.Foo, options.Bar); } So how does this work? Well, it really is very simple. And not. Let’s say it’s not a lot of code, but if you’re like me you might need an Advil after that. First, I defined the signature of the CallStream method as follows: public delegate CallStream CallStream (Action<object> action, object options = null); The delegate define a call stream as something that takes an action (a function of the options) and an optional options object and that returns a delegate of its own type. Tricky, but that actually works, a delegate can return its own type. Then I wrote an implementation of that delegate that calls the action and returns itself: public static CallStream CallStream (Action<object> action, object options = null) { action(options); return CallStream; } Pretty nice, eh? Well, yes and no. What we are doing here is to execute a sequence of actions using an interesting novel syntax. But for this to be actually useful, you’d need to build a more specialized call stream factory that comes with some sort of context (like Dusan did in JavaScript). 
For example, you could write the following alternate delegate signature that takes a string and returns itself: public delegate StringCallStream StringCallStream(string message); And then write the following call stream (notice the currying): public static StringCallStream CreateDumpCallStream(string dumpPath) { StringCallStream str = null; var dump = File.AppendText(dumpPath); dump.AutoFlush = true; str = s => { dump.WriteLine(s); return str; }; return str; } (I know, I’m not closing that stream; sure; bad, bad Bertrand) Finally, here’s how you use it: CallStreamFactory.CreateDumpCallStream(@".\dump.txt") ("Wow, this really works.") (DateTime.Now.ToLongTimeString()) ("And that is all."); Next step would be to combine this contextual implementation with the one that takes an action parameter and do some really fun stuff. I’m only scratching the surface here. This pattern could reveal itself to be nothing more than a gratuitous mind-bender or there could be applications that we hardly suspect at this point. In any case, it’s a fun new construct. Or is this nothing new? You tell me… Comments are open :)
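
As a follow-up to the "next step" mentioned above, here is one hedged guess at what combining the contextual call stream with the action-taking one might look like. The delegate and factory names are invented for the sketch; this is not code from the original post.

        using System;
        using System.IO;

        public delegate ContextCallStream ContextCallStream(Action<TextWriter, object> action, object options = null);

        public static class ContextCallStreamFactory
        {
            // Curries a TextWriter into the stream so every action can use it as its context.
            public static ContextCallStream Create(TextWriter writer)
            {
                ContextCallStream stream = null;
                stream = (action, options) =>
                {
                    action(writer, options);
                    return stream;
                };
                return stream;
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                ContextCallStreamFactory.Create(Console.Out)
                    ((w, o) => w.WriteLine("Yay!"))
                    ((w, o) => w.WriteLine(o), DateTime.Now)
                    ((w, o) => w.WriteLine("And that is all."));
            }
        }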

    Read the article

  • links for 2011-03-15

    - by Bob Rhubart
    Dr. Frank Munz: Resize AWS EC2 Cloud Instances Dr Munz says: "You cannot dynamically resize a running cloud instance. E.g. there is no API call to ask for 2.2 GHz CPU speed instead of 1.8 GHz or to dynamically add another 3.5 GB of RAM." (tags: oracle cloud amazon ec2) Roddy Rodstein: Oracle VM Manager Architecture and Scalability Rodstein says: "Oracle VM Manager can be installed in an all-in-one configuration using the default Oracle 10g Express Database or in a more traditional two tier architecture with an OC4J web tier and a 10 or 11g database tier." (tags: oracle otn virtualization oraclevm) Mark Nelson: Getting started with Continuous Integration for SOA projects Nelson says: "I am exploring how to use Maven and Hudson to create a continuous integration capability for SOA and BPM projects. This will be the first post of several on this topic, and today we will look at setting up some simple continuous integration for a single SOA project." (tags: oracle maven hudson soa bpm) 5 New Java Champions (The Java Source) Tori Wieldt shares the big news. Congratulations to new Java Champs Jonas Bonér, James Strachan, Rickard Oberg, Régina ten Bruggencate, and Clara Ko. (tags: oracle java) Alert for Forms customers running Oracle Forms 10g (Grant Ronald's Blog) Ronald says: "While you might have been happily running your Forms 10g applications for about 5 years or so now, the end of premier support is creeping up and you need to start planning for a move to Oracle Forms 11g." (tags: oracle oracleforms) Brenda Michelson: Enterprise Architecture Rant #4,892 "I’m increasingly concerned about the macro-direction of our field, as we continue to suffer ivory tower enterprise architecture punditry, rigid frameworks and endless philosophical waxing." - Brenda Michelson (tags: entarch enterprisearchitecture ivorytower) Amitabh Apte: Enterprise Architecture - Different Perspectives "Business does not need Enterprise Architecture," says Apte, "it needs value and outcomes from the EA function." (tags: entarch enterprisearchitecture) First Ever MySQL on Windows Online Forum - March 16, 2011 (Oracle's MySQL Blog) Monica Kumar shares the details. (tags: oracle mysql mswindows) Jeff Davies: Running Multiple WebLogic and OSB Domains "There is a small 'gotcha' if you want to create multiple domains on a devevelopment machine," says Jeff Davies. But don't worry - there's a solution. (tags: oracle soa osb weblogic servicebus) The Arup Nanda Blog: Good Engineering "Engineering is not about being superficially creative," Nanda says, "it's about reliability and trustworthiness." (tags: oracle engineering software technology) Welcome to the SOA & E2.0 Partner Community Forum (SOA Partner Community Blog) (tags: ping.fm)

    Read the article
