Search Results

Search found 3518 results on 141 pages for 'arguments'.


  • Partial generic type inference possible in C#?

    - by Lasse V. Karlsen
    I am working on rewriting my fluent interface for my IoC class library, and when I refactored some code in order to share some common functionality through a base class, I hit upon a snag. Note: This is something I want to do, not something I have to do. If I have to make do with a different syntax, I will, but if anyone has an idea on how to make my code compile the way I want it, it would be most welcome. I want some extension methods to be available for a specific base-class, and these methods should be generic, with one generic type, related to an argument to the method, but the methods should also return a specific type related to the particular descendant they're invoked upon. Better with a code example than the above description methinks. Here's a simple and complete example of what doesn't work: using System; namespace ConsoleApplication16 { public class ParameterizedRegistrationBase { } public class ConcreteTypeRegistration : ParameterizedRegistrationBase { public void SomethingConcrete() { } } public class DelegateRegistration : ParameterizedRegistrationBase { public void SomethingDelegated() { } } public static class Extensions { public static ParameterizedRegistrationBase Parameter<T>( this ParameterizedRegistrationBase p, string name, T value) { return p; } } class Program { static void Main(string[] args) { ConcreteTypeRegistration ct = new ConcreteTypeRegistration(); ct .Parameter<int>("age", 20) .SomethingConcrete(); // <-- this is not available DelegateRegistration del = new DelegateRegistration(); del .Parameter<int>("age", 20) .SomethingDelegated(); // <-- neither is this } } } If you compile this, you'll get: 'ConsoleApplication16.ParameterizedRegistrationBase' does not contain a definition for 'SomethingConcrete' and no extension method 'SomethingConcrete'... 'ConsoleApplication16.ParameterizedRegistrationBase' does not contain a definition for 'SomethingDelegated' and no extension method 'SomethingDelegated'... What I want is for the extension method (Parameter<T>) to be able to be invoked on both ConcreteTypeRegistration and DelegateRegistration, and in both cases the return type should match the type the extension was invoked on. The problem is as follows: I would like to write: ct.Parameter<string>("name", "Lasse") ^------^ notice only one generic argument but also that Parameter<T> returns an object of the same type it was invoked on, which means: ct.Parameter<string>("name", "Lasse").SomethingConcrete(); ^ ^-------+-------^ | | +---------------------------------------------+ .SomethingConcrete comes from the object in "ct" which in this case is of type ConcreteTypeRegistration Is there any way I can trick the compiler into making this leap for me? If I add two generic type arguments to the Parameter method, type inference forces me to either provide both, or none, which means this: public static TReg Parameter<TReg, T>( this TReg p, string name, T value) where TReg : ParameterizedRegistrationBase gives me this: Using the generic method 'ConsoleApplication16.Extensions.Parameter<TReg,T>(TReg, string, T)' requires 2 type arguments Using the generic method 'ConsoleApplication16.Extensions.Parameter<TReg,T>(TReg, string, T)' requires 2 type arguments Which is just as bad. I can easily restructure the classes, or even make the methods non-extension-methods by introducing them into the hierarchy, but my question is if I can avoid having to duplicate the methods for the two descendants, and in some way declare them only once, for the base class. 
Let me rephrase that. Is there a way to change the classes in the first code example above, so that the syntax in the Main-method can be kept, without duplicating the methods in question? The code will have to be compatible with both C# 3.0 and 4.0. Edit: The reason I'd rather not leave both generic type arguments to inference is that for some services, I want to specify a parameter value for a constructor parameter that is of one type, but pass in a value that is a descendant. For the moment, matching of specified argument values and the correct constructor to call is done using both the name and the type of the argument. Let me give an example: ServiceContainerBuilder.Register<ISomeService>(r => r .From(f => f.ConcreteType<FileService>(ct => ct .Parameter<Stream>("source", new FileStream(...))))); ^--+---^ ^---+----^ | | | +- has to be a descendant of Stream | +- has to match constructor of FileService If I leave both to type inference, the parameter type will be FileStream, not Stream.
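
    A hedged workaround (an addition for illustration, not from the original question): keep the two-type-parameter version and let both parameters be inferred, upcasting the argument at the call site whenever the declared parameter type matters. A minimal sketch:

        public static TReg Parameter<TReg, T>(
            this TReg p, string name, T value)
            where TReg : ParameterizedRegistrationBase
        {
            return p;
        }

        // TReg is inferred as ConcreteTypeRegistration, so the fluent chain keeps
        // its concrete type; the cast forces T to be inferred as Stream instead
        // of FileStream (requires using System.IO):
        ct.Parameter("source", (Stream)new FileStream("data.bin", FileMode.Open))
          .SomethingConcrete();

    This trades the explicit <string> annotation for a cast, but it compiles under C# 3.0 type inference and avoids duplicating the methods in the two descendants.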

    Read the article

  • Jscript help? What is the main purpose of this Javascript?

    - by user577363
    Dear All, I don't know Javascript, can you please show me the main purpose of this Javascript? Many Thanks!! <script> var QunarUtil=new function(){var prefix='/scripts/';var suffix='';var host='';if(location.host.indexOf('src.')!=-1){prefix='/scripts/src/';host='http://hstatic.qunar.com';suffix='.js';}else if(location.host.indexOf('enc.')!=-1){prefix='/scripts/';host='http://hstatic.qunar.com';suffix='.js';}else if(location.host.substring(0,10)=='sdev-'){prefix=location.host.substring(5,location.host.indexOf('.'));prefix='/'+prefix.replace(/\-/ig,'/');host='http://hstatic.qunar.com';suffix='.js';}else if(location.host.indexOf("h.qunar.com")!=-1){host='http://hstatic.qunar.com';suffix='';} this.getScriptURL=function(name,isList){if(name.charAt(0)!='/') return this.getScript(prefix+name,isList);else return this.getScript(name,isList);} this.getScript=function(src,isList){return'<'+'script type="text/javascript" src="'+host+src+(isList?suffix:'.js')+'?'+__QUNARVER__+'"></'+'script>';} this.writeScript=function(name,isList){document.write(this.getScriptURL(name,isList));} this.writeScriptList=function(list){for(var i=0;i<list.length;i++) document.write(this.getScriptURL(list[i]));} var cssRoot='/styles/';this.writeCSS=function(cssList){for(var i=0;i<cssList.length;i++){document.write('<link rel="stylesheet" href="'+cssRoot+cssList[i]+'?'+__QUNARVER__+'">');}} this.writeStaticScript=function(src){document.write('<scr'+'ipt type="text/javascript" src="'+src+'"></'+'scr'+'ipt>');} this.writeStaticList=function(src){document.write('<scr'+'ipt type="text/javascript" src="'+src+suffix+'?'+__QUNARVER__+'"></'+'scr'+'ipt>');}} $include=function(){for(var i=0;i<arguments.length;i++){QunarUtil.writeScript(arguments[i],true);}} </script> Uncompressed version: <script> var QunarUtil = new function() { var prefix = '/scripts/'; var suffix = ''; var host = ''; if (location.host.indexOf('src.') != -1) { prefix = '/scripts/src/'; host = 'http://hstatic.qunar.com'; suffix = '.js'; } else if (location.host.indexOf('enc.') != -1) { prefix = '/scripts/'; host = 'http://hstatic.qunar.com'; suffix = '.js'; } else if (location.host.substring(0, 10) == 'sdev-') { prefix = location.host.substring(5, location.host.indexOf('.')); prefix = '/' + prefix.replace(/\-/ig, '/'); host = 'http://hstatic.qunar.com'; suffix = '.js'; } else if (location.host.indexOf("h.qunar.com") != -1) { host = 'http://hstatic.qunar.com'; suffix = ''; } this.getScriptURL = function(name, isList) { if (name.charAt(0) != '/') return this.getScript(prefix + name, isList); else return this.getScript(name, isList); } this.getScript = function(src, isList) { return '<' + 'script type="text/javascript" src="' + host + src + (isList ? suffix : '.js') + '?' + __QUNARVER__ + '"></' + 'script>'; } this.writeScript = function(name, isList) { document.write(this.getScriptURL(name, isList)); } this.writeScriptList = function(list) { for (var i = 0; i < list.length; i++) document.write(this.getScriptURL(list[i])); } var cssRoot = '/styles/'; this.writeCSS = function(cssList) { for (var i = 0; i < cssList.length; i++) { document.write('<link rel="stylesheet" href="' + cssRoot + cssList[i] + '?' + __QUNARVER__ + '">'); } } this.writeStaticScript = function(src) { document.write('<scr' + 'ipt type="text/javascript" src="' + src + '"></' + 'scr' + 'ipt>'); } this.writeStaticList = function(src) { document.write('<scr' + 'ipt type="text/javascript" src="' + src + suffix + '?' 
+ __QUNARVER__ + '"></' + 'scr' + 'ipt>'); } } $include = function() { for (var i = 0; i < arguments.length; i++) { QunarUtil.writeScript(arguments[i], true); } } </script>
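
    In short, the script is a tiny asset loader: it inspects location.host to pick a static-asset host, path prefix, and '.js' suffix per environment (src/enc/sdev/production), and then emits versioned <script> and <link> tags via document.write, using __QUNARVER__ as a cache-busting query string. A hypothetical usage sketch (the module names here are invented):

        // assuming __QUNARVER__ holds a release version such as '20110301',
        // this writes tags like:
        //   <script type="text/javascript"
        //           src="http://hstatic.qunar.com/scripts/flight/list.js?20110301"></script>
        $include('flight/list', 'common/util');   // include two script modules
        QunarUtil.writeCSS(['flight/main.css']);  // include a stylesheet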

    Read the article

  • execute script after desktop loaded?

    - by Andre
    I want to execute a bash script on startup that opens several terminals in different workspaces. The script works just fine if I call it from a terminal, but it doesn't work if executed from crontab using @reboot: #!/usr/bin/env bash #1 make sure we have enough workspaces gconftool-2 --set -t int /apps/metacity/general/num_workspaces 7 #2. Launch programs in these terminals wmctrl -s 6 gnome-terminal --full-screen --execute bash -c "tmux attach; bash" wmctrl -s 5 gnome-terminal --full-screen --execute bash -c "weechat-curses; bash" wmctrl -s 4 gnome-terminal --full-screen --execute bash -c "export TERM=xterm-256color; mutt; bash" wmctrl -s 3 gnome-terminal --full-screen wmctrl -s 2 gnome-terminal --full-screen wmctrl -s 1 gnome-terminal --full-screen wmctrl -s 0 google-chrome --start-maximized I think it's because the crontab job triggers before the desktop environment is loaded...maybe...? How can I execute this script after the desktop environment is loaded? thanks:) Update 1: I've started it from crontab initially like this: @reboot $HOME/andreiscripts/startup.sh >> $HOME/andreiscripts/testlog.txt 2>&1 and was getting these errors: Cannot open display. Failed to parse arguments: Cannot open display: Cannot open display. Failed to parse arguments: Cannot open display: Cannot open display. ..... Update 2: I've tried to launch the script from System Preferences' Startup Applications: /home/andrei/andreiscripts/startup.sh >> /home/andrei/Desktop/out.txt 2>&1 but the script only opened the first gnome-terminal in workspace 6... and wouldn't continue executing the rest of the script until I closed that gnome-terminal, and so on....
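
    One common fix (an assumption, not from this thread): cron jobs have no access to the X display, so either export DISPLAY in the crontab line or, better, start the script from the desktop session and wait until X is ready. A hypothetical wrapper:

        #!/usr/bin/env bash
        # startup-wrapper.sh -- run from Startup Applications, not cron
        export DISPLAY=:0
        # block until the X server answers, then let the desktop settle
        until xset q >/dev/null 2>&1; do sleep 5; done
        sleep 10
        # run the real script in the background so the launcher isn't blocked
        "$HOME/andreiscripts/startup.sh" >> "$HOME/andreiscripts/testlog.txt" 2>&1 &

    The trailing & may also help with the Update 2 symptom, where the launcher appeared to wait for the first gnome-terminal to exit before continuing.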

    Read the article

  • Function currying in Javascript

    - by kerry
    Do you catch yourself doing something like this often? 1: Ajax.request('/my/url', {'myParam': paramVal}, function() { myCallback(paramVal); }); Creating a function which calls another function asynchronously is a bad idea because the value of paramVal may change before it is called.  Enter the curry function: 1: Function.prototype.curry = function(scope) { 2: var args = []; 3: for (var i=1, len = arguments.length; i < len; ++i) { 4: args.push(arguments[i]); 5: } 6: var m = this; 7: return function() { 8: m.apply(scope, args); 9: }; 10: } This function creates a wrapper around the function and ‘locks in’ the method parameters.  The first parameter is the scope of the function call (usually this or window).  Any remaining parameters will be passed to the method call.  Using the curry method the above call changes to: 1: Ajax.request('/my/url', {'myParam': paramVal}, myCallback.curry(window,paramVal)); Remember when passing objects to the curry method that the object's members may still change.
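
    Two hedged footnotes (editorial, not the author's): strictly speaking this is partial application rather than currying, and as written the wrapper drops both the return value and any arguments supplied at call time. A variant that forwards them:

        Function.prototype.curry = function(scope) {
            // copy the 'locked in' arguments (everything after scope)
            var args = Array.prototype.slice.call(arguments, 1);
            var m = this;
            return function() {
                // append call-time arguments and propagate the return value
                return m.apply(scope, args.concat(
                    Array.prototype.slice.call(arguments)));
            };
        };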

    Read the article

  • A Simple Approach For Presenting With Code Samples

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/07/31/a-simple-approach-for-presenting-with-code-samples.aspx I’ve been getting ready for a presentation and have been struggling a bit with the best way to show and execute code samples. I don’t present often (hardly ever), but when I do I like the presentation to have a lot of succinct and executable code snippets to help illustrate the points that I’m making. Depending on what the presentation is about, I might just want to build an entire sample application that I would run during the presentation. In other cases, however, building a full-blown application might not really be the best way to present the code. The presentation I’m working on now is for an open source utility library for dealing with dates and times. I could have probably cooked up a sample app for accepting date and time input and then contrived ways in which it could put the library through its paces, but I had trouble coming up with one app that would illustrate all of the various features of the library that I wanted to highlight. I finally decided that what I really needed was an approach that met the following criteria: Simple: I didn’t want the user interface or overall architecture of a sample application to serve as a distraction from the demonstration of the syntax of the library that the presentation is about. I want to be able to present small bits of code that are focused on accomplishing a single task. Several of these examples will look similar, and that’s OK. I want each sample to “stand on its own” and not rely much on external classes or methods (other than the library that is being presented, of course). “Debuggable” (not really a word, I know): I want to be able to easily run the sample with the debugger attached in Visual Studio should I want to step through any bits of code and show what certain values might be at run time. As far as I know this rules out something like LinqPad, though using LinqPad to present code samples like this is actually a very interesting idea that I might explore another time. Flexible and Selectable: I’m going to have lots of code samples to show, and I want to be able to just package them all up into a single project or module and have an easy way to just run the sample that I want on-demand. Since I’m presenting on a .NET framework library, one of the simplest ways in which I could execute some code samples would be to just create a Console application and use Console.WriteLine to output the pertinent info at run time. This gives me a “no frills” harness from which to run my code samples, and I just hit ‘F5’ to run it with the debugger. This satisfies numbers 1 and 2 from my list of criteria above, but item 3 is a little harder. By default, just running a console application is going to execute the ‘main’ method, and then terminate the program after all code is executed. If I want to have several different code samples and run them one at a time, it would be cumbersome to keep swapping the code I want in and out of the ‘main’ method of the console application. What I really want is an easy way to keep the console app running throughout the whole presentation and just have it run the samples I want when I want. I could set up a simple Windows Forms or WPF desktop application with buttons for the different samples, but then I’m getting away from my first criterion of keeping things as simple as possible.
    Infinite Loops To The Rescue I found a way to have a simple console application satisfy all three of my requirements above, and it involves using an infinite loop and some Console.ReadLine calls that will give the user an opportunity to break out and exit the program. (All programs that need to run until they are closed explicitly (or crash!) likely use similar constructs behind the scenes. Create a new Windows Forms project, look in the ‘Program.cs’ that gets generated, and then check out the docs for the Application.Run method that it calls.) Here’s how the main method might look: 1: static void Main(string[] args) 2: { 3: do 4: { 5: Console.Write("Enter command or 'exit' to quit: > "); 6: var command = Console.ReadLine(); 7: if ((command ?? string.Empty).Equals("exit", StringComparison.OrdinalIgnoreCase)) 8: { 9: Console.WriteLine("Quitting."); 10: break; 11: } 12: 13: } while (true); 14: } The idea here is the app prompts me for the command I want to run, or I can type in ‘exit’ to break out of the loop and let the application close. The only trick now is to create a set of commands that map to each of the code samples that I’m going to want to run. Each sample is already encapsulated in a single public method in a separate class, so I could just write a big switch statement or create a hashtable/dictionary that maps command text to an Action that will invoke the proper method, but why re-invent the wheel? CLAP For Your Own Presentation I’ve blogged about the CLAP library before, and it turns out that it’s a great fit for satisfying criteria #3 from my list above. CLAP lets you decorate methods in a class with an attribute and then easily invoke those methods from within a console application. CLAP was designed to take the arguments passed into the console app from the command line and parse them to determine which method to run and what arguments to pass to that method, but there’s no reason you can’t re-purpose it to accept command input from within the infinite loop defined above and invoke the corresponding method. Here’s how you might define a couple of different methods to contain two different code samples that you want to run during your presentation: 1: public static class CodeSamples 2: { 3: [Verb(Aliases="one")] 4: public static void SampleOne() 5: { 6: Console.WriteLine("This is sample 1"); 7: } 8:   9: [Verb(Aliases="two")] 10: public static void SampleTwo() 11: { 12: Console.WriteLine("This is sample 2"); 13: } 14: } A couple of things to note about the sample above: I’m using static methods. You don’t actually need to use static methods with CLAP, but the syntax ends up being a bit simpler and static methods happen to lend themselves well to the “one self-contained method per code sample” approach that I want to use. The methods are decorated with a ‘Verb’ attribute. This tells CLAP that they are eligible targets for commands. The “Aliases” argument lets me give them short and easy-to-remember aliases that can be used to invoke them. By default, CLAP just uses the full method name as the command name, but with aliases you can simplify the usage a bit. I’m not using any parameters. CLAP’s main feature is its ability to parse out arguments from a command line invocation of a console application and automatically pass them in as parameters to the target methods. My code samples don’t need parameters, and honestly having them would complicate giving the presentation, so this is a good thing.
You could use this same approach to invoke methods with parameters, but you’d have a couple of things to figure out. When you invoke a .NET application from the command line, Windows will parse the arguments and pass them in as a string array (called ‘args’ in the boilerplate console project Program.cs). The parsing that gets done here is smart enough to deal with things like treating strings in double quotes as one argument, and you’d have to re-create that within your infinite loop if you wanted to use parameters. I plan on either submitting a pull request to CLAP to add this capability or maybe just making a small utility class/extension method to do it and posting that here in the future. So I now have a simple class with static methods to contain my code samples, and an infinite loop in my ‘main’ method that can accept text commands. Wiring this all up together is pretty easy: 1: static void Main(string[] args) 2: { 3: do 4: { 5: try 6: { 7: Console.Write("Enter command or 'exit' to quit: > "); 8: var command = Console.ReadLine(); 9: if ((command ?? string.Empty).Equals("exit", StringComparison.OrdinalIgnoreCase)) 10: { 11: Console.WriteLine("Quitting."); 12: break; 13: } 14:   15: Parser.Run<CodeSamples>(new[] { command }); 16: Console.WriteLine("---------------------------------------------------------"); 17: } 18: catch (Exception ex) 19: { 20: Console.Error.WriteLine("Error: " + ex.Message); 21: } 22:   23: } while (true); 24: } Note that I’m now passing the ‘CodeSamples’ class into the CLAP ‘Parser.Run’ as a type argument. This tells CLAP to inspect that class for methods that might be able to handle the commands passed in. I’m also throwing in a little “----“ style line separator and some basic error handling (because I happen to know that some of the samples are going to throw exceptions for demonstration purposes) and I’m good to go. Now during my presentation I can just have the console application running the whole time with the debugger attached and just type in the alias of the code sample method that I want to run when I want to run it.
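
    For the quote-aware argument splitting mentioned above, a hedged sketch of one way to do it (a regex approach added here for illustration, not something CLAP provides):

        using System.Linq;
        using System.Text.RegularExpressions;

        static string[] SplitArgs(string commandLine)
        {
            // "quoted phrases" become single arguments; everything else
            // splits on whitespace
            return Regex.Matches(commandLine, @"""([^""]*)""|\S+")
                .Cast<Match>()
                .Select(m => m.Groups[1].Success ? m.Groups[1].Value : m.Value)
                .ToArray();
        }

    With that in place, the loop could call Parser.Run<CodeSamples>(SplitArgs(command)) instead of wrapping the raw command in a one-element array.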

    Read the article

  • Making a Case For The Command Line

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/06/30/making-a-case-for-the-command-line.aspx I have had an idea percolating in the back of my mind for over a year now that I’ve just recently started to implement. This idea relates to building out “internal tools” to ease the maintenance and on-going support of a software system. The system that I currently work on is (mostly) web-based, so traditionally we have built these internal tools in the form of pages within the app that are only accessible by our developers and support personnel. These pages allow us to perform tasks within the system that, for one reason or another, we don’t want to let our end users perform (e.g. mass create/update/delete operations on data, flipping switches that turn paid modules of the system on or off, etc). When we try to build new tools like this we often struggle with the level of effort required to build them. Effort Required Creating a whole new page in an existing web application can be a fairly large undertaking. You need to create the page and ensure it will have a layout that is consistent with the other pages in the app. You need to decide what types of input controls need to go onto the page. You need to ensure that everything uses the same style as the rest of the site. You need to figure out what the text on the page should say. Then, when you figure out that you forgot about an input that should really be present you might have to go back and re-work the entire thing. Oh, and in addition to all of that, you still have to, you know, write the code that actually performs the task. Everything other than the code that performs the task at hand is just overhead. We don’t need a fancy date picker control in a nicely styled page for the vast majority of our internal tools. We don’t even really need a page, for that matter. We just need a way to issue a command to the application and have it, in turn, execute the code that we’ve written to accomplish a given task. All we really need is a simple console application! Plumbing Problems A former co-worker of mine, John Sonmez, always advocated the Unix philosophy for building internal tools: start with something that runs at the command line, and then build a UI on top of that if you need to. John’s idea has a lot of merit, and we tried building out some internal tools as simple Console applications. Unfortunately, this was often easier said than done. Doing a “File –> New Project” to build out a tool for a mature system can be pretty daunting because that new project is totally empty.  In our case, the web application code had a lot of “plumbing” built in: it managed authentication and authorization, it handled database connection management for our multi-tenanted architecture, it managed all of the context that needs to follow a user around the application such as their timezone and regional/language settings. In addition, the configuration file for the web application (a web.config in our case because this is an ASP .NET application) is large and would need to be reproduced into a similar configuration file for a Console application. While most of these problems could be solved pretty easily with some refactoring of the codebase, building Console applications for internal tools still potentially suffers from one pretty big drawback: you’d have to execute them on a machine with network access to all of the needed resources.
    Obviously, our web servers can easily communicate with the database servers and can publish messages to our service bus, but the same is not true for all of our developer and support personnel workstations. We could have everyone run these tools remotely via RDP or SSH, but that’s a bit cumbersome and certainly a lot less convenient than having the tools built into the web application that is so easily accessible. Mix and Match So we need a way to build tools that are easily accessible via the web application but also don’t require the overhead of creating a user interface. This is where my idea comes into play: why not just build a command line interface into the web application? If it’s part of the web application we get all of the plumbing that comes along with that code, and we’re executing everything on the web servers which means we’ll have access to any external resources that we might need. Rather than having to incur the overhead of creating a brand new page for each tool that we want to build, we can create one new page that simply accepts a command in text form and executes it as a request on the web server. In this way, we can focus on writing the code to accomplish the task. If the tool ends up being heavily used, then (and only then) should we consider spending the time to build a better user experience around it. To be clear, I’m not trying to downplay the importance of building great user experiences into your system; we should all strive to provide the best UX possible to our end users. I’m only advocating this sort of bare-bones interface for internal consumption by the technical staff that builds and supports the software. This command line interface should be the “back end” to a highly polished and eye-pleasing public face. Implementation As I mentioned at the beginning of this post, this is an idea that I’ve had for a while but have only recently started building out. I’ve outlined some general guidelines and design goals for this effort as follows: Text in, text out: In the interest of keeping things as simple as possible, I want this interface to be purely text-based. Users will submit commands as plain text, and the application will provide responses in plain text. Obviously this text will be “wrapped” within the context of HTTP requests and responses, but I don’t want to have to think about HTML or CSS when taking input from the user or displaying responses back to the user. Task-oriented code only: After building the initial “harness” for this interface, the only code that should need to be written to create a new internal tool should be code that is expressly needed to accomplish the task that the tool is intended to support. If we want to encourage and enable ourselves to build good tooling, we need to lower the barriers to entry as much as possible. Built-in documentation: One of the great things about most command line utilities is the ‘help’ switch that provides usage guidelines and details about the arguments that the utility accepts. Our web-based command line utility should allow us to build the documentation for these tools directly into the code of the tools themselves. I finally started trying to implement this idea when I heard about a fantastic open-source library called CLAP (Command Line Auto Parser) that lets me meet the guidelines outlined above. CLAP lets you define classes with public methods that can be easily invoked from the command line.
    Here’s a quick example of the code that would be needed to create a new tool to do something within your system: 1: public class CustomerTools 2: { 3: [Verb] 4: public void UpdateName(int customerId, string firstName, string lastName) 5: { 6: //invoke internal services/domain objects/whatever to perform update 7: } 8: } This is just a regular class with a single public method (though you could have as many methods as you want). The method is decorated with the ‘Verb’ attribute that tells the CLAP library that it is a method that can be invoked from the command line. Here is how you would invoke that code: Parser.Run(args, new CustomerTools()); Note that ‘args’ is just a string[] that would normally be passed in from the static Main method of a Console application. Also, CLAP allows you to pass in multiple classes that define [Verb] methods so you can opt to organize the code that CLAP will invoke in any way that you like. You can invoke this code from a command line application like this: SomeExe UpdateName -customerId:123 -firstName:Jesse -lastName:Taber ‘SomeExe’ in this example just represents the name of the .exe that would be created from our Console application. CLAP then interprets the arguments passed in order to find the method that should be invoked and automatically parses out the parameters that need to be passed in. After a quick spike, I’ve found that invoking the ‘Parser’ class can be done from within the context of a web application just as easily as it can from within the ‘Main’ method entry point of a Console application. There are, however, a few sticking points that I’m working around: Splitting arguments into the ‘args’ array like the command line: When you invoke a standard .NET console application you get the arguments that were passed in by the user split into a handy array (this is the ‘args’ parameter referenced above). Generally speaking they get split by whitespace, but it’s also clever enough to handle things like ignoring whitespace in a phrase that is surrounded by quotes. We’ll need to re-create this logic within our web application so that we can give the ‘args’ value to CLAP just like a console application would. Providing a response to the user: If you were writing a console application, you might just use Console.WriteLine to provide responses to the user as to the progress and eventual outcome of the command. We can’t use Console.WriteLine within a web application, so I’ll need to find another way to provide feedback to the user. Preferably this approach would allow me to use the same handler classes from both a Console application and a web application, so some kind of strategy pattern will likely emerge from this effort. Submitting files: Often an internal tool needs to support doing some kind of operation in bulk, and the easiest way to submit the data needed to support the bulk operation is in a file. Getting the file uploaded and available to the CLAP handler classes will take a little bit of effort. Mimicking the console experience: This isn’t really a requirement so much as a “nice to have”. To start out, the command-line interface in the web application will probably be a single ‘textarea’ control with a button to submit the contents to a handler that will pass it along to CLAP to be parsed and run. I think it would be interesting to use some javascript and CSS trickery to change that page into something with more of a “shell” interface look and feel.
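
    For the 'providing a response to the user' sticking point, a minimal sketch of the strategy idea (all names here are hypothetical, not from the post):

        using System;
        using System.Text;

        public interface ICommandOutput
        {
            void WriteLine(string text);
        }

        // console flavor: delegate straight to Console
        public class ConsoleCommandOutput : ICommandOutput
        {
            public void WriteLine(string text) { Console.WriteLine(text); }
        }

        // web flavor: buffer everything, then dump it into the HTTP response
        public class BufferedCommandOutput : ICommandOutput
        {
            private readonly StringBuilder _buffer = new StringBuilder();
            public void WriteLine(string text) { _buffer.AppendLine(text); }
            public override string ToString() { return _buffer.ToString(); }
        }

    Handler classes would take an ICommandOutput instead of calling Console.WriteLine directly, so the same [Verb] methods could serve both hosts.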
I’ll be blogging more about this effort in the future and will include some code snippets (or maybe even a full blown example app) as I progress. I also think that I’ll probably end up either submitting some pull requests to the CLAP project or possibly forking/wrapping it into a more web-friendly package and open sourcing that.

    Read the article

  • xcopy file, suppress &ldquo;Does xxx specify a file name&hellip;&rdquo; message

    - by MarkPearl
    Today we had an interesting problem with file copying. We wanted to use xcopy to copy a file from one location to another and rename the copied file but do this impersonating another user. Getting the impersonation to work was fairly simple, however we then had the challenge of getting xcopy to work. The problem was that xcopy kept prompting us with a prompt similar to the following… Does file.xxx specify a file name or directory name on the target (F = file, D = directory)? At which point we needed to press ‘Y’. This seems to be a fairly common challenge with xcopy, as illustrated by the following stack overflow link… One of the solutions was to do the following… echo f | xcopy /f /y srcfile destfile This is fine if you are running from the command prompt, but if you are triggering this from c# how could we daisy chain a bunch of commands…. The solution was fairly simple, we eventually ended up with the following method… public void Copy(string initialFile, string targetFile) { string xcopyExe = @"C:\windows\system32\xcopy.exe"; string cmdExe = @"C:\windows\system32\cmd.exe"; ProcessStartInfo p = new ProcessStartInfo(); p.FileName = cmdExe; p.Arguments = string.Format(@"/c echo f | {2} {0} {1} /Y", initialFile, targetFile, xcopyExe); Process.Start(p); } Where we wrapped the commands we wanted to chain as arguments and instead of calling xcopy directly, we called cmd.exe passing xcopy as an argument.
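
    One caveat worth noting (an editorial addition, untested against the original code): if either path can contain spaces, the chained command needs embedded quotes, e.g.:

        p.Arguments = string.Format(
            @"/c echo f | {2} ""{0}"" ""{1}"" /Y",
            initialFile, targetFile, xcopyExe);

    Otherwise cmd.exe will split a path like C:\My Files\a.txt into separate arguments.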

    Read the article

  • Using JavaScript for basic HTML layout

    - by Slobaum
    I've been doing HTML layout as well as programming for many years and I'm seeing a growing issue recently. Folks who primarily do HTML layout are becoming increasingly more comfortable using JavaScript to solve basic page layout problems. Rather than consider what HTML is capable of doing (to hit their target browsers), they're slapping on bloated JS frameworks that "fix" fairly basic problems. Let's get this out of the way right here: I find this practice annoying and often inconsiderate of those with special accessibility needs. Unfortunately, when you try to tell these folks that what they're doing isn't semantic, ideal, or possibly even a good idea, they always counter with the same old arguments: "JavaScript has a market saturation of 98%, we don't care about the other 2%." or "Who doesn't have JavaScript enabled these days?" or simply "We don't care about those users." I find that remarkably short-sighted. I would really like the opinion of the community at large. What do you think, am I holding too fast to a dying ideal? I am willing to accept that, but would like to be given good arguments as to why I should disregard 2%+ of my user base (among others). Is JavaScript's prevalence a good excuse to use a programmatic language to do basic layout, thus mucking up your behavior and layout? jQuery and similar "behavior" based frameworks are blurring the lines, especially for those who don't realize the difference. Honestly (and probably most importantly), I would like some "argument ammo" to use against these folks when the "it's the right way to do it" argument is unacceptable. Can you cite sources outlining your stance, please? Thanks everybody, please be civil :)

    Read the article

  • DTLoggedExec 1.0 Stable Released!

    - by Davide Mauri
    After several years of development I’ve finally released the first non-beta version of DTLoggedExec! I’m now very confident that the product is stable and solid and has all the features that are important to have (at least for me). DTLoggedExec 1.0 http://dtloggedexec.codeplex.com/releases/view/44689 Here are the release notes: FIRST NON-BETA RELEASE! :) Code cleaned up Added SetPackageInfo method to ILogProvider interface to make future improvements easier Deprecated the arguments 'ProfileDataFlow', 'ProfilePath', 'ProfileFileName' Added the new argument 'ProfileDataFlowFileName' that replaces the old 'ProfileDataFlow', 'ProfilePath', 'ProfileFileName' arguments Updated database scripts to support new reports Split releases into three different packages for easier maintenance and updates: DTLoggedExec Executable, Samples & Reports Fixed Issue #25738 (http://dtloggedexec.codeplex.com/WorkItem/View.aspx?WorkItemId=25738) Fixed Issue #26479 (http://dtloggedexec.codeplex.com/WorkItem/View.aspx?WorkItemId=26479) To make things easier to maintain I’ve divided the original package into three different releases. One is the DTLoggedExec executable; samples and reports are now available in separate packages so that I can update them more frequently without having to touch the engine. Source code of everything is available through Source Code Control: http://dtloggedexec.codeplex.com/SourceControl/list/changesets As usual, comments and feedback are more than welcome! (Just use Codeplex, please, so it will be easier for me to keep track of requests and issues)

    Read the article

  • How to implement early exit / return in Haskell?

    - by Giorgio
    I am porting a Java application to Haskell. The main method of the Java application follows the pattern: public static void main(String [] args) { if (args.length == 0) { System.out.println("Invalid number of arguments."); System.exit(1); } SomeDataType d = getData(args[0]); if (!dataOk(d)) { System.out.println("Could not read input data."); System.exit(1); } SomeDataType r = processData(d); if (!resultOk(r)) { System.out.println("Processing failed."); System.exit(1); } ... } So I have different steps and after each step I can either exit with an error code, or continue to the following step. My attempt at porting this to Haskell goes as follows: main :: IO () main = do args <- getArgs if ((length args) == 0) then do putStrLn "Invalid number of arguments." exitWith (ExitFailure 1) else do -- The rest of the main function goes here. With this solution, I will have lots of nested if-then-else (one for each exit point of the original Java code). Is there a more elegant / idiomatic way of implementing this pattern in Haskell? In general, what is a Haskell idiomatic way to implement an early exit / return as used in an imperative language like Java?
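
    One idiomatic answer (a hedged sketch; getData, dataOk and friends stand in for the ported functions): since exitWith aborts the whole program, the checks can stay sequential with no nesting at all, using when/unless from Control.Monad:

        import System.Environment (getArgs)
        import System.Exit (exitWith, ExitCode (ExitFailure))
        import Control.Monad (when, unless)

        -- print a message and abort with exit code 1
        failWith :: String -> IO a
        failWith msg = putStrLn msg >> exitWith (ExitFailure 1)

        main :: IO ()
        main = do
            args <- getArgs
            when (null args) $ failWith "Invalid number of arguments."
            d <- getData (head args)
            unless (dataOk d) $ failWith "Could not read input data."
            let r = processData d
            unless (resultOk r) $ failWith "Processing failed."
            -- ... rest of the program

    For pure pipelines (no IO involved), the same shape is usually expressed with Either or ExceptT, where each step can short-circuit with a Left.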

    Read the article

  • Data migration - dangerous or essential?

    - by MRalwasser
    The software development department of my company is facing the problem that data migrations are considered potentially dangerous, especially by my managers. The background is that our customers are using a large amount of data of poor quality. The reasons for this are only partially related to our software quality, and rather to the history of the data: most of it has been migrated from predecessor systems, some bugs caused (mostly business) inconsistencies in the data records, or misentries happened by accident on the customer's side (which our software allowed by error). The most important counter-arguments from my managers are that faulty data may turn into even worse data, the data troubles may wake some managers at the customer, and some processes on the customer's side may not work anymore because those processes have somewhat adapted to our system. Personally, I consider data migrations an integral part of software development: data migration is to data what refactoring is to code. I think that data migration is essential for creating software that evolves. Without it, we would have to create painful software which somehow works around a bad data structure. I am asking you: What are your thoughts on data migration, especially for real-life cases and not only from a developer's perspective? Do you have any arguments against my managers' opinions? How does your company deal with data migrations and the difficulties caused by them? Any other interesting thoughts which belong to this topic?

    Read the article

  • What is the best approach for inline code comments?

    - by d1egoaz
    We are doing some refactoring of a 20-year-old legacy codebase, and I'm having a discussion with my colleague about the comment format in the code (PL/SQL, Java). There is no default format for comments, but in most cases people do something like this in the comment: // date (year, year-month, yyyy-mm-dd, dd/mm/yyyy), (author id, author name, author nickname) and comment The proposed format for future and past comments that I want is: // {yyyy-mm-dd}, unique_author_company_id, comment My colleague says that we only need the comment, and must reformat all past and future comments to this format: // comment My arguments: For maintenance reasons, it's important to know when and who made a change (even if this information is in the SCM). The code is living, and for that reason has a history. Without the change dates it's impossible to know when a change was introduced, short of opening the SCM tool and searching through the long object history. The author is very important: a change with a known author is more credible than one without. Agility reasons: no need to open and navigate through the SCM tool. People would be more afraid to change something that someone did 15 years ago than something that was recently created or changed. Etc. My colleague's arguments: The history is in the SCM. Developers must not be made aware of the history of the code directly in the code. Packages get 15k lines long, and unstructured comments make these packages harder to understand. What do you think is the best approach? Or do you have a better approach to solve this problem?

    Read the article

  • Preferred lambda syntax?

    - by Roger Alsing
    I'm playing around a bit with my own C-like DSL grammar and would like some opinions. I've reserved the use of "(...)" for invocations. eg: foo(1,2); My grammar supports "trailing closures", pretty much like Ruby's blocks, that can be passed as the last argument of an invocation. Currently my grammar supports trailing closures like this: foo(1,2) { //parameterless closure passed as the last argument to foo } or foo(1,2) [x] { //closure with one argument (x) passed as the last argument to foo print (x); } The reason why I use [args] instead of (args) is that (args) is ambiguous: foo(1,2) (x) { } There is no way in this case to tell if foo expects 3 arguments (int,int,closure(x)) or if foo expects 2 arguments and returns a closure with one argument, i.e. (int,int) -> closure(x) So that's pretty much the reason why I use [] for now. I could change this to something like: foo(1,2) : (x) { } or foo(1,2) (x) -> { } So the actual question is, what do you think looks best? [...] is somewhat wrist unfriendly. let x = [a,b] { } Ideas?

    Read the article

  • Self-referencing anonymous closures: is JavaScript incomplete?

    - by Tom Auger
    Does the fact that anonymous self-referencing function closures are so prevalent in JavaScript suggest that JavaScript is an incomplete specification? We see so much of this: (function () { /* do cool stuff */ })(); and I suppose everything is a matter of taste, but does this not look like a kludge, when all you want is a private namespace? Couldn't JavaScript implement packages and proper classes? Compare to ActionScript 3, also based on ECMAScript, where you get package com.tomauger { import bar; class Foo { public function Foo(){ // etc... } public function show(){ // show stuff } public function hide(){ // hide stuff } // etc... } } Contrast to the convolutions we perform in JavaScript (this, from the jQuery plugin authoring documentation): (function( $ ){ var methods = { init : function( options ) { // THIS }, show : function( ) { // IS }, hide : function( ) { // GOOD }, update : function( content ) { // !!! } }; $.fn.tooltip = function( method ) { // Method calling logic if ( methods[method] ) { return methods[ method ].apply( this, Array.prototype.slice.call( arguments, 1 )); } else if ( typeof method === 'object' || ! method ) { return methods.init.apply( this, arguments ); } else { $.error( 'Method ' + method + ' does not exist on jQuery.tooltip' ); } }; })( jQuery ); I appreciate that this question could easily degenerate into a rant about preferences and programming styles, but I'm actually very curious to hear how you seasoned programmers feel about this and whether it feels natural, like learning different idiosyncrasies of a new language, or kludgy, like a workaround to some basic programming language components that are just not implemented?

    Read the article

  • Nested entities in Google App Engine. Do I do it right?

    - by Aleksandr Makov
    Trying to make the most of the GAE Datastore entities concept, but some doubts are drilling my head. Say I have the model: class User(ndb.Model): email = ndb.StringProperty(indexed=True) password = ndb.StringProperty(indexed=False) first_name = ndb.StringProperty(indexed=False) last_name = ndb.StringProperty(indexed=False) created_at = ndb.DateTimeProperty(auto_now_add=True) @classmethod def key(cls, email): return ndb.Key(User, email) @classmethod def Add(cls, email, password, first_name, last_name): user = User(parent=cls.key(email), email=email, password=password, first_name=first_name, last_name=last_name) user.put() UserLogin.Record(email) class UserLogin(ndb.Model): time = ndb.DateTimeProperty(auto_now_add=True) @classmethod def Record(cls, user_email): login = UserLogin(parent=User.key(user_email)) login.put() And I need to keep track of the times of successful login operations. Each time a user logs in, the UserLogin.Record() method will be executed. Now the question — am I doing it right? Thanks. EDIT 2: OK, I used keyword arguments, but then it raised this: Expected Key instance, got User(key=Key('User', 5418393301680128), created_at=datetime.datetime(2013, 6, 27, 10, 12, 25, 479928), email=u'[email protected]', first_name=u'First', last_name=u'Last', password=u'password'). That part is clear to understand, but then why are the docs misleading? They implicitly propose to use: # Set Employee as Address entity's parent directly... address = Address(parent=employee) But Model expects a key. And what's worse, parent=user.key() complains that key() isn't callable, while I found out that user.key works. EDIT 1: After reading the example from the docs and trying to replicate it — I got a type error: TypeError('Model constructor takes no positional arguments.'). This is the exact code used: user = User('[email protected]', 'password', 'First', 'Last') user.put() stamp = UserLogin(parent=user) stamp.put() I understand that Model was given the wrong arguments, BUT why is it in the docs?
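
    For what it's worth, a hedged sketch of the working calls (names are illustrative; the parent=employee form appears in the older db API docs, while ndb requires a Key). Note that the classmethod named key in the model shadows ndb's built-in Model.key property, which is the likely source of the "key() isn't callable" confusion, so renaming it helps:

        class User(ndb.Model):
            # ...same properties as above...
            @classmethod
            def build_key(cls, email):          # renamed from key()
                return ndb.Key(User, email)

        # keyword arguments only -- the ndb constructor takes no positional ones
        user = User(email='user@example.com', password='secret',
                    first_name='First', last_name='Last')
        user.put()
        # parent must be a Key; on an entity, key is a property, not a method
        stamp = UserLogin(parent=user.key)
        stamp.put()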

    Read the article

  • Understanding and memorizing git rebase parameters

    - by Robert Dailey
    So far the most confusing portion of git is rebasing onto another branch. Specifically, it's the command line arguments that are confusing. Each time I want to rebase a small piece of one branch onto the tip of another, I have to review the git rebase documentation and it takes me about 5-10 minutes to understand what each of the 3 main arguments should be. git rebase <upstream> <branch> --onto <newbase> What is a good rule of thumb to help me memorize what each of these 3 parameters should be set to, given any kind of rebase onto another branch? Bear in mind I have gone over the git-rebase documentation again, and again, and again, and again (and again), but it's always difficult to understand (like a boring scientific white-paper or something). So at this point I feel I need to involve other people to help me grasp it. My goal is that I should never have to review the documentation for these basic parameters. I haven't been able to memorize them so far, and I've done a ton of rebases already. So it's a bit unusual that I've been able to memorize every other command and its parameters so far, but not rebase with --onto.
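
    One mnemonic (an editorial addition, not from the original question): read the command as "replay the commits that are reachable from <branch> but not from <upstream>, onto <newbase>". A concrete sketch:

        # take the commits that are on 'topic' but not on 'master'
        # and replay them on top of 'next'
        git rebase --onto next master topic

        # --onto next : <newbase>  -- where the replayed commits land
        # master      : <upstream> -- excludes everything reachable from here
        # topic       : <branch>   -- checked out first; its unique commits move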

    Read the article

  • Can DVCSs enforce a specific workflow?

    - by dukeofgaming
    So, I have this little debate at work where some of my colleagues (who are actually in charge of administrating our Perforce instance) say that workflows are strictly a process thing, and that the tools that we use (in this case, the version control system) have no take on it. In other words, the point that they make is that workflows (and their execution) are tool-agnostic. My take on this is that DVCSs are better at encouraging people in more flexible and well-defined ways, because of the inherent branching occurring in the background (anonymous branches), and that you can enforce workflows through the deployment model you establish (e.g. pull requests through repository management, dictator/lieutenant roles with their machines set up as servers, etc.) I think in CVCSs you have to enforce workflows through policies and policing, because there is only one way to share the code, while in DVCSs you just go with the flow based on the infrastructure/permissions that were set up for you. Even though I have provided the earlier arguments, I'm still unable to fully convince them. Am I saying something the wrong way? If not, what other arguments or examples do you think would be useful to convince them? Edit: The main workflow we have been focusing on, because it makes sense to both sides, is the Dictator/Lieutenants workflow: My argument for this particular workflow is that there is no pipeline in a CVCS (because there is just sharing work in a centralized way), whereas there is an actual pipeline in DVCSs depending on how you deploy read/write permissions. Their argument is that this workflow can be done through branching, and while they do this in some projects (due to policy/policing) in other projects they forbid developers from creating branches.

    Read the article

  • How to market R at your institute?

    - by ran2
    Okay, I admit there are lots of 'R vs. something' threads. The strengths of R are obvious to most people here. Still, advertising R in an environment that has preferred various kinds of other software for quite some time is not easy. Moreover, even in the limited time I've been dealing with R, it improved so dramatically that I would mention things among its strengths that I would not have listed when I started my personal R-evolution. So, what I am trying to do here is to collect the most recent and striking arguments that can be put in a nutshell and be presented easily. What I got on my list so far is: the Springer useR series ggplot2 and its documentation open source CRAN Rapache rcpp rsocket What can you add to this list? SO threads are also very welcome as answers. EDIT: so far, though indeed very helpful, most answers are arguments (pros) why one would want to use R. Do you have some specific hints that I could include in some kind of overview presentation? EDIT2: I wanted to add this link about R's future to the list...

    Read the article

  • The idea of functionN in Scala / Functionaljava

    - by Luke Murphy
    From brain driven development It turns out, that every Function you’ll ever define in Scala, will become an instance of an Implementation which will feature a certain Function Trait. There is a whole bunch of that Function Traits, ranging from Function1 up to Function22. Since Functions are Objects in Scala and Scala is a statically typed language, it has to provide an appropriate type for every Function which comes with a different number of arguments. If you define a Function with two arguments, the compiler picks Function2 as the underlying type. Also, from Michael Froh's blog You need to make FunctionN classes for each number of parameters that you want? Yes, but you define the classes once and then you use them forever, or ideally they're already defined in a library (e.g. Functional Java defines classes F, F2, ..., F8, and the Scala standard library defines classes Function1, ..., Function22) So we have a list of function traits (Scala), and a list of interfaces (Functional-java) to enable us to have first-class functions. I am trying to understand exactly why this is the case. I know, in Java for example, when I write a method say, public int add(int a, int b){ return a + b; } That I cannot go ahead and write add(3,4,5); ( error would be something like : method add cannot be applied to given types ) We simply have to define an interface/trait for functions with different parameters, because of static typing?
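
    To make the desugaring concrete, a small Scala sketch (added here for illustration) of what the compiler does with a two-argument function literal:

        val add = (a: Int, b: Int) => a + b   // inferred type: Function2[Int, Int, Int]

        // roughly what the compiler generates behind the scenes:
        val addExplicit: Function2[Int, Int, Int] = new Function2[Int, Int, Int] {
          def apply(a: Int, b: Int): Int = a + b
        }

        add(3, 4)       // 7, i.e. add.apply(3, 4)
        // add(3, 4, 5) // does not compile: Function2 has no three-argument apply

    Because every arity is a distinct type, the library has to predeclare one trait per arity -- hence Function1 through Function22 in Scala and F through F8 in Functional Java.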

    Read the article

  • C++ and SDL Trouble Creating a STL Vector of a Game Object

    - by Jackson Blades
    I am trying to create a Space Invaders clone using C++ and SDL. The problem I am having is in trying to create Waves of Enemies. I am trying to model this by making my Waves a vector of 8 Enemy objects. My Enemy constructor takes two arguments, an x and y offset. My Wave constructor also takes two arguments, an x and y offset. What I am trying to do is have my Wave constructor initialize a vector of Enemies, and have each enemy given a different x offset so that they are spaced out appropriately. Enemy::Enemy(int x, int y) { box.x = x; box.y = y; box.w = ENEMY_WIDTH; box.h = ENEMY_HEIGHT; xVel = ENEMY_WIDTH / 2; } Wave::Wave(int x, int y) { box.x = x; box.y = y; box.w = WAVE_WIDTH; box.y = WAVE_HEIGHT; xVel = (-1)*ENEMY_WIDTH; yVel = 0; std::vector<Enemy> enemyWave; for (int i = 0; i < enemyWave.size(); i++) { Enemy temp(box.x + ((ENEMY_WIDTH + 16) * i), box.y); enemyWave.push_back(temp); } } I guess what I am asking is if there is a cleaner, more elegant way to do this sort of initialization with vectors, or if this is right at all. Any help is greatly appreciated.
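
    Two hedged observations (editorial, assuming a fixed wave of 8): box.y is assigned twice in the Wave constructor, so box.h = WAVE_HEIGHT; was probably intended, and the for loop is bounded by the size of the still-empty vector, so it never runs. A sketch of the loop with an explicit count; note that enemyWave should be a member of Wave rather than a local, or it is destroyed when the constructor returns:

        static const int WAVE_SIZE = 8;  // assumed number of enemies per wave

        enemyWave.reserve(WAVE_SIZE);
        for (int i = 0; i < WAVE_SIZE; ++i)
        {
            // space each enemy out by its width plus a 16px gap
            enemyWave.push_back(Enemy(box.x + (ENEMY_WIDTH + 16) * i, box.y));
        }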

    Read the article

  • How to convince an employer to move to VB.Net for new development?

    - by Dabblernl
    Some history: For the last six months I have been employed at a small firm with just three programmers, my employer among them. The firm maintains two programs written in VB6. I am assigned as the lead programmer to one of these. In the last six months I did some maintenance and bug hunting, but created some new functionality too. I had an interview last December, which was favorable, and my contract was prolonged. I am very happy with this course of events as I only obtained a .Net certification a year ago and have no other qualifications (in the field of coding, that is). It is my strong opinion that, while migration of the existing program to .Net is advisable, it is paramount that from now on the new functionality should be written in VB.Net class libraries. After some study I found out how simple it is to integrate .Net class libraries into the VB6 development environment and how easy it is to add their functionality to existing installations by using application manifests. So, I have decided that now is the moment to roll up my sleeves and try and convince my employer that he should let me develop new code in VB.Net, using VB6 for maintenance only. We get along quite well, but I think I am going to need all the ammunition I can get to convince him. Any arguments, preferably backed-up ones, are very welcome, even arguments to dissuade me ;-)

    Read the article

  • Depending on fixed version of a library and ignore its updates

    - by Moataz Elmasry
    I was talking to a technical boss yesterday. It's about a project in C++ that depends on OpenCV: he wanted to include a specific OpenCV version in the SVN and keep using this version, ignoring any updates, which I disagreed with. We had a heated discussion about that. His arguments: Everything has to be delivered in one package and we can't ask the client to install external libraries. We depend on a fixed version so that new updates of OpenCV won't screw up our code. We can't guarantee that within a version update, e.g. from 3.2.buildx to 3.2.buildy, the function signatures won't change. My arguments: True, everything has to be delivered to the client as one package, but that's what build scripts are for. They download the external libraries and create a bundle. Within updates of the same version, 3.2.buildx to 3.2.buildy, it's impossible that a signature changes, unless it is a really crappy framework, which isn't the case with OpenCV. We deprive ourselves of new updates and features of that library. If there's a bug in the version we took, then even if there's a bug fix later, we won't be able to get that fix. It's simply inefficient and poor design to depend on a certain version/build of an external library, as it makes our project difficult to adapt to new changes in the future. So I'd like to know what you guys think. Does it really make sense to include a specific version of an external library in our SVN and keep using it, ignoring all updates?

    Read the article

  • Call DB Stored Procedure using @NamedStoredProcedureQuery Injection

    - by anwilson
    An Oracle Database Stored Procedure can be called from the EJB business layer to perform complex DB-specific operations. This approach avoids the overhead of frequent network hits, which could impact the end-user result. A DB Stored Procedure can be invoked from EJB Session Bean business logic using the org.eclipse.persistence.queries.StoredProcedureCall API, but that approach requires more coding to handle the Session and the Arguments of the Stored Procedure, thereby increasing maintenance effort. EJB 3.0 introduces @NamedStoredProcedureQuery Injection to call a Database Stored Procedure as a NamedQuery. This blog will take you through the steps to call an Oracle Database Stored Procedure using @NamedStoredProcedureQuery. The EMP_SAL_INCREMENT procedure available in the HR schema will be used in this sample. Create an Entity from the EMPLOYEES table. Add @NamedStoredProcedureQuery above @NamedQueries to Employees.java with the definition given below - @NamedStoredProcedureQuery(name="Employees.increaseEmpSal", procedureName = "EMP_SAL_INCREMENT", resultClass=void.class, resultSetMapping = "", returnsResultSet = false, parameters = { @StoredProcedureParameter(name = "EMP_ID", queryParameter = "EMPID"), @StoredProcedureParameter(name = "SAL_INCR", queryParameter = "SALINCR")} ) Observe how the Stored Procedure's arguments are handled easily in @NamedStoredProcedureQuery using @StoredProcedureParameter. Expose the Entity Bean by creating a Session Facade. A business method needs to be added to the Session Bean to access the Stored Procedure exposed as a NamedQuery. public void salaryRaise(Long empId, Long salIncrease) throws Exception { try{ Query query = em.createNamedQuery("Employees.increaseEmpSal"); query.setParameter("EMPID", empId); query.setParameter("SALINCR", salIncrease); query.executeUpdate(); } catch(Exception ex){ throw ex; } } Expose the business method through the Session Bean Remote Interface. void salaryRaise(Long empId, Long salIncrease) throws Exception; A Session Bean Client is required to invoke the method exposed through the remote interface. Call the exposed method in the Session Bean Client main method. final Context context = getInitialContext(); SessionEJB sessionEJB = (SessionEJB)context.lookup("Your-JNDI-lookup"); sessionEJB.salaryRaise(new Long(200), new Long(1000)); Deploy the Session Bean. Run the Session Bean Client. The salary of the Employee with Id 200 will be increased by 1000.

    Read the article
