Search Results for 'higher order functions'


  • Using windows command line from Pascal

    - by Jordan
    I'm trying to use some Windows command line tools from within a short Pascal program. To make it easier, I'm writing a function called DoShell which takes a command line string as an argument and returns a record type called ShellResult, with one field for the process's exit code and one field for the process's output text.

    I'm having major problems with some standard library functions not working as expected. The DOS Exec() function is not actually carrying out the command I pass to it. The Reset() procedure gives me a runtime error RunError(2) unless I set the compiler mode {I-}. In that case I get no runtime error, but the Readln() calls that I use on that file afterwards don't actually read anything, and furthermore the Writeln() calls used after that point in the code execution do nothing as well.

    Here's the source code of my program so far. I'm using Lazarus 0.9.28.2 beta, with Free Pascal Compiler 2.24.

        program project1;

        {$mode objfpc}{$H+}

        uses
          Classes, SysUtils, StrUtils, Dos
          { you can add units after this };

        {$IFDEF WINDOWS}{$R project1.rc}{$ENDIF}

        type
          ShellResult = record
            output   : AnsiString;
            exitcode : Integer;
          end;

        function DoShell(command: AnsiString): ShellResult;
        var
          exitcode: Integer;
          output: AnsiString;
          exepath: AnsiString;
          exeargs: AnsiString;
          splitat: Integer;
          F: Text;
          readbuffer: AnsiString;
        begin
          //Initialize variables
          exitcode := 0;
          output := '';
          exepath := '';
          exeargs := '';
          splitat := 0;
          readbuffer := '';
          Result.exitcode := 0;
          Result.output := '';
          //Split command for processing
          splitat := NPos(' ', command, 1);
          exepath := Copy(command, 1, Pred(splitat));
          exeargs := Copy(command, Succ(splitat), Length(command));
          //Run command and put output in temporary file
          Exec(FExpand(exepath), exeargs + ' __output');
          exitcode := DosExitCode();
          //Get output from file
          Assign(F, '__output');
          Reset(F);
          Repeat
            Readln(F, readbuffer);
            output := output + readbuffer;
            readbuffer := '';
          Until Eof(F);
          //Set Result
          Result.exitcode := exitcode;
          Result.output := output;
        end;

        var
          I : AnsiString;
          R : ShellResult;
        begin
          Writeln('Enter a command line to run.');
          Readln(I);
          R := DoShell(I);
          Writeln('Command Exit Code:');
          Writeln(R.exitcode);
          Writeln('Command Output:');
          Writeln(R.output);
        end.
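    For comparison only, not part of the original question: on .NET the same run-and-capture pattern needs no temporary file, because standard output can be redirected directly. A minimal sketch in C#, with the command already split into an executable path and arguments:

        using System.Diagnostics;

        class ShellResult
        {
            public string Output;
            public int ExitCode;
        }

        static class Shell
        {
            // Run a program and capture its exit code and standard output.
            public static ShellResult DoShell(string exePath, string args)
            {
                var psi = new ProcessStartInfo(exePath, args)
                {
                    UseShellExecute = false,       // required for redirection
                    RedirectStandardOutput = true,
                    CreateNoWindow = true
                };
                using (var p = Process.Start(psi))
                {
                    // Read before waiting so a full output pipe can't deadlock the child.
                    string output = p.StandardOutput.ReadToEnd();
                    p.WaitForExit();
                    return new ShellResult { Output = output, ExitCode = p.ExitCode };
                }
            }
        }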

  • How to use objects as modules/functors in Scala?

    - by Jeff
    Hi. I want to use object instances as modules/functors, more or less as shown below:

        abstract class Lattice[E] extends Set[E] {
          val minimum: E
          val maximum: E
          def meet(x: E, y: E): E
          def join(x: E, y: E): E
          def neg(x: E): E
        }

        class Calculus[E](val lat: Lattice[E]) {
          abstract class Expr
          case class Var(name: String) extends Expr {...}
          case class Val(value: E) extends Expr {...}
          case class Neg(e1: Expr) extends Expr {...}
          case class Cnj(e1: Expr, e2: Expr) extends Expr {...}
          case class Dsj(e1: Expr, e2: Expr) extends Expr {...}
        }

    This way I can create a different calculus instance for each lattice (the operations I will perform need to know the maximum and minimum values of the lattice). I want to be able to mix expressions of the same calculus but not be allowed to mix expressions of different ones. So far, so good.

    I can create my calculus instances, but the problem is that I cannot write functions in other classes that manipulate them. For example, I am trying to create a parser to read expressions from a file and return them; I was also trying to write a random expression generator to use in my tests with ScalaCheck. It turns out that every time a function generates an Expr object, I can't use it outside the function. Even if I create the Calculus instance and pass it as an argument to the function that will in turn generate the Expr objects, the return value of the function is not recognized as being of the same type as the objects created outside the function.

    Maybe my English is not clear enough; let me try a toy example of what I would like to do (not the real ScalaCheck generator, but close enough).

        def genRndExpr[E](c: Calculus[E], level: Int): Calculus[E]#Expr = {
          if (level > MAX_LEVEL) {
            val select = util.Random.nextInt(2)
            select match {
              case 0 => genRndVar(c)
              case 1 => genRndVal(c)
            }
          } else {
            val select = util.Random.nextInt(3)
            select match {
              case 0 => new c.Neg(genRndExpr(c, level+1))
              case 1 => new c.Dsj(genRndExpr(c, level+1), genRndExpr(c, level+1))
              case 2 => new c.Cnj(genRndExpr(c, level+1), genRndExpr(c, level+1))
            }
          }
        }

    Now, if I try to compile the above code I get lots of:

        error: type mismatch;
         found   : plg.mvfml.Calculus[E]#Expr
         required: c.Expr
            case 0 => new c.Neg(genRndExpr(c, level+1))

    And the same happens if I try to do something like:

        val boolCalc = new Calculus(Bool)
        val e1: boolCalc.Expr = genRndExpr(boolCalc)

    Please note that the generator itself is not the concern; I will need to do similar things (i.e. create and manipulate calculus instance expressions) a lot in the rest of the system. Am I doing something wrong? Is it possible to do what I want to do? Help on this matter is highly needed and appreciated. Thanks a lot in advance.

    After receiving an answer from Apocalisp and trying it: thanks a lot for the answer, but there are still some issues. The proposed solution was to change the signature of the function to:

        def genRndExpr[E, C <: Calculus[E]](c: C, level: Int): C#Expr

    I changed the signature for all the functions involved: genRndExpr, genRndVal and genRndVar. Everywhere I call these functions I now get the following error:

        error: inferred type arguments [Nothing,C] do not conform to
        method genRndVar's type parameter bounds [E,C <: Calculus[E]]
            case 0 => genRndVar(c)

    Since the compiler seemed to be unable to figure out the right types, I changed all the function calls to look like this:

        case 0 => new c.Neg(genRndExpr[E,C](c, level+1))

    After this, the first 2 function calls (genRndVal and genRndVar) produced no compile error, but on the following 3 calls (the recursive calls to genRndExpr), where the return value of the function is used to build a new Expr object, I got the following error:

        error: type mismatch;
         found   : C#Expr
         required: c.Expr
            case 0 => new c.Neg(genRndExpr[E,C](c, level+1))

    So, again, I'm stuck. Any help will be appreciated.

  • JQuery get attr value from ajax loaded page

    - by Paolo Rossi
    In my index.php I have an AJAX pager that loads a content page (paginator.php).

    index.php:

        <script type='text/javascript' src='js/jquery-1.11.1.min.js'></script>
        <script type='text/javascript' src='js/paginator.js'></script>
        <script type='text/javascript' src='js/functions.js'></script>
        <!-- Add fancyBox -->
        <script type="text/javascript" src="js/jquery.fancybox.pack.js?v=2.1.5"></script>
        <html>
        <div id="content"></div>

    paginator.js:

        $(function() {
            var pageurl = 'paginator.php';

            //Initialise the table at the first run
            $('#content').load(pageurl, { pag: 1 }, function() {
                //animation();
            });

            //Page navigation click
            $(document).on('click', 'li.goto', function() {
                //Get the id of clicked li
                var pageNum = $(this).attr('id');
                $('#content').load(pageurl, { pag: pageNum }, function() {
                    //animation();
                });
            });
        }); /* END */

    paginator.php:

        <ul><li>..etc </ul>
        ...
        ...
        <a id='test' vid='123'>get the id</a>
        <a id='test2' url='somepage.php'>open fancybox</a>

    I would like to read an attribute of the loaded page (paginator.php) in this way, but unfortunately I cannot:

    functions.js:

        $(function() {
            $('#test').click(function(e) {
                var id = $('#test').attr('vid');
                alert("vid is " + vid);
            });
        }); /* END */

    In this case the popup does not appear. I also tried adding fancybox in my functions.js. The dialog box appears, but the url attr is not picked up:

        var url = $('#test2').attr('url');
        $('#test').fancybox({
            'href'          : url,
            'width'         : 100,
            'height'        : 100,
            'autoScale'     : false,
            'transitionIn'  : 'none',
            'transitionOut' : 'none',
            'type'          : 'iframe',
            'fitToView'     : true,
            'autoSize'      : false
        });

    In other scenarios, where the page is not AJAX-loaded, I can do this, but in this case the behavior is different. How could I do this? Thanks.

    EDIT: my goal is to open a dialog (fancybox) taking a variable from the loaded page.

    EDIT2: I tried it this way:

    paginator.php:

        <a id="test" vid="1234">click me</a>

    paginator.js:

        //Initialise the table at the first run
        $('#content').load(pageurl, { pag: 1 }, function() {
            hide_anim();
            popup();
        });

        //Page navigation click
        $(document).on('click', 'li.goto', function() {
            //Get the id of clicked li
            var pageNum = $(this).attr('id');
            $('#content').load(pageurl, { pag: pageNum }, function() {
                hide_anim();
                popup();
            });
        })

    function.js:

        function popup() {
            $('#test').click(function(e) {
                e.preventDefault();
                var vid = $('#test').attr('vid');
                alert("vid is " + vid);
            });
        }

    But the popup still does not appear...

  • Converting OCaml to F#: F# equivalent of Pervasives at_exit

    - by Guy Coder
    I am converting the OCaml Format module to F# and tracked a problem back to a use of the OCaml Pervasives at_exit.

        val at_exit : (unit -> unit) -> unit

    Register the given function to be called at program termination time. The functions registered with at_exit will be called when the program executes exit, or terminates, either normally or because of an uncaught exception. The functions are called in "last in, first out" order: the function most recently added with at_exit is called first.

    In the process of conversion I commented out the line, as the compiler did not flag it as being needed and I was not expecting an event in the code. I checked the FSharp.PowerPack.Compatibility.PervasivesModule for at_exit using the VS Object Browser and found none. I did find "how to run code 'at_exit'?" and "How do I write an exit handler for an F# application?"

    The OCaml line is:

        at_exit print_flush

    with print_flush signature:

        val print_flush : (unit -> unit)

    Also, looking at its use during a debug session of the OCaml code, it looks like at_exit is called both at the end of initialization and at the end of each use of a call to the module. Any suggestions or hints on how to do this? This will be my first event in F#.

    EDIT

    Here is some of what I have learned about the Format module that should shed some light on the problem. The Format module is a library of functions for basic pretty-printer commands for simple OCaml values such as int, bool, string. The Format module has commands like print_string, but also some commands to, say, put the next line in a bounded box – think of a new set of left and right margins. So one could write:

        print_string "Hello"

    or

        open_box 0;
        print_string "<<";
        open_box 0;
        print_string "p \/ q ==> r";
        close_box();
        print_string ">>";
        close_box()

    Commands such as open_box and print_string are handled by a loop that interprets the commands and then decides whether to print on the current line or advance to the next line. The commands are held in a queue, and there is a state record to hold mutable values such as the left and right margins. The queue and state need to be primed, which from debugging the test cases against working OCaml code appears to be done at the end of initialization of the module, but before the first call is made to any function in the Format module. The queue and state are cleaned up and primed again for the next set of commands by the at_exit mechanism, which recognizes that the last matching frame for the initial call to the Format module has been removed, thus triggering the call to at_exit, which pushes out any remaining commands in the queue and re-initializes the queue and state. So the sequencing of the calls to print_flush is critical and appears to be more than what the OCaml documentation states.
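    For what it's worth, .NET does expose a process-exit hook that can emulate at_exit's LIFO behavior. A minimal sketch in C# (the AtExit class and its Register method are illustrative names, not a library API; note that ProcessExit does not fire on every abnormal termination):

        using System;
        using System.Collections.Generic;

        static class AtExit
        {
            // Registered handlers, called in last-in-first-out order like OCaml's at_exit.
            private static readonly Stack<Action> handlers = new Stack<Action>();

            static AtExit()
            {
                AppDomain.CurrentDomain.ProcessExit += (sender, e) =>
                {
                    while (handlers.Count > 0)
                        handlers.Pop()();   // most recently registered runs first
                };
            }

            public static void Register(Action handler) => handlers.Push(handler);
        }

        class Program
        {
            static void Main()
            {
                AtExit.Register(() => Console.WriteLine("flush pretty-printer queue"));
                Console.WriteLine("program body");
            }   // on normal termination the registered action runs
        }

    Popping from a stack gives the same "function most recently added is called first" ordering that the OCaml documentation quoted above requires.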

  • Dynamic obfuscation by self-modifying code

    - by Fallout2
    Hi all, here's what I am trying to do. Assume you have two functions:

        void f1(int *v) { *v = 55; }
        void f2(int *v) { *v = 44; }

        char *template;
        template = allocExecutablePages(...);

        char *allocExecutablePages(int pages)
        {
            template = (char *) valloc(getpagesize() * pages);
            if (mprotect(template, getpagesize(), PROT_READ|PROT_EXEC|PROT_WRITE) == -1) {
                perror("mprotect");
            }
        }

    I would like to do a comparison between f1 and f2 (so tell what is identical and what is not – that is, get the assembly of those functions and make a line-by-line comparison) and then put those lines in my template. Is there a way in C to do that? Thanks.

    Update

    Thanks for all your answers, but maybe I haven't explained my need correctly. Basically I'm trying to write a little obfuscation method. The idea consists in letting two or more functions share the same location in memory. A region of memory (which we will call a template) is set up containing some of the machine code bytes from the functions, more specifically, the ones they all have in common. Before a particular function is executed, an edit script is used to patch the template with the necessary machine code bytes to create a complete version of that function. When another function assigned to the same template is about to be executed, the process repeats, this time with a different edit script.

    To illustrate this, suppose you want to obfuscate a program that contains two functions f1 and f2. The first one (f1) has the following machine code bytes:

        Address  Machine code
        0        10
        1        5
        2        6
        3        20

    and the second one (f2) has:

        Address  Machine code
        0        10
        1        9
        2        3
        3        20

    At obfuscation time, one will replace f1 and f2 by the template

        Address  Machine code
        0        10
        1        ?
        2        ?
        3        20

    and by the two edit scripts e1 = {1 becomes 5, 2 becomes 6} and e2 = {1 becomes 9, 2 becomes 3}.

        #include <stdlib.h>
        #include <string.h>

        typedef unsigned int uint32;
        typedef char * addr_t;

        typedef struct {
            uint32 offset;
            char value;
        } EDIT;

        EDIT script1[200], script2[200];
        char *template;
        int template_len, script_len = 0;

        typedef void (*FUN)(int *);

        int val, state = 0;

        void f1_stub()
        {
            if (state != 1) {
                patch(script1, script_len, template);
                state = 1;
            }
            ((FUN)template)(&val);
        }

        void f2_stub()
        {
            if (state != 2) {
                patch(script2, script_len, template);
                state = 2;
            }
            ((FUN)template)(&val);
        }

        int new_main(int argc, char **argv)
        {
            f1_stub();
            f2_stub();
            return 0;
        }

        void f1(int *v) { *v = 99; }
        void f2(int *v) { *v = 42; }

        int main(int argc, char **argv)
        {
            int f1SIZE, f2SIZE;
            /* makeCodeWritable(...); */
            /* template = allocExecutablePages(...); */

            /* Computed at obfuscation time */
            diff((addr_t)f1, f1SIZE, (addr_t)f2, f2SIZE,
                 script1, script2, &script_len, template, &template_len);

            /* We hide the proper code */
            memset(f1, 0, f1SIZE);
            memset(f2, 0, f2SIZE);

            return new_main(argc, argv);
        }

    So now I need to write the diff function that will take the addresses of my two functions and generate the template with the associated scripts. That is why I would like to compare my two functions byte by byte. Sorry that my first post was not very understandable! Thank you.
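    To make the diff idea concrete, here is a minimal sketch in C# (illustrative only; Edit, Diff and Patch are hypothetical names mirroring the post's EDIT struct). It assumes the two functions' machine code has already been copied into byte arrays of the same length, and it ignores instruction boundaries, which a real implementation would have to respect:

        using System.Collections.Generic;

        struct Edit { public int Offset; public byte Value; }

        static class Differ
        {
            // Build a shared template plus per-function edit scripts from two
            // equal-length machine-code buffers.
            public static byte[] Diff(byte[] code1, byte[] code2,
                                      List<Edit> script1, List<Edit> script2)
            {
                var template = new byte[code1.Length];
                for (int i = 0; i < code1.Length; i++)
                {
                    if (code1[i] == code2[i])
                    {
                        template[i] = code1[i];    // byte shared by both functions
                    }
                    else
                    {
                        template[i] = 0x90;        // placeholder (x86 NOP)
                        script1.Add(new Edit { Offset = i, Value = code1[i] });
                        script2.Add(new Edit { Offset = i, Value = code2[i] });
                    }
                }
                return template;
            }

            // Patch the template so it becomes a complete copy of one function.
            public static void Patch(byte[] template, List<Edit> script)
            {
                foreach (var e in script)
                    template[e.Offset] = e.Value;
            }
        }

    The placeholder byte is arbitrary since every differing position is patched before execution; a NOP is just a recognizable filler when inspecting the template.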

  • C++ Serial Port Question

    - by Pfeffer
    Problem: I have a handheld device that scans those graphic color barcodes on all packaging. There is a track device that I can use that will slide the scanner automatically. This track device functions by taking ASCII codes through a serial port. I need to get this thing to work in FileMaker on a Mac, so no terminal programs, etc...

    What I've got so far: I bought a Keyspan USB/serial adapter. Using a program called ZTerm I was successful in sending commands to the device, for example: "C,7^M^J". I was also able to do the same thing in Terminal using this command:

        screen /dev/tty.KeySerial1 57600

    and then typing in the same command above (but when I typed it in, I just hit Control-M and Control-J for the carriage return and line feed).

    Now I'm writing a plug-in for FileMaker (in C++ of course). I want to do what I did above in C++, so when I install that plug-in in FileMaker I can just call one of those functions and have the whole process take place right there. I'm able to connect to the device, but I can't talk to it; it is not responding to anything. I've tried connecting to the device (successfully) using these:

        FILE *comport;
        if ((comport = fopen("/dev/tty.KeySerial1", "w")) == NULL) {...}

    and

        int fd;
        fd = open("/dev/tty.KeySerial1", O_RDWR | O_NOCTTY | O_NDELAY);

    This is what I've tried so far in the way of talking to the device:

        fputs("C,7^M^J", comport);

    or

        fprintf(comport, "C,7^M^J");

    or

        char buffer[] = { 'C', ',', '7', '^', 'M', '^', 'J' };
        fwrite(buffer, 1, sizeof(buffer), comport);

    or

        fwrite('C,7^M^J', 1, 1, comport);

    Questions:

    - When I connected to the device from Terminal and ZTerm, I was able to set my baud rate of 57600. I think that may be why it isn't responding here, but I don't know how to set the baud rate in code. Does anyone know how to do that? I tried this, but it didn't work:

        comport->BaudRate = 57600;

    - There are a lot of class solutions out there, but they all include files like termios.h and stdio.h. I don't have these and, for whatever reason, I can't find them to download. I've downloaded a few examples, but there are like 20 files in them and they're all calling other files I can't find (like the ones listed above). Do I need to find these, and if so, where? I just don't know enough about C++. Is there a website where I can download libraries?

    - Another solution might be to put those terminal commands in C++. Is there a way to do that?

    This has been driving me crazy. I'm not a C++ guy, I only know basic programming concepts. Is anyone out there a C++ expert? Ideally I'd like this to just work using functions I already have, like that fwrite and fputs stuff. Thanks!
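    Two observations worth adding. First, in a C or C++ string literal "^M^J" is four literal characters (caret, M, caret, J), not a carriage return and line feed; the control characters are "\r\n". Second, the baud rate belongs to the port itself (configured via termios on POSIX systems), not to the FILE stream, so assigning to a member of a FILE pointer cannot work. For comparison, here is the same configure-then-write flow sketched in C# with System.IO.Ports.SerialPort (an assumption for this context: on a Mac this would need Mono, and the device path is taken from the question):

        using System;
        using System.IO.Ports;

        class TrackController
        {
            static void Main()
            {
                // Configure everything before opening: 57600 baud, 8 data bits, no parity, 1 stop bit.
                using (var port = new SerialPort("/dev/tty.KeySerial1", 57600,
                                                 Parity.None, 8, StopBits.One))
                {
                    port.NewLine = "\r\n";               // the real CR/LF the device expects
                    port.Open();
                    port.Write("C,7\r\n");               // send the command with control characters
                    Console.WriteLine(port.ReadLine());  // read one response line, if any
                }
            }
        }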

  • How to determine if CNF formula is satisfiable in Scheme?

    - by JJBIRAN
    Program a SCHEME function sat that takes one argument, a CNF formula represented as above. If we had evaluated

        (define cnf '((a (not b) c) (a (not b) (not d)) (b d)))

    then evaluating (sat cnf) would return #t, whereas (sat '((a) (not a))) would return ().

    You should have the following two functions for this to work:

        (define comp
          (lambda (lit)
            ; This function takes a literal as argument and returns the
            ; complement literal as the returning value.
            ; Examples: (comp 'a) = (not a), and (comp '(not b)) = b.

        (define consistent
          (lambda (lit path)
            ; This function takes a literal and a list of literals as arguments,
            ; and returns #t whenever the complement of the first argument is not
            ; a member of the list represented by the 2nd argument; () otherwise.

    Now for the sat function. The real searching involves the list of clauses (the CNF formula) and the path that has currently been developed. The sat function should merely invoke the real "workhorse" function, which will have 2 arguments, the current path and the clause list. In the initial call, the current path is of course empty.

    Hints on sat. (Ignore these at your own risk!)

        (define sat
          (lambda (clauselist)
            ; invoke satpath

        (define satpath
          (lambda (path clauselist)   ; just returns #t or ()
            ; base cases:
            ;   if we're out of clauses, what then?
            ;   if there are no literals to choose in the 1st clause, what then?
            ;
            ; then in general:
            ;   if the 1st literal in the 1st clause is consistent with the
            ;   current path, and if << returns #t,
            ;   then return #t.
            ;
            ;   if the 1st literal didn't work, then search <<
            ;   the CNF formula in which the 1st clause doesn't have that literal

    Don't make this too hard. My program is a few functions averaging about 2-8 lines each. SCHEME is concise and elegant!

    The following expressions may help you to test your programs. All but cnf4 are satisfiable. By including them along with your function definitions, the functions themselves are automatically tested and the results displayed when the file is loaded.

        (define cnf1 '((a b c) (c d) (e)))
        (define cnf2 '((a c) (c)))
        (define cnf3 '((d e) (a)))
        (define cnf4 '((a b) (a (not b)) ((not a) b) ((not a) (not b))))
        (define cnf5 '((d a) (d b c) ((not a) (not d)) (e (not d)) ((not b)) ((not d) (not e))))
        (define cnf6 '((d a) (d b c) ((not a) (not d) (not c)) (e (not c)) ((not b)) ((not d) (not e))))

        (write-string "(sat cnf1) ") (write (sat cnf1)) (newline)
        (write-string "(sat cnf2) ") (write (sat cnf2)) (newline)
        (write-string "(sat cnf3) ") (write (sat cnf3)) (newline)
        (write-string "(sat cnf4) ") (write (sat cnf4)) (newline)
        (write-string "(sat cnf5) ") (write (sat cnf5)) (newline)
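    To make the hinted search concrete without giving away the Scheme solution, here is the same backtracking idea sketched in C# (all names are illustrative): a path is the list of literals committed to so far, a literal is consistent with a path if its complement is not already on it, and the formula is satisfiable if every clause can contribute one consistent literal.

        using System.Collections.Generic;

        record Literal(string Name, bool Negated)
        {
            public Literal Complement() => this with { Negated = !Negated };
        }

        static class Sat
        {
            public static bool Satisfiable(List<List<Literal>> clauses) =>
                SatPath(new List<Literal>(), clauses, 0);

            static bool SatPath(List<Literal> path, List<List<Literal>> clauses, int i)
            {
                if (i == clauses.Count) return true;        // no clauses left: satisfied
                foreach (var lit in clauses[i])
                {
                    // consistent = complement not already on the path
                    if (!path.Contains(lit.Complement()))
                    {
                        path.Add(lit);
                        if (SatPath(path, clauses, i + 1)) return true;
                        path.RemoveAt(path.Count - 1);      // backtrack, try next literal
                    }
                }
                return false;                               // no literal in this clause works
            }
        }

        // Mirrors (sat '((a) (not a))) => false:
        //   var a = new Literal("a", false);
        //   Sat.Satisfiable(new List<List<Literal>> { new() { a }, new() { a.Complement() } });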

  • Running game, leaving game and continuing animation

    - by Madrusec
    I have been trying to learn some ActionScript recently and have been trying to run an interactive story that at one point turns into an extremely simple shooter game. After the player either wins or loses, he/she is taken to the rest of the animated story. So I have everything up to the point where the game runs (successfully), but for some reason I am unable to have Flash run the rest of the frames, most of which have no code at all. This is the code for scene 1:

        stop();

        import flash.display.MovieClip;
        import flash.events.MouseEvent;
        import flash.utils.Timer;
        import flash.events.TimerEvent;

        var stoneInGame:uint;
        var stoneMaker:Timer;
        var container_mc:MovieClip;
        var cursor:MovieClip;
        var score:int;
        var anxiety:int;
        var anxiety_mc:MovieClip = new mcAnxiety();
        //stage.addChild( anxiety_mc );

        function initializeGame():void {
            stoneInGame = 10;
            stoneMaker = new Timer(1000, stoneInGame);
            container_mc = new MovieClip();
            addChild(container_mc);
            container_mc.addChild(anxiety_mc);
            anxiety_mc.x = 497;
            anxiety_mc.y = 360;
            stoneMaker.addEventListener(TimerEvent.TIMER, createStones);
            stoneMaker.start();
            cursor = new Cursor();
            addChild(cursor);
            cursor.enabled = false;
            Mouse.hide();
            stage.addEventListener(MouseEvent.MOUSE_MOVE, dragCursor);
            score = 0;
            anxiety = anxiety_mc.totalFrames;
            anxiety_mc.gotoAndStop(anxiety);
        }

        function dragCursor(event:MouseEvent):void {
            cursor.x = this.mouseX;
            cursor.y = this.mouseY;
        }

        function createStones(event:TimerEvent):void {
            var stone:MovieClip;
            stone = new Stone();
            stone.x = Math.random() * stage.stageWidth;
            stone.y = Math.random() * stage.stageHeight;
            container_mc.addChild(stone);
        }

        function increaseScore():void {
            score++;
            if (score >= stoneInGame) {
                dragCursor.stop();
                createStones.stop();
                stoneMaker.stop();
                trace("you win!");
            }
        }

        function decreaseAnxiety():void {
            anxiety--;
            if (anxiety <= 0) {
                stoneMaker.stop();
                trace("you lose!");
            } else {
                anxiety_mc.gotoAndStop(anxiety);
            }
            increaseScore();
        }

        initializeGame();

    So what I tried to do was add gotoAndPlay() inside both the decreaseAnxiety and increaseScore functions, after the trace statements, referencing a frame where I have more keyframes that continue the story. However, Flash just goes back to the beginning of the timeline, and even the functions that change and control the cursor seem to keep running. This leads me to believe that I need to tell Flash to stop running certain functions before jumping to another frame. However, it seems to me that I would still have the same issue and not be able to continue in the timeline. Is there something I am missing? How can I jump out of all this code once the game finishes and simply continue playing the rest of the frames? Any pointers would be greatly appreciated. Thank you

  • Testing for Adjacent Cells In a Multi-level Grid

    - by Steve
    I'm designing an algorithm to test whether cells on a grid are adjacent or not. The catch is that the cells are not on a flat grid. They are on a multi-level grid such as the one drawn below.

        Level 1 (Top Level)
        | - - - - - |
        | A | B | C |
        | - - - - - |
        | D | E | F |
        | - - - - - |
        | G | H | I |
        | - - - - - |

        Level 2
        | -Block A- | -Block B- |
        | 1 | 2 | 3 | 1 | 2 | 3 |
        | - - - - - | - - - - - |
        | 4 | 5 | 6 | 4 | 5 | 6 | ...
        | - - - - - | - - - - - |
        | 7 | 8 | 9 | 7 | 8 | 9 |
        | - - - - - | - - - - - |
        | -Block D- | -Block E- |
        | 1 | 2 | 3 | 1 | 2 | 3 |
        | - - - - - | - - - - - |
        | 4 | 5 | 6 | 4 | 5 | 6 | ...
        | - - - - - | - - - - - |
        | 7 | 8 | 9 | 7 | 8 | 9 |
        | - - - - - | - - - - - |
             .           .
             .           .

    This diagram is simplified from my actual need, but the concept is the same. There is a top-level block with many cells within it (level 1). Each block is further subdivided into many more cells (level 2). Those cells are further subdivided into levels 3, 4 and 5 in my project, but let's just stick to two levels for this question. I'm receiving inputs for my function in the form of "A8, A9, B7, D3". That's a list of cell IDs where each cell ID has the format (level 1 id)(level 2 id).

    Let's start by comparing just 2 cells, A8 and A9. That's easy because they are in the same block.

        private static RelativePosition getRelativePositionInTheSameBlock(String v1, String v2) {
            RelativePosition relativePosition;
            if (v1 - v2 == -1) {
                relativePosition = RelativePosition.LEFT_OF;
            } else if (v1 - v2 == 1) {
                relativePosition = RelativePosition.RIGHT_OF;
            } else if (v1 - v2 == -BLOCK_WIDTH) {
                relativePosition = RelativePosition.TOP_OF;
            } else if (v1 - v2 == BLOCK_WIDTH) {
                relativePosition = RelativePosition.BOTTOM_OF;
            } else {
                relativePosition = RelativePosition.NOT_ADJACENT;
            }
            return relativePosition;
        }

    An A9 - B7 comparison could be done by checking whether A is a multiple of BLOCK_WIDTH and whether B is (A - BLOCK_WIDTH + 1). Either that, or just check naively whether the A/B pair is 3-1, 6-4 or 9-7 for better readability. For B7 - D3, they are not adjacent, but D3 is adjacent to A9, so I can do a similar adjacency test as above.

    So, getting away from the little details and focusing on the big picture: is this really the best way to do it? Keep in mind the following points:

    - I actually have 5 levels, not 2, so I could potentially get a list like "A8A1A, A8A1B, B1A2A, B1A2B".
    - Adding a new cell to compare still requires me to compare it against all the other cells before it (it seems the best I could do for this step is O(n)).
    - The cells aren't all 3x3 blocks; they're just that way for my example. They could be MxN blocks with different M and N at different levels.
    - In my current implementation, I have separate functions to check adjacency when the cells are in the same block, in separate horizontally adjacent blocks, or in separate vertically adjacent blocks. That means I have to know the position of the two blocks at the current level before I call one of those functions for the layer below.

    Judging by the complexity of having to deal with multiple functions for different edge cases at different levels, and having 5 levels of nested if statements, I'm wondering whether another design is more suitable. Perhaps a more recursive solution, other data structures, or perhaps mapping the entire multi-level grid to a single-level grid (my quick calculation gives me about 700,000+ atomic cell IDs). Even if I go that route, mapping from multi-level to single-level is a non-trivial task in itself.
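    As an aside on the "map the entire multi-level grid to a single-level grid" idea: if each level's block dimensions are known, a cell's per-level (row, column) indices can be collapsed into one absolute coordinate, and adjacency becomes a Manhattan-distance-of-1 check. A minimal sketch in C# (the names and the 3x3 dimensions are illustrative, not from the original post):

        using System;

        static class GridAdjacency
        {
            // Per-level block dimensions, outermost first (e.g. 3x3 blocks of 3x3 cells).
            static readonly (int Rows, int Cols)[] Levels = { (3, 3), (3, 3) };

            // Collapse per-level (row, col) indices into one absolute coordinate.
            static (int Row, int Col) ToAbsolute((int Row, int Col)[] perLevel)
            {
                int row = 0, col = 0;
                for (int i = 0; i < perLevel.Length; i++)
                {
                    row = row * Levels[i].Rows + perLevel[i].Row;
                    col = col * Levels[i].Cols + perLevel[i].Col;
                }
                return (row, col);
            }

            // Two cells are adjacent when exactly one axis differs by exactly one.
            public static bool AreAdjacent((int, int)[] a, (int, int)[] b)
            {
                var (r1, c1) = ToAbsolute(a);
                var (r2, c2) = ToAbsolute(b);
                return Math.Abs(r1 - r2) + Math.Abs(c1 - c2) == 1;
            }
        }

        // "A8" = block A (0,0), cell 8 (2,1); "A9" = block A (0,0), cell 9 (2,2):
        //   GridAdjacency.AreAdjacent(new[] { (0, 0), (2, 1) },
        //                             new[] { (0, 0), (2, 2) })  // true

    The appeal of this shape is that adding a level only means adding an entry to the dimensions array; no new edge-case functions are needed for block boundaries, and the per-level comparisons disappear entirely.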

  • jqgrid with asp.net webmethod and json working with sorting, paging, searching and LINQ

    - by aimlessWonderer
    THIS WORKS! Most topics covering jqGrid and asp.net seem to relate to just receiving JSON, working in the MVC framework, or utilizing other handlers or web services... but not many deal with actually passing parameters back to an actual webmethod in the codebehind. Furthermore, scarce are the examples that contain a successful implementation of AJAX paging, sorting, or searching along with LINQ to SQL for an asp.net jqGrid. Below is a working example that may help others who need to pass parameters to jqGrid in order to have correct paging, sorting and filtering. It uses pieces from here and there...

    ==================================================
    First, THE JAVASCRIPT

        <script type="text/javascript">
        $(document).ready(function() {
            var grid = $("#list");
            $("#list").jqGrid({
                // setup custom parameter names to pass to server
                prmNames: {
                    search: "isSearch",
                    nd: null,
                    rows: "numRows",
                    page: "page",
                    sort: "sortField",
                    order: "sortOrder"
                },
                // add by default to avoid webmethod parameter conflicts
                postData: { searchString: '', searchField: '', searchOper: '' },
                // setup ajax call to webmethod
                datatype: function(postdata) {
                    mtype: "GET",
                    $.ajax({
                        url: 'PageName.aspx/getGridData',
                        type: "POST",
                        contentType: "application/json; charset=utf-8",
                        data: JSON.stringify(postdata),
                        dataType: "json",
                        success: function(data, st) {
                            if (st == "success") {
                                var grid = jQuery("#list")[0];
                                grid.addJSONData(JSON.parse(data.d));
                            }
                        },
                        error: function() {
                            alert("Error with AJAX callback");
                        }
                    });
                },
                // this is what jqGrid is looking for in the json callback
                jsonReader: {
                    root: "rows",
                    page: "page",
                    total: "totalpages",
                    records: "totalrecords",
                    cell: "cell",
                    id: "id", // index of the column with the PK in it
                    userdata: "userdata",
                    repeatitems: true
                },
                colNames: ['Id', 'First Name', 'Last Name'],
                colModel: [
                    { name: 'id', index: 'id', width: 55, search: false },
                    { name: 'fname', index: 'fname', width: 200, searchoptions: { sopt: ['eq', 'ne', 'cn'] } },
                    { name: 'lname', index: 'lname', width: 200, searchoptions: { sopt: ['eq', 'ne', 'cn'] } }
                ],
                rowNum: 10,
                rowList: [10, 20, 30],
                pager: jQuery("#pager"),
                sortname: "fname",
                sortorder: "asc",
                viewrecords: true,
                caption: "Grid Title Here"
            }).jqGrid('navGrid', '#pager',
                { edit: false, add: false, del: false },
                {}, // default settings for edit
                {}, // add
                {}, // delete
                { closeOnEscape: true, closeAfterSearch: true }, // search
                {}
            )
        });
        </script>

    ==================================================
    Second, THE C# WEBMETHOD

        [WebMethod]
        public static string getGridData(int? numRows, int? page, string sortField, string sortOrder,
                                         bool isSearch, string searchField, string searchString, string searchOper)
        {
            string result = null;
            MyDataContext db = null;
            try
            {
                //--- retrieve the data
                db = new MyDataContext("my connection string path");
                var query = from u in db.TBL_USERs select u;

                //--- determine if this is a search filter
                if (isSearch)
                {
                    searchOper = getOperator(searchOper); // associate the correct operator to the value sent from jqGrid
                    string whereClause = String.Format("{0} {1} {2}", searchField, searchOper, "@" + searchField);

                    //--- associate value to field parameter
                    Dictionary<string, object> param = new Dictionary<string, object>();
                    param.Add("@" + searchField, searchString);
                    query = query.Where(whereClause, new object[1] { param });
                }

                //--- setup calculations
                int pageIndex = page ?? 1;         //--- current page
                int pageSize = numRows ?? 10;      //--- number of rows to show per page
                int totalRecords = query.Count();  //--- number of total items from query
                int totalPages = (int)Math.Ceiling((decimal)totalRecords / (decimal)pageSize); //--- number of pages

                //--- filter dataset for paging and sorting
                IQueryable<TBL_USER> orderedRecords = query.OrderBy(sortField);
                IEnumerable<TBL_USER> sortedRecords = orderedRecords.ToList();
                if (sortOrder == "desc")
                    sortedRecords = sortedRecords.Reverse();
                sortedRecords = sortedRecords
                    .Skip((pageIndex - 1) * pageSize) //--- page the data
                    .Take(pageSize);

                //--- format json
                var jsonData = new
                {
                    totalpages = totalPages,     //--- number of pages
                    page = pageIndex,            //--- current page
                    totalrecords = totalRecords, //--- total items
                    rows = (
                        from row in sortedRecords
                        select new
                        {
                            i = row.USER_ID,
                            cell = new string[] {
                                row.USER_ID.ToString(),
                                row.FNAME.ToString(),
                                row.LNAME
                            }
                        }
                    ).ToArray()
                };
                result = Newtonsoft.Json.JsonConvert.SerializeObject(jsonData);
            }
            catch (Exception ex)
            {
                Debug.WriteLine(ex);
            }
            finally
            {
                if (db != null)
                    db.Dispose();
            }
            return result;
        }

    ==================================================
    Third, NECESSITIES

    In order to have dynamic ORDERBY clauses in the LINQ, I had to pull a class called 'Dynamic.cs' into my AppCode folder. You can retrieve the file by downloading here; you will find it in the "DynamicQuery" folder. That file gives you the ability to use a dynamic ORDERBY clause, since we don't know which column we're sorting by except on the initial load.

    To serialize the JSON back from the C# to the JS, I incorporated the James Newton-King JSON.NET DLL found here: http://json.codeplex.com/releases/view/37810. After downloading, there is a "Newtonsoft.Json.Compact.dll" which you can add to your Bin folder as a reference.

    Here's my USING's block:

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web.UI.WebControls;
        using System.Web.Services;
        using System.Linq.Dynamic;

    For the JavaScript references, I'm using the following scripts in respective order in case that helps some folks:

        1) jquery-1.3.2.min.js
        2) jquery-ui-1.7.2.custom.min.js
        3) json.min.js
        4) i18n/grid.locale-en.js
        5) jquery.jqGrid.min.js

    For the CSS, I'm using jqGrid's necessities as well as the jQuery UI theme:

        1) jquery_theme/jquery-ui-1.7.2.custom.css
        2) ui.jqgrid.css

    The key to getting the parameters from the JS to the webmethod without having to parse an unserialized string on the backend, or having to set up some JS logic to switch methods for different numbers of parameters, was this block:

        postData: { searchString: '', searchField: '', searchOper: '' },

    Those parameters will still be set correctly when you actually do a search, and are reset to empty when you "reset" or want the grid to not do any filtering.

    Hope this helps some others!!!! Please reply if you find major issues, or ways of refactoring or doing better that I haven't considered.
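    The webmethod above calls a getOperator() helper that the post never shows. A plausible reconstruction (hypothetical – the original author's version may differ), mapping jqGrid's search-operation codes onto comparison operators that Dynamic LINQ's string parser understands; note that "cn" (contains) does not fit the "field op @param" shape built above and would need special casing into a field.Contains(...) expression:

        // Hypothetical reconstruction of the missing helper.
        private static string getOperator(string searchOper)
        {
            switch (searchOper)
            {
                case "eq": return "=";   // equal
                case "ne": return "!=";  // not equal
                case "lt": return "<";
                case "le": return "<=";
                case "gt": return ">";
                case "ge": return ">=";
                default:
                    throw new ArgumentException("Unsupported operation: " + searchOper);
            }
        }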

  • An Introduction to ASP.NET Web API

    - by Rick Strahl
    Microsoft recently released ASP.NET MVC 4.0 and .NET 4.5 and, along with it, the brand spanking new ASP.NET Web API. Web API is an exciting new addition to the ASP.NET stack that provides a new, well-designed HTTP framework for creating REST and AJAX APIs (API is Microsoft's new jargon for a service, in case you're wondering). Although Web API ships and installs with ASP.NET MVC 4, you can use Web API functionality in any ASP.NET project, including WebForms, WebPages and MVC, or just a Web API by itself. And you can also self-host Web API in your own applications from Console, Desktop or Service applications.

    If you're interested in a high-level overview of what ASP.NET Web API is and how it fits into the ASP.NET stack, you can check out my previous post: Where does ASP.NET Web API fit? In the following article, I'll focus on a practical, by-example introduction to ASP.NET Web API. All the code discussed in this article is available on GitHub: https://github.com/RickStrahl/AspNetWebApiArticle

    [republished from my Code Magazine article and updated for the RTM release of ASP.NET Web API]

    Getting Started

    To start, I'll create a new empty ASP.NET application to demonstrate that Web API can work with any kind of ASP.NET project. Although you can create a new project based on the ASP.NET MVC/Web API template to quickly get up and running, I'll take you through the manual setup process, because one common use case is to add Web API functionality to an existing ASP.NET application. This process describes the steps needed to hook up Web API to any ASP.NET 4.0 application. Start by creating an ASP.NET Empty Project. Then create a new folder in the project called Controllers.

    Add a Web API Controller Class

    Once you have any kind of ASP.NET project open, you can add a Web API Controller class to it. Web API Controllers are very similar to MVC Controller classes, but they work in any kind of project. Add a new item to this folder by using the Add New Item option in Visual Studio and choose Web API Controller Class, as shown in Figure 1.

    Figure 1: This is how you create a new Controller Class in Visual Studio

    Make sure that the name of the controller class includes Controller at the end of it, which is required in order for Web API routing to find it. Here, the name for the class is AlbumApiController. For this example, I'll use a Music Album model to demonstrate the basic behavior of Web API. The model consists of albums and related songs, where an album has properties like Name, Artist and YearReleased, and a list of songs with a SongName and SongLength as well as an AlbumId that links them to the album. You can find the code for the model (and the rest of these samples) on GitHub. To add the file manually, create a new folder called Model, add a new class Album.cs and copy the code into it. There's a static AlbumData class with a static CreateSampleAlbumData() method that creates a short list of albums on a static .Current that I'll use for the examples. Before we look at what goes into the controller class though, let's hook up routing so we can access this new controller.

    Hooking up Routing in Global.asax

    To start, I need to perform the one required configuration task in order for Web API to work: I need to configure routing to the controller. Like MVC, Web API uses routing to provide clean, extension-less URLs to controller methods.
    Using an extension method on ASP.NET's static RouteTable class, you can use the MapHttpRoute() method (in the System.Web.Http namespace) to hook up the routing during Application_Start in global.asax.cs, as shown in Listing 1.

        using System;
        using System.Web.Routing;
        using System.Web.Http;

        namespace AspNetWebApi
        {
            public class Global : System.Web.HttpApplication
            {
                protected void Application_Start(object sender, EventArgs e)
                {
                    RouteTable.Routes.MapHttpRoute(
                        name: "AlbumVerbs",
                        routeTemplate: "albums/{title}",
                        defaults: new { title = RouteParameter.Optional, controller = "AlbumApi" }
                    );
                }
            }
        }

    This route configures Web API to direct URLs that start with an albums folder to the AlbumApiController class. Routing in ASP.NET is used to create extensionless URLs and allows you to map segments of the URL to specific Route Value parameters. A route parameter, with a name inside curly brackets like {name}, is mapped to parameters on the controller methods. Route parameters can be optional, and there are two special route parameters – controller and action – that determine the controller to call and the method to activate, respectively.

    HTTP Verb Routing

    Routing in Web API can route requests by HTTP Verb in addition to standard {controller},{action} routing. For the first examples, I use HTTP Verb routing, as shown in Listing 1. Notice that the route I've defined does not include an {action} route value or an action value in the defaults. Rather, Web API can use the HTTP Verb in this route to determine the method to call on the controller: a GET request maps to any method that starts with Get, so methods called Get() or GetAlbums() are matched by a GET request, and a POST request maps to a Post() or PostAlbum(). Web API matches a method by name and parameter signature to match a route, query string or POST values. In lieu of the method name, the [HttpGet, HttpPost, HttpPut, HttpDelete, etc.] attributes can also be used to designate the accepted verbs explicitly if you don't want to follow the verb naming conventions. Although HTTP Verb routing is a good practice for REST-style resource APIs, it's not required, and you can still use more traditional routes with an explicit {action} route parameter. When {action} is supplied, the HTTP Verb routing is ignored. I'll talk more about alternate routes later.

    When you're finished with the initial creation of files, your project should look like Figure 2.

    Figure 2: The initial project has the new API Controller and Album model

    Creating a small Album Model

    Now it's time to create some controller methods to serve data. For these examples, I'll use a very simple Album and Songs model to play with, as shown in Listing 2.
        public class Song
        {
            public string AlbumId { get; set; }
            [Required, StringLength(80)]
            public string SongName { get; set; }
            [StringLength(5)]
            public string SongLength { get; set; }
        }

        public class Album
        {
            public string Id { get; set; }
            [Required, StringLength(80)]
            public string AlbumName { get; set; }
            [StringLength(80)]
            public string Artist { get; set; }
            public int YearReleased { get; set; }
            public DateTime Entered { get; set; }
            [StringLength(150)]
            public string AlbumImageUrl { get; set; }
            [StringLength(200)]
            public string AmazonUrl { get; set; }

            public virtual List<Song> Songs { get; set; }

            public Album()
            {
                Songs = new List<Song>();
                Entered = DateTime.Now;

                // Poor man's unique Id off GUID hash
                Id = Guid.NewGuid().GetHashCode().ToString("x");
            }

            public void AddSong(string songName, string songLength = null)
            {
                this.Songs.Add(new Song()
                {
                    AlbumId = this.Id,
                    SongName = songName,
                    SongLength = songLength
                });
            }
        }

    Once the model has been created, I also added an AlbumData class that generates some static data in memory that is loaded onto a static .Current member. The signature of this class looks like this, and that's what I'll access to retrieve the base data:

        public static class AlbumData
        {
            // sample data - static list
            public static List<Album> Current = CreateSampleAlbumData();

            /// <summary>
            /// Create some sample data
            /// </summary>
            /// <returns></returns>
            public static List<Album> CreateSampleAlbumData()
            {
                …
            }
        }

    You can check out the full code for the data generation online.

    Creating an AlbumApiController

    Web API shares many concepts with ASP.NET MVC, and the implementation of your API logic is done by implementing a subclass of the System.Web.Http.ApiController class. Each public method in the implemented controller is a potential endpoint for the HTTP API, as long as a matching route can be found to invoke it. The class name you create should end in Controller, which is how Web API matches the controller route value to figure out which class to invoke. Inside the controller you can implement methods that take standard .NET input parameters and return .NET values as results. Web API's binding tries to match POST data, route values, form values or query string values to your parameters. Because the controller is configured for HTTP Verb based routing (no {action} parameter in the route), any methods that start with Getxxxx() are called by an HTTP GET operation. You can have multiple methods that match each HTTP Verb as long as the parameter signatures are different and can be matched by Web API. In Listing 3, I create an AlbumApiController with two methods to retrieve a list of albums and a single album by its title.

        public class AlbumApiController : ApiController
        {
            public IEnumerable<Album> GetAlbums()
            {
                var albums = AlbumData.Current.OrderBy(alb => alb.Artist);
                return albums;
            }

            public Album GetAlbum(string title)
            {
                var album = AlbumData.Current
                    .SingleOrDefault(alb => alb.AlbumName.Contains(title));
                return album;
            }
        }

    To access the first two requests, you can use the following URLs in your browser:

        http://localhost/aspnetWebApi/albums
        http://localhost/aspnetWebApi/albums/Dirty%20Deeds

    Note that you're not specifying the actions of GetAlbum or GetAlbums in these URLs. Instead, Web API's routing uses the HTTP GET verb to route to these methods that start with Getxxx(), with the first mapping to the parameterless GetAlbums() method and the latter to the GetAlbum(title) method that receives the title parameter mapped as optional in the route.
    Content Negotiation

    When you access any of the URLs above from a browser, you get either an XML or JSON result returned back. The album list result for Chrome 17 and Internet Explorer 9 is shown in Figure 3.

    Figure 3: Web API responses can vary depending on the browser used, demonstrating Content Negotiation in action as these two browsers send different HTTP Accept headers.

    Notice that the results are not the same: Chrome returns an XML response and IE9 returns a JSON response. Whoa, what's going on here? Shouldn't we see the same result in both browsers? Actually, no. Web API determines what type of content to return based on Accept headers. HTTP clients, like browsers, use Accept headers to specify what kind of content they'd like to see returned. Browsers generally ask for HTML first, followed by a few additional content types. Chrome (and most other major browsers) ask for:

        Accept: text/html, application/xhtml+xml, application/xml; q=0.9, */*; q=0.8

    IE9 asks for:

        Accept: text/html, application/xhtml+xml, */*

    Note that Chrome's Accept header includes application/xml, which Web API finds in its list of supported media types, so it returns an XML response. IE9 does not include an Accept header type that works on Web API by default, and so it returns the default format, which is JSON.

    This is an important and very useful feature that was missing from any previous Microsoft REST tools: Web API automatically switches output formats based on HTTP Accept headers. Nowhere in the server code above do you have to explicitly specify the output format. Rather, Web API determines what format the client is requesting based on the Accept headers and automatically returns the result using the available formatters. This means that a single method can handle both XML and JSON results. Using this simple approach makes it very easy to create a single controller method that can return JSON, XML, ATOM or even OData feeds by providing the appropriate Accept header from the client. By default you don't have to worry about the output format in your code.

    Note that you can still specify an explicit output format if you choose, either globally by overriding the installed formatters, or individually by returning a lower-level HttpResponseMessage instance and setting the formatter explicitly. More on that in a minute.

    Along the same lines, any content sent to the server via POST/PUT is parsed by Web API based on the HTTP Content-Type of the data sent. The same formats allowed for output are also allowed on input. Again, you don't have to do anything in your code – Web API automatically performs the deserialization from the content.

    Accessing Web API JSON Data with jQuery

    A very common scenario for Web API endpoints is to retrieve data for AJAX calls from the Web browser. Because JSON is the default format for Web API, it's easy to access data from the server using jQuery and its getJSON() method. This example receives the albums array from GetAlbums() and databinds it into the page using knockout.js.

        $.getJSON("albums/", function (albums) {
            // make knockout template visible
            $(".album").show();

            // create view object and attach array
            var view = { albums: albums };
            ko.applyBindings(view);
        });

    Figure 4 shows this and the next example's HTML output. You can check out the complete HTML and script code at http://goo.gl/Ix33C (.html) and http://goo.gl/tETlg (.js).

    Figure 4: The Album Display sample uses JSON data loaded from Web API.
    The result from the getJSON() call is a JavaScript object of the server result, which comes back as a JavaScript array. In the code, I use knockout.js to bind this array into the UI, which, as you can see, requires very little code, instead using knockout's data-bind attributes to bind server data to the UI. Of course, this is just one way to use the data – it's entirely up to you to decide what to do with it in your client code.

    Along the same lines, I can retrieve a single album to display when the user clicks on an album. The response returns the album information and a child array with all the songs. The code to do this is very similar to the last example where we pulled the albums array:

        $(".albumlink").live("click", function () {
            var id = $(this).data("id"); // title
            $.getJSON("albums/" + id, function (album) {
                ko.applyBindings(album, $("#divAlbumDialog")[0]);
                $("#divAlbumDialog").show();
            });
        });

    Here the URL looks like this: /albums/Dirty%20Deeds, where the title is the ID captured from the clicked element's data ID attribute.

    Explicitly Overriding Output Format

    When Web API automatically converts output using content negotiation, it does so by matching Accept header media types to the GlobalConfiguration.Configuration.Formatters and the SupportedMediaTypes of each individual formatter. You can add and remove formatters to globally affect which formats are available, and it's easy to create and plug in custom formatters. The example project includes a JSONP formatter that can be plugged in to provide JSONP support for requests that have a callback= querystring parameter. Adding, removing or replacing formatters is a global option you can use to manipulate content. It's beyond the scope of this introduction to show how it works, but you can review the sample code or check out my blog entry on the subject (http://goo.gl/UAzaR).

    If automatic processing is not desirable in a particular controller method, you can override the response output explicitly by returning an HttpResponseMessage instance. HttpResponseMessage is similar to ActionResult in ASP.NET MVC in that it's a common way to return an abstract result message that contains content. An HttpResponseMessage is parsed by the Web API framework using standard interfaces to retrieve the response data, status code, headers and so on. Web API turns every response – including those controller methods that return static results – into HttpResponseMessage instances. Explicitly returning an HttpResponseMessage instance gives you full control over the output and lets you mostly bypass Web API's post-processing of the HTTP response on your behalf.

    HttpResponseMessage allows you to customize the response in great detail. Web API's attention to detail in the HTTP spec really shows; many HTTP options are exposed as properties and enumerations with detailed IntelliSense comments. Even if you're new to building REST-based interfaces, the API guides you in the right direction for returning valid responses and response codes. For example, assume that I always want to return JSON from the GetAlbums() controller method and ignore the default media type content negotiation.
    To do this, I can adjust the output format and headers as shown in Listing 4.

        public HttpResponseMessage GetAlbums()
        {
            var albums = AlbumData.Current.OrderBy(alb => alb.Artist);

            // Create a new HttpResponse with Json Formatter explicitly
            var resp = new HttpResponseMessage(HttpStatusCode.OK);
            resp.Content = new ObjectContent<IEnumerable<Album>>(
                albums, new JsonMediaTypeFormatter());

            // Get Default Formatter based on Content Negotiation
            //var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums);

            resp.Headers.ConnectionClose = true;
            resp.Headers.CacheControl = new CacheControlHeaderValue();
            resp.Headers.CacheControl.Public = true;

            return resp;
        }

    This example returns the same IEnumerable<Album> value, but it wraps the response in an HttpResponseMessage so you can control the entire HTTP message result, including the headers, formatter and status code. In Listing 4, I explicitly specify the formatter using the JsonMediaTypeFormatter to always force the content to JSON. If you prefer to use the default content negotiation with HttpResponseMessage results, you can create the response instance using the Request.CreateResponse method:

        var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums);

    This provides you an HttpResponse object that's pre-configured with the default formatter based on content negotiation. Once you have an HttpResponse object, you can easily control most HTTP aspects on this object. What's sweet here is that there are many more detailed properties on HttpResponse than on the core ASP.NET Response object, with most options being explicitly configurable with enumerations that make it easy to pick the right headers and response codes from a list of valid codes. It makes HTTP features much more discoverable, even for non-hardcore REST/HTTP geeks.

    Non-Serialized Results

    The output returned doesn't have to be a serialized value but can also be raw data, like strings, binary data or streams. You can use the HttpResponseMessage.Content object to set a number of common Content classes. Listing 5 shows how to return a binary image using the ByteArrayContent class from a controller method.

        [HttpGet]
        public HttpResponseMessage AlbumArt(string title)
        {
            var album = AlbumData.Current.FirstOrDefault(abl => abl.AlbumName.StartsWith(title));
            if (album == null)
            {
                var resp = Request.CreateResponse<ApiMessageError>(
                    HttpStatusCode.NotFound,
                    new ApiMessageError("Album not found"));
                return resp;
            }

            // kinda silly - we would normally serve this directly
            // but hey - it's a demo.
            var http = new WebClient();
            var imageData = http.DownloadData(album.AlbumImageUrl);

            // create response and return
            var result = new HttpResponseMessage(HttpStatusCode.OK);
            result.Content = new ByteArrayContent(imageData);
            result.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");
            return result;
        }

    The image retrieval from Amazon is contrived, but it shows how to return binary data using ByteArrayContent. It also demonstrates that you can easily return multiple types of content from a single controller method, which is actually quite common. If an error occurs – such as a resource that can't be found, or a validation error – you can return an error response to the client that's very specific to the error. In AlbumArt(), if the album can't be found, we want to return a 404 Not Found status (and realistically no error, as it's an image).
    Note that if you are not using HTTP Verb-based routing, or are not accessing a method that starts with Get/Post etc., you have to specify one or more HTTP Verb attributes on the method explicitly. Here, I used the [HttpGet] attribute to serve the image. Another option to handle the error could be to return a fixed placeholder image if no album could be matched or the album doesn't have an image.

    When returning an error code, you can also return a strongly typed response to the client. For example, you can set the 404 status code and also return a custom error object (ApiMessageError is a class I defined) like this:

        return Request.CreateResponse<ApiMessageError>(
            HttpStatusCode.NotFound,
            new ApiMessageError("Album not found")
        );

    If the album can be found, the image will be returned. The image is downloaded into a byte[] array and then assigned to the result's Content property. I created a new ByteArrayContent instance and assigned the image's bytes and the content type so that it displays properly in the browser. There are other content classes available: StringContent, StreamContent, ByteArrayContent, MultipartContent, and ObjectContent are at your disposal to return just about any kind of content. You can create your own Content classes if you frequently return custom types and need to handle the default formatter assignments that should be used to send the data out.

    Although HttpResponseMessage results require more code than returning a plain .NET value from a method, they allow much more control over the actual HTTP processing than automatic processing does. They also make it much easier to test your controller methods, as you get a response object that you can check for specific status codes and output messages rather than just a result value.

    Routing Again

    OK, let's get back to the image example. Using the original HTTP Verb routing we set up, there's no good way to serve the image. In order to return my album art image, I'd like to use a URL like this:

        http://localhost/aspnetWebApi/albums/Dirty%20Deeds/image

    In order to create a URL like this, I have to create a new controller, because my earlier routes pointed to the AlbumApiController using HTTP Verb routing. HTTP Verb-based routing is great for representing a single set of resources such as albums; you can map operations like add, delete, update and read easily using HTTP Verbs. But you cannot mix action-based routing into an HTTP Verb routing controller – you can only map HTTP Verbs, and each method has to be unique based on its parameter signature. You can't have multiple GET operations on methods with the same signature, so GetImage(string id) and GetAlbum(string title) are in conflict in an HTTP GET routing scenario. In fact, I was unable to make the above image URL work with any combination of HTTP Verb plus custom routing using the single Albums controller.

    There are a number of ways around this, but all involve additional controllers. Personally, I think it's easier to use explicit action routing and then add custom routes if you need to simplify your URLs further. So, in order to accommodate some of the other examples, I created another controller – AlbumRpcApiController – to handle all requests that are explicitly routed via actions (/albums/rpc/AlbumArt) or are custom-routed with explicit routes defined in the HttpConfiguration. I added the AlbumArt() method to this new AlbumRpcApiController class.
    For the image URL to work with the new AlbumRpcApiController, you need a custom route placed before the default route from Listing 1.

        RouteTable.Routes.MapHttpRoute(
            name: "AlbumRpcApiAction",
            routeTemplate: "albums/rpc/{action}/{title}",
            defaults: new
            {
                title = RouteParameter.Optional,
                controller = "AlbumRpcApi",
                action = "GetAlbums"
            }
        );

    Now I can use either of the following URLs to access the image:

        Custom route: (/albums/rpc/{title}/image)
        http://localhost/aspnetWebApi/albums/PowerAge/image

        Action route: (/albums/rpc/action/{title})
        http://localhost/aspnetWebAPI/albums/rpc/albumart/PowerAge

    Sending Data to the Server

    To send data to the server and add a new album, you can use an HTTP POST operation. Since I'm using HTTP Verb-based routing in the original AlbumApiController, I can implement a method called PostAlbum() to accept a new album from the client. Listing 6 shows the Web API code to add a new album.

        public HttpResponseMessage PostAlbum(Album album)
        {
            if (!this.ModelState.IsValid)
            {
                // my custom error class
                var error = new ApiMessageError() { message = "Model is invalid" };

                // add errors into our client error model for client
                foreach (var prop in ModelState.Values)
                {
                    var modelError = prop.Errors.FirstOrDefault();
                    if (!string.IsNullOrEmpty(modelError.ErrorMessage))
                        error.errors.Add(modelError.ErrorMessage);
                    else
                        error.errors.Add(modelError.Exception.Message);
                }
                return Request.CreateResponse<ApiMessageError>(HttpStatusCode.Conflict, error);
            }

            // update song id which isn't provided
            foreach (var song in album.Songs)
                song.AlbumId = album.Id;

            // see if album exists already
            var matchedAlbum = AlbumData.Current
                .SingleOrDefault(alb => alb.Id == album.Id ||
                                        alb.AlbumName == album.AlbumName);
            if (matchedAlbum == null)
                AlbumData.Current.Add(album);
            else
                matchedAlbum = album;

            // return a string to show that the value got here
            var resp = Request.CreateResponse(HttpStatusCode.OK, string.Empty);
            resp.Content = new StringContent(album.AlbumName + " " + album.Entered.ToString(),
                                             Encoding.UTF8, "text/plain");
            return resp;
        }

    The PostAlbum() method receives an album parameter, which is automatically deserialized from the POST buffer the client sent. The data passed from the client can be either XML or JSON. Web API automatically figures out what format it needs to deserialize based on the content type and binds the content to the album object. Web API uses model binding to bind the request content to the parameter(s) of controller methods. Like MVC, you can check the model by looking at ModelState.IsValid. If it's not valid, you can run through the ModelState.Values collection and check each binding for errors. Here I collect the error messages into a string array that gets passed back to the client via the result ApiMessageError object.

    When a binding error occurs, you'll want to return an HTTP error response, and it's best to do that with an HttpResponseMessage result. In Listing 6, I used a custom error class that holds a message and an array of detailed error messages for each binding error. I used this object as the content to return to the client, along with my Conflict HTTP status code response. If binding succeeds, the example returns a string with the name and date entered to demonstrate that you captured the data. Normally, a method like this should return a Boolean or no response at all (HttpStatusCode.NoContent). The sample uses a simple static list to hold albums, so once you've added the album using the POST operation, you can hit the /albums/ URL to see that the new album was added.
The jQuery code to call the POST operation from the client is shown in Listing 7. var id = new Date().getTime().toString(); var album = { "Id": id, "AlbumName": "Power Age", "Artist": "AC/DC", "YearReleased": 1977, "Entered": "2002-03-11T18:24:43.5580794-10:00", "AlbumImageUrl": "http://ecx.images-amazon.com/images/…", "AmazonUrl": "http://www.amazon.com/…", "Songs": [ { "SongName": "Rock 'n Roll Damnation", "SongLength": 3.12}, { "SongName": "Downpayment Blues", "SongLength": 4.22 }, { "SongName": "Riff Raff", "SongLength": 2.42 } ] } $.ajax( { url: "albums/", type: "POST", contentType: "application/json", data: JSON.stringify(album), processData: false, beforeSend: function (xhr) { // not required since JSON is default output xhr.setRequestHeader("Accept", "application/json"); }, success: function (result) { // reload list of albums page.loadAlbums(); }, error: function (xhr, status, p3, p4) { var err = "Error"; if (xhr.responseText && xhr.responseText[0] == "{") err = JSON.parse(xhr.responseText).message; alert(err); } }); The code in Listing 7 creates an album object in JavaScript to match the structure of the .NET Album class. This object is passed to the $.ajax() function to send to the server as a POST. The data is turned into JSON and the content type set to application/json so that the server knows what to convert when deserializing into the Album instance. The jQuery code hooks up success and failure events. Success returns the result data, which is a string that's echoed back with an alert box. If an error occurs, jQuery returns the XHR instance and status code. You can check the XHR to see if a JSON object is embedded, and if it is, you can extract it by deserializing it and accessing the .message property. REST standards suggest that updates to existing resources should use PUT operations. REST standards aside, I'm not a big fan of separating out inserts and updates, so I tend to have a single method that handles both. But if you want to follow REST suggestions, you can create a PUT method that handles updates by forwarding the PUT operation to the POST method: public HttpResponseMessage PutAlbum(Album album) { return PostAlbum(album); } To make the corresponding $.ajax() call, all you have to change from Listing 7 is the type: from POST to PUT. Model Binding with UrlEncoded POST Variables In the example in Listing 7, I used JSON objects to post a serialized object to a server method that accepts a strongly typed object with the same structure, which is a common way to send data to the server. However, Web API supports a number of different ways that data can be received by server methods. For example, another common way is to use plain UrlEncoded POST values to send to the server. Web API supports model binding that works similarly to (but not the same as) MVC's model binding, where POST variables are mapped to properties of object parameters of the target method. This is actually quite common for AJAX calls that want to avoid serialization and the potential requirement of a JSON parser on older browsers. For example, using jQuery you might use the $.post() method to send a new album to the server (albeit one without songs) using code like the following: $.post("albums/",{AlbumName: "Dirty Deeds", YearReleased: 1976 … },albumPostCallback); Although the code looks very similar to the client code we used before to pass JSON, here the data passed is URL-encoded values (AlbumName=Dirty+Deeds&YearReleased=1976 etc.).
Web API then takes this POST data and maps each of the POST values to the properties of the Album object in the method's parameter. Although the client code is different, the server method can handle either the JSON object or the UrlEncoded POST values. Dynamic Access to POST Data There are also a few options available to dynamically access POST data, if you know what type of data you're dealing with. If you have POST UrlEncoded values, you can access them dynamically using a FormDataCollection: [HttpPost] public string PostAlbum(FormDataCollection form) { return string.Format("{0} - released {1}", form.Get("AlbumName"), form.Get("YearReleased")); } The FormDataCollection is a very simple object that essentially provides the same functionality as Request.Form[] in ASP.NET. Request.Form[] still works if you're running hosted in an ASP.NET application. However, as a general rule, while ASP.NET's functionality is always available when running Web API hosted inside of an ASP.NET application, using the built-in classes specific to Web API makes it possible to run Web API applications in a self-hosted environment outside of ASP.NET. If your client is sending JSON to your server, and you don't want to map the JSON to a strongly typed object because you only want to retrieve a few simple values, you can also accept a JObject parameter in your API methods: [HttpPost] public string PostAlbum(JObject jsonData) { dynamic json = jsonData; JObject jalbum = json.Album; JObject juser = json.User; string token = json.UserToken; var album = jalbum.ToObject<Album>(); var user = juser.ToObject<User>(); return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token); } There are quite a few options available to you for receiving data with Web API, which gives you more choices for the right tool for the job. Unfortunately, one shortcoming of Web API is that POST data is always mapped to a single parameter. This means you can't pass multiple POST parameters to methods that receive POST data. It's possible to accept multiple parameters, but only one can map to the POST content - the others have to come from the query string or route values. I have a couple of blog posts that explain what works and what doesn't here: Passing multiple POST parameters to Web API Controller Methods Mapping UrlEncoded POST Values in ASP.NET Web API Handling Delete Operations Finally, to round out the server API code of the album example we've been discussing, here's the DELETE verb controller method that allows removal of an album by its title: public HttpResponseMessage DeleteAlbum(string title) { var matchedAlbum = AlbumData.Current.Where(alb => alb.AlbumName == title) .SingleOrDefault(); if (matchedAlbum == null) return new HttpResponseMessage(HttpStatusCode.NotFound); AlbumData.Current.Remove(matchedAlbum); return new HttpResponseMessage(HttpStatusCode.NoContent); } To call this action method using jQuery, you can use: $(".removeimage").live("click", function () { var $el = $(this).parent(".album"); var txt = $el.find("a").text(); $.ajax({ url: "albums/" + encodeURIComponent(txt), type: "Delete", success: function (result) { $el.fadeOut().remove(); }, error: jqError }); }); Note the use of the DELETE verb in the $.ajax() call, which routes to DeleteAlbum on the server. DELETE is a non-content operation, so you supply a resource ID (the title) via route value or the query string. Routing Conflicts In all requests shown so far, with the exception of the AlbumArt image example, I used the HTTP Verb routing that I set up in Listing 1.
HTTP Verb routing is a recommendation that is in line with typical REST access to HTTP resources. However, it takes quite a bit of effort to create REST-compliant API implementations based on HTTP Verb routing alone. You saw one example that didn't really fit – the return of an image, where I created a custom route albums/{title}/image that required creation of a second controller and a custom route to work. HTTP Verb routing to a controller does not mix with custom or action routing to the same controller, because of the limited mapping of HTTP verbs imposed by HTTP Verb routing. To understand some of the problems with verb routing, let's look at another example. Let's say you create a GetSortableAlbums() method like this and add it to the original AlbumApiController accessed via HTTP Verb routing: public IQueryable<Album> GetSortableAlbums() { var albums = AlbumData.Current; // generally should be done only on actual queryable results (EF etc.) // Done here because we're running with a static list but otherwise might be slow return albums.AsQueryable(); } If you compile this code and try to access the /albums/ link, you get an error: Multiple Actions were found that match the request. HTTP Verb routing only allows access to one GET operation per parameter/route value match. If more than one method exists with the same parameter signature, it doesn't work. As I mentioned earlier for the image display, the only solution to get this method to work is to throw it into another controller. Because I already set up the AlbumRpcApiController, I can add the method there. First, I should rename the method to SortableAlbums() so I'm not using a Get prefix for the method. This also makes the action parameter look cleaner in the URL - it looks less like a method and more like a noun. I can then create a new route that handles direct-action mapping: RouteTable.Routes.MapHttpRoute( name: "AlbumRpcApiAction", routeTemplate: "albums/rpc/{action}/{title}", defaults: new { title = RouteParameter.Optional, controller = "AlbumRpcApi", action = "GetAlbums" } ); As I am explicitly adding a route segment – rpc – into the route template, I can now reference explicit methods in the Web API controller using URLs like this: http://localhost/AspNetWebApi/albums/rpc/SortableAlbums Error Handling I've already done some minimal error handling in the examples. For example, in Listing 6, I detected some known-error scenarios like model validation failing or a resource not being found and returned an appropriate HttpResponseMessage result. But what happens if your code just blows up or causes an exception? If you have a controller method like this: [HttpGet] public void ThrowException() { throw new UnauthorizedAccessException("Unauthorized Access Sucka"); } You can call it with this: http://localhost/AspNetWebApi/albums/rpc/ThrowException The default exception handling displays a 500-status response with the serialized exception on the local computer only. When you connect from a remote computer, Web API throws back a 500 HTTP error with no data returned (IIS then adds its HTML error page). The behavior is configurable in the GlobalConfiguration: GlobalConfiguration .Configuration .IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Never; If you want more control over the error responses sent from your code, you can throw explicit error responses yourself using HttpResponseException. When you throw an HttpResponseException, the response parameter is used to generate the output for the controller action.
[HttpGet] public void ThrowError() { var resp = Request.CreateResponse<ApiMessageError>( HttpStatusCode.BadRequest, new ApiMessageError("Your code stinks!")); throw new HttpResponseException(resp); } Throwing an HttpResponseException stops the processing of the controller method and immediately returns the response you passed to the exception. Unlike other exceptions fired inside of Web API, HttpResponseException bypasses the installed exception filters and instead just outputs the response you provide. In this case, the serialized ApiMessageError result is returned in the default serialization format – XML or JSON. You can pass any content to HttpResponseMessage, which includes creating your own exception objects and consistently returning error messages to the client. Here's a small helper method on the controller that you might use to send exception info back to the client consistently: private void ThrowSafeException(string message, HttpStatusCode statusCode = HttpStatusCode.BadRequest) { var errResponse = Request.CreateResponse<ApiMessageError>(statusCode, new ApiMessageError() { message = message }); throw new HttpResponseException(errResponse); } You can then use it to output any captured errors from code: [HttpGet] public void ThrowErrorSafe() { try { List<string> list = null; list.Add("Rick"); } catch (Exception ex) { ThrowSafeException(ex.Message); } } Exception Filters Another, more global solution is to create an Exception Filter. Filters in Web API provide the ability to pre- and post-process controller method operations. An exception filter looks at all exceptions fired and then optionally creates an HttpResponseMessage result. Listing 8 shows an example of a basic Exception filter implementation. public class UnhandledExceptionFilter : ExceptionFilterAttribute { public override void OnException(HttpActionExecutedContext context) { HttpStatusCode status = HttpStatusCode.InternalServerError; var exType = context.Exception.GetType(); if (exType == typeof(UnauthorizedAccessException)) status = HttpStatusCode.Unauthorized; else if (exType == typeof(ArgumentException)) status = HttpStatusCode.NotFound; var apiError = new ApiMessageError() { message = context.Exception.Message }; // create a new response and attach our ApiError object // which now gets returned on ANY exception result var errorResponse = context.Request.CreateResponse<ApiMessageError>(status, apiError); context.Response = errorResponse; base.OnException(context); } } Exception Filter attributes can be assigned to an ApiController class like this: [UnhandledExceptionFilter] public class AlbumRpcApiController : ApiController or you can globally assign them to all controllers by adding the filter to the HTTP configuration's Filters collection: GlobalConfiguration.Configuration.Filters.Add(new UnhandledExceptionFilter()); The latter is a great way to get global error trapping, so that all errors (short of hard IIS errors and explicit HttpResponseException errors) return a valid error response that includes error information in the form of a known error object. Using a filter like this allows you to throw an exception as you normally would and have your filter create a response in the appropriate output format that the client expects. For example, an AJAX application can, on failure, expect to see a JSON error result that corresponds to the real error that occurred, rather than a 500 error along with the HTML error page that IIS throws up.
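One way to differentiate your own known errors from unhandled system exceptions, a point expanded on just below, is a small marker exception that the filter above could special-case. The ApiException name and its status-code property are my own hypothetical sketch, not code from the article:

public class ApiException : Exception
{
    // The HTTP status code the exception filter should surface for this known error
    public HttpStatusCode StatusCode { get; private set; }

    public ApiException(string message,
                        HttpStatusCode statusCode = HttpStatusCode.BadRequest)
        : base(message)
    {
        StatusCode = statusCode;
    }
}

// Inside UnhandledExceptionFilter.OnException, before building the response:
// var apiEx = context.Exception as ApiException;
// if (apiEx != null)
//     status = apiEx.StatusCode;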
You can even create some custom exceptions so you can differentiate your own exceptions from unhandled system exceptions - you often don't want to display error information from 'unknown' exceptions, as they may contain sensitive system information or info that's not generally useful to users of your application/site. This is just one example of how ASP.NET Web API is configurable and extensible: exception filters let you plug into the Web API request flow and modify output. Many more hooks exist, and I'll take a closer look at extensibility in Part 2 of this article in the future. Summary Web API is a big improvement over previous Microsoft REST and AJAX toolkits. The key features to its usefulness are its ease of use with simple controller-based logic, familiar MVC-style routing, low configuration impact, extensibility at all levels, and tight attention to exposing and making HTTP semantics easily discoverable and easy to use. Although none of the concepts used in Web API are new or radical, Web API combines the best of previous platforms into a single framework that's highly functional, easy to work with, and extensible to boot. I think that Microsoft has hit a home run with Web API. Related Resources Where does ASP.NET Web API fit? Sample Source Code on GitHub Passing multiple POST parameters to Web API Controller Methods Mapping UrlEncoded POST Values in ASP.NET Web API Creating a JSONP Formatter for ASP.NET Web API Removing the XML Formatter from ASP.NET Web API Applications © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api

    Read the article

  • Drag and Drop in MVVM with ScatterView

    - by Rich McGuire
I'm trying to implement drag and drop functionality in a Surface Application that is built using the MVVM pattern. I'm struggling to come up with a means to implement this while adhering to the MVVM pattern. Though I'm trying to do this within a Surface Application, I think the solution is general enough to apply to WPF as well. I'm trying to produce the following functionality: User contacts a FrameworkElement within a ScatterViewItem to begin a drag operation (a specific part of the ScatterViewItem initiates the drag/drop functionality) When the drag operation begins, a copy of that ScatterViewItem is created and imposed upon the original ScatterViewItem; the copy is what the user will drag and ultimately drop The user can drop the item onto another ScatterViewItem (placed in a separate ScatterView) The overall interaction is quite similar to the ShoppingCart application provided in the Surface SDK, except that the source objects are contained within a ScatterView rather than a ListBox. I'm unsure how to proceed in order to enable the proper communication between my ViewModels to provide this functionality. The main issue I've encountered is replicating the ScatterViewItem when the user contacts the FrameworkElement.

    Read the article

  • WPF Delay Refresh of UI Elements in ItemsControl

    - by Wonko the Sane
    Hello All, The short question is - how can I prevent (delay) a bound UI element from refreshing until I want it to? The longer explanation: I have a process that adds a number of items to an ItemsControl, and then performs some additional calculations on those items using a background thread. This (correctly) updates the items as it goes along. However, I would like to prevent the ItemsControl from updating during a particular calculation, as it performs some reordering of the items before it does additional calculations based on that order, and then returns the items to their original order. I do show a "Please Wait" animation on top of the ItemsControl, with the ItemsControl dimmed down as it is working, but I'd rather not hide the ItemsControl entirely because it gives the user an indication that progress is being made. Thanks, wTs
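One possible approach (my suggestion, not part of the question) is to wrap the reorder-calculate-restore work in ICollectionView.DeferRefresh(), which holds back view updates until the returned handle is disposed. A minimal sketch, assuming the ItemsControl is bound to a collection named Items and the two helper methods are hypothetical placeholders:

// Requires System.ComponentModel and System.Windows.Data
// Get the default view that the bound ItemsControl actually renders
ICollectionView view = CollectionViewSource.GetDefaultView(Items);

using (view.DeferRefresh())
{
    // The view will not refresh until the using block disposes the handle
    ReorderAndCalculate(Items);   // hypothetical reordering calculation
    RestoreOriginalOrder(Items);  // hypothetical restore step
}
// Disposal triggers a single refresh here, after the order has been restored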

    Read the article

  • C# 4: The Curious ConcurrentDictionary

    - by James Michael Hare
In my previous post (here) I did a comparison of the new ConcurrentQueue versus the old standard of a System.Collections.Generic Queue with simple locking.  The results were exactly what I would have hoped: the ConcurrentQueue was faster with multi-threading for almost all situations.  In addition, concurrent collections have the added benefit that you can enumerate them even if they're being modified. So I set out to see what the improvements would be for the ConcurrentDictionary; would it have the same performance benefits as the ConcurrentQueue did?  Well, after running some tests and multiple tweaks and tunes, I have good and bad news. But first, let's look at the tests.  Obviously there are many things we can do with a dictionary.  One of the most notable uses, of course, in a multi-threaded environment is for a small, local in-memory cache.  So I set about to do a very simple simulation of a cache where I would create a test class that I'll just call an Accessor.  This accessor will attempt to look up a key in the dictionary, and if the key exists, it stops (i.e. a cache "hit").  However, if the lookup fails, it will then try to add the key and value to the dictionary (i.e. a cache "miss").  So here's the Accessor that will run the tests: internal class Accessor { public int Hits { get; set; } public int Misses { get; set; } public Func<int, string> GetDelegate { get; set; } public Action<int, string> AddDelegate { get; set; } public int Iterations { get; set; } public int MaxRange { get; set; } public int Seed { get; set; } public void Access() { var randomGenerator = new Random(Seed); for (int i=0; i<Iterations; i++) { // give a wide spread so will have some duplicates and some unique var target = randomGenerator.Next(1, MaxRange); // attempt to grab the item from the cache var result = GetDelegate(target); // if the item doesn't exist, add it if(result == null) { AddDelegate(target, target.ToString()); Misses++; } else { Hits++; } } } } Note that so I could test different implementations, I defined a GetDelegate and AddDelegate that will call the appropriate dictionary methods to add or retrieve items in the cache using various techniques. So let's examine the three techniques I decided to test: Dictionary with mutex - Just your standard generic Dictionary with a simple lock construct on an internal object. Dictionary with ReaderWriterLockSlim - Same Dictionary, but now using a lock designed to let multiple readers access simultaneously and then locked when a writer needs access. ConcurrentDictionary - The new ConcurrentDictionary from System.Collections.Concurrent that is supposed to be optimized to allow multiple threads to access safely. So the approach to each of these is also fairly straight-forward.  Let's look at the GetDelegate and AddDelegate implementations for the Dictionary with mutex lock: Action<int, string> addDelegate = (key, val) => { lock (_mutex) { _dictionary[key] = val; } }; Func<int, string> getDelegate = (key) => { lock (_mutex) { string val; return _dictionary.TryGetValue(key, out val) ? val : null; } }; Nothing new or fancy here, just your basic lock on a private object and then query/insert into the Dictionary.
Now, for the Dictionary with ReaderWriterLockSlim it's a little more complex: Action<int, string> addDelegate = (key, val) => { _readerWriterLock.EnterWriteLock(); _dictionary[key] = val; _readerWriterLock.ExitWriteLock(); }; Func<int, string> getDelegate = (key) => { string val; _readerWriterLock.EnterReadLock(); if(!_dictionary.TryGetValue(key, out val)) { val = null; } _readerWriterLock.ExitReadLock(); return val; }; And finally, the ConcurrentDictionary, which, since it does all its own concurrency control, is remarkably elegant and simple: Action<int, string> addDelegate = (key, val) => { _concurrentDictionary[key] = val; }; Func<int, string> getDelegate = (key) => { string s; return _concurrentDictionary.TryGetValue(key, out s) ? s : null; }; Then, I set up a test harness that would simply ask the user for the number of concurrent Accessors to attempt to access the cache (as specified in Accessor.Access() above) and then let them fly and see how long it took them all to complete.  Each of these tests was run with 10,000,000 cache accesses divided among the available Accessor instances.  All times are in milliseconds. Dictionary with Mutex Locking --------------------------------------------------- Accessors Mostly Misses Mostly Hits 1 7916 3285 10 8293 3481 100 8799 3532 1000 8815 3584 Dictionary with ReaderWriterLockSlim Locking --------------------------------------------------- Accessors Mostly Misses Mostly Hits 1 8445 3624 10 11002 4119 100 11076 3992 1000 14794 4861 Concurrent Dictionary --------------------------------------------------- Accessors Mostly Misses Mostly Hits 1 17443 3726 10 14181 1897 100 15141 1994 1000 17209 2128 The first test I did across the board is the Mostly Misses category.  The mostly-misses case (more adds, because the data requested was not in the dictionary) shows an interesting trend.  In both cases the Dictionary with the simple mutex lock is much faster, and the ConcurrentDictionary is the slowest solution.  But this got me thinking, and a little research seemed to confirm it: maybe the ConcurrentDictionary is more optimized for concurrent "gets" than "adds".  So since the ratio of misses to hits was 2 to 1, I decided to reverse that and see the results. So I tweaked the data so that the number of keys was much smaller than the number of iterations, to give me about a 2 to 1 ratio of hits to misses (twice as likely to already find the item in the cache as to need to add it).  And indeed, here we see that the ConcurrentDictionary is faster than the standard Dictionary.  I have a strong feeling that as the ratio of hits to misses gets higher and higher, these numbers get even better as well, which makes sense since the ConcurrentDictionary is read-optimized. Also note that I tried the tests with capacity and concurrency hints on the ConcurrentDictionary but saw very little improvement.  I think this is largely because on the 10,000,000-access test it quickly ramped up to the correct capacity and concurrency, and thus the impact was limited to the first few milliseconds of the run. So what does this tell us?  Well, as in all things, ConcurrentDictionary is not a panacea.  It won't solve all your woes and it shouldn't be the only Dictionary you ever use.  So when should we use each? Use System.Collections.Generic.Dictionary when: You need a single-threaded Dictionary (no locking needed).
You need a multi-threaded Dictionary that is loaded only once at creation and never modified (no locking needed). You need a multi-threaded Dictionary to store items where writes are far more prevalent than reads (locking needed). And use System.Collections.Concurrent.ConcurrentDictionary when: You need a multi-threaded Dictionary where reads are far more prevalent than writes. You need to be able to iterate over the collection without locking it, even if it's being modified. Both Dictionaries have their strong suits. I have a feeling this is just one of those cases where you need to know at design time what you hope to use it for, and make your decision based on those criteria.
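To illustrate that last bullet, here's a minimal sketch (mine, not from the post) of one thread enumerating a ConcurrentDictionary while another keeps writing to it; the same enumeration over a plain Dictionary would throw an InvalidOperationException:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class EnumerateWhileWriting
{
    static void Main()
    {
        var cache = new ConcurrentDictionary<int, string>();

        // Writer task keeps adding entries in the background
        var writer = Task.Factory.StartNew(() =>
        {
            for (int i = 0; i < 100000; i++)
                cache[i] = i.ToString();
        });

        // The enumerator is safe while the dictionary is being modified;
        // it may or may not reflect writes that happen mid-enumeration
        foreach (var pair in cache)
            Console.WriteLine("{0} = {1}", pair.Key, pair.Value);

        writer.Wait();
    }
}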

    Read the article

  • Disk Search / Sort Algorithm

    - by AlgoMan
Given a range of numbers, say 1 to 10,000, with input arriving in random order. Constraint: At any point only 1,000 numbers can be loaded into memory. Assumption: Assuming unique numbers. I propose the following efficient "when-required-sort" algorithm. We write the numbers into files which are designated to hold a particular range of numbers. For example, File1 will have 0 - 999, File2 will have 1000 - 1999 and so on, in random order. If a particular number, say "2535", is being searched for, then we know that the number is in File3 (binary search over the ranges to find the file). Then File3 is loaded into memory and sorted using, say, quicksort (which is optimized to fall back to insertion sort when the array size is small), and then we search for the number in this sorted array using binary search. And when the search is done, we write back the sorted file. So in the long run all the numbers will be sorted. Please comment on this proposal.
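Here is a minimal C# sketch of the proposal (my illustration; the file naming, the 1,000-number buckets, and the plain text file format are assumptions drawn from the description):

using System;
using System.IO;
using System.Linq;

static class WhenRequiredSort
{
    // Search for a value using the bucket-file scheme described above
    public static bool Search(int target)
    {
        int bucket = target / 1000 + 1;             // File1 holds 0-999, File2 holds 1000-1999, ...
        string path = "File" + bucket + ".txt";

        // A bucket holds at most 1000 numbers, so it fits within the memory constraint
        int[] numbers = File.ReadAllLines(path).Select(int.Parse).ToArray();

        Array.Sort(numbers);                        // quicksort-based sort under the hood
        bool found = Array.BinarySearch(numbers, target) >= 0;

        // Write the bucket back sorted, so all numbers end up sorted over time
        File.WriteAllLines(path, numbers.Select(n => n.ToString()).ToArray());
        return found;
    }
}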

    Read the article

  • Submitting form from Colorbox iframe to parent window

    - by user281867
    I'm pretty new to colorbox and lovin-it. I've been trying to submit a form from Colorbox iframe to parent window but haven't had any luck. Any suggestions would be greatly appreciated! Here's my code. $('#CustomizeBuy').click(function(event){ event.preventDefault(); $(this).attr('action','customize-order.cfm'); parent.location.submit(); parent.$.fn.colorbox.close(); }); or $('#CustomizeBuy').click(function(event){ event.preventDefault(); document.QuickOrderForm.action ="customize-order.cfm"; $('#QuickOrderForm').submit(); parent.$.fn.colorbox.close(); });

    Read the article

  • How to access the FirstData web service integration WSDL file?

    - by rcampbell
FirstData has horrendous customer support, but I have to integrate with their Global Gateway web service for a project I'm working on. I'm simply trying to run the Axis2 wsdl2java tool according to the instructions in their manual. This basically consists of adding the keyStore and keyStorePassword JVM parameters. I've done both, but I continue to get Connection reset errors when trying to run: wsdl2java.bat -uri https://www.staging.linkpointcentral.com/fdggwsapi/order.wsdl -S C:\ When I try to access the URL with my browser, I get Error 101 (net::ERR_CONNECTION_RESET): Unknown error. I assume there are developers out there who have completed a FirstData web service integration. What am I doing wrong? I've also tried connecting via cURL: C:\curl-7.19.7-ssl-sspi-zlib-static-bin-w32>curl --cert C:\FDGGWS\WSXXXXXXXXXX._.1.pem --key C:\FDGGWS\WSXXXXXXXXXX._.1.key --insecure https://www.staging.linkpointcentral.com/fdggwsapi/order.wsdl Enter PEM pass phrase: curl: (52) SSL read: error:00000000:lib(0):func(0):reason(0), errno 10054 I know I'm entering the correct key password, because when I enter a fake one I get: curl: (58) unable to set private key file: 'C:\FDGGWS\WSXXXXXXXXXX._.1.key' type PEM

    Read the article

  • Runge-Kutta (RK4) integration for game physics

    - by Kai
Gaffer on Games has a great article about using RK4 integration for better game physics. The implementation is straightforward but the math behind it confuses me. I understand derivatives and integrals on a conceptual level but I haven't manipulated equations in a long time. Here's the brunt of Gaffer's implementation: void integrate(State &state, float t, float dt) { Derivative a = evaluate(state, t, 0.0f, Derivative()); Derivative b = evaluate(state, t+dt*0.5f, dt*0.5f, a); Derivative c = evaluate(state, t+dt*0.5f, dt*0.5f, b); Derivative d = evaluate(state, t+dt, dt, c); const float dxdt = 1.0f/6.0f * (a.dx + 2.0f*(b.dx + c.dx) + d.dx); const float dvdt = 1.0f/6.0f * (a.dv + 2.0f*(b.dv + c.dv) + d.dv); state.x = state.x + dxdt * dt; state.v = state.v + dvdt * dt; } Can anybody explain in simple terms how RK4 works? Specifically, why are we averaging the derivatives at 0.0f, 0.5f, 0.5f, and 1.0f? How is averaging derivatives up to the 4th order different from doing a simple Euler integration with a smaller timestep? After reading the accepted answer below, and several other articles, I have a grasp on how RK4 works. To answer my own questions: Can anybody explain in simple terms how RK4 works? RK4 takes advantage of the fact that we can get a much better approximation of a function if we use its higher-order derivatives rather than just the first or second derivative. That's why the Taylor series converges much faster than Euler approximations. (Take a look at the animation on the right side of that page.) Specifically, why are we averaging the derivatives at 0.0f, 0.5f, 0.5f, and 1.0f? The Runge-Kutta method is an approximation of a function that samples derivatives at several points within a timestep, unlike the Taylor series, which only samples derivatives of a single point. After sampling these derivatives we need to know how to weigh each sample to get the closest approximation possible. An easy way to do this is to pick constants that coincide with the Taylor series, which is how the constants of a Runge-Kutta equation are determined. This article made it clearer for me: http://web.mit.edu/10.001/Web/Course%5FNotes/Differential%5FEquations%5FNotes/node5.html. Notice how (15) is the Taylor series expansion while (17) is the Runge-Kutta derivation. How is averaging derivatives up to the 4th order different from doing a simple Euler integration with a smaller timestep? Mathematically, it converges much faster than doing many Euler approximations. Of course, with enough Euler approximations we can gain accuracy equal to RK4, but the computational power needed doesn't justify using Euler.
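For readers wondering what the evaluate() helper called above actually does, here is a hedged C# transliteration consistent with how Gaffer's article defines it; the State/Derivative shapes and the damped-spring Acceleration function are illustrative assumptions:

struct State      { public float x; public float v; }   // position and velocity
struct Derivative { public float dx; public float dv; } // dx/dt = velocity, dv/dt = acceleration

static class RK4Sketch
{
    static Derivative Evaluate(State initial, float t, float dt, Derivative d)
    {
        // Step the state forward by dt using the previous derivative estimate...
        State state;
        state.x = initial.x + d.dx * dt;
        state.v = initial.v + d.dv * dt;

        // ...then sample a fresh derivative at that advanced state
        Derivative output;
        output.dx = state.v;
        output.dv = Acceleration(state, t + dt);
        return output;
    }

    // Illustrative force model: a damped spring (assumption, not from the question)
    static float Acceleration(State state, float t)
    {
        const float k = 10.0f, b = 1.0f;
        return -k * state.x - b * state.v;
    }
}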

    Read the article

  • Introducing Oracle VM Server for SPARC

    - by Honglin Su
As you are watching Oracle's Virtualization Strategy Webcast and exploring the great virtualization offerings of the Oracle VM product line, I'd like to introduce Oracle VM Server for SPARC -- a highly efficient, enterprise-class virtualization solution for Sun SPARC Enterprise Systems with Chip Multithreading (CMT) technology. Oracle VM Server for SPARC, previously called Sun Logical Domains, leverages the built-in SPARC hypervisor to subdivide supported platforms' resources (CPUs, memory, network, and storage) by creating partitions called logical (or virtual) domains. Each logical domain can run an independent operating system. Oracle VM Server for SPARC provides the flexibility to deploy multiple Oracle Solaris operating systems simultaneously on a single platform. Oracle VM Server also allows you to create up to 128 virtual servers on one system to take advantage of the massive thread scale offered by the CMT architecture. Oracle VM Server for SPARC integrates both the industry-leading CMT capability of the UltraSPARC T1, T2 and T2 Plus processors and the Oracle Solaris operating system. This combination helps to increase flexibility, isolate workload processing, and improve the potential for maximum server utilization. Oracle VM Server for SPARC delivers the following: Leading Price/Performance - The low-overhead architecture provides scalable performance under increasing workloads without additional license cost. This enables you to meet the most aggressive price/performance requirements. Advanced RAS - Each logical domain is an entirely independent virtual machine with its own OS. It supports virtual disk multipathing and failover as well as faster network failover with link-based IP multipathing (IPMP) support. Moreover, it's fully integrated with Solaris FMA (Fault Management Architecture), which enables predictive self-healing. CPU Dynamic Resource Management (DRM) - Enable your resource management policy and domain workload to trigger the automatic addition and removal of CPUs. This ability helps you to better align with your IT and business priorities. Enhanced Domain Migrations - Perform domain migrations interactively and non-interactively to bring more flexibility to the management of your virtualized environment. Improve active domain migration performance by compressing memory transfers and taking advantage of cryptographic acceleration hardware. These methods provide faster migration for load balancing, power saving, and planned maintenance. Dynamic Crypto Control - Dynamically add and remove cryptographic units (aka MAU) to and from active domains. Also, migrate active domains that have cryptographic units. Physical-to-virtual (P2V) Conversion - Quickly convert an existing SPARC server running the Oracle Solaris 8, 9 or 10 OS into a virtualized Oracle Solaris 10 image. Use this image to facilitate OS migration into the virtualized environment. Virtual I/O Dynamic Reconfiguration (DR) - Add and remove virtual I/O services and devices without needing to reboot the system. CPU Power Management - Implement power saving by disabling each core on a Sun UltraSPARC T2 or T2 Plus processor that has all of its CPU threads idle. Advanced Network Configuration - Configure the following network features to obtain more flexible network configurations, higher performance, and scalability: jumbo frames, VLANs, virtual switches for link aggregations, and network interface unit (NIU) hybrid I/O.
Official Certification Based On Real-World Testing - Use Oracle VM Server for SPARC with the most sophisticated enterprise workloads under real-world conditions, including Oracle Real Application Clusters (RAC). Affordable, Full-Stack Enterprise Class Support - Obtain worldwide support from Oracle for the entire virtualization environment and workloads together. The support covers hardware, firmware, OS, virtualization, and the software stack. SPARC Server Virtualization Oracle offers a full portfolio of virtualization solutions to address your needs. SPARC is the leading platform to have the hard partitioning capability that provides the physical isolation needed to run independent operating systems. Many customers have already used Oracle Solaris Containers for application isolation. Oracle VM Server for SPARC provides another important feature with OS isolation. This gives you the flexibility to deploy multiple operating systems simultaneously on a single Sun SPARC T-Series server with finer granularity for computing resources.  For SPARC CMT processors, the natural level of granularity is an execution thread, not a time-sliced microsecond of execution resources. Each CPU thread can be treated as an independent virtual processor. The scheduler is naturally built into the CPU for lower overhead and higher performance. Your organizations can couple Oracle Solaris Containers and Oracle VM Server for SPARC with the breakthrough space and energy savings afforded by Sun SPARC Enterprise systems with CMT technology to deliver a more agile, responsive, and low-cost environment. Management with Oracle Enterprise Manager Ops Center The Oracle Enterprise Manager Ops Center Virtualization Management Pack provides full lifecycle management of virtual guests, including Oracle VM Server for SPARC and Oracle Solaris Containers. It helps you streamline operations and reduce downtime. Together, the Virtualization Management Pack and the Ops Center Provisioning and Patch Automation Pack provide an end-to-end management solution for physical and virtual systems through a single web-based console. This solution automates the lifecycle management of physical and virtual systems and is the most effective systems management solution for Oracle's Sun infrastructure. Ease of Deployment with Configuration Assistant The Oracle VM Server for SPARC Configuration Assistant can help you easily create logical domains. After gathering the configuration data, the Configuration Assistant determines the best way to create a deployment to suit your requirements. The Configuration Assistant is available as both a graphical user interface (GUI) and terminal-based tool. Oracle Solaris Cluster HA Support The Oracle Solaris Cluster HA for Oracle VM Server for SPARC data service provides a mechanism for orderly startup and shutdown, fault monitoring and automatic failover of the Oracle VM Server guest domain service. In addition, applications that run on a logical domain, as well as its resources and dependencies can be controlled and managed independently. These are managed as if they were running in a classical Solaris Cluster hardware node. Supported Systems Oracle VM Server for SPARC is supported on all Sun SPARC Enterprise Systems with CMT technology. 
UltraSPARC T2 Plus Systems ·   Sun SPARC Enterprise T5140 Server ·   Sun SPARC Enterprise T5240 Server ·   Sun SPARC Enterprise T5440 Server ·   Sun Netra T5440 Server ·   Sun Blade T6340 Server Module ·   Sun Netra T6340 Server Module UltraSPARC T2 Systems ·   Sun SPARC Enterprise T5120 Server ·   Sun SPARC Enterprise T5220 Server ·   Sun Netra T5220 Server ·   Sun Blade T6320 Server Module ·   Sun Netra CP3260 ATCA Blade Server Note that UltraSPARC T1 systems are supported on earlier versions of the software. Sun SPARC Enterprise Systems with CMT technology come with the right to use (RTU) of Oracle VM Server, and the software is pre-installed. If you have the systems under warranty or with support, you can download the software and system firmware as well as their updates. Oracle Premier Support for Systems provides fully-integrated support for your server hardware, firmware, OS, and virtualization software. Visit oracle.com/support for information about Oracle's support offerings for Sun systems. For more information about Oracle's virtualization offerings, visit oracle.com/virtualization.

    Read the article

  • Linq where clause with multiple conditions and null check

    - by SocialAddict
I'm trying to check if a date is not null in LINQ, and if it isn't, check that it's a past date. QuestionnaireRepo.FindAll(q => !q.ExpiredDate.HasValue || q.ExpiredDate > DateTime.Now).OrderByDescending(order => order.CreatedDate); I need the second check to only apply if the first is true. I am using a single repository pattern, and FindAll accepts a where clause. Any ideas? There are lots of similar questions on here, but none that give the answer. I'm very new to LINQ, as you may have guessed :) Edit: I get the results I require now, but it will be checking the conditional on null values in some cases. Is this not a bad thing?
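For what it's worth, || short-circuits in C#, so the date comparison on the right only runs when the null check on the left is false. A small sketch of the semantics (my illustration, assuming in-memory LINQ to Objects; a LINQ to SQL or Entity Framework provider would translate the whole predicate to SQL instead):

DateTime? expiredDate = null;

// Left side is true, so the right side is never evaluated
bool keep = !expiredDate.HasValue || expiredDate > DateTime.Now;   // true

expiredDate = DateTime.Now.AddDays(-1);
keep = !expiredDate.HasValue || expiredDate > DateTime.Now;        // false: already expired

// Note: even without short-circuiting, the lifted > operator on a null
// DateTime? simply returns false rather than throwing, so evaluating the
// conditional on null values is harmless here.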

    Read the article

  • Issue 15: The Benefits of Oracle Exastack

    - by rituchhibber
     SOLUTIONS FOCUS The Benefits of Oracle Exastack Paul Thompson, Director, Alliances and Solutions Partner Programs, Oracle EMEA Alliances & Channels RESOURCES -- Oracle PartnerNetwork (OPN) Oracle Exastack Program Oracle Exastack Ready Oracle Exastack Optimized Oracle Exastack Labs and Enablement Resources Oracle Exastack Labs Video Tour Exastack is a revolutionary programme supporting Oracle independent software vendor partners across the entire Oracle technology stack. Oracle's core strategy is to engineer software and hardware together, and our ISV strategy is the same. At Oracle we design engineered systems that are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. Oracle innovates and optimises performance at every layer of the stack to simplify business operations, drive down costs and accelerate business innovation. Our engineered systems are optimised to achieve enterprise performance levels that are unmatched in the industry. Faster time to production is achieved by implementing pre-engineered and pre-assembled hardware and software bundles. Our strategy of delivering a single-vendor stack simplifies and reduces costs associated with purchasing, deploying, and supporting IT environments for our customers and partners. In parallel to this core engineered systems strategy, the Oracle Exastack Program enables our Oracle ISV partners to leverage a scalable, integrated infrastructure that delivers their applications tuned, tested and optimised for high-performance. Specifically, the Oracle Exastack Program helps ISVs run their solutions on the Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4 - integrated systems products in which the software and hardware are engineered to work together. These products provide OPN members with a lower cost and high performance infrastructure for database and application workloads across on-premise and cloud based environments. Ready and Optimized Oracle Partners can now leverage our new Oracle Exastack Program to become Oracle Exastack Ready and Oracle Exastack Optimized. Partners can achieve Oracle Exastack Ready status through their support for Oracle Solaris, Oracle Linux, Oracle VM, Oracle Database, Oracle WebLogic Server, Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. By doing this, partners can demonstrate to their customers that their applications are available on the latest major releases of these products. The Oracle Exastack Ready programme helps customers readily differentiate Oracle partners from lesser software developers, and identify applications that support Oracle engineered systems. Achieving Oracle Exastack Optimized status demonstrates that an OPN member has proven itself against goals for performance and scalability on Oracle integrated systems. This status enables end customers to readily identify Oracle partners that have tested and tuned their solutions for optimum performance on an Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. These ISVs can display the Oracle Exadata Optimized, Oracle Exalogic Optimized or Oracle SPARC SuperCluster Optimized logos on websites and on all their collateral to show that they have tested and tuned their application for optimum performance.
Deliver higher value to customers Oracle's investment in engineered systems enables ISV partners to deliver higher value to customer business processes. New innovations are enabled through extreme performance unachievable through traditional best-of-breed multi-vendor server/software approaches. Core product requirements can be launched faster, enabling ISVs to focus research and development investment on core competencies in order to bring value to market as quickly as possible. Through Exastack, partners no longer have to worry about the underlying product stack, which allows greater focus on the development of intellectual property above the stack. Partners are not burdened by platform issues and can concentrate simply on furthering their applications. The advantage to end customers is that partners can focus all efforts on business functionality, rather than bullet-proofing underlying technologies, and so will inevitably deliver application updates faster. Exastack provides ISVs with a number of flexible deployment options, such as on-premise or Cloud, while maintaining one single code base for applications regardless of customer deployment preference. Customers buying their solutions from Exastack ISVs can therefore be confident in deploying on their own networks, on private clouds or into a public cloud. The underlying platform will support all conceivable deployments, enabling a focus on the ISV's application itself that wouldn't be possible with other vendor partners. It stands to reason that Exastack accelerates time to value as well as lowering implementation costs all round. There is a big competitive advantage in partners being able to offer customers an optimised, pre-configured solution rather than an assortment of components and a suggested fit. Once a customer has decided to buy an Oracle Exastack Ready or Optimized partner solution, it will be up and running without any need for the customer to conduct testing of its own. Operational costs and complexity are also reduced, thanks to streamlined customer support through standardised configurations and pro-active monitoring. 'Engineered to Work Together' is a significant statement of Oracle strategy. It guarantees smoother deployment of a single vendor solution, clear ownership with no finger-pointing and the peace of mind of the Oracle Support Centre underpinning the entire product stack. Next steps Every OPN member with packaged applications must seriously consider taking steps to become Exastack Ready, or Exastack Optimized at the first opportunity. That first step down the track is to talk to an expert on the OPN Portal, at the Oracle Partner Business Center or to discuss the next steps with the closest Oracle account manager. Oracle Exastack lab environments and other technical enablement resources are available for OPN members wishing to further their knowledge of Oracle Exastack and qualify their applications for Oracle Exastack Optimized. New Boot Camps and Guided Learning Paths (GLPs), tailored specifically for ISVs, are available for Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle Linux, Oracle Solaris, Oracle Database, and Oracle WebLogic Server. More information about these GLPs and Boot Camps (including delivery dates and locations) are posted on the OPN Competency Center and corresponding OPN Knowledge Zones. Learn more about Oracle Exastack labs and ISV specific enablement resources. 
"Oracle Specialized partners are of course front-and-centre, with potential customers clearly directed to those partners and to Exadata Ready partners as a matter of priority." --More OpenWorld 2011 highlights for Oracle partners and customers Oracle Application Testing Suite 9.3 application testing solution for Web, SOA and Oracle Applications Oracle Application Express Release 4.1 improving the development of database-centric Web 2.0 applications and reports Oracle Unified Directory 11g helping customers manage the critical identity information that drives their business applications Oracle SOA Suite for healthcare integration Oracle Enterprise Pack for Eclipse 11g demonstrating continued commitment to the developer and open source communities Oracle Coherence 3.7.1, the latest release of the industry's leading distributed in-memory data grid Oracle Process Accelerators helping to simplify and accelerate time-to-value for customers' business process management initiatives Oracle's JD Edwards EnterpriseOne on the iPad meeting the increasingly mobile demands of today's workforces Oracle CRM On Demand Release 19 Innovation Pack introducing industry-leading hosted call centre and enterprise-marketing capabilities designed to drive further revenue and productivity while reducing costs and improving the customer experience Oracle's Primavera Portfolio Management 9 for businesses delivering on project portfolio goals with increased versatility, transparency and accuracy Oracle's PeopleSoft Human Capital Management (HCM) 9.1 On Demand Standard Edition helping customers manage their long-term investment in enterprise-wide business applications New versions of Oracle FLEXCUBE Universal Banking and Oracle FLEXCUBE Investor Servicing for Financial Institutions, as well as Oracle Financial Services Enterprise Case Management, Oracle Financial Services Pricing Management, Oracle Financial Management Analytics and Oracle Tax Analytics Oracle Utilities Network Management System 1.11 offering new modelling and analysis features to improve distribution-grid management for electric utilities Oracle Communications Network Charging and Control 4.4 helping communications service providers (CSPs) offer their customers more flexible charging options Plus many, many more technology announcements, enhancements, momentum news and community updates -- Oracle OpenWorld 2012 A date has already been set for Oracle OpenWorld 2012. Held once again in San Francisco, exhibitors, partners, customers and Oracle people will gather from 30 September until 4 November to meet, network and learn together with the rest of the global Oracle community. Register now for Oracle OpenWorld 2012 and save $$$! We'll reward your early planning for Oracle OpenWorld 2012 with reduced rates. Super Saver deals are now available! -- Back to the welcome page

    Read the article

  • Knockout.js mapping plugin with require.js

    - by Ravi
What is the standard way of loading the mapping plugin in require.js? Below is my config.js (require.js config file): require.config({ // Initialize the application with the main application file. deps:["app"], paths:{ // JavaScript folders. libs:"lib", plugins:"lib/plugin", templates:"../templates", // Libraries. jquery:"lib/jquery-1.7.2.min", underscore:"lib/lodash", text:'text', order:'order', knockout:"lib/knockout", knockoutmapping:"lib/plugin/knockout-mapping" }, shim:{ underscore:{ exports:'_' }, knockout:{ deps:["jquery"], exports:"knockout" } } }); In my view model: define(['knockout', 'knockoutmapping'], function(ko, mapping) { … }); However, mapping is not bound to ko.mapping. Any pointers/suggestions would be appreciated. Thanks, Ravi

    Read the article

  • Android "Hello, MapView" Tutorial - Map Tiles Do Not Load

    - by Onyx
    I am new to Android software development and new to this site. I am hoping someone might have some experience with the problem I am having. I've been following the Hello, MapView tutorial in order to not only learn the Android framework, but also the Google Maps library. I've tried my best to implement things exactly as the tutorial has instructed. My problem is that the application does load in my emulator (or even on my phone for that matter), but the map tiles do not load. Searching Google I found a post by someone else on another site having the same issue, but his/her problem was that the important elements added to the AndroidManifest.xml file were not in the right order. I double-checked this in mine, but everything seems to be right. So, I am not sure what the issue is and was hoping others have seen this before. I can provide any snippets of code, if that would help. Thank you.

    Read the article

  • Pluralsight Meet the Author Podcast on Structuring JavaScript Code

    - by dwahlin
I had the opportunity to talk with Fritz Onion from Pluralsight about one of my recent courses, titled Structuring JavaScript Code, for one of their Meet the Author podcasts. We talked about why JavaScript patterns are important for building more re-useable and maintainable apps, pros and cons of different patterns, and how to go about picking a pattern as a project is started. The course provides a solid walk-through of converting what I call "Function Spaghetti Code" into more modular code that's easier to maintain, more re-useable, and less susceptible to naming conflicts. Patterns covered in the course include the Prototype Pattern, Revealing Module Pattern, and Revealing Prototype Pattern, along with several other tips and techniques that can be used. Meet the Author: Dan Wahlin on Structuring JavaScript Code The transcript from the podcast is shown below: [Fritz] Hello, this is Fritz Onion with another Pluralsight author interview. Today we're talking with Dan Wahlin about his new course, Structuring JavaScript Code. Hi, Dan, it's good to have you with us today. [Dan] Thanks for having me, Fritz. [Fritz] So, Dan, your new course, which came out in December of 2011, called Structuring JavaScript Code, goes into several patterns of usage in JavaScript as well as ways of organizing your code, and what struck me about it was all the different techniques you described for encapsulating your code. I was wondering if you could give us just a little insight into what your motivation was for creating this course and sort of why you decided to write it and record it. [Dan] Sure. So, I got started with JavaScript back in the mid 90s. In fact, back in the days when browsers that most people haven't heard of were out and we had JavaScript but it wasn't great. I was on a project in the late 90s that was heavy, heavy JavaScript, and we pretty much did what I call in the course function spaghetti code, where you just have function after function, there's no rhyme or reason to how those functions are structured, they just kind of flow, and it's a little bit hard to do maintenance on it, you really don't get a lot of reuse as far as from an object perspective. And so coming from an object-oriented background in Java and C#, I wanted to put something together that highlighted kind of the new way, if you will, of writing JavaScript, because most people start out just writing functions and there's nothing wrong with that, it works, but it's definitely not a real reusable solution. So the course is really all about how to move from just kind of function after function after function to the world of more encapsulated code and more reusable and hopefully better maintenance in the process. [Fritz] So I am sure a lot of people have had similar experiences with their JavaScript code and will be looking forward to seeing what types of patterns you've put forth. Now, a couple I noticed in your course: one is you start off with the prototype pattern. Do you want to describe sort of what problem that solves and how you go about using it within JavaScript? [Dan] Sure. So, the patterns that are covered, such as the prototype pattern and the revealing module pattern just as two examples, you know, show these kind of three things that I harp on throughout the course: encapsulation, better maintenance, reuse, those types of things.
The prototype pattern specifically, though, has a couple kind of pros over some of the other patterns, and that is the ability to extend your code without touching source code. And what I mean by that is, let's say you're writing a library that you know either other teammates or other people just out there on the Internet in general are going to be using. With the prototype pattern, you can actually write your code in such a way that we're leveraging the JavaScript prototype property, and by doing that, now you can extend my code that I wrote without touching my source code, or you can even override my code and perform some new functionality. Again, without touching my code. And so you get kind of the benefit of the, almost like inheritance or overriding in object-oriented languages, with this prototype pattern, and it makes it kind of attractive that way, definitely from a maintenance standpoint, because, you know, you don't want to modify a script I wrote, because I might roll out version 2 and now you'd have to track where you changed things and it gets a little tricky. So with this you just override those pieces or extend them and get that functionality, and that's kind of some of the benefits that that pattern offers out of the box. [Fritz] And then the revealing module pattern, how does that differ from the prototype pattern and what problem does that solve differently? [Dan] Yeah, so the prototype pattern, and there's another one that's kind of really closely aligned with the revealing module pattern called the revealing prototype pattern, and it also uses the prototype keyword, but it's very similar to the one you just asked about, the revealing module pattern. [Fritz] Okay. [Dan] This is a really popular one out there. In fact, we did a project for Microsoft that was very, very heavy JavaScript. It was an HTML5 jQuery type app, and we used this pattern for most of the structure, if you will, for the JavaScript code, and what it does in a nutshell is allows you to get that encapsulation, so you have really a single function wrapper that wraps all your other child functions, but it gives you the ability to do public versus private members, and this is kind of a sort of debate out there on the web. Some people feel that all JavaScript code should just be directly accessible, and others kind of like to be able to hide their, truly their private stuff, and a lot of people do that. You just put an underscore in front of your field or your variable name or your function name, and that kind of is the de facto way to say hey, this is private. With the revealing module pattern you can do the equivalent of what object-oriented languages do and actually have private members that you literally can't get to as an external consumer of the JavaScript code, and then you can expose only those members that you want to be public. Now, you don't get the benefit though of the prototype feature, which is I can't easily extend the revealing module pattern type code; if you don't like something I'm doing, chances are you're probably going to have to tweak my code to fix that, because we're not leveraging prototyping. But in situations where you're writing apps that are very specific to a given target app (you know, it's not a library, it's not going to be used in other apps all over the place), it's a pattern I actually like a lot. It's very simple to get going, and then if you do like that public/private feature, it's available to you.
    [Fritz] Yeah, that's interesting. So it's almost, you can either go private by convention, just by using a standard naming convention, or you can actually enforce it by using the revealing module pattern.

    [Dan] Yeah, that's exactly right.

    [Fritz] So one of the things that I know I run across in JavaScript, and I'm curious to get your take on it, is that we do have all these different techniques of encapsulation, and each one is really quite different: when you're using closures versus simply, you know, referencing member variables and adding them to your objects, the syntax changes with each pattern and the usage changes. So what would you recommend for people starting out on a brand new JavaScript project? Should they decide beforehand on what patterns they're going to stick to, or do you change it based on what part of the library you're working on? I know that's one of the points of confusion in this space.

    [Dan] Yeah, it's a great question. In fact, I just had a company ask me about that: which one do I pick? And, of course, there's not one answer that fits all.

    [Fritz] Right.

    [Dan] So it really depends. What you just said is, in my opinion, absolutely correct: especially if you're on a team, or even if you're just an individual, a team of one, you should go through and pick out which pattern you think is best for this particular project. Now, if it were me, here's the way I think of it. If I were writing a, let's say, base library that several web apps are going to use, or even one, but I know there are going to be some pieces that I'm not really sure on right now as I'm writing it, and I know people might want to hook into it and have some better extension points, then I would look at either the prototype pattern or the revealing prototype pattern. Really just a real quick summation between the two: the revealing prototype also gives you that public/private capability like the revealing module pattern does, whereas the prototype pattern does not, but both of the prototype patterns do give you the benefit of that extension or hook capability. So, if I were writing a library where I need people to override things, or I'm not even sure what I need them to override and I want them to have that option, I'd probably pick one of the prototype patterns. If I'm writing some code that is very unique to the app, kind of a one-off for this app, which I think is the mode a lot of people are in when writing custom apps for customers, then my personal preference is the revealing module pattern. You could always go with the module pattern as well, which is very close, but I think the revealing module pattern's a little bit cleaner; we go through that in the course and explain the syntax and the differences.

    [Fritz] Great, that makes a lot of sense. I appreciate you taking the time, Dan, and I hope everyone takes a chance to look at your course and make these decisions for themselves in their next JavaScript project. Dan's course is Structuring JavaScript Code, and it's available now in the Pluralsight library. So, thank you very much, Dan.

    [Dan] Thanks for having me again.
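    For completeness, the revealing prototype pattern Dan suggests for extensible libraries combines the two ideas above: prototype-based extension points plus true public/private members. A minimal, illustrative sketch (again, not code from the course):

        var Calculator = function (startValue) {
            this.total = startValue;
        };

        Calculator.prototype = (function () {
            var validate = function (x) {    // private helper
                return typeof x === 'number' && !isNaN(x);
            };

            var add = function (x) {
                if (validate(x)) { this.total += x; }
                return this.total;
            };

            return { add: add };    // public members only
        }());

        var calc = new Calculator(10);
        calc.add(5);        // 15
        calc.add('oops');   // still 15: rejected by the private validate helper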

    Read the article

  • SSRS Report from Oracle DB - Use stored procedure

    - by Emtucifor
    I am developing a report in Sql Server Reporting Services 2005, connecting to an Oracle 11g database. As you post replies perhaps it will help to know that I'm skilled in MSSQL Server and inexperienced in Oracle. I have multiple nested subreports and need to use summary data in outer reports and the same data but in detail in the inner reports. In order to spare the DB server from multiple executions, I thought to populate some temp tables at the beginning and then query just them the multiple times in the report and the subreports. In SSRS, Datasets are evidently executed in the order they appear in the RDL file. And you can have a dataset that doesn't return a rowset. So I created a stored procedure to populate my four temp tables and made this the first Dataset in my report. This SP works when I run it from SQLDeveloper and I can query the data from the temp tables. However, this didn't appear to work out because SSRS was apparently not reusing the same session, so even though the global temporary tables were created with ON COMMIT PRESERVE ROWS my Datasets were empty. I switched to using "real" tables and am now passing in an additional parameter, a GUID in string form, uniquely generated on each new execution, that is part of the primary key of each table, so I can get back just the rows for this execution. Running this from Sql Developer works fine, example: DECLARE ActivityCode varchar2(15) := '1208-0916 '; ExecutionID varchar2(32) := SYS_GUID(); BEGIN CIPProjectBudget (ActivityCode, ExecutionID); END; Never mind that in this example I don't know the GUID, this simply proves it works because rows are inserted to my four tables. But in the SSRS report, I'm still getting no rows in my Datasets and SQL Developer confirms no rows are being inserted. So I'm thinking along the lines of: Oracle uses implicit transactions and my changes aren't getting committed? Even though I can prove that the non-rowset returning SP is executing (because if I leave out the parameter mapping it complains at report rendering time about not having enough parameters) perhaps it's not really executing. Somehow. Wrong execution order isn't the problem or rows would appear in the tables, and they aren't. I'm interested in any ideas about how to accomplish this (especially the part about not running the main queries multiple times). I'll redesign my whole report. I'll stop using a stored procedure. Suggest anything you like! I just need help getting this working and I am stuck. If you want more details, in my SSRS report I have a List object (it's a container that repeats once for each row in a Dataset) that has some header values and then contains a subreport. Eventually, there will be four total reports: one main report, with three nested subreports. Each subreport will be in a List on the parent report.

    Read the article
